920 results for distributional equivalence
Abstract:
The Sardinian brook salamander, Euproctus platycephalus, is a cryptically coloured urodele found in streams, springs and pools in the main mountain systems of Sardinia, and is classified as critically endangered by the IUCN. General reviews of the mountainous range where salamanders occur are numerous, but very few field-based distribution studies exist for this endemic species. Through a field and questionnaire survey conducted between 1999 and 2001, we report a first attempt to improve data on the present distribution of E. platycephalus. A total of 14 localities where Sardinian salamanders are represented by apparently stable and in some cases abundant populations were identified, as well as 30 sites where the species' presence has been recorded after 1991. Eleven historical sites that are no longer inhabited by the species were also identified. The implications of this distributional study for the conservation of the species and for the compilation of an updated atlas are discussed.
Abstract:
During fatigue tests of cortical bone specimens, non-zero strains occur at the unloaded (zero-stress) portion of the cycle and progressively accumulate as the test progresses. This non-zero strain is hypothesised to be mostly, if not entirely, describable as creep. This work examines the rate of accumulation of this strain and quantifies its stress dependency. A published relationship determined from creep tests of cortical bone (Journal of Biomechanics 21 (1988) 623) is combined with knowledge of the stress history during fatigue testing to derive an expression for the amount of creep strain in fatigue tests. Fatigue tests on 31 bone samples from four individuals showed strong correlations between creep strain rate and both stress and "normalised stress" (σ/E) during tensile fatigue testing (0–T). Combined results were good (r² = 0.78), and differences between the various individuals, in particular, vanished when effects were examined against normalised stress values. Constants of the regression showed equivalence to constants derived in creep tests. The universality of the results, with respect to four different individuals of both sexes, shows great promise for use in computational models of fatigue in bone structures.
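As a rough illustration of the derivation described above, the sketch below integrates a hypothetical power-law creep rate over a 0–T fatigue stress history; the constants A and n, the modulus and the loading parameters are placeholders for illustration, not the values from the cited creep study.

```python
import numpy as np

# Hypothetical power-law creep model: d(eps)/dt = A * (sigma / E)**n.
# A and n are illustrative placeholders, NOT the constants reported in
# the cited creep study; the point is the mechanics of integrating a
# creep law over the 0-T stress history of a fatigue test.
A, n = 1e2, 3.0            # creep-rate constants (illustrative)
E = 17e9                   # elastic modulus of cortical bone, Pa (typical)
sigma_max = 60e6           # peak tensile stress of the 0-T cycle, Pa
freq = 2.0                 # loading frequency, Hz

t = np.linspace(0.0, 600.0, 600_001)                          # 10 min of cycling
sigma = 0.5 * sigma_max * (1 - np.cos(2 * np.pi * freq * t))  # 0-T haversine

# Accumulated creep strain = integral of the creep rate over the
# actual stress history (trapezoidal rule, version-independent).
rate = A * (sigma / E) ** n
creep_strain = np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t))
print(f"creep strain after {t[-1]:.0f} s: {creep_strain:.3e}")
```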
Abstract:
The research uses a sociological perspective to build an improved, context-specific understanding of innovation diffusion within the UK construction industry. It is argued that there is an iterative interplay between actors and the social system they occupy that directly influences the diffusion process as well as the methodology adopted. The research builds upon previous findings that proposed a level of best fit for the three innovation diffusion concepts of cohesion, structural equivalence and thresholds. That level of best fit is analysed here using empirical data from the UK construction industry. This analysis allows an understanding of how the relative importance of these concepts actually varies within the stages of the innovation diffusion process. The conclusion that the level of relevance fluctuates in relation to the stages of the diffusion process is a new development in the field.
Abstract:
The UK Construction Industry has been criticised for being slow to change and adopt innovations. The idiosyncrasies of participants, their roles in a social system and the contextual differences between sections of the UK Construction Industry are viewed as being paramount to explaining innovation diffusion within this context. Three innovation diffusion theories from outside the construction management literature are introduced: Cohesion, Structural Equivalence and Thresholds. The relevance of each theory, in relation to the UK Construction Industry, is critically reviewed using literature and empirical data. Analysis of the data results in an explanatory framework being proposed. The framework introduces a Personal Awareness Threshold concept and highlights the dominant role of Cohesion through the main stages of diffusion, together with the use of Structural Equivalence during the later stages of diffusion and the importance of Adoption Threshold levels.
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
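To make the rule-selection idea concrete, here is a minimal NumPy sketch that ranks candidate fuzzy rules by an A-optimality-style criterion of their weighting matrices; the Gaussian membership functions, the toy data and the scoring details are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def gaussian_mf(x, center, width):
    """Gaussian fuzzy membership function (an illustrative choice)."""
    return np.exp(-0.5 * ((x - center) / width) ** 2)

# Toy one-input data set; rule centres and width are hypothetical.
rng = np.random.default_rng(0)
x = rng.uniform(-3.0, 3.0, 200)
X = np.column_stack([np.ones_like(x), x])   # affine T-S rule consequents
rule_centers, width = [-2.0, 0.0, 2.0], 1.0

for c in rule_centers:
    W = np.diag(gaussian_mf(x, c, width))   # weighting matrix of the rule
    M = X.T @ W @ X                         # weighted information matrix
    # A-optimality: a small trace of the inverse information matrix
    # means the rule's parameters are well identifiable from the data,
    # so such rules are preferred when building the initial rule-base.
    a_score = np.trace(np.linalg.inv(M))
    print(f"rule centred at {c:+.1f}: A-criterion = {a_score:.4f}")
```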
Abstract:
External interference can severely degrade the performance of an over-the-horizon radar (OTHR), so suppression of external interference in a strong clutter environment is a prerequisite for target detection. Traditional suppression solutions usually begin with clutter suppression in either the time or the frequency domain, followed by interference detection and suppression. Building on this traditional solution, this paper proposes a method characterized by joint clutter suppression and interference detection: by analyzing the eigenvalues in a short-time moving window centered at different time positions, clutter is suppressed by discarding the three largest eigenvalues at every time position, while detection is achieved by analyzing the remaining eigenvalues at each position. Restoration is then achieved by forward-backward linear prediction using interference-free data surrounding the interference position. In the numerical computation, the eigenvalue decomposition (EVD) is replaced by the singular value decomposition (SVD), based on the equivalence of the two procedures. Data processing and experimental results demonstrate the method's effectiveness, lowering the noise floor by about 10–20 dB.
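A minimal sketch of the subspace step follows, using the SVD of a windowed Hankel matrix in place of the EVD as the abstract suggests; the window length, the rank-3 clutter assumption and the toy signal are illustrative only, and the forward-backward linear prediction restoration step is omitted.

```python
import numpy as np
from scipy.linalg import hankel

def suppress_clutter(segment, n_clutter=3):
    """Drop the dominant (clutter) subspace of one short-time window
    via SVD; returns the cleaned sequence plus the residual singular
    values, whose energy can flag interference at this window position."""
    m = len(segment) // 2
    H = hankel(segment[:m], segment[m - 1:])     # low-rank data matrix
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    s_clean = s.copy()
    s_clean[:n_clutter] = 0.0                    # discard largest values
    Hc = (U * s_clean) @ Vh
    # Average anti-diagonals to map the matrix back to a time sequence.
    Hf = Hc[::-1, :]
    cleaned = np.array([Hf.diagonal(k).mean()
                        for k in range(-Hf.shape[0] + 1, Hf.shape[1])])
    return cleaned, s[n_clutter:]

# Toy slow-time segment: strong low-frequency clutter + weak target tone.
t = np.arange(256) / 256.0
segment = 10.0 * np.sin(2 * np.pi * 2 * t) + 0.1 * np.sin(2 * np.pi * 40 * t)
cleaned, residual = suppress_clutter(segment)
print(f"residual singular values (head): {residual[:4].round(3)}")
```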
Abstract:
Background: The best-documented survival responses of organisms to past climate change on short (glacial-interglacial) timescales are distributional shifts. Despite ample evidence on such timescales for local adaptations of populations at specific sites, the long-term impacts of such changes on evolutionarily significant units in response to past climatic change have been little documented. Here we use phylogenies to reconstruct changes in distribution and flowering ecology of the Cape flora, South Africa's biodiversity hotspot, through a period of past (Neogene and Quaternary) changes in the seasonality of rainfall over a timescale of several million years. Results: Forty-three distributional and phenological shifts consistent with past climatic change occur across the flora, and a comparable number of clades underwent adaptive changes in their flowering phenology (9 clades; half of the clades investigated) as underwent distributional shifts (12 clades; two thirds of the clades investigated). Of extant Cape angiosperm species, 14–41% have been contributed by lineages that show distributional shifts consistent with past climate change, yet a similar proportion (14–55%) arose from lineages that shifted flowering phenology. Conclusions: Adaptive changes in ecology at the scale we uncover in the Cape, consistent with past climatic change, have not been documented for other floras. Shifts in climate tolerance appear to have been more important in this flora than is currently appreciated, and lineages that underwent such shifts went on to contribute a high proportion of the flora's extant species diversity. That shifts in phenology, on an evolutionary timescale and on such a scale, have not yet been detected for other floras is likely a result of the methods used; shifts in flowering phenology cannot be detected in the fossil record.
Abstract:
Statistical graphics are a fundamental, yet often overlooked, set of components in the repertoire of data analytic tools. Graphs are quick, efficient, yet simple instruments for the preliminary exploration of a dataset, used to understand its structure and to provide insight into influential aspects of inference such as departures from assumptions and latent patterns. In this paper, we present and assess a graphical device for choosing a method for estimating population size in capture–recapture studies of closed populations. The basic concept is derived from the homogeneous Poisson distribution, for which the ratio of neighboring Poisson probabilities, multiplied by the value of the larger neighboring count, is constant. This property extends to the zero-truncated Poisson distribution, which is of fundamental importance in capture–recapture studies. In practice, however, this distributional property is often violated. The graphical device developed here, the ratio plot, can be used for assessing specific departures from a Poisson distribution. For example, simple contaminations of an otherwise homogeneous Poisson model can be easily detected and a robust estimator for the population size can be suggested. Several robust estimators are developed and a simulation study is provided to give some guidance on which should be used in practice. More systematic departures can also be easily detected using the ratio plot. In this paper, the focus is on Gamma mixtures of the Poisson distribution, which lead to a linear pattern (called structured heterogeneity) in the ratio plot. More generally, the paper shows that the ratio plot is monotone for arbitrary mixtures of power series densities.
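The diagnostic itself is simple to compute from observed frequencies: for Poisson counts the ratio p(x+1)/p(x) equals λ/(x+1), so r(x) = (x+1)·f(x+1)/f(x) should be flat at λ. A short sketch on toy data (illustrative only, not the paper's simulation study):

```python
import numpy as np
from collections import Counter

def ratio_plot_points(counts):
    """Points of the ratio plot: for frequencies f(x) of units captured
    x times, r(x) = (x + 1) * f(x + 1) / f(x). Under a homogeneous
    Poisson model the plot is flat at lambda; an increasing straight
    line indicates a Gamma-mixed Poisson (structured heterogeneity)."""
    f = Counter(counts)
    xs = sorted(x for x in f if x + 1 in f)
    return [(x, (x + 1) * f[x + 1] / f[x]) for x in xs]

# Toy capture-recapture data: zero counts are unobservable, so the
# sample is zero-truncated, exactly the setting of the ratio plot.
rng = np.random.default_rng(1)
captures = rng.poisson(2.0, 500)
captures = captures[captures > 0]
for x, r in ratio_plot_points(captures):
    print(f"x = {x}: ratio = {r:.2f}")   # roughly flat near lambda = 2
```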
Abstract:
This paper investigates the psychometric properties of Vigneron and Johnson's Brand Luxury Index scale. The authors developed the scale using data collected from a student sample in Australia. To validate the scale, the study reported in this paper uses data collected from Taiwanese luxury consumers. The scale was initially subjected to reliability analysis, yielding low α values for two of its five proposed dimensions. Exploratory and confirmatory factor analyses were subsequently performed to examine the dimensionality of brand luxury. Discriminant and convergent validity tests highlight the need for further research into the dimensionality of the construct. Although the scale represents a good initial contribution to understanding brand luxury, in view of consumers' emerging shopping patterns, further investigation is warranted to establish the psychometric properties of the scale and its equivalence across cultures.
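For readers unfamiliar with the reliability statistic mentioned above, a minimal sketch of Cronbach's α on hypothetical Likert-style data follows; the data and item structure are invented for illustration and unrelated to the study's sample.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Toy five-item dimension: responses driven by one latent trait plus noise.
rng = np.random.default_rng(2)
latent = rng.normal(size=(100, 1))
items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(100, 5))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")   # low values flag weak dimensions
```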
Abstract:
The reform of previously state-owned and operated industries in many Less Developed Countries (LDCs) provides experiences contrary to those in the developed world, which have generally had more equitable distributional impacts. The economic reform policies proposed by the so-called 'Washington Consensus' hold that privatisation provides governments with opportunities to raise revenues through the sale of under-performing and indebted state industries, thereby reducing significant fiscal burdens and, at the same time, facilitating influxes of foreign capital, skills and technology, with the aim of improving operations and producing a "trickle-down" of benefits. However, experiences in many LDCs over the last 15–20 years suggest that reform has not solved the problem of chronic public-sector debt, and that poverty and socio-economic inequalities have increased during this period of 'neo-liberal' economics. This paper does not seek to challenge the policies themselves, but rather argues that the context in which reform has often taken place is of fundamental significance. The industry-centric policy advice provided by the international financial institutions (IFIs) typically causes a 'lock-in' of inequitably distributed 'efficiency gains', providing minimal, if any, benefits to impoverished groups. These arguments are made using case study analysis from the electricity and mining sectors.
Abstract:
In two recent papers, Byrne and Lee (2006, 2007) examined the geographical concentration of institutional office and retail investment in England and Wales at two points in time: 1998 and 2003. The findings indicate that commercial office portfolios are concentrated in very few urban areas, whereas retail holdings correlate more closely with the urban hierarchy of England and Wales and consequently are essentially ubiquitous. Research into the industrial sector is much less developed, and this paper therefore makes a significant contribution to understanding the structure of industrial property investment in the UK. It shows that industrial investment concentration lies between that of retail and office investment and is focussed on local authorities (LAs) with high levels of manual workers in areas with smaller industrial units. It also shows that during the period studied the structure of the sector changed, with greater emphasis on the distributional element, for which location is a principal consideration.
Abstract:
We consider the general response theory recently proposed by Ruelle for describing the impact of small perturbations on the non-equilibrium steady states resulting from Axiom A dynamical systems. We show that the causality of the response functions entails the possibility of writing a set of Kramers-Kronig (K-K) relations for the corresponding susceptibilities at all orders of nonlinearity. Nonetheless, only a special class of directly observable susceptibilities obey K-K relations. Specific results are provided for the case of arbitrary-order harmonic response, which allows for a very comprehensive K-K analysis and the establishment of sum rules connecting the asymptotic behavior of the harmonic generation susceptibility to the short-time response of the perturbed system. These results set previous findings obtained for optical systems and simple mechanical models in a more general theoretical framework, and shed light on the very general impact of considering the principle of causality for testing self-consistency: the described dispersion relations constitute unavoidable benchmarks that any experimental and model-generated dataset must obey. The theory exposed in the present paper is dual to the time-dependent theory of perturbations to equilibrium states and to non-equilibrium steady states, and has in principle a similar range of applicability and limitations. In order to connect the equilibrium and the non-equilibrium steady-state cases, we show how to rewrite the classical response theory by Kubo so that response functions formally identical to those proposed by Ruelle, apart from the measure involved in the phase-space integration, are obtained. These results, taking into account the chaotic hypothesis by Gallavotti and Cohen, might be relevant in several fields, including climate research. In particular, whereas the fluctuation-dissipation theorem does not work for non-equilibrium systems, because of the non-equivalence between internal and external fluctuations, K-K relations might be robust tools for the definition of a self-consistent theory of climate change.
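For reference, the linear (first-order) K-K pair that causality imposes on a susceptibility χ(ω) = χ′(ω) + iχ″(ω) is the standard one below; the paper's contribution is extending such constraints, with sum rules, to arbitrary-order harmonic susceptibilities in the Ruelle setting.

```latex
% Linear Kramers-Kronig pair for a causal susceptibility
% \chi(\omega) = \chi'(\omega) + i\chi''(\omega);
% P denotes the Cauchy principal value.
\chi'(\omega)  = \frac{1}{\pi}\, P\!\int_{-\infty}^{\infty}
                 \frac{\chi''(\omega')}{\omega' - \omega}\, d\omega' ,
\qquad
\chi''(\omega) = -\frac{1}{\pi}\, P\!\int_{-\infty}^{\infty}
                 \frac{\chi'(\omega')}{\omega' - \omega}\, d\omega' .
```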
Abstract:
Practical applications of portfolio optimisation tend to proceed on a "top-down" basis, where funds are allocated first at the asset-class level (between, say, bonds, cash, equities and real estate) and then, progressively, at the sub-class level (within property, to the office, retail and industrial sectors, for example). While there are organisational benefits to such an approach, it can potentially lead to sub-optimal allocations when compared to a "global" or "side-by-side" optimisation. This will occur where there are correlations between sub-classes across the asset divide that are masked in aggregation, for instance between City offices and the performance of financial services stocks. This paper explores such sub-class linkages using UK monthly stock and property data. Exploratory analysis using clustering procedures and factor analysis suggests that property performance and equity performance are distinctive: there is little persuasive evidence of contemporaneous or lagged sub-class linkages. Formal tests of the equivalence of optimised portfolios using top-down and global approaches failed to demonstrate significant differences, whether or not allocations were constrained. While the results may be a function of the measurement of market returns, it is those returns that are used to assess fund performance. Accordingly, the treatment of real estate as a distinct asset class with diversification potential seems justified.
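A toy numerical contrast between the two procedures may help fix ideas: the covariance numbers below are invented, and unconstrained minimum-variance weights stand in for the paper's optimisations, purely to show how a cross-asset correlation is masked by the two-stage route.

```python
import numpy as np

def min_var_weights(cov):
    """Unconstrained minimum-variance weights: w proportional to
    inv(cov) @ 1, normalised to sum to one."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Toy covariance over four sub-classes (values purely illustrative).
labels = ["financial equities", "general equities",
          "City offices", "retail property"]
cov = np.array([[0.040, 0.020, 0.018, 0.006],
                [0.020, 0.035, 0.008, 0.005],
                [0.018, 0.008, 0.030, 0.012],
                [0.006, 0.005, 0.012, 0.025]])

# "Global" (side-by-side): optimise across all sub-classes at once.
w_global = min_var_weights(cov)

# "Top-down": optimise within each asset class, then split 50/50 between
# classes. The cross-asset link (financial equities vs City offices,
# cov = 0.018) is invisible at the second stage.
w_eq = min_var_weights(cov[:2, :2])
w_re = min_var_weights(cov[2:, 2:])
w_top = np.concatenate([0.5 * w_eq, 0.5 * w_re])

for lab, g, td in zip(labels, w_global, w_top):
    print(f"{lab:20s} global {g:+.3f}  top-down {td:+.3f}")
```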
Abstract:
This paper surveys numerical techniques for the regularization of descriptor (generalized state-space) systems by proportional and derivative feedback. We review generalizations of controllability and observability to descriptor systems, along with definitions of regularity and index in terms of the Weierstraß canonical form. Three condensed forms display the controllability and observability properties of a descriptor system. The condensed forms are obtained through orthogonal equivalence transformations and rank decisions, so they may be computed by numerically stable algorithms. In addition, the condensed forms display whether a descriptor system is regularizable, i.e., whether the system pencil can be made regular by derivative and/or proportional output feedback, and, if so, what index can be achieved. Also included is a new characterization of descriptor systems that can be made regular with index 1 by proportional and derivative output feedback.
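In the standard descriptor-system notation (an assumption here, not quoted from the paper), the role of proportional and derivative output feedback can be written as follows.

```latex
% Descriptor system with proportional-and-derivative output feedback:
E\dot{x} = Ax + Bu, \qquad y = Cx, \qquad u = F_p\,y - F_d\,\dot{y} + v .
% Substituting u gives (E + B F_d C)\dot{x} = (A + B F_p C)x + Bv,
% i.e. the closed-loop system pencil
\lambda\,(E + B F_d C) \;-\; (A + B F_p C),
% which the feedback aims to make regular, ideally of index at most one.
```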
Abstract:
The incorporation of ekphrastic evocations of photographs into fictional works is a growing trend charted by (mostly) literary and (occasionally) art critics interested in the effect of their inclusion in a narrative. What has emerged as a veritable affinity of photography with literature has produced a fertile interdisciplinary critical discourse around areas of intersection between the visual and the verbal. With regard to short fiction, the photograph is often subject to investigation as analogy, the photograph and the short story being considered metonymically related with regard to form and effect. This notion of a structural equivalence between short story and photograph is one stressed by author/photographer Julio Cortázar, concerned to highlight the quality of intensity he ascribes to both forms, which he saw as 'cutting out a piece of reality' in order to 'break out' into a wider one. Given Annie Saumont's oft-cited admiration of Cortázar's work, it is unsurprising that in her own writing, of stories themselves often classed, in their elliptical density, as verbal snapshots, she should take an interest in photographs and/or photographers. This article seeks to explore and analyse the different values Saumont ascribes to what was paradoxically described by Barthes as 'invisible', in that, when viewing a photograph, 'ce n'est pas elle qu'on voit': it is never, or never solely, the actual object itself that we see …