44 results for METRICS
Abstract:
Our essay aims at studying suitable statistical methods for the clustering of compositional data in situations where observations are constituted by trajectories of compositional data, that is, by sequences of composition measurements along a domain. Observed trajectories are known as "functional data" and several methods have been proposed for their analysis. In particular, methods for clustering functional data, known as Functional Cluster Analysis (FCA), have been applied by practitioners and scientists in many fields. To our knowledge, FCA techniques have not been extended to cope with the problem of clustering compositional data trajectories. In order to extend FCA techniques to the analysis of compositional data, FCA clustering techniques have to be adapted by using a suitable compositional algebra. The present work centres on the following question: given a sample of compositional data trajectories, how can we formulate a segmentation procedure giving homogeneous classes? To address this problem we follow the steps described below. First of all, we adapt the well-known spline smoothing techniques in order to cope with the smoothing of compositional data trajectories. In fact, an observed curve can be thought of as the sum of a smooth part plus some noise due to measurement errors. Spline smoothing techniques are used to isolate the smooth part of the trajectory; clustering algorithms are then applied to these smooth curves. The second step consists in building suitable metrics for measuring the dissimilarity between trajectories: we propose a metric that accounts for differences in both shape and level, and a metric accounting for differences in shape only. A simulation study is performed in order to evaluate the proposed methodologies, using both hierarchical and partitional clustering algorithms. The quality of the obtained results is assessed by means of several indices.
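The smoothing-then-clustering pipeline described above can be roughly sketched as follows; the ilr transform, the smoothing parameter, and the toy trajectories are illustrative assumptions, not details taken from the essay:

```python
# Sketch: smooth compositional trajectories in ilr coordinates, then
# cluster the smoothed curves. All names and data here are illustrative.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.cluster.hierarchy import linkage, fcluster

def ilr(x):
    """ilr transform of 3-part compositions (rows sum to 1) -> 2 coordinates."""
    l = np.log(x)
    z1 = (l[:, 0] - l[:, 1]) / np.sqrt(2.0)
    z2 = (l[:, 0] + l[:, 1] - 2.0 * l[:, 2]) / np.sqrt(6.0)
    return np.column_stack([z1, z2])

def smooth(t, z, s=0.5):
    """Spline-smooth each ilr coordinate to isolate the smooth part."""
    return np.column_stack(
        [UnivariateSpline(t, z[:, j], s=s)(t) for j in range(z.shape[1])])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
trajectories = []
for base in (np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.3, 0.5])):
    for _ in range(5):  # five noisy trajectories per group
        raw = base + 0.1 * np.sin(2 * np.pi * t)[:, None] * [1, -1, 0] \
                   + 0.02 * rng.standard_normal((t.size, 3))
        raw = np.abs(raw)
        comp = raw / raw.sum(axis=1, keepdims=True)  # close onto the simplex
        trajectories.append(smooth(t, ilr(comp)))

# Shape-and-level dissimilarity: L2 distance between smoothed ilr curves.
flat = np.array([tr.ravel() for tr in trajectories])
labels = fcluster(linkage(flat, method="ward"), t=2, criterion="maxclust")
```

A shape-only metric would instead compare curves after removing each curve's mean level (or compare derivatives), as the essay proposes.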
Abstract:
When one wishes to implement public policies, there is a prior need to compare different actions and to evaluate them so as to assess their social attractiveness. Recently the concept of well-being has been proposed as a multidimensional proxy for measuring societal prosperity and progress; a key research topic is then how we can measure and evaluate this plurality of dimensions for policy decisions. This paper defends the thesis articulated in the following points: 1. Different metrics are linked to different objectives and values. Using only one measurement unit (on the grounds of the so-called commensurability principle) to incorporate a plurality of dimensions, objectives and values necessarily implies reductionism. 2. Point 1 can be proven as a matter of formal logic by drawing on the work of Geach on moral philosophy. This theoretical demonstration is an original contribution of this article. Here the distinction between predicative and attributive adjectives is formalised and definitions are provided. Predicative adjectives are further distinguished into absolute and relative ones. The new concepts of set commensurability and rod commensurability are also introduced. 3. The existence of a plurality of social actors with an interest in the policy being assessed means that social decisions involve multiple types of values, of which economic efficiency is only one. It is therefore misleading to make social decisions based on that one value alone. 4. Weak comparability of values, which is grounded on incommensurability, is shown to be the main methodological foundation of policy evaluation in the framework of well-being economics. Incommensurability does not imply incomparability; on the contrary, incommensurability is the only rational way to compare societal options under a plurality of policy objectives. 5. Weak comparability can be implemented by using multi-criteria evaluation, which is a formal framework for applied consequentialism under incommensurability. Social Multi-Criteria Evaluation, in particular, allows technical and social incommensurabilities to be considered simultaneously.
Abstract:
Supported by IEEE 802.15.4 standardization activities, embedded networks have been gaining popularity in recent years. The focus of this paper is to quantify the behavior of key networking metrics of IEEE 802.15.4 beacon-enabled nodes under typical operating conditions, with the inclusion of packet retransmissions. We corrected and extended previous analyses by scrutinizing the assumptions on which the prevalent Markovian modeling is generally based. By means of a comparative study, we singled out which of the assumptions impact each of the performance metrics (throughput, delay, power consumption, collision probability, and packet-discard probability). In particular, we showed that - unlike what is usually assumed - the probability that a node senses the channel busy is not constant for all the stages of the backoff procedure and that these differences have a noticeable impact on backoff delay, packet-discard probability, and power consumption. Similarly, we showed that - again contrary to common assumption - the probability of obtaining transmission access to the channel depends on the number of nodes that are simultaneously sensing it. We demonstrated that ignoring this dependence has a significant impact on the calculated values of throughput and collision probability. Circumventing these and other assumptions, we rigorously characterize, through a semianalytical approach, the key metrics in a beacon-enabled IEEE 802.15.4 system with retransmissions.
Abstract:
User generated content shared in online communities is often described using collaborative tagging systems where users assign labels to content resources. As a result, a folksonomy emerges that relates a number of tags with the resources they label and the users that have used them. In this paper we analyze the folksonomy of Freesound, an online audio clip sharing site which contains more than two million users and 150,000 user-contributed sound samples covering a wide variety of sounds. By following methodologies taken from similar studies, we compute some metrics that characterize the folksonomy both at the global level and at the tag level. In this manner, we are able to better understand the behavior of the folksonomy as a whole, and also obtain some indicators that can be used as metadata for describing tags themselves. We expect that such a methodology for characterizing folksonomies can be useful to support processes such as tag recommendation or automatic annotation of online resources.
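A minimal sketch of how such global- and tag-level folksonomy metrics can be computed from annotation triples (the toy data and the particular metric choices are illustrative, not Freesound's):

```python
# Sketch: global-level and tag-level metrics from (user, resource, tag)
# annotation triples of a folksonomy. Data and metric names are made up.
from collections import Counter, defaultdict

annotations = [  # toy (user, resource, tag) triples
    ("u1", "r1", "drum"), ("u1", "r1", "loop"),
    ("u2", "r1", "drum"), ("u2", "r2", "voice"),
    ("u3", "r2", "voice"), ("u3", "r3", "drum"),
]

tag_users = defaultdict(set)
tag_resources = defaultdict(set)
tag_freq = Counter()
for user, resource, tag in annotations:
    tag_users[tag].add(user)
    tag_resources[tag].add(resource)
    tag_freq[tag] += 1

# Global-level metrics: vocabulary size and mean tags per resource.
vocabulary_size = len(tag_freq)
resources = {r for _, r, _ in annotations}
tags_per_resource = sum(tag_freq.values()) / len(resources)

# Tag-level metrics usable as tag metadata: frequency, user spread,
# and resource spread of each tag.
tag_profile = {t: (tag_freq[t], len(tag_users[t]), len(tag_resources[t]))
               for t in tag_freq}
```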
Abstract:
The Silver Code (SilC) was originally discovered in [1–4] for 2×2 multiple-input multiple-output (MIMO) transmission. It has a non-vanishing minimum determinant of 1/7, slightly lower than that of the Golden code, but it is fast-decodable, i.e., it allows reduced-complexity maximum likelihood decoding [5–7]. In this paper, we present a multidimensional trellis-coded modulation scheme for MIMO systems [11] based on set partitioning of the Silver Code, named Silver Space-Time Trellis Coded Modulation (SST-TCM). This lattice set partitioning is designed specifically to increase the minimum determinant. The branches of the outer trellis code are labeled with these partitions. The Viterbi algorithm is applied for trellis decoding, while the branch metrics are computed by using a sphere-decoding algorithm. It is shown that the proposed SST-TCM performs very closely to the Golden Space-Time Trellis Coded Modulation (GST-TCM) scheme, yet with a much reduced decoding complexity thanks to its fast-decoding property.
Abstract:
The application of correspondence analysis to square asymmetric tables is often unsuccessful because of the strong role played by the diagonal entries of the matrix, obscuring the data off the diagonal. A simple modification of the centering of the matrix, coupled with the corresponding change in row and column masses and row and column metrics, allows the table to be decomposed into symmetric and skew-symmetric components, which can then be analyzed separately. The symmetric and skew-symmetric analyses can be performed using a simple correspondence analysis program if the data are set up in a special block format.
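The symmetric/skew-symmetric split itself is elementary; a minimal numeric sketch (ignoring the row and column masses and metrics of the full correspondence-analysis treatment, and using a made-up table):

```python
# Sketch: decompose a square asymmetric table into symmetric and
# skew-symmetric components, which can then be analyzed separately.
import numpy as np

N = np.array([[10.0, 3.0, 1.0],
              [ 6.0, 8.0, 2.0],
              [ 2.0, 5.0, 9.0]])
P = N / N.sum()              # correspondence matrix (relative frequencies)
S = (P + P.T) / 2.0          # symmetric component
K = (P - P.T) / 2.0          # skew-symmetric component

assert np.allclose(S + K, P)   # the decomposition is exact
assert np.allclose(K, -K.T)    # K is skew-symmetric
```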
Abstract:
Organizations often face the challenge of communicating their strategies to local decision makers. The difficulty presents itself in finding a way to measure performance which meaningfully conveys how to implement the organization's strategy at local levels. I show that organizations solve this communication problem by combining performance measures in such a way that performance gains come closest to mimicking value-added as defined by the organization's strategy. I further show how organizations rebalance performance measures in response to changes in their strategies. Applications to the design of performance metrics, gaming, and divisional performance evaluation are considered. The paper also suggests several empirical ways to evaluate the practical importance of the communication role of measurement systems.
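As one hedged illustration of "combining performance measures so that performance gains come closest to mimicking value-added", a least-squares weighting of measures against a value-added target (all numbers are hypothetical and the paper's actual model is richer than this):

```python
# Sketch: weight two performance measures so the weighted score best
# mimics value-added across three actions. Toy numbers, not the paper's.
import numpy as np

M = np.array([[1.0, 0.2],    # measure values for action 1
              [0.3, 1.0],    # ... action 2
              [0.8, 0.5]])   # ... action 3
v = np.array([1.1, 0.9, 1.0])          # value-added per action
w, *_ = np.linalg.lstsq(M, v, rcond=None)  # measure weights
score = M @ w                          # combined performance score
```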
Abstract:
In this paper we explore the mechanisms that allow securities analysts to value companies in contexts of Knightian uncertainty, that is, in the face of information that is unclear, subject to unforeseeable contingencies or to multiple interpretations. We address this question with a grounded-theory analysis of the reports written on Amazon.com by securities analyst Henry Blodget and rival analysts during the years 1998-2000. Our core finding is that analysts' reports are structured by internally consistent associations that include categorizations, key metrics and analogies. We refer to these representations as calculative frames, and propose that analysts function as frame-makers - that is, as specialized intermediaries that help investors value uncertain stocks. We conclude by considering the implications of frame-making for the rise of new industry categories, analysts' accuracy, and the regulatory debate on analysts' independence.
Abstract:
The use of statistics and performance indicators for electronic products and services in library evaluation processes is analyzed. The main projects for defining statistics and indicators developed in recent years are examined, with particular attention to three of them: Counter, E-metrics and ISO. The statistics currently offered by four large publishers of electronic journals (American Chemical Society, Emerald, Kluwer and Wiley) and by a service (Scitation Usage Statistics) that aggregates data from six physics journal publishers are also analyzed. The results show a certain degree of consensus on a basic set of statistics and indicators, despite the diversity of existing projects and the heterogeneity of the data offered by publishers.
Abstract:
We develop a full theoretical approach to clustering in complex networks. A key concept is introduced, the edge multiplicity, which measures the number of triangles passing through an edge. This quantity extends the clustering coefficient in that it involves the properties of two (and not just one) vertices. The formalism is completed with the definition of a three-vertex correlation function, which is the fundamental quantity describing the properties of clustered networks. The formalism suggests different metrics that are able to thoroughly characterize transitive relations. A rigorous analysis of several real networks, which makes use of this formalism and the metrics, is also provided. It is found that clustered networks can be classified into two main groups: the weak and the strong transitivity classes. In the first class, edge multiplicity is small, with triangles being disjoint. In the second class, edge multiplicity is high and triangles share many edges. As we shall see in the following paper, the class a network belongs to has strong implications for its percolation properties.
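Edge multiplicity as defined above, the number of triangles passing through an edge, reduces to counting common neighbors of the edge's endpoints; a small sketch on a toy graph (not the authors' code):

```python
# Sketch: edge multiplicity m_ij = number of triangles through edge (i, j),
# i.e. the number of common neighbors of i and j. Toy graph for illustration.
from collections import defaultdict

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

multiplicity = {(u, v): len(adj[u] & adj[v]) for u, v in edges}
# Edge (1, 2) lies in triangles {0,1,2} and {1,2,3}; edge (3, 4) in none.
```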
Abstract:
We develop a statistical theory to characterize correlations in weighted networks. We define the appropriate metrics quantifying correlations and show that strictly uncorrelated weighted networks do not exist due to the presence of structural constraints. We also introduce an algorithm for generating maximally random weighted networks with arbitrary P(k,s) to be used as null models. The application of our measures to real networks reveals the importance of weights in a correct understanding and modeling of these heterogeneous systems.
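The node-level quantities behind the joint distribution P(k,s), degree k and strength s (the sum of incident edge weights), can be read directly off a weighted edge list; a toy sketch, not the authors' algorithm:

```python
# Sketch: per-node degree k and strength s in a weighted network,
# the quantities whose joint distribution P(k, s) the null model fixes.
from collections import defaultdict

weighted_edges = [(0, 1, 2.0), (0, 2, 1.0), (1, 2, 3.0), (2, 3, 0.5)]
degree = defaultdict(int)
strength = defaultdict(float)
for u, v, w in weighted_edges:
    for node in (u, v):
        degree[node] += 1     # k: number of incident edges
        strength[node] += w   # s: total incident weight
```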
Abstract:
Generalized Kerr-Schild space-times for a perfect-fluid source are investigated. New Petrov type D perfect-fluid solutions are obtained starting from conformally flat perfect-fluid metrics.
Abstract:
Petrov types D and II perfect-fluid solutions are obtained starting from conformally flat perfect-fluid metrics and by using a generalized Kerr-Schild ansatz. Most of the Petrov type D metrics obtained have the property that the velocity of the fluid does not lie in the two-space defined by the principal null directions of the Weyl tensor. The properties of the perfect-fluid sources are studied. Finally, a detailed analysis of a new class of spherically symmetric static perfect-fluid metrics is given.
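The generalized Kerr-Schild ansatz referred to here takes, in standard notation, the form

```latex
g_{\mu\nu} = \bar{g}_{\mu\nu} + 2H\,\ell_{\mu}\ell_{\nu},
```

where \(\bar{g}_{\mu\nu}\) is the seed metric (here a conformally flat perfect-fluid metric), \(H\) is a scalar function, and \(\ell_{\mu}\) is a null vector field of the seed metric, \(\bar{g}^{\mu\nu}\ell_{\mu}\ell_{\nu} = 0\).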
Abstract:
The ability of 2 Rapid Bioassessment Protocols (RBPs) to assess stream water quality was compared in 2 Mediterranean-climate regions. The most commonly used RBPs in South Africa (SA-protocol) and the Iberian Peninsula (IB-protocol) are both multihabitat, field-based methods that use macroinvertebrates. Both methods use preassigned sensitivity weightings to calculate metrics and biotic indices. The SA- and IB-protocols differ with respect to sampling equipment (mesh size: 1000 μm vs 250-300 μm, respectively), segregation of habitats (substrate vs flow-type), and sampling and sorting procedures (variable time and intensity). Sampling was undertaken at 6 sites in South Africa and 5 sites in the Iberian Peninsula. Forty-four and 51 macroinvertebrate families were recorded in South Africa and the Iberian Peninsula, respectively; 77.3% of South African families and 74.5% of Iberian Peninsula families were found using both protocols. Estimates of community similarity compared between the 2 protocols were ≥60% similar among sites in South Africa and ≥54% similar among sites in the Iberian Peninsula (Bray-Curtis similarity), and no significant differences were found between protocols (Multiresponse Permutation Procedure). Ordination based on Non-metric Multidimensional Scaling grouped macroinvertebrate samples on the basis of site rather than protocol. Biotic indices generated with the 2 protocols at each site did not differ. Thus, both RBPs produced equivalent results, and both were able to distinguish between biotic communities (mountain streams vs foothills) and detect water-quality impairment, regardless of differences in sampling equipment, segregation of habitats, and sampling and sorting procedures.
Our results indicate that sampling a single habitat may be sufficient for assessing water quality, but a multihabitat approach to sampling is recommended where intrinsic variability of macroinvertebrate assemblages is high (e.g., in undisturbed sites in regions with Mediterranean climates). The RBP of choice should depend on whether the objective is routine biomonitoring of water quality or autecological or faunistic studies.
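Biotic indices of the kind both protocols compute from preassigned sensitivity weightings can be sketched as a weighted sum over the families found in a sample (the weights below are made up for illustration and are not SASS or IBMWP values):

```python
# Sketch: a biotic index from preassigned family sensitivity weightings.
# Sensitivity values here are invented, not those of either protocol.
sensitivity = {"Baetidae": 4, "Chironomidae": 2, "Perlidae": 12, "Simuliidae": 5}
sample = ["Baetidae", "Perlidae", "Simuliidae"]   # families found at a site

score = sum(sensitivity[f] for f in sample)   # total biotic score
aspt = score / len(sample)                    # average score per taxon
```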
Abstract:
Image registration has been proposed as an automatic method for recovering cardiac displacement fields from Tagged Magnetic Resonance Imaging (tMRI) sequences. Initially performed as a set of pairwise registrations, these techniques have evolved to the use of 3D+t deformation models, requiring metrics of joint image alignment (JA). However, only linear combinations of cost functions defined with respect to the first frame have been used. In this paper, we have applied k-Nearest Neighbor Graph (kNNG) estimators of the α-entropy (Hα) to measure the joint similarity between frames, and to combine the information provided by different cardiac views into a unified metric. Experiments performed on six subjects showed a significantly higher accuracy (p < 0.05) with respect to a standard pairwise alignment (PA) approach in terms of mean positional error and variance with respect to manually placed landmarks. The developed method was used to study strains in patients with myocardial infarction, showing a consistency between strain, infarction location, and coronary occlusion. This paper also presents an interesting clinical application of graph-based metric estimators, showing their value for solving practical problems found in medical imaging.
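Graph-based α-entropy estimators of the type mentioned above score a sample by the total edge length of its k-nearest-neighbor graph; a hedged sketch (the function name, parameter choices, and the omitted bias constant are illustrative assumptions, not the paper's implementation):

```python
# Sketch: a kNN-graph length functional of the kind used in graph-based
# Renyi alpha-entropy estimation (additive bias constant omitted).
import numpy as np
from scipy.spatial import cKDTree

def knn_entropy_score(x, k=3, alpha=0.5):
    """Alpha-entropy score (up to an additive constant) from the
    total edge length of the k-nearest-neighbor graph of sample x."""
    n, d = x.shape
    gamma = d * (1.0 - alpha)
    dist, _ = cKDTree(x).query(x, k=k + 1)   # column 0 is self-distance 0
    length = np.sum(dist[:, 1:] ** gamma)    # kNN-graph length functional
    return np.log(length / n ** alpha) / (1.0 - alpha)

rng = np.random.default_rng(1)
tight = rng.normal(scale=0.1, size=(200, 2))   # concentrated, low entropy
spread = rng.normal(scale=1.0, size=(200, 2))  # dispersed, high entropy
```

The more dispersed sample yields the higher score, matching the intuition that entropy grows with spread.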