21 results for Metrics of managment
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Combining data from multiple analytical platforms is essential for a comprehensive study of the molecular phenotype (metabotype) of a given biological sample. The metabolite profiles generated are intrinsically dependent on the analytical platforms, each requiring optimization of instrumental parameters, separation conditions, and sample extraction to deliver maximal biological information. An in-depth evaluation of extraction protocols for characterizing the metabolome of the hepatobiliary fluke Fasciola hepatica, using ultra-performance liquid chromatography and capillary electrophoresis coupled with mass spectrometry, is presented. The spectrometric methods were characterized by performance, and metrics of merit were established, including precision, mass accuracy, selectivity, sensitivity, and platform stability. Although a core group of molecules was common to all methods, each platform contributed a unique set, whereby 142 metabolites out of 14,724 features were identified. A mixture design revealed that a chloroform:methanol:water proportion of 15:59:26 was globally the best composition for metabolite extraction across the UPLC-MS and CE-MS platforms, accommodating different columns and ionization modes. Despite the general assumption that platform-adapted protocols are necessary for effective metabotype characterization, we show that an appropriately designed single extraction procedure is able to fit the requirements of all technologies. This may constitute a paradigm shift in developing efficient protocols for high-throughput metabolite profiling with more general analytical applicability.
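The mixture-design step described above can be illustrated with a small sketch. The code below enumerates a candidate grid over the three-solvent simplex in Python; the 5% step and the helper name `simplex_lattice` are illustrative assumptions, not the study's actual design points (the reported optimum, 15:59:26, is not itself on this grid).

```python
# Sketch: enumerate candidate points of a three-solvent mixture design
# (chloroform, methanol, water). Proportions are percentages summing to
# 100; the 5%-step grid is an assumption for illustration.
def simplex_lattice(step=5):
    points = []
    for chloroform in range(0, 101, step):
        for methanol in range(0, 101 - chloroform, step):
            water = 100 - chloroform - methanol
            points.append((chloroform, methanol, water))
    return points

designs = simplex_lattice()
```

A response surface fitted over such candidate points is what lets a mixture design identify an off-grid optimum like 15:59:26.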
Abstract:
The classification of texts has become a major endeavor with so much electronic material available, for it is an essential task in several applications, including search engines and information retrieval. There are different ways to define similarity for grouping similar texts into clusters, as the concept of similarity may depend on the purpose of the task. For instance, in topic extraction similar texts are those within the same semantic field, whereas in author recognition stylistic features should be considered. In this study, we introduce ways to classify texts employing concepts of complex networks, which may be able to capture syntactic, semantic and even pragmatic features. The interplay between various metrics of the complex networks is analyzed with three applications, namely identification of machine translation (MT) systems, evaluation of the quality of machine-translated texts, and authorship recognition. We show that topological features of the networks representing texts can enhance the ability to identify MT systems in particular cases. For evaluating the quality of MT texts, on the other hand, high correlation was obtained with methods capable of capturing the semantics. This was expected because the gold standards used are themselves based on word co-occurrence. Notwithstanding, the Katz similarity, which involves both semantics and structure in the comparison of texts, achieved the highest correlation with the NIST measure, indicating that in some cases the combination of both approaches can improve the ability to quantify quality in MT. In authorship recognition, again the topological features were relevant in some contexts, though for the books and authors analyzed good results were obtained with semantic features as well.
Because hybrid approaches encompassing semantic and topological features have not been extensively used, we believe that the methodology proposed here may be useful to enhance text classification considerably, as it combines well-established strategies. (c) 2012 Elsevier B.V. All rights reserved.
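The text-as-network representation underlying this line of work can be sketched briefly. The code below builds a word co-occurrence network from raw text and computes a simple topological feature; the window size and the feature chosen (average degree) are illustrative assumptions, not the exact feature set used in the study.

```python
# Sketch: represent a text as a word co-occurrence network and compute
# a simple topological feature, using only the standard library.
from collections import defaultdict

def cooccurrence_network(text, window=2):
    """Link each word type to the word types within `window` positions
    after it (window=2 links adjacent words only)."""
    words = text.lower().split()
    adj = defaultdict(set)
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window, len(words))):
            if words[j] != w:
                adj[w].add(words[j])
                adj[words[j]].add(w)
    return adj

def avg_degree(adj):
    """Mean number of neighbors per node."""
    return sum(len(nb) for nb in adj.values()) / len(adj)

adj = cooccurrence_network("the quick brown fox jumps over the lazy dog")
```

Repeated word types (here, "the") collapse into a single node, which is what gives such networks nontrivial topology on real texts.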
Abstract:
Creating high-quality quad meshes from triangulated surfaces is a highly nontrivial task that necessitates consideration of various application-specific metrics of quality. In our work, we follow the premise that automatic reconstruction techniques may not generate outputs meeting all the subjective quality expectations of the user. Instead, we put the user at the center of the process by providing a flexible, interactive approach to quadrangulation design. By combining scalar field topology and combinatorial connectivity techniques, we present a new framework, following a coarse-to-fine design philosophy, which allows for explicit control of the subjective quality criteria on the output quad mesh, at interactive rates. Our quadrangulation framework uses the new notion of Reeb atlas editing to define, with a small number of interactions, a coarse quadrangulation of the model, capturing the main features of the shape, with user-prescribed extraordinary vertices and alignment. Fine-grain tuning is easily achieved with the notion of connectivity texturing, which allows for additional extraordinary vertex specification and explicit feature alignment, to capture the high-frequency geometries. Experiments demonstrate the interactivity and flexibility of our approach, as well as its ability to generate quad meshes of arbitrary resolution with high-quality statistics, while meeting the user's own subjective requirements.
Abstract:
The realization that statistical physics methods can be applied to analyze written texts represented as complex networks has led to several developments in natural language processing, including automatic summarization and evaluation of machine translation. Most importantly, so far only a few metrics of complex networks have been used and therefore there is ample opportunity to enhance the statistics-based methods as new measures of network topology and dynamics are created. In this paper, we employ for the first time the metrics betweenness, vulnerability and diversity to analyze written texts in Brazilian Portuguese. Using strategies based on diversity metrics, a better performance in automatic summarization is achieved in comparison to previous work employing complex networks. With an optimized method the Rouge score (an automatic evaluation method used in summarization) was 0.5089, which is the best value ever achieved for an extractive summarizer with statistical methods based on complex networks for Brazilian Portuguese. Furthermore, the diversity metric can detect keywords with high precision, which is why we believe it is suitable to produce good summaries. It is also shown that incorporating linguistic knowledge through a syntactic parser does enhance the performance of the automatic summarizers, as expected, but the increase in the Rouge score is only minor. These results reinforce the suitability of complex network methods for improving automatic summarizers in particular, and treating text in general. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
Estimators of home-range size require a large number of observations, and the sparse data typical of tropical studies often prohibit the use of such estimators. An alternative may be the use of distance metrics as indexes of home range. However, tests of correlation between distance metrics and home-range estimators exist only for North American rodents. We evaluated the suitability of 3 distance metrics (mean distance between successive captures [SD], observed range length [ORL], and mean distance between all capture points [AD]) as indexes of home range for 2 Brazilian Atlantic forest rodents, Akodon montensis (montane grass mouse) and Delomys sublineatus (pallid Atlantic forest rat). Further, we investigated the robustness of the distance metrics to low numbers of individuals and captures per individual. We observed a strong correlation between the distance metrics and the home-range estimator. None of the metrics was influenced by the number of individuals. ORL presented a strong dependence on the number of captures per individual. Accuracy of SD and AD was not dependent on the number of captures per individual, but precision of both metrics was low with numbers of captures below 10. We recommend the use of SD and AD instead of ORL, and caution in interpreting results based on trapping data with few captures per individual.
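The three distance metrics compared above have direct computational definitions. The sketch below computes SD, ORL, and AD from an individual's capture history, assuming planar (x, y) trap coordinates; the function name and example coordinates are illustrative.

```python
# Sketch: the three home-range distance indexes from capture coordinates.
from itertools import combinations
from math import hypot

def distance_metrics(captures):
    """captures: list of (x, y) capture coordinates for one individual,
    ordered by capture occasion. Returns (SD, ORL, AD)."""
    # SD: mean distance between successive captures
    sd = sum(hypot(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(captures, captures[1:]))
    sd /= len(captures) - 1
    # All pairwise distances between distinct capture points
    pairwise = [hypot(x2 - x1, y2 - y1)
                for (x1, y1), (x2, y2) in combinations(captures, 2)]
    orl = max(pairwise)                  # ORL: observed range length
    ad = sum(pairwise) / len(pairwise)   # AD: mean over all pairs
    return sd, orl, ad

# Example: four captures on a trap grid
sd, orl, ad = distance_metrics([(0, 0), (3, 4), (3, 0), (0, 4)])
```

The dependence of ORL on capture count is visible in the definition: as a maximum over pairs, it can only grow as more captures accumulate, whereas SD and AD are averages.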
Abstract:
Methods from statistical physics, such as those involving complex networks, have been increasingly used in the quantitative analysis of linguistic phenomena. In this paper, we represented pieces of text with different levels of simplification in co-occurrence networks and found that topological regularity correlated negatively with textual complexity. Furthermore, in less complex texts the distance between concepts, represented as nodes, tended to decrease. The complex networks metrics were treated with multivariate pattern recognition techniques, which allowed us to distinguish between original texts and their simplified versions. For each original text, two simplified versions were generated manually with increasing number of simplification operations. As expected, distinction was easier for the strongly simplified versions, where the most relevant metrics were node strength, shortest paths and diversity. Also, the discrimination of complex texts was improved with higher hierarchical network metrics, thus pointing to the usefulness of considering wider contexts around the concepts. Though the accuracy rate in the distinction was not as high as in methods using deep linguistic knowledge, the complex network approach is still useful for a rapid screening of texts whenever assessing complexity is essential to guarantee accessibility to readers with limited reading ability. Copyright (c) EPLA, 2012
Abstract:
We study local rigidity and multiplicity of constant scalar curvature metrics in arbitrary products of compact manifolds. Using (equivariant) bifurcation theory we determine the existence of infinitely many metrics that are accumulation points of pairwise non-homothetic solutions of the Yamabe problem. Using local rigidity and some compactness results for solutions of the Yamabe problem, we also exhibit new examples of conformal classes (with positive Yamabe constant) for which uniqueness holds. (C) 2011 Elsevier Masson SAS. All rights reserved.
Abstract:
Content-based image retrieval (CBIR) is still a challenging issue due to the inherent complexity of images and the choice of the most discriminant descriptors. Recent developments in the field have introduced multidimensional projections to boost accuracy in the retrieval process, but many issues, such as the introduction of pattern recognition tasks and deeper user intervention to assist the process of choosing the most discriminant features, still remain unaddressed. In this paper, we present a novel framework for CBIR that combines pattern recognition tasks, class-specific metrics, and multidimensional projection to devise an effective and interactive image retrieval system. User interaction plays an essential role in the computation of the final multidimensional projection from which image retrieval will be attained. Results have shown that the proposed approach outperforms existing methods, turning out to be a very attractive alternative for managing image data sets.
Abstract:
Background Androgen suppression therapy and radiotherapy are used to treat locally advanced prostate cancer. 3 years of androgen suppression confers a small survival benefit compared with 6 months of therapy in this setting, but is associated with more toxic effects. Early identification of men in whom radiotherapy and 6 months of androgen suppression is insufficient for cure is important. Thus, we assessed whether prostate-specific antigen (PSA) values can act as an early surrogate for prostate cancer-specific mortality (PCSM). Methods We systematically reviewed randomised controlled trials that showed improved overall and prostate cancer-specific survival with radiotherapy and 6 months of androgen suppression compared with radiotherapy alone and measured lowest PSA concentrations (PSA nadir) and those immediately after treatment (PSA end). We assessed a cohort of 734 men with localised or locally advanced prostate cancer from two eligible trials in the USA and Australasia that randomly allocated participants between Feb 2, 1996, and Dec 27, 2001. We used Prentice criteria to assess whether reported PSA nadir or PSA end concentrations of more than 0.5 ng/mL were surrogates for PCSM. Findings Men treated with radiotherapy and 6 months of androgen suppression in both trials were significantly less likely to have PSA end and PSA nadir values of more than 0.5 ng/mL than were those treated with radiotherapy alone (p<0.0001). Presence of candidate surrogates (ie, PSA end and PSA nadir values >0.5 ng/mL) alone and when assessed in conjunction with the randomised treatment group increased risk of PCSM in the US trial (PSA nadir p=0.0016; PSA end p=0.017) and Australasian trial (PSA nadir p<0.0001; PSA end p=0.0012). In both trials, the randomised treatment group was no longer associated with PCSM (p >= 0.20) when the candidate surrogates were included in the model. Therefore, both PSA metrics satisfied Prentice criteria for surrogacy.
Interpretation After radiotherapy and 6 months of androgen suppression, men with PSA end values exceeding 0.5 ng/mL should be considered for long-term androgen suppression and those with localised or locally advanced prostate cancer with PSA nadir values exceeding 0.5 ng/mL should be considered for inclusion in randomised trials investigating the use of drugs that have extended survival in castration-resistant metastatic prostate cancer.
Abstract:
There is a wide range of video services over complex transmission networks, and in some cases end users fail to receive an acceptable quality level. In this paper, the different factors that degrade users' quality of experience (QoE) in video streaming services that use TCP as the transmission protocol are studied. In this specific service, the impairment factors are the number of pauses and their duration and temporal location. In order to measure the effect that each temporal segment has on the overall video quality, subjective tests were performed. Because current subjective test methodologies are not adequate to assess video streaming over TCP, some recommendations are provided here. At the application layer, a customized player is used to evaluate the behavior of the player buffer and, consequently, the end-user QoE. Video subjective test results demonstrate that there is a close correlation between application parameters and subjective scores. Based on this fact, a new metric named VsQM is defined, which considers the importance of the temporal location of pauses to assess the user QoE of the video streaming service. A useful application scenario is also presented, in which the metric proposed herein is used to improve video services.
Abstract:
There is a wide range of telecommunications services that transmit voice, video and data through complex transmission networks, and in some cases the service does not reach an acceptable quality level for the end user. In this sense, the study of methods for assessing video and voice quality plays a very important role. This paper presents a classification scheme, based on different criteria, for the methods and metrics that have been studied in recent years. It also shows how video quality is affected by degradation in the transmission channel in two kinds of services: Digital TV (ISDB-TB), due to fading in the air interface, and a video streaming service on an IP network, due to packet loss. For the Digital TV tests, a scenario was set up in which the digital TV transmitter is connected to an RF channel emulator, where different fading models are inserted, and at the end the videos are saved on a mobile device. The video streaming tests were performed in an isolated IP network scenario, in which several network conditions are scheduled, resulting in different qualities of video reception. The video quality assessment is performed using objective assessment methods: PSNR, SSIM and VQM. The results show how losses in the transmission channel affect the quality of the end-user experience in both services studied.
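Of the three objective methods named, PSNR has the simplest definition: the mean squared error between reference and degraded frames, expressed on a logarithmic scale. A minimal sketch, with frames simplified to flat lists of 8-bit pixel values:

```python
# Sketch: peak signal-to-noise ratio between a reference frame and a
# degraded frame, each given as a flat list of 8-bit pixel values.
import math

def psnr(reference, degraded, max_val=255):
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_val ** 2 / mse)

# Example: a small frame where every pixel is off by one grey level,
# i.e. MSE = 1, so PSNR = 10 * log10(255**2) ≈ 48.13 dB
ref = [100] * 64
deg = [101] * 64
```

SSIM and VQM are perception-oriented and considerably more involved, which is precisely why studies like this one compare them against plain PSNR.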
Abstract:
Effects of roads on wildlife and its habitat have been measured using metrics, such as the nearest road distance, road density, and effective mesh size. In this work we introduce two new indices: (1) Integral Road Effect (IRE), which measured the sum effects of points in a road at a fixed point in the forest; and (2) Average Value of the Infinitesimal Road Effect (AVIRE), which measured the average of the effects of roads at this point. IRE is formally defined as the line integral of a special function (the infinitesimal road effect) along the curves that model the roads, whereas AVIRE is the quotient of IRE by the length of the roads. Combining tools of ArcGIS software with a numerical algorithm, we calculated these and other road and habitat cover indices in a sample of points in a human-modified landscape in the Brazilian Atlantic Forest, where data on the abundance of two groups of small mammals (forest specialists and habitat generalists) were collected in the field. We then compared through the Akaike Information Criterion (AIC) a set of candidate regression models to explain the variation in small mammal abundance, including models with our two new road indices (AVIRE and IRE) or models with other road effect indices (nearest road distance, mesh size, and road density), and reference models (containing only habitat indices, or only the intercept without the effect of any variable). Compared to other road effect indices, AVIRE showed the best performance to explain abundance of forest specialist species, whereas the nearest road distance obtained the best performance to generalist species. AVIRE and habitat together were included in the best model for both small mammal groups, that is, higher abundance of specialist and generalist small mammals occurred where there is lower average road effect (less AVIRE) and more habitat. 
Moreover, AVIRE was not significantly correlated with habitat cover of specialists and generalists differing from the other road effect indices, except mesh size, which allows for separating the effect of roads from the effect of habitat on small mammal communities. We suggest that the proposed indices and GIS procedures could also be useful to describe other spatial ecological phenomena, such as edge effect in habitat fragments. (C) 2012 Elsevier B.V. All rights reserved.
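The two indices defined above lend themselves to a short numerical sketch: IRE as the line integral of a road-effect kernel along road polylines, and AVIRE as IRE divided by total road length. The exponential-decay kernel and all parameter values below are hypothetical stand-ins, since the study's actual infinitesimal road-effect function is not given here.

```python
# Sketch: numerical IRE and AVIRE for polyline roads (midpoint rule).
from math import hypot, exp

def road_effect(dist, d0=100.0):
    """Hypothetical infinitesimal road-effect kernel: exponential
    decay with distance; the study's actual kernel may differ."""
    return exp(-dist / d0)

def ire_avire(roads, point, ds=1.0):
    """IRE: line integral of the kernel along all road polylines,
    evaluated at a forest point. AVIRE: IRE / total road length."""
    px, py = point
    ire = 0.0
    total_len = 0.0
    for polyline in roads:
        for (x1, y1), (x2, y2) in zip(polyline, polyline[1:]):
            seg = hypot(x2 - x1, y2 - y1)
            total_len += seg
            steps = max(1, round(seg / ds))
            for k in range(steps):
                t = (k + 0.5) / steps  # midpoint of each sub-segment
                x = x1 + t * (x2 - x1)
                y = y1 + t * (y2 - y1)
                ire += road_effect(hypot(x - px, y - py)) * seg / steps
    return ire, ire / total_len

# One straight 200 m road; evaluation point 50 m from its midpoint
roads = [[(0.0, 50.0), (200.0, 50.0)]]
ire, avire = ire_avire(roads, (100.0, 0.0))
```

Dividing by road length is what decouples AVIRE from road density, consistent with its weaker correlation with habitat cover reported above.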
Abstract:
1. A long-standing question in ecology is how natural populations respond to a changing environment. Emergent optimal foraging theory-based models for individual variation go beyond the population level and predict how its individuals would respond to disturbances that produce changes in resource availability. 2. Evaluating variations in resource use patterns at the intrapopulation level in wild populations under changing environmental conditions would allow further advances in research on foraging ecology and evolution by giving a better idea of the underlying mechanisms explaining trophic diversity. 3. In this study, we use a large spatio-temporal scale data set (western continental Europe, 1968-2006) on the diet of Bonelli's Eagle Aquila fasciata breeding pairs to analyse the predator's trophic responses at the intrapopulation level to a prey population crash. In particular, we borrow metrics from studies on network structure and intrapopulation variation to understand how an emerging infectious disease [the rabbit haemorrhagic disease (RHD)] that caused the density of the eagle's primary prey (rabbit Oryctolagus cuniculus) to drop dramatically across Europe impacted the resource use patterns of this endangered raptor. 4. Following the major RHD outbreak, substantial changes in Bonelli's Eagle's diet diversity and organisation patterns at the intrapopulation level took place. Dietary variation among breeding pairs was larger after than before the outbreak. Before RHD, there were no clusters of pairs with similar diets, but significant clustering emerged after RHD. Moreover, diets at the pair level presented a nested pattern before RHD, but not after. 5. Here, we reveal how intrapopulation patterns of resource use can quantitatively and qualitatively vary, given drastic changes in resource availability. 6. For the first time, we show that a pathogen of a prey species can indirectly impact the intrapopulation patterns of resource use of an endangered predator.
Abstract:
The use of statistical methods to analyze large databases of text has been useful in unveiling patterns of human behavior and establishing historical links between cultures and languages. In this study, we identified literary movements by treating books published from 1590 to 1922 as complex networks, whose metrics were analyzed with multivariate techniques to generate six clusters of books. The latter correspond to time periods coinciding with relevant literary movements over the last five centuries. The most important factor contributing to the distinctions between different literary styles was the average shortest path length, in particular the asymmetry of its distribution. Furthermore, over time there has emerged a trend toward larger average shortest path lengths, which is correlated with increased syntactic complexity, and a more uniform use of the words reflected in a smaller power-law coefficient for the distribution of word frequency. Changes in literary style were also found to be driven by opposition to earlier writing styles, as revealed by the analysis performed with geometrical concepts. The approaches adopted here are generic and may be extended to analyze a number of features of languages and cultures.
Abstract:
Using recent results on the compactness of the space of solutions of the Yamabe problem, we show that in conformal classes of metrics near the class of a nondegenerate solution which is unique (up to scaling) the Yamabe problem has a unique solution as well. This provides examples of a local extension, in the space of conformal classes, of a well-known uniqueness criterion due to Obata.