824 results for decentralised data fusion framework
Abstract:
Resource specialisation, although a fundamental component of ecological theory, is employed in disparate ways. Most definitions derive from simple counts of resource species. We build on recent advances in ecophylogenetics and null model analysis to propose a concept of specialisation that comprises affinities among resources as well as their co-occurrence with consumers. In the distance-based specialisation index (DSI), specialisation is measured as relatedness (phylogenetic or otherwise) of resources, scaled by the null expectation of random use of locally available resources. Thus, specialists use significantly clustered sets of resources, whereas generalists use over-dispersed resources. Intermediate species are classed as indiscriminate consumers. The effectiveness of this approach was assessed with differentially restricted null models, applied to a data set of 168 herbivorous insect species and their hosts. Incorporation of plant relatedness and relative abundance greatly improved specialisation measures compared to taxon counts or simpler null models, which overestimate the fraction of specialists, a problem compounded by insufficient sampling effort. This framework disambiguates the concept of specialisation with an explicit measure applicable to any mode of affinity among resource classes, and is also linked to ecological and evolutionary processes. This will enable a more rigorous deployment of ecological specialisation in empirical and theoretical studies.
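The core of the index described above (observed relatedness of used resources, scaled against a null of random use of locally available resources) can be sketched as follows. This is a minimal illustration of the general idea, not the authors' exact formulation: the function names, the use of mean pairwise distance, and the abundance-weighted sampling-with-replacement null are all assumptions for the sake of the example.

```python
import random
from itertools import combinations
from statistics import mean, stdev

def mean_pairwise_distance(resources, dist):
    """Mean distance (phylogenetic or otherwise) over all pairs of used resources."""
    return mean(dist[a][b] for a, b in combinations(resources, 2))

def dsi(used, pool, abundance, dist, n_null=999, seed=1):
    """Standardized effect size: observed mean pairwise distance among used
    resources vs. a null of random draws from the local pool, weighted by
    resource abundance. Strongly negative -> clustered resources (specialist);
    strongly positive -> over-dispersed (generalist); intermediate values
    correspond to indiscriminate consumers."""
    rng = random.Random(seed)
    obs = mean_pairwise_distance(used, dist)
    weights = [abundance[r] for r in pool]
    null = []
    for _ in range(n_null):
        draw = rng.choices(pool, weights=weights, k=len(used))  # abundance-weighted null
        null.append(mean_pairwise_distance(draw, dist))
    return (obs - mean(null)) / stdev(null)
```

With a toy distance matrix in which resources A and B are close relatives and C and D are distant, a consumer using {A, B} scores negative (clustered), while one using {A, C} scores positive (over-dispersed).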
Abstract:
Often in biomedical research, we deal with continuous (clustered) proportion responses ranging between zero and one quantifying the disease status of the cluster units. Interestingly, the study population might also consist of relatively disease-free as well as highly diseased subjects, contributing to proportion values in the interval [0, 1]. Regression on a variety of parametric densities with support lying in (0, 1), such as beta regression, can assess important covariate effects. However, they are deemed inappropriate due to the presence of zeros and/or ones. To evade this, we introduce a class of general proportion density, and further augment the probabilities of zero and one to this general proportion density, controlling for the clustering. Our approach is Bayesian and presents a computationally convenient framework amenable to available freeware. Bayesian case-deletion influence diagnostics based on q-divergence measures are automatic from the Markov chain Monte Carlo output. The methodology is illustrated using both simulation studies and application to a real dataset from a clinical periodontology study.
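The augmentation step described above (point masses at zero and one added to a density supported on (0, 1)) can be written down directly as a likelihood. The sketch below uses a beta density in the mean/precision parameterization as a stand-in for the paper's general proportion density, and ignores the clustering and the Bayesian machinery; it only illustrates the mixture structure.

```python
import math

def beta_logpdf(y, mu, phi):
    """Beta log-density parameterized by mean mu in (0,1) and precision phi > 0."""
    a, b = mu * phi, (1 - mu) * phi
    return (math.lgamma(phi) - math.lgamma(a) - math.lgamma(b)
            + (a - 1) * math.log(y) + (b - 1) * math.log(1 - y))

def zoab_loglik(y, p0, p1, mu, phi):
    """Log-likelihood of one zero-and-one-augmented observation:
    P(Y=0) = p0, P(Y=1) = p1, and density (1 - p0 - p1) * beta(y; mu, phi)
    on the open interval (0, 1)."""
    if y == 0:
        return math.log(p0)
    if y == 1:
        return math.log(p1)
    return math.log(1 - p0 - p1) + beta_logpdf(y, mu, phi)
```

A regression model would then link mu (and possibly p0, p1) to covariates; here they are treated as fixed numbers.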
Abstract:
Background: High-density tiling arrays and new sequencing technologies are generating rapidly increasing volumes of transcriptome and protein-DNA interaction data. Visualization and exploration of this data is critical to understanding the regulatory logic encoded in the genome by which the cell dynamically affects its physiology and interacts with its environment. Results: The Gaggle Genome Browser is a cross-platform desktop program for interactively visualizing high-throughput data in the context of the genome. Important features include dynamic panning and zooming, keyword search and open interoperability through the Gaggle framework. Users may bookmark locations on the genome with descriptive annotations and share these bookmarks with other users. The program handles large sets of user-generated data using an in-process database and leverages the facilities of SQL and the R environment for importing and manipulating data. A key aspect of the Gaggle Genome Browser is interoperability. By connecting to the Gaggle framework, the genome browser joins a suite of interconnected bioinformatics tools for analysis and visualization with connectivity to major public repositories of sequences, interactions and pathways. To this flexible environment for exploring and combining data, the Gaggle Genome Browser adds the ability to visualize diverse types of data in relation to its coordinates on the genome. Conclusions: Genomic coordinates function as a common key by which disparate biological data types can be related to one another. In the Gaggle Genome Browser, heterogeneous data are joined by their location on the genome to create information-rich visualizations yielding insight into genome organization, transcription and its regulation and, ultimately, a better understanding of the mechanisms that enable the cell to dynamically respond to its environment.
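The conclusion's point, that genomic coordinates act as a common key joining disparate data types, can be illustrated with a toy overlap query. This is not the Gaggle Genome Browser's API; the function and field names are invented for the example.

```python
def join_by_coordinates(tracks, start, end, seq="chr"):
    """Gather features from heterogeneous tracks (transcripts, ChIP peaks,
    tiling-array signal, ...) that overlap a genomic window. The
    (sequence, start, end) coordinates act as the shared join key."""
    hits = {}
    for name, features in tracks.items():
        hits[name] = [f for f in features
                      if f["seq"] == seq and f["start"] < end and f["end"] > start]
    return hits
```

Any track that can express its features in genome coordinates participates in the join without knowing anything about the other tracks.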
Abstract:
Background: The thin-spined porcupine, also known as the bristle-spined rat, Chaetomys subspinosus (Olfers, 1818), the only member of its genus, figures among Brazilian endangered species. In addition to being threatened, it is poorly known, and even its taxonomic status at the family level has long been controversial. The genus Chaetomys was originally regarded as a porcupine in the family Erethizontidae, but some authors classified it as a spiny-rat in the family Echimyidae. Although the dispute seems to be settled in favor of the erethizontid advocates, further discussion of its affinities should be based on a phylogenetic framework. In the present study, we used nucleotide-sequence data from the complete mitochondrial cytochrome b gene and karyotypic information to address this issue. Our molecular analyses included one individual of Chaetomys subspinosus from the state of Bahia in northeastern Brazil, and other hystricognaths. Results: All topologies recovered in our molecular phylogenetic analyses strongly supported Chaetomys subspinosus as a sister clade of the erethizontids. Cytogenetically, Chaetomys subspinosus showed 2n = 52 and FN = 76. Although the sexual pair could not be identified, we assumed that the X chromosome is biarmed. The karyotype included 13 large to medium metacentric and submetacentric chromosome pairs, one small subtelocentric pair, and 12 small acrocentric pairs. The subtelocentric pair 14 had a terminal secondary constriction in the short arm, corresponding to the nucleolar organizer region (Ag-NOR), similar to the erethizontid Sphiggurus villosus, 2n = 42 and FN = 76, and different from the echimyids, in which the secondary constriction is interstitial. Conclusion: Both molecular phylogenies and karyotypical evidence indicated that Chaetomys is closely related to the Erethizontidae rather than to the Echimyidae, although in a basal position relative to the rest of the Erethizontidae. 
The high levels of molecular and morphological divergence suggest that Chaetomys belongs to an early radiation of the Erethizontidae that may have occurred in the Early Miocene, and should be assigned to its own subfamily, the Chaetomyinae.
Abstract:
Excitation functions of quasi-elastic scattering at backward angles have been measured for the ⁶,⁷Li + ¹⁴⁴Sm systems at near-barrier energies, and fusion barrier distributions have been extracted from the first derivatives of the experimental cross sections with respect to the bombarding energies. The data have been analyzed in the framework of continuum discretized coupled-channel calculations, and the results have been interpreted in terms of the influence exerted by the inclusion of different reaction channels, with emphasis on the role played by the projectile breakup.
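The extraction step, taking the first derivative of the quasi-elastic excitation function, amounts to numerically differentiating the ratio of quasi-elastic to Rutherford cross sections, D(E) = -d(σ_qe/σ_R)/dE. A minimal central-difference sketch (the actual analysis would also propagate uncertainties and use effective energies, which this omits):

```python
def barrier_distribution(energies, ratios):
    """Approximate D(E) = -d(sigma_qe / sigma_R)/dE by central differences.
    energies: bombarding energies in increasing order; ratios: quasi-elastic
    to Rutherford cross-section ratios measured at those energies."""
    d, e_mid = [], []
    for i in range(1, len(energies) - 1):
        de = energies[i + 1] - energies[i - 1]
        d.append(-(ratios[i + 1] - ratios[i - 1]) / de)
        e_mid.append(energies[i])
    return e_mid, d
```

Since the ratio falls from ~1 below the barrier to ~0 above it, the derivative is non-negative and integrates (over energy) to roughly the total drop of the ratio.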
Abstract:
The MINOS experiment at Fermilab has recently reported a tension between the oscillation results for neutrinos and antineutrinos. We show that this tension, if it persists, can be understood in the framework of nonstandard neutrino interactions (NSI). While neutral current NSI (nonstandard matter effects) are disfavored by atmospheric neutrinos, a new charged current coupling between tau neutrinos and nucleons can fit the MINOS data without violating other constraints. In particular, we show that loop-level contributions to flavor-violating tau decays are sufficiently suppressed. However, conflicts with existing bounds could arise once the effective theory considered here is embedded into a complete renormalizable model. We predict the future sensitivity of the T2K and NOvA experiments to the NSI parameter region favored by the MINOS fit, and show that both experiments are excellent tools to test the NSI interpretation of the MINOS data.
Abstract:
Thermodynamic properties of bread dough (fusion enthalpy, apparent specific heat, initial freezing point and unfreezable water) were measured at temperatures from −40 °C to 35 °C using differential scanning calorimetry. The initial freezing point was also calculated based on the water activity of dough. The apparent specific heat varied as a function of temperature: specific heat in the freezing region varied from 1.7 to 23.1 J g⁻¹ °C⁻¹, and was constant at temperatures above freezing (2.7 J g⁻¹ °C⁻¹). Unfreezable water content varied from 0.174 to 0.182 g/g of total product. Values of heat capacity as a function of temperature were correlated using thermodynamic models. A modification for low-moisture foodstuffs (such as bread dough) was successfully applied to the experimental data. (C) 2010 Elsevier Ltd. All rights reserved.
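One common way to compute an initial freezing point from water activity, as mentioned above, is the Clausius-Clapeyron-type relation ln(a_w) = (ΔH_f/R)(1/T₀ − 1/T_f). The sketch below uses that textbook relation with standard constants; it is not necessarily the exact model the authors applied.

```python
import math

def initial_freezing_point(a_w, dh_fus=6010.0, r=8.314, t0=273.15):
    """Initial freezing point T_f (K) from water activity a_w, solving
    ln(a_w) = (dh_fus / r) * (1/t0 - 1/T_f) for T_f.
    dh_fus: molar heat of fusion of water (J/mol); t0: freezing point
    of pure water (K)."""
    inv_tf = 1.0 / t0 - (r / dh_fus) * math.log(a_w)
    return 1.0 / inv_tf
```

For pure water (a_w = 1) this returns 273.15 K; lowering a_w depresses the freezing point, as expected.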
Abstract:
The cost of spatial join processing can be very high because of the large sizes of spatial objects and the computation-intensive spatial operations. While parallel processing seems a natural solution to this problem, it is not clear how spatial data can be partitioned for this purpose. Various spatial data partitioning methods are examined in this paper. A framework combining the data-partitioning techniques used by most parallel join algorithms in relational databases and the filter-and-refine strategy for spatial operation processing is proposed for parallel spatial join processing. Object duplication caused by multi-assignment in spatial data partitioning can result in extra CPU cost as well as extra communication cost. We find that the key to overcome this problem is to preserve spatial locality in task decomposition. We show in this paper that a near-optimal speedup can be achieved for parallel spatial join processing using our new algorithms.
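The partition-then-filter pipeline above, including the duplicate problem caused by multi-assignment, can be sketched in a few lines. This is an illustrative toy, not the paper's algorithms; the de-duplication shown (report a pair only in the grid cell containing the lower-left corner of the MBR intersection) is one standard trick for handling multi-assignment.

```python
def mbr_overlap(a, b):
    """Filter step: do two minimum bounding rectangles (x1, y1, x2, y2) intersect?"""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def grid_partition(objects, cell):
    """Multi-assignment: each object is assigned to every grid cell its MBR
    overlaps, so objects spanning cell boundaries are duplicated."""
    parts = {}
    for obj_id, mbr in objects:
        for cx in range(int(mbr[0] // cell), int(mbr[2] // cell) + 1):
            for cy in range(int(mbr[1] // cell), int(mbr[3] // cell) + 1):
                parts.setdefault((cx, cy), []).append((obj_id, mbr))
    return parts

def spatial_join(r, s, cell=10.0):
    """Partitioned filter step of a filter-and-refine spatial join. Duplicates
    from multi-assignment are suppressed by reporting a candidate pair only in
    the cell that contains the lower-left corner of the MBR intersection."""
    pr, ps = grid_partition(r, cell), grid_partition(s, cell)
    out = []
    for key in pr.keys() & ps.keys():
        for rid, rm in pr[key]:
            for sid, sm in ps[key]:
                if mbr_overlap(rm, sm):
                    ref = (max(rm[0], sm[0]), max(rm[1], sm[1]))
                    if (int(ref[0] // cell), int(ref[1] // cell)) == key:
                        out.append((rid, sid))
    return sorted(out)
```

The candidate pairs returned by the filter step would then go to a refinement step testing the exact geometries.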
Abstract:
Recent structural studies of proteins mediating membrane fusion reveal intriguing similarities between diverse viral and mammalian systems. Particularly striking is the close similarity between the transmembrane envelope glycoproteins from the retrovirus HTLV-1 and the filovirus Ebola. These similarities suggest similar mechanisms of membrane fusion. The model that fits most currently available data suggests fusion activation in viral systems is driven by a symmetrical conformational change triggered by an activation event such as receptor binding or a pH change. The mammalian vesicle fusion mediated by the SNARE protein complex most likely occurs by a similar mechanism but without symmetry constraints.
Abstract:
The focus for interventions and research on physical activity has moved away from vigorous activity to moderate-intensity activities, such as walking. In addition, a social ecological approach to physical activity research and practice is recommended. This approach considers the influence of the environment and policies on physical activity. Although there is limited empirical published evidence related to the features of the physical environment that influence physical activity, urban planning and transport agencies have developed policies and strategies that have the potential to influence whether people walk or cycle in their neighbourhood. This paper presents the development of a framework of the potential environmental influences on walking and cycling based on published evidence and policy literature, interviews with experts and a Delphi study. The framework includes four features: functional, safety, aesthetic and destination; as well as the hypothesised factors that contribute to each of these features of the environment. In addition, the Delphi experts determined the perceived relative importance of these factors. Based on these factors, a data collection tool will be developed and the frameworks will be tested through the collection of environmental information on neighbourhoods, where data on the walking and cycling patterns have been collected previously. Identifying the environmental factors that influence walking and cycling will allow the inclusion of a public health perspective as well as those of urban planning and transport in the design of built environments. (C) 2002 Elsevier Science Ltd., All rights reserved.
Abstract:
Purpose - The purpose of this paper is to provide a framework for radio frequency identification (RFID) technology adoption considering company size and five dimensions of analysis: RFID applications; expected benefits; business drivers or motivations; barriers and inhibitors; and organizational factors. Design/methodology/approach - A framework for RFID adoption derived from the literature and from practical experience on the subject is developed. This framework provides a conceptual basis for analyzing a survey conducted with 114 companies in Brazil. Findings - Many companies have been developing RFID initiatives in order to identify potential applications and map the benefits associated with their implementation. The survey highlights the importance of business drivers in the RFID implementation stage, and shows that companies implement RFID focusing on a few specific applications. However, there is a weak association between expected benefits and business challenges and the current level of RFID technology adoption in Brazil. Research limitations/implications - The paper is not exhaustive, since RFID adoption in Brazil was at an early stage during the survey timeline. Originality/value - The main contribution of the paper is a framework for analyzing RFID technology adoption. The authors use this framework to analyze RFID adoption in Brazil, and it proved useful for identifying key issues in technology adoption. The paper is useful to any researchers or practitioners who are focused on technology adoption, in particular RFID technology.
Abstract:
Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
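Of the prediction methods listed above, the quantitative matrix is the simplest to show concretely: a peptide's score is the sum of position-specific residue weights, and peptides above a cutoff are called candidate binders. The matrix values and threshold below are invented for illustration; real matrices are trained on measured binding data.

```python
def score_peptide(peptide, matrix):
    """Quantitative-matrix score: sum of position-specific residue weights.
    matrix[i] maps an amino-acid letter to its contribution at position i;
    residues absent from the matrix contribute 0."""
    return sum(matrix[i].get(aa, 0.0) for i, aa in enumerate(peptide))

def predict_binders(peptides, matrix, threshold):
    """Rank candidate peptides by matrix score and keep those above a cutoff."""
    scored = sorted(((score_peptide(p, matrix), p) for p in peptides), reverse=True)
    return [(p, s) for s, p in scored if s >= threshold]
```

As the abstract stresses, such a model must be validated like any laboratory procedure; a score above threshold is a prediction to be tested, not a result.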
Abstract:
Study Design. Systematic Review. Objectives. To assess the effects of massage therapy for nonspecific low back pain. Summary of Background Data. Low back pain is one of the most common and costly musculoskeletal problems in modern society. Proponents of massage therapy claim it can minimize pain and disability, and speed return to normal function. Methods. We searched MEDLINE, EMBASE, CINAHL from their beginning to May 2008. We also searched the Cochrane Central Register of Controlled Trials (The Cochrane Library 2006, issue 3), HealthSTAR and Dissertation abstracts up to 2006. There were no language restrictions. References in the included studies and in reviews of the literature were screened. The studies had to be randomized or quasi-randomized trials investigating the use of any type of massage (using the hands or a mechanical device) as a treatment for nonspecific low back pain. Two review authors selected the studies, assessed the risk of bias using the criteria recommended by the Cochrane Back Review Group, and extracted the data using standardized forms. Both qualitative and meta-analyses were performed. Results. Thirteen randomized trials were included. Eight had a high risk and 5 had a low risk of bias. One study was published in German and the rest in English. Massage was compared to an inert therapy (sham treatment) in 2 studies that showed that massage was superior for pain and function on both short- and long-term follow-ups. In 8 studies, massage was compared to other active treatments. They showed that massage was similar to exercises, and massage was superior to joint mobilization, relaxation therapy, physical therapy, acupuncture, and self-care education. One study showed that reflexology on the feet had no effect on pain and functioning. The beneficial effects of massage in patients with chronic low back pain lasted at least 1 year after the end of the treatment. Two studies compared 2 different techniques of massage. 
One concluded that acupuncture massage produces better results than classic (Swedish) massage and another concluded that Thai massage produces similar results to classic (Swedish) massage. Conclusion. Massage might be beneficial for patients with subacute and chronic nonspecific low back pain, especially when combined with exercises and education. The evidence suggests that acupuncture massage is more effective than classic massage, but this needs confirmation. More studies are needed to confirm these conclusions, to assess the impact of massage on return to work, and to determine the cost-effectiveness of massage as an intervention for low back pain.
Abstract:
Test templates and a test template framework are introduced as useful concepts in specification-based testing. The framework can be defined using any model-based specification notation and used to derive tests from model-based specifications-in this paper, it is demonstrated using the Z notation. The framework formally defines test data sets and their relation to the operations in a specification and to other test data sets, providing structure to the testing process. Flexibility is preserved, so that many testing strategies can be used. Important application areas of the framework are discussed, including refinement of test data, regression testing, and test oracles.
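The core idea, partitioning an operation's valid input space into named test templates from which concrete test data are drawn, can be caricatured without Z notation. This is only a crude stand-in for schema-based template derivation; the predicate-based representation is an assumption made for the example.

```python
def derive_test_templates(domain, partitions):
    """Split an operation's input space into named test templates, one per
    partition predicate. Each template is a set of candidate test data;
    a test suite picks at least one concrete value from each template."""
    return {name: [x for x in domain if pred(x)] for name, pred in partitions.items()}
```

For an operation on small integers, partitioning by sign yields three templates, and a structured suite selects representatives from each.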
Wavelet correlation between subjects: A time-scale data driven analysis for brain mapping using fMRI
Abstract:
Functional magnetic resonance imaging (fMRI) based on the BOLD signal has been used to indirectly measure the local neural activity induced by cognitive tasks or stimulation. Most fMRI data analysis is carried out using the general linear model (GLM), a statistical approach which predicts the changes in the observed BOLD response based on an expected hemodynamic response function (HRF). In cases where the task is cognitively complex, or in cases of disease, variations in the shape and/or delay of the response may reduce the reliability of results. This paper introduces a novel exploratory method for fMRI data, which attempts to discriminate neurophysiological signals induced by the stimulation protocol from artifacts or other confounding factors. The new method is based on the fusion of correlation analysis and the discrete wavelet transform, to identify similarities in the time course of the BOLD signal in a group of volunteers. We illustrate the usefulness of this approach by analyzing fMRI data from normal subjects presented with standardized human face pictures expressing different degrees of sadness. The results show that the proposed wavelet correlation analysis has greater statistical power than conventional GLM or time-domain intersubject correlation analysis. (C) 2010 Elsevier B.V. All rights reserved.
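The combination described, intersubject correlation computed on wavelet coefficients rather than raw time courses, can be sketched with a hand-rolled Haar transform. This is a minimal illustration of the idea, not the authors' pipeline: the choice of the Haar wavelet, the normalization, and the per-scale Pearson correlation are all assumptions for the example.

```python
import math

def haar_dwt(signal):
    """Full Haar decomposition: returns a list of detail-coefficient vectors,
    finest scale first. The signal length must be a power of two."""
    levels, approx = [], list(signal)
    while len(approx) > 1:
        detail = [(approx[i] - approx[i + 1]) / 2 for i in range(0, len(approx), 2)]
        approx = [(approx[i] + approx[i + 1]) / 2 for i in range(0, len(approx), 2)]
        levels.append(detail)
    return levels

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def scalewise_correlation(subj_a, subj_b):
    """Correlate two subjects' BOLD time courses scale by scale in the
    wavelet domain, instead of correlating the raw time series."""
    da, db = haar_dwt(subj_a), haar_dwt(subj_b)
    return [pearson(a, b) for a, b in zip(da, db) if len(a) > 1]
```

Working per scale lets fast artifacts and slow drifts land in different coefficient vectors, so a shared stimulus-locked response can show up at some scales even when the raw time-domain correlation is diluted.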