29 results for least common subgraph algorithm
Abstract:
We investigated relationships between richness patterns of rare and common grassland species and environmental factors, focussing on comparing the degree to which the richness patterns of rare and common species are determined by simple environmental variables. Using data collected in the Machair grassland of the Outer Hebrides of Scotland, we fitted spatial regression models, using a suite of grazing, soil physicochemical and microtopographic covariates, to nested sub-assemblages of vascular and non-vascular species ranked according to rarity. As expected, we found that common species drive richness patterns, but rare vascular species had significantly stronger affinity for high-richness areas. After correcting for the prevalence of individual species distributions, we found differences between common and rare species in 1) the amount of variation explained: richness patterns of common species were better summarised by simple environmental variables; 2) the direction of associations: coefficients of the environmental variables showed systematic trends between common and rare species, with sign reversal for several factors; and 3) associations with rare environments: richness patterns of rare vascular species significantly matched rare environments, but those of non-vascular species did not. Richness patterns of rare species, at least in this system, may be intrinsically less predictable than those of common species.
Abstract:
A bit-level systolic array system is proposed for the Winograd Fourier transform algorithm. The design uses bit-serial arithmetic and, in common with other systolic arrays, features nearest neighbor interconnections, regularity, and high throughput. The short interconnections in this method contrast favorably with the long interconnections between butterflies required in the FFT. The structure is well suited to VLSI implementations. It is demonstrated how long transforms can be implemented with components designed to perform short-length transforms. These components build into longer transforms, preserving the regularity and structure of the short-length transform design.
Abstract:
This paper presents a novel method that leverages reasoning capabilities in a computer vision system dedicated to human action recognition. The proposed methodology is decomposed into two stages. First, a machine-learning-based algorithm, known as bag of words, gives a first estimate of action classification from video sequences by performing an image feature analysis. These results are then passed to a common-sense reasoning system, which analyses, selects and corrects the initial estimation yielded by the machine learning algorithm. This second stage resorts to the knowledge implicit in the rationality that motivates human behaviour. Experiments are performed in realistic conditions, where poor recognition rates obtained by the machine learning techniques alone are significantly improved by the second stage, in which common-sense knowledge and reasoning capabilities have been leveraged. This demonstrates the value of integrating common-sense capabilities into a computer vision pipeline. © 2012 Elsevier B.V. All rights reserved.
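The first-stage representation described in this abstract can be sketched as a simple bag-of-words histogram. The vocabulary and tokens below are hypothetical stand-ins for the quantized local image features ("visual words") that such pipelines cluster from video sequences; a classifier is then trained on the resulting fixed-length vectors:

```python
from collections import Counter

def bag_of_words(tokens_per_clip, vocabulary):
    """Represent each video clip as a histogram of its quantized local
    features ("visual words") counted over a fixed vocabulary."""
    return [[Counter(tokens)[word] for word in vocabulary]
            for tokens in tokens_per_clip]
```

For example, `bag_of_words([["wave", "step", "wave"]], ["wave", "step", "jump"])` yields `[[2, 1, 0]]`.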
Abstract:
PURPOSE: To evaluate the sensitivity and specificity of the screening mode of the Humphrey-Welch Allyn frequency-doubling technology (FDT), Octopus tendency-oriented perimetry (TOP), and the Humphrey Swedish Interactive Threshold Algorithm (SITA)-fast (HSF) in patients with glaucoma. DESIGN: A comparative consecutive case series. METHODS: This was a prospective study conducted in the glaucoma unit of an academic department of ophthalmology. One eye of each of 70 consecutive glaucoma patients and 28 age-matched normal subjects was studied. Eyes were examined with program C-20 of FDT, G1-TOP, and 24-2 HSF in one visit and in random order. The gold standard for glaucoma was the presence of a typical glaucomatous optic disk appearance on stereoscopic examination, as judged by a glaucoma expert. The sensitivity and specificity, positive and negative predictive values, and receiver operating characteristic (ROC) curves of two algorithms for the FDT screening test, two algorithms for TOP, and three algorithms for HSF, as defined before the start of this study, were evaluated. The time required for each test was also analyzed. RESULTS: Values for the area under the ROC curve ranged from 82.5% to 93.9%. The largest area under the ROC curve (93.9%) was obtained with the FDT criterion defining abnormality as the presence of at least one abnormal location. Mean test time was 1.08 ± 0.28 minutes, 2.31 ± 0.28 minutes, and 4.14 ± 0.57 minutes for FDT, TOP, and HSF, respectively. The difference in testing time was statistically significant (P < .0001). CONCLUSIONS: The C-20 FDT, G1-TOP, and 24-2 HSF appear to be useful tools for diagnosing glaucoma. The C-20 FDT and G1-TOP tests take approximately one quarter and one half, respectively, of the time taken by 24-2 HSF. © 2002 by Elsevier Science Inc. All rights reserved.
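The sensitivity and specificity evaluated in this study reduce to simple confusion-matrix ratios. A minimal sketch, with hypothetical labels (1 = glaucoma per the gold-standard disc assessment, 0 = normal) and binary test verdicts:

```python
def sensitivity_specificity(y_true, y_test):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    y_true holds gold-standard labels, y_test the screening verdicts."""
    tp = sum(1 for t, p in zip(y_true, y_test) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_test) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_test) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_test) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

Varying the abnormality criterion (e.g. "at least one abnormal location") moves a test along its ROC curve by trading sensitivity against specificity.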
Abstract:
In distributed networks, it is often useful for the nodes to be aware of dense subgraphs; e.g., such a dense subgraph could reveal dense substructures in otherwise sparse graphs (e.g. the World Wide Web or social networks); these might reveal community clusters or dense regions for possibly maintaining good communication infrastructure. In this work, we address the problem of self-awareness of nodes in a dynamic network with regard to graph density, i.e., we give distributed algorithms for maintaining dense subgraphs that the member nodes are aware of. The only knowledge that the nodes need is that of the dynamic diameter D, i.e., the maximum number of rounds it takes for a message to traverse the dynamic network. For our work, we consider a model where the number of nodes is fixed, but a powerful adversary can add or remove a limited number of edges from the network at each time step. The communication is by broadcast only and follows the CONGEST model. Our algorithms are continuously executed on the network, and at any time (after some initialization) each node will be aware of whether it is part (or not) of a particular dense subgraph. We give algorithms that (2 + ε)-approximate the densest subgraph and (3 + ε)-approximate the at-least-k-densest subgraph (for a given parameter k). Our algorithms work for a wide range of parameter values and run in O(D log n) time. Further, a special case of our results also gives the first fully decentralized approximation algorithms for the densest and at-least-k-densest subgraph problems for static distributed graphs. © 2012 Springer-Verlag.
Abstract:
In distributed networks, some groups of nodes may have more inter-connections, perhaps due to their larger bandwidth availability or communication requirements. In many scenarios, it may be useful for the nodes to know if they form part of a dense subgraph; e.g., such a dense subgraph could form a high-bandwidth backbone for the network. In this work, we address the problem of self-awareness of nodes in a dynamic network with regard to graph density, i.e., we give distributed algorithms for maintaining dense subgraphs (subgraphs that the member nodes are aware of). The only knowledge that the nodes need is that of the dynamic diameter D, i.e., the maximum number of rounds it takes for a message to traverse the dynamic network. For our work, we consider a model where the number of nodes is fixed, but a powerful adversary can add or remove a limited number of edges from the network at each time step. The communication is by broadcast only and follows the CONGEST model in the sense that only messages of O(log n) size are permitted, where n is the number of nodes in the network. Our algorithms are continuously executed on the network, and at any time (after some initialization) each node will be aware of whether it is part (or not) of a particular dense subgraph. We give algorithms that approximate both the densest subgraph, i.e., the subgraph of the highest density in the network, and the at-least-k-densest subgraph (for a given parameter k), i.e., the densest subgraph of size at least k. We give a (2 + ε)-approximation algorithm for the densest subgraph problem. The at-least-k-densest subgraph problem is known to be NP-hard in the general case in the centralized setting, and the best known algorithm gives a 2-approximation. We present an algorithm that maintains a (3 + ε)-approximation in our distributed, dynamic setting. Our algorithms run in O(D log n) time. © 2012 Authors.
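In the static, centralized setting that these guarantees are measured against, the classical baseline is Charikar's greedy peeling algorithm, which 2-approximates the maximum density |E| / |V|. A minimal single-machine sketch (this is not the paper's distributed CONGEST algorithm, only the sequential analogue):

```python
from collections import defaultdict

def densest_subgraph(edges):
    """Greedy peeling: repeatedly delete a minimum-degree vertex and
    remember the intermediate vertex set with the highest density
    |E| / |V|. The best set seen is a 2-approximation of the optimum."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes, m = set(adj), len(edges)
    best_set, best_density = set(nodes), m / len(nodes)
    while len(nodes) > 1:
        v = min(nodes, key=lambda x: len(adj[x]))  # peel min-degree vertex
        m -= len(adj[v])
        for u in adj[v]:
            adj[u].discard(v)
        nodes.discard(v)
        if m / len(nodes) > best_density:
            best_set, best_density = set(nodes), m / len(nodes)
    return best_set, best_density
```

On a 4-clique with one pendant vertex attached, peeling strips the pendant first and returns the clique at density 6/4 = 1.5.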
Abstract:
Freshwater and brackish microalgal toxins, such as microcystins, cylindrospermopsins, paralytic toxins, anatoxins or other neurotoxins, are produced during the overgrowth of certain phytoplankton and benthic cyanobacteria, which include either prokaryotic or eukaryotic microalgae. Although further studies are necessary to define the biological role of these toxins, at least some of them are known to be poisonous to humans and wildlife exposed to them in these aquatic systems. The World Health Organization (WHO) has established a provisional recommended limit of 1 μg of microcystin-LR per liter of drinking water. In this work we present a microsphere-based multi-detection method for five classes of freshwater and brackish toxins: microcystin-LR (MC-LR), cylindrospermopsin (CYN), anatoxin-a (ANA-a), saxitoxin (STX) and domoic acid (DA). Five inhibition assays were developed using different binding proteins and microsphere classes coupled to a flow-cytometry Luminex system. The assays were then combined into one method for the simultaneous detection of the toxins. The IC50 values obtained with this method were 1.9 ± 0.1 μg L−1 for MC-LR, 1.3 ± 0.1 μg L−1 for CYN, 61 ± 4 μg L−1 for ANA-a, 5.4 ± 0.4 μg L−1 for STX and 4.9 ± 0.9 μg L−1 for DA. Lyophilized cyanobacterial culture samples were extracted using a simple procedure and analyzed by the Luminex method and by UPLC–IT-TOF-MS. Similar quantification was obtained by both methods for all toxins except ANA-a, for which the estimated content was lower when using UPLC–IT-TOF-MS. Therefore, this newly developed multiplexed detection method provides a rapid, simple, semi-quantitative screening tool for the simultaneous detection of five environmentally important freshwater and brackish toxins, in buffer and in cyanobacterial extracts.
Abstract:
Recent molecular-typing studies suggest cross-infection as one of the potential acquisition pathways for Pseudomonas aeruginosa in patients with cystic fibrosis (CF). In Australia, there is only limited evidence of unrelated patients sharing indistinguishable P. aeruginosa strains. We therefore examined the point-prevalence, distribution, diversity and clinical impact of P. aeruginosa strains in Australian CF patients nationally. 983 patients attending 18 Australian CF centres provided 2887 sputum P. aeruginosa isolates for genotyping by enterobacterial repetitive intergenic consensus-PCR assays with confirmation by multilocus sequence typing. Demographic and clinical details were recorded for each participant. Overall, 610 (62%) patients harboured at least one of 38 shared genotypes. Most shared strains were in small patient clusters from a limited number of centres. However, the two predominant genotypes, AUST-01 and AUST-02, were widely dispersed, being detected in 220 (22%) and 173 (18%) patients attending 17 and 16 centres, respectively. AUST-01 was associated with significantly greater treatment requirements than unique P. aeruginosa strains. Multiple clusters of shared P. aeruginosa strains are common in Australian CF centres. At least one of the predominant and widespread genotypes is associated with increased healthcare utilisation. Longitudinal studies are now needed to determine the infection control implications of these findings.
Abstract:
A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can then be cast as a model selection problem in which a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches. However, they may produce only suboptimal models and can be trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and the backward stages to avoid repetitive computations, using the inherent orthogonal properties of the least squares methods. Furthermore, a new term-exchanging scheme for backward model refinement is introduced to reduce the computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness achieved by the proposed technique in comparison with some popular methods.
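The forward stage with the error reduction ratio criterion can be sketched as follows. This is a plain Gram-Schmidt version for illustration, not the fast recursive or unified two-stage formulation of the paper; each ERR is the fraction of the output energy removed by adding one (orthogonalized) regressor:

```python
def forward_select(candidates, y, n_terms):
    """Forward subset selection by the error reduction ratio (ERR):
    greedily pick the candidate regressor with the largest ERR, then
    orthogonalize the residual and the remaining candidates against it."""
    def dot(a, b):
        return sum(x * z for x, z in zip(a, b))
    remaining = {i: list(p) for i, p in enumerate(candidates)}
    residual = list(y)
    y_energy = dot(y, y)
    selected, errs = [], []
    for _ in range(n_terms):
        best_i, best_err = None, -1.0
        for i, p in remaining.items():
            pp = dot(p, p)
            if pp == 0.0:  # candidate fully explained by chosen terms
                continue
            err = dot(p, residual) ** 2 / (pp * y_energy)
            if err > best_err:
                best_i, best_err = i, err
        p = remaining.pop(best_i)
        pp = dot(p, p)
        g = dot(p, residual) / pp
        residual = [r - g * x for r, x in zip(residual, p)]
        for q in remaining.values():  # Gram-Schmidt deflation
            h = dot(p, q) / pp
            for j in range(len(q)):
                q[j] -= h * p[j]
        selected.append(best_i)
        errs.append(best_err)
    return selected, errs
```

When the selected terms span the output exactly, the ERRs sum to 1, i.e. all of the output energy is accounted for.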
Abstract:
A forward and backward least angle regression (LAR) algorithm is proposed to construct the nonlinear autoregressive model with exogenous inputs (NARX) that is widely used to describe a large class of nonlinear dynamic systems. The main objective of this paper is to improve the model sparsity and generalization performance of the original forward LAR algorithm. This is achieved by introducing a replacement scheme using an additional backward LAR stage. The backward stage replaces insignificant model terms selected by forward LAR with more significant ones, leading to an improved model in terms of compactness and performance. A numerical example constructing four types of NARX models, namely polynomials, radial basis function (RBF) networks, neuro-fuzzy and wavelet networks, is presented to illustrate the effectiveness of the proposed technique in comparison with some popular methods.
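The backward replacement idea can be sketched generically, independent of LAR itself: repeatedly try swapping each selected term for an unused candidate and accept any swap that strictly lowers the fitting error. Here `loss` is a hypothetical stand-in for whatever criterion the model uses, and all names are illustrative:

```python
def backward_refine(selected, pool, loss):
    """Backward refinement stage: replace a selected term with an unused
    candidate whenever the swap strictly lowers the loss. Terminates
    because the loss decreases with every accepted swap."""
    improved = True
    while improved:
        improved = False
        for i in range(len(selected)):
            for cand in sorted(pool):  # snapshot; pool mutates on a swap
                trial = selected[:i] + [cand] + selected[i + 1:]
                if loss(trial) < loss(selected):
                    pool.add(selected[i])
                    pool.remove(cand)
                    selected = trial
                    improved = True
                    break
    return selected
```

Because only strict improvements are accepted, the final model is never worse than the forward-selected one, which is the essential property of the two-stage scheme.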
Abstract:
This theoretical paper attempts to define some of the key components and challenges involved in creating embodied conversational agents that can be genuinely interesting conversational partners. Wittgenstein's argument concerning talking lions emphasizes the importance of having a shared common ground as a basis for conversational interactions. Virtual bats suggests that, for some people at least, it is important that there be a feeling of authenticity concerning a subjectively experiencing entity that can convey what it is like to be that entity. Electric sheep reminds us of the importance of empathy in human conversational interaction, and that we should provide a full communicative repertoire of both verbal and non-verbal components if we are to create genuinely engaging interactions; indeed, we may be making the task more difficult, rather than easier, if we leave out non-verbal aspects of communication. Finally, analogical peacocks highlights the importance of between-minds alignment and establishes the longer-term goal of being interesting, creative, and humorous if an embodied conversational agent is to be a truly engaging conversational partner. Some potential directions and solutions for addressing these issues are suggested.
Abstract:
This paper formulates a linear kernel support vector machine (SVM) as a regularized least-squares (RLS) problem. By defining a set of indicator variables for the errors, the solution to the RLS problem is represented as an equation that relates the error vector to the indicator variables. Through partitioning the training set, the SVM weights and bias are expressed analytically using the support vectors. It is also shown how this approach naturally extends to SVMs with nonlinear kernels whilst avoiding the need to make use of Lagrange multipliers and duality theory. A fast iterative solution algorithm based on Cholesky decomposition with permutation of the support vectors is suggested as a solution method. The properties of our SVM formulation are analyzed and compared with standard SVMs using a simple example that can be illustrated graphically. The correctness and behavior of our solution (derived purely in the primal context of RLS) are demonstrated using a set of public benchmarking problems for both linear and nonlinear SVMs.
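A minimal primal sketch of the regularized least-squares view of a linear classifier: gradient descent on a squared loss with ±1 labels. This illustrates only the RLS objective; it is not the paper's Cholesky-based solver, its support-vector partitioning, or a true hinge-loss SVM (squared loss penalizes every point, not just margin violators):

```python
def rls_classifier(X, y, lam=0.1, lr=0.1, epochs=500):
    """Fit w, b by gradient descent on the primal RLS objective
    (1/n) * sum_i (w.x_i + b - y_i)^2 + lam * |w|^2, labels in {-1, +1}.
    Prediction rule: sign(w.x + b)."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [2.0 * lam * wj for wj in w], 0.0  # ridge term gradient
        for xi, yi in zip(X, y):
            e = sum(wj * xj for wj, xj in zip(w, xi)) + b - yi
            for j in range(d):
                gw[j] += 2.0 * e * xi[j] / n
            gb += 2.0 * e / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b
```

On a small linearly separable set, the learned hyperplane classifies all points correctly while the ridge term keeps the weights small.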
Abstract:
The incidence of melanoma has increased rapidly over the past 30 years, and the disease is now the sixth most common cancer among men and women in the U.K. Many patients are diagnosed with or develop metastatic disease, and survival is substantially reduced in these patients. Mutations in the BRAF gene have been identified as key drivers in melanoma cells and are found in around 50% of cutaneous melanomas. Vemurafenib (Zelboraf®; Roche Molecular Systems Inc., Pleasanton, CA, U.S.A.) is the first licensed inhibitor of mutated BRAF, and offers a new first-line option for patients with unresectable or metastatic melanoma who harbour BRAF mutations. Vemurafenib was developed in conjunction with a companion diagnostic, the cobas® 4800 BRAF V600 Mutation Test. The purpose of this paper is to make evidence-based recommendations to facilitate the implementation of BRAF mutation testing and targeted therapy in patients with metastatic melanoma in the U.K. The recommendations are the result of a meeting of an expert panel and have been reviewed by melanoma specialists and representatives of the National Cancer Research Network Clinical Study Group on behalf of the wider melanoma community. This article is intended to be a starting point for practical advice and recommendations, which will no doubt be updated as we gain further experience in personalizing therapy for patients with melanoma.