990 results for Algorithm fusion
Abstract:
Fusion ARTMAP is a self-organizing neural network architecture for multi-channel, or multi-sensor, data fusion. Fusion ARTMAP generalizes the fuzzy ARTMAP architecture in order to adaptively classify multi-channel data. The network has a symmetric organization such that each channel can be dynamically configured to serve as either a data input or a teaching input to the system. An ART module forms a compressed recognition code within each channel. These codes, in turn, become inputs to a single ART system that organizes the global recognition code. When a predictive error occurs, a process called parallel match tracking simultaneously raises vigilances in multiple ART modules until reset is triggered in one of them. Parallel match tracking hereby resets only that portion of the recognition code with the poorest match, or minimum predictive confidence. This internally controlled selective reset process is a type of credit assignment that creates a parsimoniously connected learned network.
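As a rough illustration of parallel match tracking (the names and the vigilance step below are my own; this is not the Fusion ARTMAP implementation), the sketch raises every channel's vigilance in lockstep until the channel with the poorest match is the first to fail its vigilance test and reset.

```python
def parallel_match_track(match_ratios, vigilances, step=1e-3):
    """match_ratios[i]: match in channel i, e.g. |I ^ w| / |I|.
    vigilances[i]: vigilance rho_i of channel i's ART module.
    Returns the index of the first channel to reset; for equal
    starting vigilances this is the channel with the poorest match."""
    vig = list(vigilances)
    while True:
        for i, m in enumerate(match_ratios):
            if m < vig[i]:          # vigilance test fails: reset channel i
                return i
        vig = [rho + step for rho in vig]   # raise all vigilances in parallel
```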
Abstract:
The Fuzzy ART system introduced herein incorporates computations from fuzzy set theory into ART 1. For example, the intersection (∩) operator used in ART 1 learning is replaced by the MIN operator (∧) of fuzzy set theory. Fuzzy ART reduces to ART 1 in response to binary input vectors, but can also learn stable categories in response to analog input vectors. In particular, the MIN operator reduces to the intersection operator in the binary case. Learning is stable because all adaptive weights can only decrease in time. A preprocessing step, called complement coding, uses on-cell and off-cell responses to prevent category proliferation. Complement coding normalizes input vectors while preserving the amplitudes of individual feature activations.
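The operations this abstract names are compact enough to sketch directly. Below is a minimal Python rendering of complement coding, the fuzzy MIN choice/match computation, and the monotone learning rule, using standard Fuzzy ART notation (alpha is the choice parameter, beta the learning rate).

```python
import numpy as np

def complement_code(a):
    """Complement coding: I = (a, 1 - a). This normalizes |I| to the
    feature dimension while preserving individual feature amplitudes,
    which is how category proliferation is prevented."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

def choice_and_match(I, w, alpha=0.001):
    """Fuzzy ART choice and match values for one category. The fuzzy
    MIN (component-wise minimum) reduces to set intersection when
    I and w are binary."""
    overlap = np.minimum(I, w).sum()     # |I ^ w|
    T = overlap / (alpha + w.sum())      # choice function
    match = overlap / I.sum()            # tested against vigilance rho
    return T, match

def learn(I, w, beta=1.0):
    """Learning moves w toward I ^ w, so every weight component can
    only decrease over time, which is the stability property above."""
    return beta * np.minimum(I, w) + (1.0 - beta) * w
```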
Abstract:
This article introduces ART 2-A, an efficient algorithm that emulates the self-organizing pattern recognition and hypothesis testing properties of the ART 2 neural network architecture, but at a speed two to three orders of magnitude faster. Analysis and simulations show how the ART 2-A systems correspond to ART 2 dynamics both at the fast-learn limit and at intermediate learning rates. Intermediate learning rates permit fast commitment of category nodes but slow recoding, analogous to properties of word frequency effects, encoding specificity effects, and episodic memory. Better noise tolerance is hereby achieved without a loss of learning stability. The ART 2 and ART 2-A systems are contrasted with the leader algorithm. The speed of ART 2-A makes practical the use of ART 2 modules in large-scale neural computation.
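The fast-commitment/slow-recoding contrast can be sketched as a single weight update; the form below is an assumed simplification of the ART 2-A learning rule (unit-normalized inputs and weights), not the full system.

```python
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def art2a_update(I, w, committed, beta=0.1):
    """Sketch of ART 2-A learning at an intermediate rate beta:
    an uncommitted node copies the input at once (fast commitment),
    while a committed node moves only slowly toward it (slow
    recoding), which is what buys noise tolerance."""
    I = unit(I)
    if not committed:
        return I
    return unit((1.0 - beta) * w + beta * I)
```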
Abstract:
In this thesis, extensive experiments are first conducted to characterize the performance of the emerging IEEE 802.15.4-2011 ultra-wideband (UWB) standard for indoor localization, and the results demonstrate the accuracy and precision of time-of-arrival measurements for ranging applications. A multipath propagation control technique is synthesized that considers the relationship between transmit power, transmission range, and signal-to-noise ratio. The methodology includes a novel bilateral transmitter output power control algorithm, which is demonstrated to stabilize the multipath channel and enable sub-5 cm instant ranging accuracy in line-of-sight conditions. A fully coupled architecture is proposed for the localization system using a combination of IEEE 802.15.4-2011 UWB and inertial sensors. This architecture not only implements position estimation of the object by fusing the UWB and inertial measurements, but also enables the nodes in the localization network to mutually share positional and other useful information via the UWB channel. The hybrid system has been demonstrated to be capable of simultaneous local positioning and remote tracking of the mobile object. Three fusion algorithms for relative position estimation are proposed: inertial navigation system (INS) only, INS with UWB ranging correction, and orientation plus ranging. Experimental results show that the INS with UWB correction algorithm achieves an average position accuracy of 0.1883 m, an improvement of 83% over the INS alone (1.0994 m) and of 62% over an existing extended Kalman filter tracking algorithm (0.5 m).
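A hedged sketch of the second fusion algorithm ("INS with UWB ranging correction"): dead-reckon from inertial data, then pull the predicted position toward consistency with a UWB range to a known anchor. The fixed gain and all names here are illustrative stand-ins; the thesis presumably derives the correction within a proper filter.

```python
import numpy as np

def ins_predict(p, v, a, dt):
    """Dead-reckoning (INS) step from one acceleration sample."""
    p_new = p + v * dt + 0.5 * a * dt**2
    v_new = v + a * dt
    return p_new, v_new

def uwb_correct(p_pred, anchor, uwb_range, gain=0.5):
    """Correct the INS position with a UWB range to a fixed anchor.
    'gain' plays the role of a (fixed) Kalman gain; a full EKF would
    compute it from state and measurement covariances."""
    predicted = np.linalg.norm(p_pred - anchor)
    residual = uwb_range - predicted
    los = (p_pred - anchor) / predicted   # line-of-sight unit vector
    return p_pred + gain * residual * los
```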
Abstract:
We revisit the well-known problem of sorting under partial information: sort a finite set given the outcomes of comparisons between some pairs of elements. The input is a partially ordered set P, and solving the problem amounts to discovering an unknown linear extension of P, using pairwise comparisons. The information-theoretic lower bound on the number of comparisons needed in the worst case is log e(P), the binary logarithm of the number of linear extensions of P. In a breakthrough paper, Jeff Kahn and Jeong Han Kim (STOC 1992) showed that there exists a polynomial-time algorithm for the problem achieving this bound up to a constant factor. Their algorithm invokes the ellipsoid algorithm at each iteration for determining the next comparison, making it impractical. We develop efficient algorithms for sorting under partial information. Like Kahn and Kim, our approach relies on graph entropy. However, our algorithms differ in essential ways from theirs. Rather than resorting to convex programming for computing the entropy, we approximate the entropy, or make sure it is computed only once in a restricted class of graphs, permitting the use of a simpler algorithm. Specifically, we present: an O(n^2) algorithm performing O(log n · log e(P)) comparisons; an O(n^2.5) algorithm performing at most (1+ε) log e(P) + O_ε(n) comparisons; and an O(n^2.5) algorithm performing O(log e(P)) comparisons. All our algorithms are simple to implement. © 2010 ACM.
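The quantity e(P), whose binary logarithm gives the lower bound, can be made concrete by brute force on a tiny poset; this only illustrates the bound, not the paper's polynomial-time algorithms.

```python
from itertools import permutations
from math import log2

def count_linear_extensions(n, relations):
    """Count linear extensions of a poset on {0, ..., n-1} given as
    (a, b) pairs meaning a < b. Brute force: feasible only for tiny n."""
    count = 0
    for perm in permutations(range(n)):
        pos = {x: i for i, x in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            count += 1
    return count

# P on {0,1,2} with 0 < 1 and 0 < 2 has e(P) = 2, so any comparison
# sort needs at least log2(2) = 1 more comparison in the worst case.
e = count_linear_extensions(3, [(0, 1), (0, 2)])
print(e, log2(e))
```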
Abstract:
As more diagnostic testing options become available to physicians, it becomes more difficult to combine various types of medical information in order to optimize the overall diagnosis. To improve diagnostic performance, here we introduce an approach to optimize a decision-fusion technique to combine heterogeneous information, such as from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the area under the receiver operating characteristic (ROC) curve (AUC) and the normalized partial area under the curve (pAUC). This study used four classifiers: linear discriminant analysis (LDA), an artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p < 0.02), achieving AUC = 0.85 ± 0.01. DF-P surpassed the other classifiers in terms of pAUC (p < 0.01), reaching pAUC = 0.38 ± 0.02. For the mass data set, DF-A outperformed both the ANN and the LDA (p < 0.04), achieving AUC = 0.94 ± 0.01. Although for this data set there were no statistically significant differences among the classifiers' pAUC values (pAUC = 0.57 ± 0.07 to 0.67 ± 0.05, p > 0.10), DF-P did significantly improve specificity versus the LDA at both 98% and 100% sensitivity (p < 0.04). In conclusion, decision fusion directly optimized clinically significant performance measures such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets.
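As a hedged illustration only (the abstract does not specify how DF-A searches the fusion weights), the empirical AUC can be computed from the rank statistic and a linear fusion weighting grid-searched against it:

```python
import numpy as np

def auc(scores, labels):
    """Empirical AUC via the Mann-Whitney rank statistic."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return np.mean([(p > neg).mean() + 0.5 * (p == neg).mean() for p in pos])

def fuse(score_sets, weights):
    """Linear decision fusion of per-source classifier scores."""
    return sum(w * s for w, s in zip(weights, score_sets))

def optimize_weights(score_sets, labels, grid=np.linspace(0, 1, 21)):
    """Crude stand-in for an AUC-optimized fusion (DF-A-like): pick the
    two-source weighting that maximizes the empirical AUC."""
    return max(((w, auc(fuse(score_sets, (w, 1 - w)), labels)) for w in grid),
               key=lambda t: t[1])
```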
Abstract:
A popular way to account for unobserved heterogeneity is to assume that the data are drawn from a finite mixture distribution. A barrier to using finite mixture models is that parameters that could previously be estimated in stages must now be estimated jointly: using mixture distributions destroys any additive separability of the log-likelihood function. We show, however, that an extension of the EM algorithm reintroduces additive separability, thus allowing one to estimate parameters sequentially during each maximization step. In establishing this result, we develop a broad class of estimators for mixture models. Returning to the likelihood problem, we show that, relative to full information maximum likelihood, our sequential estimator can generate large computational savings with little loss of efficiency.
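A minimal sketch of the separability being exploited, for a two-component 1-D Gaussian mixture (my illustration, not the paper's sequential estimator): once the E-step fixes the posteriors, the M-step objective splits additively and each component is re-estimated on its own.

```python
import numpy as np

def em_gauss_mixture(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture. Given the E-step
    posteriors, the expected log-likelihood is additively separable
    across components, so each component's parameters are updated
    independently in the M-step."""
    pi = np.array([0.5, 0.5])
    mu = np.percentile(x, [25, 75]).astype(float)
    sig = np.array([x.std(), x.std()])
    for _ in range(iters):
        # E-step: posterior responsibility of each component per point
        dens = np.array([pi[k] * np.exp(-0.5 * ((x - mu[k]) / sig[k]) ** 2)
                         / (sig[k] * np.sqrt(2 * np.pi)) for k in range(2)])
        r = dens / dens.sum(axis=0)
        # M-step: the two components decouple and are solved separately
        for k in range(2):
            nk = r[k].sum()
            mu[k] = (r[k] * x).sum() / nk
            sig[k] = np.sqrt((r[k] * (x - mu[k]) ** 2).sum() / nk)
            pi[k] = nk / len(x)
    return pi, mu, sig
```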
Abstract:
While advances in regenerative medicine and vascular tissue engineering have been substantial in recent years, important stumbling blocks remain. In particular, the limited life span of differentiated cells that are harvested from elderly human donors is an important limitation in many areas of regenerative medicine. Recently, a mutant of the human telomerase reverse transcriptase enzyme (TERT) was described, which is highly processive and elongates telomeres more rapidly than conventional telomerase. This mutant, called pot1-TERT, is a chimeric fusion between the DNA binding protein pot1 and TERT. Because pot1-TERT is highly processive, it is possible that transient delivery of this transgene to cells that are utilized in regenerative medicine applications may elongate telomeres and extend cellular life span while avoiding risks that are associated with retroviral or lentiviral vectors. In the present study, adenoviral delivery of pot1-TERT resulted in transient reconstitution of telomerase activity in human smooth muscle cells, as demonstrated by telomeric repeat amplification protocol (TRAP). In addition, human engineered vessels that were cultured using pot1-TERT-expressing cells had greater collagen content and somewhat better performance in vivo than control grafts. Hence, transient delivery of pot1-TERT to elderly human cells may be useful for increasing cellular life span and improving the functional characteristics of resultant tissue-engineered constructs.
Abstract:
Understanding immune tolerance mechanisms is a major goal of immunology research, but mechanistic studies have generally required the use of mouse models carrying untargeted or targeted antigen receptor transgenes, which distort lymphocyte development and therefore preclude analysis of a truly normal immune system. Here we demonstrate an advance in in vivo analysis of immune tolerance that overcomes these shortcomings. We show that custom superantigens generated by single chain antibody technology permit the study of tolerance in a normal, polyclonal immune system. In the present study we generated a membrane-tethered anti-Igkappa-reactive single chain antibody chimeric gene and expressed it as a transgene in mice. B cell tolerance was directly characterized in the transgenic mice and in radiation bone marrow chimeras in which ligand-bearing mice served as recipients of nontransgenic cells. We find that the ubiquitously expressed, Igkappa-reactive ligand induces efficient B cell tolerance primarily or exclusively by receptor editing. We also demonstrate the unique advantages of our model in the genetic and cellular analysis of immune tolerance.
Abstract:
In most multicellular organisms, the decision to undergo programmed cell death in response to cellular damage or developmental cues is typically transmitted through mitochondria. It has been suggested that an exception is the apoptotic pathway of Drosophila melanogaster, in which the role of mitochondria remains unclear. Although IAP antagonists in Drosophila such as Reaper, Hid and Grim may induce cell death without mitochondrial membrane permeabilization, it is surprising that all three localize to mitochondria. Moreover, induction of Reaper and Hid appears to result in mitochondrial fragmentation during Drosophila cell death. Most importantly, disruption of mitochondrial fission can inhibit Reaper and Hid-induced cell death, suggesting that alterations in mitochondrial dynamics can modulate cell death in fly cells. We report here that Drosophila Reaper can induce mitochondrial fragmentation by binding to and inhibiting the pro-fusion protein MFN2 and its Drosophila counterpart dMFN/Marf. Our in vitro and in vivo analyses reveal that dMFN overexpression can inhibit cell death induced by Reaper or γ-irradiation. In addition, knockdown of dMFN causes a striking loss of adult wing tissue and significant apoptosis in the developing wing discs. Our findings are consistent with a growing body of work describing a role for mitochondrial fission and fusion machinery in the decision of cells to die.
Abstract:
PURPOSE: A projection onto convex sets reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE) is developed to reduce motion-related artifacts, including respiration artifacts in abdominal imaging and aliasing artifacts in interleaved diffusion-weighted imaging. THEORY: Images with reduced artifacts are reconstructed with an iterative projection onto convex sets (POCS) procedure that uses the coil sensitivity profile as a constraint. This method can be applied to data obtained with different pulse sequences and k-space trajectories. In addition, various constraints can be incorporated to stabilize the reconstruction of ill-conditioned matrices. METHODS: The POCSMUSE technique was applied to abdominal fast spin-echo imaging data, and its effectiveness in respiratory-triggered scans was evaluated. The POCSMUSE method was also applied to reduce aliasing artifacts due to shot-to-shot phase variations in interleaved diffusion-weighted imaging data corresponding to different k-space trajectories and matrix condition numbers. RESULTS: Experimental results show that the POCSMUSE technique can effectively reduce motion-related artifacts in data obtained with different pulse sequences, k-space trajectories and contrasts. CONCLUSION: POCSMUSE is a general post-processing algorithm for reduction of motion-related artifacts. It is compatible with different pulse sequences, and can also be used to further reduce residual artifacts in data produced by existing motion artifact reduction methods.
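A generic projection-onto-convex-sets loop, sketching the iteration described above; the actual POCSMUSE projectors (e.g. the coil-sensitivity consistency constraint) are not specified in the abstract, so the constraint sets below are placeholders.

```python
import numpy as np

def pocs(x0, projectors, iters=50, tol=1e-8):
    """Generic POCS iteration: apply each constraint's projector in
    turn until the estimate stops changing. In POCSMUSE one such
    constraint is consistency with the coil sensitivity profiles."""
    x = x0
    for _ in range(iters):
        x_prev = x
        for proj in projectors:
            x = proj(x)
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x

# Toy usage: intersect the half-line x >= 1 with the ball |x| <= 3.
p1 = lambda x: np.maximum(x, 1.0)
p2 = lambda x: x if np.linalg.norm(x) <= 3 else 3 * x / np.linalg.norm(x)
print(pocs(np.array([5.0]), [p1, p2]))   # converges to [3.0]
```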
Abstract:
The tomography problem is investigated when the available projections are restricted to a limited angular domain. It is shown that a previous algorithm proposed for extrapolating the data to the missing cone in Fourier space is unstable in the presence of noise because of the ill-posedness of the problem. A regularized algorithm is proposed, which converges to stable solutions. The efficiency of both algorithms is tested by means of numerical simulations. © 1983 Taylor and Francis Group, LLC.
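A Gerchberg-Papoulis-style sketch of the extrapolation idea, with a relaxation factor standing in for the regularization the paper introduces; this is an assumed form, not the authors' exact algorithm.

```python
import numpy as np

def extrapolate_missing_cone(F_meas, known, support, iters=100, mu=0.95):
    """Alternate between re-imposing the measured Fourier samples
    (boolean mask 'known') and an image-space support constraint.
    The relaxation factor mu < 1 damps each update, standing in for
    the regularization that stabilizes the iteration against noise."""
    F = F_meas.copy()
    for _ in range(iters):
        img = np.fft.ifft2(F).real
        img = img * support              # image-space support constraint
        F = mu * np.fft.fft2(img)        # relaxed (regularized) update
        F[known] = F_meas[known]         # re-impose measured data
    return np.fft.ifft2(F).real
```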