Abstract:
Neuronal intermediate filament inclusion disease (NIFID), a rare form of frontotemporal lobar degeneration (FTLD), is characterized neuropathologically by focal atrophy of the frontal and temporal lobes, neuronal loss, gliosis, and neuronal cytoplasmic inclusions (NCI) containing epitopes of ubiquitin and neuronal intermediate filament proteins. Recently, the 'fused in sarcoma' (FUS) protein (encoded by the FUS gene) has been shown to be a component of the inclusions of familial amyotrophic lateral sclerosis with FUS mutation, NIFID, basophilic inclusion body disease, and atypical FTLD with ubiquitin-immunoreactive inclusions (aFTLD-U). To further characterize FUS proteinopathy in NIFID, and to determine whether the pathology revealed by FUS immunohistochemistry (IHC) is more extensive than α-internexin, we have undertaken a quantitative assessment of ten clinically and neuropathologically well-characterized cases using FUS IHC. The densities of NCI were greatest in the dentate gyrus (DG) and in sectors CA1/2 of the hippocampus. Anti-FUS antibodies also labeled glial inclusions (GI), neuronal intranuclear inclusions (NII), and dystrophic neurites (DN). Vacuolation was extensive across upper and lower cortical layers. Significantly greater densities of abnormally enlarged neurons and glial cell nuclei were present in the lower compared with the upper cortical laminae. FUS IHC revealed significantly greater numbers of NCI in all brain regions, especially the DG. Our data suggest: (1) significant densities of FUS-immunoreactive NCI in NIFID, especially in the DG and CA1/2; (2) infrequent FUS-immunoreactive GI, NII, and DN; (3) widely distributed vacuolation across the cortex; and (4) significantly more NCI revealed by FUS than α-internexin IHC.
Abstract:
Magnetoencephalography (MEG), a non-invasive technique for characterizing brain electrical activity, is gaining popularity as a tool for assessing group-level differences between experimental conditions. One method for assessing task-condition effects involves beamforming, where a weighted sum of field measurements is used to estimate activity on a voxel-by-voxel basis. However, this method has been shown to produce inhomogeneous smoothness differences as a function of signal-to-noise across a volumetric image, which can then produce false positives at the group level. Here we describe a novel method for group-level analysis with MEG beamformer images that uses the peak locations within each participant's volumetric image to assess group-level effects. We compared our peak-clustering algorithm with SnPM using simulated data and found that our method was immune to the artefactual group effects that can arise from inhomogeneous smoothness differences across a volumetric image. We also applied our peak-clustering algorithm to experimental data, where it identified regions corresponding to task-related regions reported in the literature. These findings suggest that our technique is a robust method for group-level analysis with MEG beamformer images.
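A minimal sketch of the peak-clustering idea (not necessarily the authors' exact algorithm): pool each participant's beamformer peak locations, group pooled peaks by spatial proximity, and treat clusters supported by many participants as candidate group-level effects. Function names, radii, and thresholds below are illustrative assumptions.

# Sketch: group-level inference from per-participant beamformer peaks.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def group_peak_clusters(peaks_per_subject, radius_mm=10.0, min_subjects=8):
    """peaks_per_subject: list of (n_i, 3) arrays of peak coordinates in mm."""
    coords = np.vstack(peaks_per_subject)
    labels = np.concatenate([np.full(len(p), i)            # subject index per peak
                             for i, p in enumerate(peaks_per_subject)])
    # Single-linkage clustering with a spatial distance cutoff.
    cluster_id = fcluster(linkage(coords, method="single"),
                          t=radius_mm, criterion="distance")
    results = []
    for c in np.unique(cluster_id):
        members = cluster_id == c
        n_subj = len(np.unique(labels[members]))           # distinct participants
        if n_subj >= min_subjects:
            results.append((coords[members].mean(axis=0), n_subj))
    return results  # [(cluster centroid, number of supporting subjects), ...]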
Abstract:
Throughput plays a vital role in data transfer in vehicular networks and matters for both safety and non-safety applications. This paper proposes an algorithm that adapts to the mobile environment using context information. One problem of existing rate adaptation algorithms is underutilization of link capacity in vehicular environments. We demonstrate that, in wireless and mobile environments, vehicles can adapt to high-mobility link conditions and still perform better: range checking excludes vehicles that move out of communication range, which de-congests the network, so fewer vehicles contend for network resources and the system performs better. In this paper, we design, implement, and analyze ACARS, a more robust algorithm that delivers a significant increase in throughput and energy efficiency in the midst of high vehicle mobility.
Abstract:
Background Introduction of proposed criteria for DSM-5 Autism Spectrum Disorder (ASD) has raised concerns that some individuals currently meeting diagnostic criteria for Pervasive Developmental Disorder (PDD; DSM-IV-TR/ICD-10) will not qualify for a diagnosis under the proposed changes. To date, reports of sensitivity and specificity of the new criteria have been inconsistent across studies. No study has yet considered how changes at the 'sub domain' level might affect overall sensitivity and specificity, and few have included individuals of different ages and ability levels. Methods A set of DSM-5 ASD algorithms were developed using items from the Diagnostic Interview for Social and Communication Disorders (DISCO). The number of items required for each DSM-5 subdomain was defined either according to criteria specified by DSM-5 (Initial Algorithm), a statistical approach (Youden J Algorithm), or to minimise the number of false positives while maximising sensitivity (Modified Algorithm). The algorithms were designed, tested and compared in two independent samples (Sample 1, N = 82; Sample 2, N = 115), while sensitivity was assessed across age and ability levels in an additional dataset of individuals with an ICD-10 PDD diagnosis (Sample 3, N = 190). Results Sensitivity was highest in the Initial Algorithm, which had the poorest specificity. Although Youden J had excellent specificity, sensitivity was significantly lower than in the Modified Algorithm, which had both good sensitivity and specificity. Relaxing the domain A rules improved sensitivity of the Youden J Algorithm, but it remained less sensitive than the Modified Algorithm. Moreover, this was the only algorithm with variable sensitivity across age. All versions of the algorithm performed well across ability level. Conclusions This study demonstrates that good levels of both sensitivity and specificity can be achieved for a diagnostic algorithm adhering to the DSM-5 criteria that is suitable across age and ability level. © 2013 The Authors. Journal of Child Psychology and Psychiatry © 2013 Association for Child and Adolescent Mental Health.
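For illustration, a hedged sketch of the statistical (Youden J) approach to fixing the number of items required for a subdomain: sweep candidate item thresholds and keep the one maximizing J = sensitivity + specificity − 1. The data layout and variable names are assumptions, not taken from the paper.

# Sketch: choosing a subdomain item threshold by maximizing Youden's J.
import numpy as np

def best_item_threshold(item_counts, has_diagnosis):
    """item_counts: endorsed items per individual; has_diagnosis: bools."""
    item_counts = np.asarray(item_counts)
    has_diagnosis = np.asarray(has_diagnosis, dtype=bool)
    best_t, best_j = None, -1.0
    for t in range(1, int(item_counts.max()) + 1):
        positive = item_counts >= t                   # meets subdomain at threshold t
        sens = (positive & has_diagnosis).sum() / has_diagnosis.sum()
        spec = (~positive & ~has_diagnosis).sum() / (~has_diagnosis).sum()
        j = sens + spec - 1.0                         # Youden's J statistic
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j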
Abstract:
In this paper, we present an approach for extending the learning set of a classification algorithm with additional metadata, which is used as a basis for giving appropriate names to discovered regularities. Analyzing the correspondence between connections established in the attribute space and existing links between concepts can serve as a test of whether an adequate model of the observed world has been created. The Meta-PGN classifier is suggested as a possible tool for establishing these connections. Applying this approach to content-based image retrieval of art paintings provides a tool for extracting specific feature combinations that represent different aspects of artists' styles, periods, and movements.
Abstract:
Rough set approaches to attribute reduction are an important research subject in data mining and machine learning. However, most attribute reduction methods operate on a complete decision table. In this paper, we propose methods for attribute reduction in static incomplete decision systems and in dynamic incomplete decision systems whose conditional attributes increase or decrease over time. Our methods use a generalized discernibility matrix and discernibility function within tolerance-based rough sets.
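A minimal sketch of the discernibility-matrix construction under the tolerance relation for incomplete tables (a missing value, written '*', matches anything). The greedy hitting set at the end is one common heuristic for extracting a reduct and is not necessarily the paper's exact construction.

# Sketch: discernibility matrix for an incomplete decision table.
MISSING = "*"

def discernibility_matrix(objects, decisions):
    """objects: list of attribute-value tuples; decisions: list of labels."""
    n, m = len(objects), len(objects[0])
    entries = []
    for i in range(n):
        for j in range(i + 1, n):
            if decisions[i] == decisions[j]:
                continue  # only objects with different decisions must be discerned
            attrs = {a for a in range(m)
                     if MISSING not in (objects[i][a], objects[j][a])
                     and objects[i][a] != objects[j][a]}
            if attrs:
                entries.append(attrs)
    return entries

def greedy_reduct(entries):
    """Greedy hitting set over matrix entries (a common reduct heuristic)."""
    reduct, remaining = set(), list(entries)
    while remaining:
        counts = {}
        for e in remaining:
            for a in e:
                counts[a] = counts.get(a, 0) + 1
        best = max(counts, key=counts.get)   # attribute discerning the most pairs
        reduct.add(best)
        remaining = [e for e in remaining if best not in e]
    return reduct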
Abstract:
There is a word that is becoming ever more important for society and for companies alike, and this word is community. Communities come with activities, and this is the point where the company enters. In recent years, businesses have served community needs with CRM (Customer Relationship Management) solutions. In IT terms, social networks now attempt to cover not only business processes but also, connected to them, people's social needs through the possibilities of electronics. We increasingly live in the age of social business, in which the communities attached to a process share information and place it at each other's disposal. Classic CRM systems merely collected information and ran from static databases; social CRM systems, by contrast, carry on two-way communication, initiate dialogue with customers, and encourage them to voice their opinions, which change constantly on social media, so these systems build a dynamic database and communicate with customers through responses and reactions. Does this new strategy bring a whole new world to companies, or is it merely another step in the evolution of CRM and another one of its channels? The authors seek the answer to this question through practical cases and a review of the literature.
Abstract:
This dissertation establishes a novel system for human face learning and recognition based on incremental multilinear Principal Component Analysis (PCA). Most existing face recognition systems need training data during the learning process. The system proposed in this dissertation uses an unsupervised or weakly supervised learning approach in which the learning phase requires a minimal amount of training data. It also overcomes the inability of traditional systems to adapt during the testing phase, where the decision process for newly acquired images continues to rely on the original training set; whenever a new training set is used, the traditional approach requires the entire eigensystem to be regenerated. To speed up this computation, the proposed method uses the eigensystem generated from the old training set together with the new images to generate the new eigensystem more efficiently, in a so-called incremental learning process. In the empirical evaluation, two key factors are essential in assessing the performance of the proposed method: (1) recognition accuracy and (2) computational complexity. To establish the most suitable algorithm for this research, a comparative analysis of the best-performing methods was carried out first; its results advocated the initial use of multilinear PCA. To address the computational complexity of the subspace update procedure, a novel incremental algorithm was established that combines the traditional sequential Karhunen-Loeve (SKL) algorithm with a newly developed incremental modified fast PCA algorithm. To use multilinear PCA in the incremental process, a new unfolding method was developed that affixes the newly added data at the end of the previous data. Results of the incremental process based on these two methods bear out these theoretical improvements. Object tracking results on video images are also provided as a further challenging task demonstrating the soundness of this incremental multilinear learning method.
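As a hedged illustration of the incremental flavor described above, the sketch below uses scikit-learn's IncrementalPCA as a stand-in: the eigensystem is updated from batches of new images rather than regenerated from the full training set. The dissertation's multilinear, SKL-based algorithm differs in detail; image sizes and counts here are arbitrary.

# Sketch: updating an eigenspace incrementally instead of retraining from scratch.
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
ipca = IncrementalPCA(n_components=16)

# Old training set: establish the initial eigensystem.
old_faces = rng.normal(size=(200, 32 * 32))   # 200 flattened 32x32 images
ipca.partial_fit(old_faces)

# New images arrive later: update the eigensystem without a full recomputation.
new_faces = rng.normal(size=(50, 32 * 32))
ipca.partial_fit(new_faces)

codes = ipca.transform(new_faces)             # project into the updated subspace
print(codes.shape)                            # (50, 16)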
Abstract:
In many species, particular individuals consistently lead group travel. While benefits to followers often are relatively obvious, including access to resources, benefits to leaders are often less obvious. This is especially true for species that feed on patchy mobile resources, where all group members may locate prey simultaneously and food intake likely decreases with increasing group size. Leaders in highly complex habitats, however, could provide access to foraging resources for less informed relatives, thereby gaining indirect benefits by helping kin. Recently, leadership has been documented in a population of bottlenose dolphins (Tursiops truncatus) where direct benefits to leaders appear unlikely. To test whether leaders could benefit indirectly, we examined relatedness between leader-follower pairs and compared these levels to pairs who associated but did not have a leader-follower relationship (neither ever led the other). We found the average relatedness value for leader-follower pairs was greater than expected based on chance. The same was not found when examining non-leader-follower pairs. Additionally, relatedness for leader-follower pairs was positively correlated with association index values, but no correlation was found for this measure in non-leader-follower pairs. Interestingly, haplotypes were not frequently shared between leader-follower pairs (25%). Together, these results suggest that bottlenose dolphin leaders have the opportunity to gain indirect benefits by leading relatives. These findings provide a potential mechanism for the maintenance of leadership in a highly dynamic fission-fusion population with few obvious direct benefits to leaders.
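A sketch of the "greater than expected by chance" comparison as a simple one-sided permutation test on mean pairwise relatedness; the null model and inputs are illustrative, not the study's exact procedure.

# Sketch: permutation test for elevated relatedness in leader-follower pairs.
import numpy as np

def permutation_pvalue(lf_relatedness, all_pair_relatedness, n_perm=10_000, seed=1):
    rng = np.random.default_rng(seed)
    observed = np.mean(lf_relatedness)
    k = len(lf_relatedness)
    # Null: mean relatedness of k pairs drawn at random from all observed pairs.
    null = np.array([rng.choice(all_pair_relatedness, size=k, replace=False).mean()
                     for _ in range(n_perm)])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)   # one-sided p-value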
Abstract:
The aim of this work is to present a methodology for developing cost-effective thermal management solutions for microelectronic devices, capable of removing the maximum amount of heat and delivering maximally uniform temperature distributions. The topological and geometrical characteristics of multiple-story three-dimensional branching networks of microchannels were developed using multi-objective optimization. A conjugate heat transfer analysis software package and an automatic 3D microchannel network generator were developed and coupled with a modified version of a particle-swarm optimization algorithm, with the goal of creating a design tool for 3D networks of optimized coolant flow passages. Numerical algorithms in the conjugate heat transfer solution package include a quasi-1D thermo-fluid solver and a steady heat diffusion solver, which were validated against results from a high-fidelity Navier-Stokes equations solver and analytical solutions for basic fluid dynamics test cases. Pareto-optimal solutions demonstrate that thermal loads of up to 500 W/cm² can be managed with 3D microchannel networks, with pumping power requirements up to 50% lower than those of currently used high-performance cooling technologies.
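A minimal single-objective particle-swarm optimization sketch on a toy cost function, standing in for the modified multi-objective PSO that was coupled to the conjugate heat transfer solver; bounds, coefficients, and the objective are all illustrative.

# Sketch: basic PSO loop over a design-parameter vector.
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))    # positions
    v = np.zeros_like(x)                                # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy stand-in for a thermal cost (e.g., peak temperature plus pumping-power penalty).
cost = lambda p: (p[0] - 0.3) ** 2 + 2 * (p[1] - 0.7) ** 2
best, best_f = pso(cost, (np.array([0.0, 0.0]), np.array([1.0, 1.0])))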
Abstract:
Launching centers are designed for scientific and commercial activities with aerospace vehicles. Rocket Tracking Systems (RTS) are part of the infrastructure of these centers and are responsible for collecting and processing the trajectory data of vehicles. Generally, Parabolic Reflector Radars (PRRs) are used in RTS. However, it is possible to use radars with antenna arrays, or Phased Arrays (PAs), so-called Phased Array Radars (PARs). The excitation signal of each radiating element of the array can then be adjusted to perform electronic control of the radiation pattern, improving the functionality and maintainability of the system. In implementation and reuse projects of PARs, the modeling is therefore subject to various combinations of excitation signals, producing a complex optimization problem due to the large number of available solutions. In this case, it is possible to use offline optimization methods, such as Genetic Algorithms (GAs), to compute solutions that are stored for online applications. Hence, the Genetic Algorithm with Maximum-Minimum Crossover (GAMMC) optimization method was used to develop the GAMMC-P algorithm, which optimizes the modeling step of radiation pattern control for planar PAs. Unlike a conventional-crossover GA, the GAMMC crosses the fittest individuals with the least fit individuals in order to enhance genetic diversity. The GAMMC thus prevents premature convergence, increases population fitness, and reduces processing time. The GAMMC-P uses a reconfigurable algorithm with multiple objectives, a different coding, and the MMC genetic operator. Test results show that GAMMC-P met the proposed requirements for different operating conditions of a planar PAR.
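A sketch of the maximum-minimum crossover (MMC) pairing idea: the fittest individuals are crossed with the least fit to preserve genetic diversity. The representation, fitness function, and crossover operator below are illustrative, not the paper's encoding.

# Sketch: pair best with worst, 2nd best with 2nd worst, and cross each pair.
import numpy as np

def mmc_pairs(population, fitness):
    """Indices paired fittest-with-least-fit, per the MMC idea."""
    order = np.argsort(fitness)[::-1]          # descending fitness
    half = len(order) // 2
    return [(order[i], order[-1 - i]) for i in range(half)]

def uniform_crossover(a, b, rng):
    mask = rng.random(len(a)) < 0.5
    return np.where(mask, a, b), np.where(mask, b, a)

rng = np.random.default_rng(0)
pop = rng.random((8, 10))                      # 8 individuals, 10 genes each
fit = pop.sum(axis=1)                          # toy fitness
children = [uniform_crossover(pop[i], pop[j], rng) for i, j in mmc_pairs(pop, fit)]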
Abstract:
This thesis stems from a project with the real-time environmental monitoring company EMSAT Corporation, who were looking for methods to automatically flag spikes and other anomalies in their environmental sensor data streams. The problem presents several challenges: near real-time anomaly detection, absence of labeled data, and time-changing data streams. Here, we address this problem using both a statistical parametric approach and a non-parametric approach, Kernel Density Estimation (KDE). The main contribution of this thesis is extending KDE to work more effectively for evolving data streams, particularly in the presence of concept drift. To address that, we have developed a framework for integrating the Adaptive Windowing (ADWIN) change detection algorithm with KDE. We have tested this approach on several real-world data sets and received positive feedback from our industry collaborator. Some results appearing in this thesis were presented at the ECML PKDD 2015 Doctoral Consortium.
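A hedged sketch of the framework's idea: score streaming points against a KDE fitted on a recent window, and shrink that window when a change detector fires. A crude two-half mean-shift test stands in for ADWIN here, and all thresholds are illustrative.

# Sketch: windowed KDE anomaly scoring with a simple drift-triggered reset.
import numpy as np
from sklearn.neighbors import KernelDensity

def stream_anomaly_scores(stream, window=500, refit_every=50,
                          drift_z=4.0, anomaly_logdens=-8.0):
    buf, scores, kde = [], [], None
    for i, x in enumerate(stream):
        if kde is not None:
            logp = kde.score_samples([[x]])[0]           # log-density of new point
            scores.append((i, logp, logp < anomaly_logdens))
        buf.append(x)
        # Crude drift check standing in for ADWIN: compare the window's halves.
        if len(buf) >= window:
            old, new = np.array(buf[:window // 2]), np.array(buf[window // 2:])
            z = abs(new.mean() - old.mean()) / (old.std() / np.sqrt(len(old)) + 1e-9)
            if z > drift_z:
                buf = buf[window // 2:]                   # drop stale data after drift
        buf = buf[-window:]
        if kde is None or i % refit_every == 0:           # periodic refit on the window
            kde = KernelDensity(bandwidth=0.5).fit(np.array(buf)[:, None])
    return scores  # (index, log-density, is_anomaly) tuples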
Abstract:
Marine spatial planning and ecological research call for high-resolution species distribution data. However, those data are still not available for most marine large vertebrates. The dynamic nature of oceanographic processes and the wide-ranging behavior of many marine vertebrates create further difficulties, as distribution data must incorporate both the spatial and temporal dimensions. Cetaceans play an essential role in structuring and maintaining marine ecosystems and face increasing threats from human activities. The Azores holds a high diversity of cetaceans but the information about spatial and temporal patterns of distribution for this marine megafauna group in the region is still very limited. To tackle this issue, we created monthly predictive cetacean distribution maps for spring and summer months, using data collected by the Azores Fisheries Observer Programme between 2004 and 2009. We then combined the individual predictive maps to obtain species richness maps for the same period. Our results reflect a great heterogeneity in distribution among species and within species among different months. This heterogeneity reflects a contrasting influence of oceanographic processes on the distribution of cetacean species. However, some persistent areas of increased species richness could also be identified from our results. We argue that policies aimed at effectively protecting cetaceans and their habitats must include the principle of dynamic ocean management coupled with other area-based management such as marine spatial planning.
Abstract:
Purpose: To investigate the effect of incorporating a beam spreading parameter in a beam angle optimization algorithm and to evaluate its efficacy for creating coplanar IMRT lung plans in conjunction with machine learning generated dose objectives.
Methods: Fifteen anonymized patient cases were each re-planned with ten values over the range of the beam spreading parameter, k, and analyzed with a Wilcoxon signed-rank test to determine whether any particular value resulted in significant improvement over the initially treated plan created by a trained dosimetrist. Dose constraints were generated by a machine learning algorithm and kept constant for each case across all k values. Parameters investigated for potential improvement included mean lung dose, V20 lung, V40 heart, 80% conformity index, and 90% conformity index.
Results: At a significance level of 5%, treatment plans created with this method resulted in significantly better conformity indices. Dose coverage of the PTV improved by an average of 12% over the initial plans. At the same time, these treatment plans showed no significant difference in mean lung dose, V20 lung, or V40 heart when compared with the initial plans; however, it should be noted that these results could be influenced by the small sample size of patient cases.
Conclusions: The beam angle optimization algorithm, with the inclusion of the beam spreading parameter k, increases the dose conformity of the automatically generated treatment plans over that of the initial plans without adversely affecting the dose to organs at risk. This parameter can be varied according to physician preference in order to control the tradeoff between dose conformity and OAR sparing without compromising the integrity of the plan.
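For illustration, a sketch of the paired comparison described in Methods: metrics from the plans generated at one value of k are compared against the initial clinical plans with a Wilcoxon signed-rank test. The metric values below are placeholders, not study data.

# Sketch: paired Wilcoxon signed-rank test on a per-case plan metric.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
initial_ci90 = rng.uniform(0.6, 0.9, size=15)               # 90% CI, initial plans (placeholder)
replanned_ci90 = initial_ci90 + rng.normal(0.05, 0.02, 15)  # plans at one k value (placeholder)

stat, p = wilcoxon(replanned_ci90, initial_ci90)
if p < 0.05:
    print(f"significant difference at the 5% level (p = {p:.4f})")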
Abstract:
Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication, theoretical guarantees, and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm to address these issues. The algorithm applies feature selection in parallel on each subset using regularized regression or a Bayesian variable selection method, calculates the 'median' feature inclusion index, estimates coefficients for the selected features in parallel on each subset, and then averages these estimates. The algorithm is simple, involves very minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments showing excellent performance in feature selection, estimation, prediction, and computation time relative to the usual competitors.
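A hedged sketch of the message pipeline under simple assumptions: lasso as the per-subset selector, ordinary least squares for re-estimation, and illustrative tuning values.

# Sketch: split rows, select per subset, take the median inclusion, re-fit, average.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def message(X, y, n_subsets=4, alpha=0.1):
    rows = np.array_split(np.random.default_rng(0).permutation(len(y)), n_subsets)
    inclusion = []
    for r in rows:
        coef = Lasso(alpha=alpha).fit(X[r], y[r]).coef_
        inclusion.append(coef != 0)                    # per-subset inclusion indicators
    # 'Median' feature inclusion index: keep features selected by most subsets.
    selected = np.median(np.array(inclusion), axis=0) >= 0.5
    betas = [LinearRegression().fit(X[r][:, selected], y[r]).coef_ for r in rows]
    return selected, np.mean(betas, axis=0)            # averaged subset estimates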
While sample space partitioning is useful for handling datasets with large sample sizes, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In this thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
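A hedged sketch of the decorrelation idea: transform the data with F = (XX^T/p + rI)^(-1/2), then run independent per-block lasso fits. The ridge term r and penalty are illustrative, and X is assumed centered and standardized.

# Sketch: DECO-style decorrelation followed by blockwise fits.
import numpy as np
from scipy.linalg import sqrtm
from sklearn.linear_model import Lasso

def deco(X, y, n_workers=4, ridge=1.0, alpha=0.05):
    n, p = X.shape
    F = np.real(np.linalg.inv(sqrtm(X @ X.T / p + ridge * np.eye(n))))
    Xd, yd = F @ X, F @ y                        # decorrelated data, shared by workers
    blocks = np.array_split(np.arange(p), n_workers)
    coef = np.zeros(p)
    for b in blocks:                             # in practice: one block per worker
        coef[b] = Lasso(alpha=alpha).fit(Xd[:, b], yd).coef_
    return coef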
For datasets with both large sample sizes and high dimensionality, I propose a new divide-and-conquer framework, DEME (DECO-message), that leverages both the DECO and the message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of the cubes using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted in a computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.