902 results for "Dynamic search fireworks algorithm with covariance mutation"


Relevance: 100.00%

Publisher:

Abstract:

Familial frontotemporal lobar degeneration with transactive response (TAR) DNA-binding protein of 43 kDa (TDP-43) proteinopathy (FTLD-TDP) is most commonly caused by progranulin (GRN) gene mutation. To characterize cortical degeneration in these cases, changes in density of the pathology across the cortical laminae of the frontal and temporal lobe were studied in seven cases of FTLD-TDP with GRN mutation using quantitative analysis and polynomial curve fitting. In 50% of gyri studied, neuronal cytoplasmic inclusions (NCI) exhibited a peak of density in the upper cortical laminae. Most frequently, neuronal intranuclear inclusions (NII) and dystrophic neurites (DN) exhibited a density peak in lower and upper laminae, respectively, glial inclusions (GI) being distributed in low densities across all laminae. Abnormally enlarged neurons (EN) were distributed either in the lower laminae or were more uniformly distributed across the cortex. The distribution of all neurons present varied between cases and regions, but most commonly exhibited a bimodal distribution, density peaks occurring in upper and lower laminae. Vacuolation primarily affected the superficial laminae and density of glial cell nuclei increased with distance across the cortex from pia mater to white matter. The densities of the NCI, GI, NII, and DN were not spatially correlated. The laminar distribution of the pathology in GRN mutation cases was similar to previously reported sporadic cases of FTLD-TDP. Hence, pathological changes initiated by GRN mutation, and by other causes in sporadic cases, appear to follow a parallel course resulting in very similar patterns of cortical degeneration in FTLD-TDP.

Relevance: 100.00%

Publisher:

Abstract:

This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computing resources to or from the engine. Our solution is an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP), and the Regrouping Order Problem (ROP). CNP is the problem of determining, as the query workload changes, the ideal number of processors p to keep active in the search engine at any given time. NGP arises once a change in the number of processors has been determined: it must then be decided which groups of search data will be distributed across the processors. ROP is the problem of redistributing this data onto processors while keeping the engine responsive and minimising both the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, the incremental algorithm speeds up index computation 2 to 10 times while maintaining similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine a new size for the search engine. Combined, these algorithms yield an adaptive algorithm able to adjust the search engine's size under a variable workload.
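The abstract does not spell out the CNP re-sizing rule itself. As a minimal sketch of a deterministic rule of that general shape (the function name, the per-VM capacity model and the `headroom` parameter are all assumptions, not taken from the work), the engine could map the observed query rate to a processor count like this:

```python
import math

def choose_processors(query_rate, capacity_per_vm, headroom=0.2, p_min=1, p_max=64):
    # Required capacity, keeping `headroom` spare so short bursts do not
    # overload the engine before the next re-evaluation; the result is
    # clamped to the allowed range of active processors.
    needed = query_rate / (capacity_per_vm * (1.0 - headroom))
    return min(p_max, max(p_min, math.ceil(needed)))
```

Re-running such a rule periodically, and switching only when the result differs from the current count, is one way to realise the "repeatedly re-evaluate its load" loop described above.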

Relevance: 100.00%

Publisher:

Abstract:

The paper develops a novel realized matrix-exponential stochastic volatility model of multivariate returns and realized covariances that incorporates asymmetry and long memory (hereafter the RMESV-ALM model). The matrix exponential transformation guarantees the positive definiteness of the dynamic covariance matrix. The contribution of the paper ties in with Robert Basmann's seminal work in terms of the estimation of highly non-linear model specifications ("Causality tests and observationally equivalent representations of econometric models", Journal of Econometrics, 1988, 39(1-2), 69–104), especially for developing tests for leverage and spillover effects in the covariance dynamics. Efficient importance sampling is used to maximize the likelihood function of RMESV-ALM, and the finite sample properties of the quasi-maximum likelihood estimator of the parameters are analysed. Using high frequency data for three US financial assets, the new model is estimated and evaluated. The forecasting performance of the new model is compared with a novel dynamic realized matrix-exponential conditional covariance model. The volatility and co-volatility spillovers are examined via the news impact curves and the impulse response functions from returns to volatility and co-volatility.
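The positive-definiteness guarantee of the matrix-exponential transformation can be seen directly: for a symmetric matrix A with eigendecomposition A = VΛVᵀ, expm(A) = V exp(Λ) Vᵀ has strictly positive eigenvalues. A minimal numerical sketch of just this transformation (not of the RMESV-ALM model itself; the function name is an assumption):

```python
import numpy as np

def covariance_from_log(A):
    # Map an unconstrained symmetric "log-covariance" matrix A to a
    # covariance matrix via the matrix exponential. Eigenvalues become
    # exp(lambda_i) > 0, so the result is positive definite by construction.
    A = 0.5 * (A + A.T)          # symmetrise defensively
    w, V = np.linalg.eigh(A)     # A = V diag(w) V^T
    return (V * np.exp(w)) @ V.T
```

Any real symmetric input, including ones with negative eigenvalues, yields a valid covariance matrix, which is what frees the model's dynamics from positivity constraints.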

Relevance: 100.00%

Publisher:

Abstract:

This paper combines the idea of a hierarchical distributed genetic algorithm with different inter-agent partnering strategies. Cascading clusters of sub-populations are built from the bottom up, with higher-level sub-populations optimising larger parts of the problem. Hence, higher-level sub-populations search a larger search space with a lower resolution whilst lower-level sub-populations search a smaller search space with a higher resolution. The effects of different partner selection schemes amongst the agents on solution quality are examined for two multiple-choice optimisation problems. It is shown that partnering strategies that exploit problem-specific knowledge are superior and can counter inappropriate (sub-)fitness measurements.
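The core of a partnering strategy is how an agent's partial solution gets completed for evaluation. A minimal sketch of the two families of schemes discussed in these abstracts (knowledge-exploiting vs. random partnering) — the data layout and function names here are illustrative assumptions, not the papers' implementation:

```python
import random

def partner_fitness(agent, others, full_fitness, strategy="random", rng=random):
    # Score `agent`'s partial solution by joining it with a partner's
    # complementary part and evaluating the combined candidate.
    if strategy == "best":
        # Exploit accumulated knowledge: pair with the best-scoring agent.
        partner = max(others, key=lambda a: a["score"])
    else:
        # Random partnering: broader sampling, more diversity.
        partner = rng.choice(others)
    return full_fitness(agent["part"] + partner["part"])
```

Which scheme wins depends on the problem, as the two variants of this study report opposite conclusions for their respective settings.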

Relevance: 100.00%

Publisher:

Abstract:

This paper combines the idea of a hierarchical distributed genetic algorithm with different inter-agent partnering strategies. Cascading clusters of sub-populations are built from bottom up, with higher-level sub-populations optimising larger parts of the problem. Hence higher-level sub-populations search a larger search space with a lower resolution whilst lower-level sub-populations search a smaller search space with a higher resolution. The effects of different partner selection schemes for (sub-)fitness evaluation purposes are examined for two multiple-choice optimisation problems. It is shown that random partnering strategies perform best by providing better sampling and more diversity.

Relevance: 100.00%

Publisher:

Abstract:

The size of online image datasets is constantly increasing. For an image dataset with millions of images, image retrieval becomes a seemingly intractable problem for exhaustive similarity search algorithms. Hashing methods, which encode high-dimensional descriptors into compact binary strings, have become very popular because of their efficiency in both search and storage. In the first part, we propose a multimodal retrieval method based on latent feature models. The procedure consists of a nonparametric Bayesian framework for learning underlying semantically meaningful abstract features in a multimodal dataset, a probabilistic retrieval model that allows cross-modal queries, and an extension model for relevance feedback. In the second part, we focus on supervised hashing with kernels. We describe a flexible hashing procedure that treats binary codes and pairwise semantic similarity as latent and observed variables, respectively, in a probabilistic model based on Gaussian processes for binary classification. We present a scalable inference algorithm with the sparse pseudo-input Gaussian process (SPGP) model and distributed computing. In the last part, we define an incremental hashing strategy for dynamic databases to which new images are added frequently. The method is based on a two-stage classification framework using binary and multi-class SVMs. The proposed method also enforces balance in the binary codes through an imbalance penalty, yielding higher-quality codes. We learn hash functions by an efficient algorithm in which the NP-hard problem of finding optimal binary codes is solved via cyclic coordinate descent and SVMs are trained in a parallelized incremental manner. For modifications such as adding images from an unseen class, we propose an incremental procedure for effective and efficient updates to the previous hash functions. Experiments on three large-scale image datasets demonstrate that the incremental strategy efficiently updates hash functions to the same retrieval performance as hashing from scratch.
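The basic mechanics of binary hashing for retrieval can be illustrated with a tiny stand-in: the sketch below uses plain random sign projections rather than the learned SVM-based hash functions described above, and the function names are assumptions, but it shows the encode-then-compare-by-Hamming-distance pipeline that all such methods share:

```python
import numpy as np

def hash_codes(X, W):
    # Each bit is the sign of one linear projection of the descriptor,
    # so nearby descriptors tend to receive identical bits.
    return (X @ W > 0).astype(np.uint8)

def hamming(a, b):
    # Retrieval ranks database items by Hamming distance to the query code.
    return int(np.count_nonzero(a != b))
```

Learned methods replace the random W with projections optimised against semantic similarity labels, which is where the paper's latent-variable and SVM machinery comes in.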

Relevance: 100.00%

Publisher:

Abstract:

We estimate a dynamic model of mortgage default for a cohort of Colombian debtors between 1997 and 2004. We use the estimated model to study the effects on default of a class of policies that affected the evolution of mortgage balances in Colombia during the 1990s. We propose a framework for estimating dynamic behavioral models accounting for the presence of unobserved state variables that are correlated across individuals and across time periods. We extend the standard literature on the structural estimation of dynamic models by incorporating an unobserved common correlated shock that affects all individuals' static payoffs and the dynamic continuation payoffs associated with different decisions. Given a standard parametric specification of the dynamic problem, we show that the aggregate shocks are identified from the variation in the observed aggregate behavior. The shocks and their transition are separately identified, provided there is enough cross-sectional variation of the observed states.

Relevance: 100.00%

Publisher:

Abstract:

Latency can be defined as the sum of the arrival times at the customers. Minimum latency problems are especially relevant in applications related to humanitarian logistics. This thesis presents algorithms for solving a family of vehicle routing problems with minimum latency. First, the latency location routing problem (LLRP) is considered. It consists of determining the subset of depots to be opened, and the routes that a set of homogeneous capacitated vehicles must perform in order to visit a set of customers, such that the sum of the demands of the customers assigned to each vehicle does not exceed the capacity of the vehicle. For solving this problem, three metaheuristic algorithms combining simulated annealing and variable neighborhood descent, and an iterated local search (ILS) algorithm, are proposed. Furthermore, the multi-depot cumulative capacitated vehicle routing problem (MDCCVRP) and the multi-depot k-traveling repairman problem (MDk-TRP) are solved with the proposed ILS algorithm. The MDCCVRP is a special case of the LLRP in which all the depots can be opened, and the MDk-TRP is a special case of the MDCCVRP in which the capacity constraints are relaxed. Finally, an LLRP with stochastic travel times is studied. A two-stage stochastic programming model and a variable neighborhood search algorithm are proposed for solving the problem. Furthermore, a sampling method is developed for tackling instances with an infinite number of scenarios. Extensive computational experiments show that the proposed methods are effective for solving the problems under study.
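The latency objective itself — the sum of arrival times, as defined in the first sentence above — is straightforward to compute for a single route. A minimal sketch (function and parameter names are illustrative):

```python
def route_latency(depot, customers, travel_time):
    # Latency of one route: the sum of every customer's arrival time.
    # Each leg's travel time is counted once per customer still waiting,
    # which is why visiting nearby customers early tends to pay off.
    total, clock, here = 0.0, 0.0, depot
    for c in customers:
        clock += travel_time(here, c)
        total += clock
        here = c
    return total
```

For example, on a line with unit-speed travel, visiting customers at positions 1 then 3 from a depot at 0 gives arrivals 1 and 3 (latency 4), while the reverse order gives arrivals 3 and 5 (latency 8) — the asymmetry that distinguishes minimum-latency routing from classical minimum-distance routing.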

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we deal with a generalized multi-period mean-variance portfolio selection problem with market parameters subject to Markov random regime switchings. Problems of this kind have recently been considered in the literature for control over bankruptcy, for cases in which there are no jumps in market parameters (see [Zhu, S. S., Li, D., & Wang, S. Y. (2004). Risk control over bankruptcy in dynamic portfolio selection: A generalized mean-variance formulation. IEEE Transactions on Automatic Control, 49, 447-457]). We present necessary and sufficient conditions for obtaining an optimal control policy for this Markovian generalized multi-period mean-variance problem, based on a set of interconnected Riccati difference equations and on a set of other recursive equations. Some closed formulas are also derived for two special cases, extending some previous results in the literature. We apply the results to a numerical example with real data for risk control over bankruptcy in a dynamic portfolio selection problem with Markov jumps. (C) 2008 Elsevier Ltd. All rights reserved.
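The generic shape of such a multi-period mean-variance problem with regime switching (the notation below is assumed for illustration and is not taken from the paper) trades expected terminal wealth against its variance under regime-dependent dynamics:

```latex
\max_{u_0,\dots,u_{T-1}} \; \mathbb{E}\!\left[W_T\right] - \lambda\,\mathrm{Var}\!\left[W_T\right],
\qquad
W_{t+1} = W_t\left(r_t(\theta_t) + u_t^{\top}\bigl(R_t(\theta_t) - r_t(\theta_t)\mathbf{1}\bigr)\right),
```

where $\theta_t$ is the Markov regime at time $t$, $r_t$ the riskless return, $R_t$ the vector of risky returns, and $u_t$ the portfolio weights. Solving such a problem backwards in time is what produces the interconnected Riccati difference equations mentioned in the abstract.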

Relevance: 100.00%

Publisher:

Abstract:

Objective: To examine the quality of diabetes care and prevention of cardiovascular disease (CVD) in Australian general practice patients with type 2 diabetes and to investigate its relationship with coronary heart disease absolute risk (CHDAR). Methods: A total of 3286 patient records were extracted from registers of patients with type 2 diabetes held by 16 divisions of general practice (250 practices) across Australia for the year 2002. CHDAR was estimated using the United Kingdom Prospective Diabetes Study algorithm, with higher CHDAR set at a 10-year risk of >15%. Multivariate multilevel logistic regression investigated the association between CHDAR and diabetes care. Results: 47.9% of diabetic patient records had glycosylated haemoglobin (HbA1c) >7%, 87.6% had total cholesterol >= 4.0 mmol/l, and 73.8% had blood pressure (BP) >= 130/85 mm Hg. 57.6% of patients were at a higher CHDAR, 76.8% of whom were not on lipid modifying medication and 66.2% were not on antihypertensive medication. After adjusting for clustering at the general practice level and age, lipid modifying medication was negatively related to CHDAR (odds ratio (OR) 0.84) and total cholesterol. Antihypertensive medication was positively related to systolic BP but negatively related to CHDAR (OR 0.88). Referral to ophthalmologists/optometrists and attendance at other health professionals were not related to CHDAR. Conclusions: At the time of the study, diabetes and CVD preventive care in Australian general practice was suboptimal, even after a number of national initiatives. The Australian Pharmaceutical Benefits Scheme (PBS) guidelines need to be modified to improve CVD preventive care in patients with type 2 diabetes.

Relevance: 100.00%

Publisher:

Abstract:

Autosomal recessive polycystic kidney disease is a hereditary fibrocystic disease that involves the kidneys and the biliary tract. Mutations in the PKHD1 gene are responsible for typical forms of autosomal recessive polycystic kidney disease. We have generated a mouse model with targeted mutation of Pkhd1 by disrupting exon 4, resulting in a mutant transcript with deletion of 66 codons and expression at approximately 30% of wild-type levels. Pkhd1(del4/del4) mice develop intrahepatic bile duct proliferation with progressive cyst formation and associated periportal fibrosis. In addition, these mice exhibit extrahepatic manifestations, including pancreatic cysts, splenomegaly, and common bile duct dilation. The kidneys are unaffected both histologically and functionally. Fibrocystin is expressed in the apical membranes and cilia of bile ducts and distal nephron segments but is absent from the proximal tubule. This pattern is unchanged in orthologous models of autosomal dominant polycystic kidney disease due to mutation in Pkd1 or Pkd2. Mutant fibrocystin in Pkhd1(del4/del4) mice also retains this expression pattern. The hypomorphic Pkhd1(del4/del4) mouse model provides evidence that reduced functional levels of fibrocystin are sufficient for cystogenesis and fibrosis in the liver and pancreas, but not the kidney, and supports the hypothesis of species-dependent differences in susceptibility of tissues to Pkhd1 mutations.

Relevance: 100.00%

Publisher:

Abstract:

An algorithm for explicit integration of structural dynamics problems with multiple time steps is proposed that averages accelerations to obtain subcycle states at a nodal interface between regions integrated with different time steps. With integer time step ratios, the resulting subcycle updates at the interface sum to give the same effect as a central difference update over a major cycle. The algorithm is shown to have good accuracy, and stability properties in linear elastic analysis similar to those of constant velocity subcycling algorithms. The implementation of a generalised form of the algorithm with non-integer time step ratios is presented. (C) 1997 by John Wiley & Sons, Ltd.
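The central difference update that the subcycling algorithm reproduces over a major cycle can be sketched for a single degree of freedom (this shows only the basic explicit step, not the multi-time-step interface averaging itself; function and variable names are assumptions):

```python
def central_difference_step(x, v, a, force, mass, dt):
    # Half-step velocity, full-step position, refresh the acceleration,
    # then complete the velocity: one explicit central-difference update
    # for x'' = force(x) / mass.
    v_half = v + 0.5 * dt * a
    x_new = x + dt * v_half
    a_new = force(x_new) / mass
    v_new = v_half + 0.5 * dt * a_new
    return x_new, v_new, a_new
```

In the paper's scheme, interface nodes shared between regions with different time steps receive averaged accelerations so that, with integer time step ratios, the sum of the subcycle updates matches one such step over the major cycle.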

Relevance: 100.00%

Publisher:

Abstract:

We propose a simulated-annealing-based genetic algorithm for solving model parameter estimation problems. The algorithm incorporates advantages of both genetic algorithms and simulated annealing. Tests on computer-generated synthetic data that closely resemble optical constants of a metal were performed to compare the efficiency of plain genetic algorithms against the simulated-annealing-based genetic algorithms. These tests assess the ability of the algorithms to find the global minimum and the accuracy of values obtained for model parameters. Finally, the algorithm with the best performance is used to fit the model dielectric function to data for platinum and aluminum. (C) 1997 Optical Society of America.
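The usual way simulated annealing is blended into a genetic algorithm is through a Metropolis-style survival rule: better offspring always survive, and worse offspring survive with a temperature-controlled probability that shrinks over the generations. A minimal sketch of that acceptance rule (the specific hybridisation in the paper may differ; names here are assumptions):

```python
import math
import random

def anneal_accept(parent_cost, child_cost, temperature, rng=random):
    # Metropolis rule: always accept an improvement; accept a worse
    # child with probability exp(-delta / T), so the search can escape
    # local minima early (high T) and converges as T is lowered.
    delta = child_cost - parent_cost
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)
```

Lowering the temperature over the generations moves the hybrid from a diversity-preserving, exploratory search toward the elitist selection of a plain genetic algorithm.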