48 results for Branch and bound algorithms


Relevance: 100.00%

Abstract:

Searching for the optimum tap-length that best balances the complexity and steady-state performance of an adaptive filter has attracted attention recently. Among the existing algorithms in the literature, two, namely the segmented filter (SF) and gradient descent (GD) algorithms, are of particular interest as they can search for the optimum tap-length quickly. In this paper, we first carefully compare the SF and GD algorithms and show that the two are equivalent in performance under some constraints, but that each has advantages and disadvantages relative to the other. We then propose an improved variable tap-length algorithm using the concept of the pseudo fractional tap-length (FT). Updating the tap-length with instantaneous errors in a style similar to that used in the stochastic gradient [or least mean squares (LMS)] algorithm, the proposed FT algorithm not only retains the advantages of both the SF and GD algorithms but also has significantly lower complexity than existing algorithms. Both a performance analysis and numerical simulations are given to verify the proposed algorithm.
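To make the mechanics concrete, here is a minimal sketch of an LMS filter with a pseudo fractional tap-length update in the style the abstract describes; the parameter names (`mu` for the LMS step size, `alpha` for the tap-length leakage, `beta` for the tap-length step size, `delta` for the segment offset) and their defaults are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def ft_lms(x, d, L0=8, L_max=64, mu=0.01, alpha=0.01, beta=0.1, delta=1):
    """Pseudo fractional tap-length (FT) LMS sketch.

    alpha (leakage), beta (tap-length step) and delta (segment offset)
    are illustrative names and defaults, not the paper's notation.
    """
    w = np.zeros(L_max)            # weights, padded to the maximum length
    L = L0                         # current integer tap-length
    lf = float(L0)                 # pseudo fractional tap-length
    for n in range(L_max, len(x)):
        u = x[n - L_max:n][::-1]   # most recent sample first
        e_full = d[n] - w[:L] @ u[:L]                  # error with L taps
        e_seg = d[n] - w[:L - delta] @ u[:L - delta]   # error with L - delta taps
        w[:L] += mu * e_full * u[:L]                   # standard LMS update
        # LMS-style tap-length update driven by instantaneous squared errors:
        # grow when the extra taps reduce the error, shrink otherwise
        lf = (lf - alpha) - beta * (e_full**2 - e_seg**2)
        lf = min(max(lf, delta + 1.0), float(L_max))
        if abs(lf - L) >= delta:   # re-quantise only on a big enough move
            L = int(round(lf))
    return w[:L], L
```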

Relevance: 100.00%

Abstract:

Deep Brain Stimulation (DBS) is a treatment routinely used to alleviate the symptoms of Parkinson's disease (PD). In this type of treatment, electrical pulses are applied through electrodes implanted into the basal ganglia of the patient. As the symptoms are not permanent in most patients, it is desirable to develop an on-demand stimulator that applies pulses only when onset of the symptoms is detected. This study evaluates a feature set created for the detection of tremor, a cardinal symptom of PD. The designed feature set was based on standard signal features and researched properties of the electrical signals recorded from the subthalamic nucleus (STN) within the basal ganglia, together covering temporal, spectral, statistical, autocorrelation and fractal properties. The features most characteristic of tremor were selected using statistical testing and backward selection algorithms, and were then used for classification on unseen patients' signals. The spectral features were among the most efficient at detecting tremor; notably, the 3.5–5.5 Hz and 0–1 Hz spectral bands proved to be highly significant. The classification results for the detection of tremor achieved 94% sensitivity with a specificity of one.
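As a small illustration of the spectral part of such a feature set, the following sketch computes integrated band power in the two bands the abstract highlights; the Welch estimator, the 2-second window and the function name are assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power_features(sig, fs, bands=((3.5, 5.5), (0.0, 1.0))):
    """Integrated band power in the bands highlighted by the abstract.
    The Welch estimator and the 2-second window are assumptions."""
    f, pxx = welch(sig, fs=fs, nperseg=int(2 * fs))    # power spectral density
    feats = []
    for lo, hi in bands:
        mask = (f >= lo) & (f <= hi)
        feats.append(pxx[mask].sum() * (f[1] - f[0]))  # rectangle-rule power
    return np.array(feats)
```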

Relevance: 100.00%

Abstract:

Recently, major processor manufacturers have announced a dramatic shift in their paradigm for increasing computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high-performance computing, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism. The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared-memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed-memory systems. This approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection: it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.

Relevance: 100.00%

Abstract:

The general stability theory of nonlinear receding horizon controllers has attracted much attention over the last fifteen years, and many algorithms have been proposed to ensure closed-loop stability. On the other hand, many reports exist regarding the use of artificial neural network models in nonlinear receding horizon control. However, little attention has been given to the stability of these specific controllers. This paper addresses this problem and proposes to cast nonlinear receding horizon control based on neural network models within the framework of an existing stabilising algorithm.
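For orientation, a minimal receding horizon loop looks like the sketch below, with a generic one-step predictor `f_model(x, u)` standing in for a trained neural network model; the quadratic cost, horizon length and optimiser are placeholder assumptions, and none of the stability-enforcing ingredients of the paper's framework is shown.

```python
import numpy as np
from scipy.optimize import minimize

def receding_horizon_control(f_model, x0, N=10, steps=50):
    """Minimal receding horizon loop; f_model(x, u) stands in for a trained
    neural network one-step predictor. The quadratic cost, horizon N and
    optimiser are placeholders; no stabilising ingredient is shown."""
    x = np.asarray(x0, dtype=float)
    trajectory = [x.copy()]
    for _ in range(steps):
        def cost(u_seq):
            xk, J = x.copy(), 0.0
            for u in u_seq:                   # roll the model out over N steps
                xk = f_model(xk, u)
                J += xk @ xk + 0.1 * u**2     # penalise state and control effort
            return J
        res = minimize(cost, np.zeros(N), method="Nelder-Mead")
        x = np.asarray(f_model(x, res.x[0]), dtype=float)  # apply first input only
        trajectory.append(x.copy())
    return np.array(trajectory)
```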

Relevance: 100.00%

Abstract:

Visual telepresence systems which utilize virtual-reality-style helmet-mounted displays have a number of limitations. The geometry of the camera positions and of the display is fixed, and is most suitable only for viewing elements of a scene at a particular distance. In such a system, the operator's ability to gaze around without head movement is severely limited, and a trade-off must be made between poor viewing resolution and a narrow field of view. To address these limitations, a prototype system has been developed in which the geometry of the displays and cameras is dynamically controlled by the eye movement of the operator. This paper explores the reasons why it is necessary to actively adjust both the display system and the cameras, and justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image-processing methods. The electronic and mechanical design is described, including the optical arrangements and control algorithms. The performance of the system is assessed against a fixed camera/display system, with operators assigned basic tasks involving depth and distance/size perception. The sensitivity to variations in the transient performance of the display and camera vergence is also assessed.
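As a geometric aside, the symmetric vergence angle that a camera pair must adopt to fixate a target follows directly from the baseline and target distance; the sketch below is basic geometry, not the prototype's control law.

```python
import math

def vergence_angle(baseline_m, target_dist_m):
    """Convergence angle (radians) for two cameras a baseline apart to
    fixate a point straight ahead at target_dist_m. Symmetric-vergence
    geometry only; this is not the prototype's control law."""
    return 2.0 * math.atan2(baseline_m / 2.0, target_dist_m)

# e.g. an eye-like 65 mm baseline fixating at 0.5 m needs ~0.13 rad
print(vergence_angle(0.065, 0.5))
```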

Relevance: 100.00%

Abstract:

Adaptive methods which "equidistribute" a given positive weight function are now used fairly widely for selecting discrete meshes. The disadvantage of such schemes is that the resulting mesh may not be smoothly varying. In this paper a technique is developed for equidistributing a function subject to constraints on the ratios of adjacent steps in the mesh. Given a weight function $f \geq 0$ on an interval $[a,b]$ and constants $c$ and $K$, the method produces a mesh with points $x_0 = a$, $x_{j+1} = x_j + h_j$ for $j = 0, 1, \dots, n-1$, and $x_n = b$ such that
\[ \int_{x_j}^{x_{j+1}} f \leq c \quad\text{and}\quad \frac{1}{K} \leq \frac{h_{j+1}}{h_j} \leq K \quad \text{for } j = 0, 1, \dots, n-1. \]
A theoretical analysis of the procedure is presented, and numerical algorithms for implementing the method are given. Examples show that the procedure is effective in practice. Other types of constraints on equidistributing meshes are also discussed. The principal application of the procedure is to the solution of boundary value problems, where the weight function is generally some error indicator, and accuracy and convergence properties may depend on the smoothness of the mesh. Other practical applications include the regrading of statistical data.
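One way to realise such a constrained equidistribution numerically is to march across the interval, capping each step by both the integral budget and the adjacent-ratio bound; the sketch below illustrates this idea only, with the bisection tolerance, initial step and iteration cap chosen arbitrarily rather than taken from the paper's construction.

```python
import numpy as np
from scipy.integrate import quad

def equidistribute(f, a, b, c, K, h_init=None, max_pts=10000):
    """Sketch of constrained equidistribution: march from a, taking each
    step h_j as large as both the integral budget c and the adjacent-ratio
    bound K allow. Tolerances and the initial step are assumptions."""
    xs = [a]
    h_prev = h_init if h_init is not None else (b - a) / 100.0
    while xs[-1] < b and len(xs) < max_pts:
        x = xs[-1]
        hi = min(K * h_prev, b - x)              # ratio cap and endpoint cap
        lo = min(h_prev / K, hi)                 # ratio floor
        if quad(f, x, x + hi)[0] <= c:
            h = hi                               # budget allows the full step
        else:
            for _ in range(50):                  # bisect for the largest h
                mid = 0.5 * (lo + hi)            # whose integral stays <= c
                if quad(f, x, x + mid)[0] <= c:
                    lo = mid
                else:
                    hi = mid
            h = lo
        xs.append(x + h)
        h_prev = h
    xs[-1] = b                                   # snap the last point onto b
    return np.array(xs)
```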

Relevance: 100.00%

Abstract:

In a world where massive amounts of data are recorded on a large scale, we need data mining technologies to gain knowledge from the data in a reasonable time. The Top Down Induction of Decision Trees (TDIDT) algorithm is a very widely used technology for predicting the classification of newly recorded data. However, alternative technologies have been derived that often produce better rules but do not scale well to large datasets. One such alternative to TDIDT is the PrismTCS algorithm, which performs particularly well on noisy data but does not scale well to large datasets. In this paper we introduce Prism and investigate its scaling behaviour. We describe how we improved the scalability of the serial version of Prism and investigate its limitations. We then describe our work to overcome these limitations by developing a framework to parallelise algorithms of the Prism family and similar algorithms. We also present the scale-up results of a first prototype implementation.
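For readers unfamiliar with the Prism family, the sketch below shows the separate-and-conquer skeleton on categorical data: a rule is grown one attribute-value term at a time, and the examples it covers are then removed before the next rule is induced. The tie-breaking and stopping rules here are simplified assumptions, not the exact Prism or PrismTCS specification.

```python
def prism(rows, target, classes):
    """Toy sketch of Prism-style separate-and-conquer rule induction.
    `rows` are dicts of categorical attribute values plus the target key;
    tie-breaking and stopping rules are simplified assumptions."""
    rules = []
    for cls in classes:
        pool = list(rows)
        while any(r[target] == cls for r in pool):
            covered, conds = list(pool), {}
            # grow one rule: repeatedly add the attribute-value term with
            # the highest probability of the target class on covered rows
            while covered and any(r[target] != cls for r in covered):
                best, best_p = None, -1.0
                for r in covered:
                    for attr, val in r.items():
                        if attr == target or attr in conds:
                            continue
                        subset = [s for s in covered if s[attr] == val]
                        p = sum(s[target] == cls for s in subset) / len(subset)
                        if p > best_p:
                            best, best_p = (attr, val), p
                if best is None:
                    break                       # no terms left to add
                conds[best[0]] = best[1]
                covered = [s for s in covered if s[best[0]] == best[1]]
            rules.append((conds, cls))
            # separate: drop the rows this rule covers, then conquer the rest
            pool = [r for r in pool
                    if not all(r.get(a) == v for a, v in conds.items())]
    return rules
```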

Relevance: 100.00%

Abstract:

In a world where data is captured on a large scale, the major challenge for data mining algorithms is being able to scale up to large datasets. There are two main approaches to inducing classification rules: one is the divide-and-conquer approach, also known as the top-down induction of decision trees; the other is the separate-and-conquer approach. A considerable amount of work has been done on scaling up the divide-and-conquer approach; however, very little work has been conducted on scaling up the separate-and-conquer approach. In this work we describe a parallel framework that allows the parallelisation of a certain family of separate-and-conquer algorithms, the Prism family. Parallelisation helps the Prism family of algorithms to harvest additional computer resources in a network of computers in order to make the induction of classification rules scale better on large datasets. Our framework also incorporates a pre-pruning facility for parallel Prism algorithms.
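One natural data-parallel decomposition, sketched below with Python's `multiprocessing` module, is to score candidate attribute-value terms concurrently during rule growth; this is an illustrative assumption about where the parallelism could sit, not a description of the framework itself.

```python
from multiprocessing import Pool

def _score_term(args):
    """Score one attribute-value term (the per-worker step). A real
    framework would distribute the data once rather than per task."""
    rows, attr, val, target, cls = args
    subset = [r for r in rows if r[attr] == val]
    hits = sum(r[target] == cls for r in subset)
    return (attr, val, hits, len(subset))

def best_term_parallel(rows, candidates, target, cls, workers=4):
    """Illustrative data-parallel term evaluation for a Prism-style
    learner: candidate terms are scored concurrently across processes."""
    tasks = [(rows, a, v, target, cls) for a, v in candidates]
    with Pool(workers) as p:
        scored = p.map(_score_term, tasks)
    # pick the term with the highest class probability among covered rows
    return max(scored, key=lambda t: t[2] / t[3] if t[3] else -1.0)
```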

Relevance: 100.00%

Abstract:

Optimal estimation (OE) improves sea surface temperature (SST) estimated from satellite infrared imagery in the "split-window", in comparison to SST retrieved using the usual multi-channel (MCSST) or non-linear (NLSST) estimators. This is demonstrated using three months of observations from the Advanced Very High Resolution Radiometer (AVHRR) on the first Meteorological Operational satellite (Metop-A), matched in time and space to drifter SSTs collected on the Global Telecommunication System. There are 32,175 matches. The prior for the OE is forecast atmospheric fields from the Météo-France global numerical weather prediction system (ARPEGE), the forward model is RTTOV 8.7, and a reduced state vector comprising SST and total column water vapour (TCWV) is used. Operational NLSST coefficients give a mean and standard deviation (SD) of the difference between satellite and drifter SSTs of 0.00 and 0.72 K. The "best possible" NLSST and MCSST coefficients, empirically regressed on the data themselves, give zero mean difference and SDs of 0.66 K and 0.73 K respectively. Significant contributions to the global SD arise from regional systematic errors (biases) of several tenths of a kelvin in the NLSST. With no bias corrections to either the prior fields or the forward model, the SSTs retrieved by OE minus drifter SSTs have a mean and SD of −0.16 and 0.49 K respectively. The reduction in SD below the "best possible" regression results shows that OE deals with structural limitations of the NLSST and MCSST algorithms. Using simple empirical bias corrections to improve the OE, retrieved minus drifter SSTs are obtained with a mean and SD of −0.06 and 0.44 K respectively. Regional biases are greatly reduced, such that the absolute bias is less than 0.1 K in 61% of 10°-latitude by 30°-longitude cells. OE also allows a statistic of the agreement between modelled and measured brightness temperatures to be calculated. We show that this measure is more efficient than the current system of confidence levels at identifying reliable retrievals, and that the best 75% of satellite SSTs by this measure have negligible bias and a retrieval error of order 0.25 K.
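The retrieval step of such a scheme is the standard optimal estimation update; the sketch below shows a single linear iteration for a reduced state vector such as [SST, TCWV], with the function and variable names being placeholders rather than the operational code.

```python
import numpy as np

def oe_retrieval(y, x_a, forward, jacobian, S_a, S_e):
    """Single-step optimal estimation update (standard linear form).

    y: measured brightness temperatures; x_a: prior state (e.g. [SST, TCWV]);
    forward(x): modelled brightness temperatures (e.g. from a fast radiative
    transfer model such as RTTOV); jacobian(x): d(forward)/dx;
    S_a, S_e: prior and observation error covariances.
    The single-iteration linearisation is an assumption for brevity."""
    K = jacobian(x_a)
    Se_inv = np.linalg.inv(S_e)
    Sa_inv = np.linalg.inv(S_a)
    S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)     # posterior covariance
    x_hat = x_a + S_hat @ K.T @ Se_inv @ (y - forward(x_a))
    return x_hat, S_hat
```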

Relevance: 100.00%

Abstract:

Several continuous observational datasets of Arctic sea-ice concentration are currently available, covering the period since the advent of routine satellite observations. We report on a comparison of three sea-ice concentration datasets: the National Ice Center charts and two passive microwave radiometer datasets derived using different approaches, the NASA Team and Bootstrap algorithms. Empirical orthogonal function (EOF) analyses were employed to compare modes of variability and their consistency between the datasets. The analysis was motivated by the need for a reliable, realistic sea-ice climatology for use in climate model simulations, for which both the variability and the absolute values of extent and concentration are important. We found that, while there are significant discrepancies in absolute concentrations, the major modes of variability derived from all records were essentially the same.
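EOF analysis of this kind reduces, in practice, to a singular value decomposition of the space-time anomaly matrix; the sketch below assumes the caller has already removed the time mean and applied any area weighting.

```python
import numpy as np

def eof_analysis(field, n_modes=3):
    """Minimal EOF sketch via SVD. `field` is a (time, space) matrix of
    anomalies (time mean removed, area weighting applied by the caller)."""
    u, s, vt = np.linalg.svd(field, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)        # variance explained per mode
    eofs = vt[:n_modes]                   # spatial patterns
    pcs = u[:, :n_modes] * s[:n_modes]    # principal component time series
    return eofs, pcs, var_frac[:n_modes]
```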

Relevance: 100.00%

Abstract:

The warm conveyor belt (WCB) of an extratropical cyclone generally splits into two branches. One branch (WCB1) turns anticyclonically into the downstream upper-level tropospheric ridge, while the second branch (WCB2) wraps cyclonically around the cyclone centre. Here, the WCB split in a typical North Atlantic cold-season cyclone is analysed using two numerical models: the Met Office Unified Model and the COSMO model. The WCB flow is defined using off-line trajectory analysis. The two models represent the WCB split consistently. The split occurs early in the evolution of the WCB, with WCB1 experiencing maximum ascent at lower latitudes and with higher moisture content than WCB2. WCB1 ascends abruptly along the cold front, where the resolved ascent rates are greatest and there is also line convection. In contrast, WCB2 remains at lower levels for longer before undergoing saturated large-scale ascent over the system's warm front. The greater moisture in the WCB1 inflow results in a greater net potential temperature change from latent heat release, which determines the final isentropic level of each branch. WCB1 also exhibits lower outflow potential vorticity values than WCB2. Complementary diagnostics in the two models are utilised to study the influence of individual diabatic processes on the WCB. Total diabatic heating rates along the WCB branches are comparable in the two models, with microphysical processes in the large-scale cloud schemes being the major contributor to this heating. However, the different convective parameterisation schemes used by the models make significantly different contributions to the total heating. These results have implications for studies of the influence of the WCB outflow on Rossby wave evolution and breaking. The key aspects are the net potential temperature change and the isentropic level of the outflow, which together influence the relative mass going into each WCB branch and the associated negative PV anomalies in the tropopause-level flow.
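Off-line trajectory analysis of the kind used here amounts to integrating parcel positions through stored wind fields; the toy sketch below uses forward-Euler steps and a steady wind function, which is a deliberate simplification of operational trajectory tools.

```python
import numpy as np

def offline_trajectories(wind, x0, dt, n_steps):
    """Toy off-line trajectory bookkeeping: forward-Euler advection of
    parcels through a steady wind field `wind(p) -> velocity`. Operational
    WCB studies use higher-order integration and time-dependent model
    winds; this only sketches the idea."""
    traj = np.empty((n_steps + 1, len(x0), 3))
    traj[0] = x0                              # x0: (n_parcels, 3) positions
    for k in range(n_steps):
        vel = np.array([wind(p) for p in traj[k]])
        traj[k + 1] = traj[k] + dt * vel      # advance every parcel one step
    return traj
```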

Relevance: 100.00%

Abstract:

Often, firms have no information on the specification of the true demand model they face. It is, however, a well-established fact that they may use trial-and-error algorithms in order to learn how to make optimal decisions. Using experimental methods, we identify a property of the information on past actions which helps the seller of two asymmetric demand substitutes to reach the optimal prices faster and more precisely. The property concerns the possibility of disaggregating changes in each product's demand into client exit/entry and shifts from one product to the other.

Relevance: 100.00%

Abstract:

The Bollène-2002 Experiment was aimed at developing the use of a radar volume-scanning strategy for radar rainfall estimation in the mountainous regions of France. A developmental radar processing system, called Traitements Régionalisés et Adaptatifs de Données Radar pour l'Hydrologie (Regionalized and Adaptive Radar Data Processing for Hydrological Applications), has been built, and several algorithms were produced specifically for this project. These algorithms include 1) a clutter identification technique based on the pulse-to-pulse variability of reflectivity Z for noncoherent radar, 2) a coupled procedure for determining a rain partition between convective and widespread rainfall R and the associated normalized vertical profiles of reflectivity, and 3) a method for calculating reflectivity at ground level from reflectivities measured aloft. Several radar processing strategies, including nonadaptive, time-adaptive, and space–time-adaptive variants, have been implemented to assess the performance of these new algorithms. Reference rainfall data were derived from a careful analysis of rain gauge datasets furnished by the Cévennes–Vivarais Mediterranean Hydrometeorological Observatory. The assessment over five intense and long-lasting Mediterranean rain events has proven that good quantitative precipitation estimates can be obtained from radar data alone within a 100-km range by using well-sited, well-maintained radar systems and sophisticated, physically based data-processing systems. The basic requirements entail performing accurate electronic calibration and stability verification, determining the radar detection domain, achieving efficient clutter elimination, and capturing the vertical structure(s) of reflectivity for the target event. Radar performance was shown to depend on the type of rainfall, with better results obtained for deep convective rain systems (Nash coefficients of roughly 0.90 for point radar–rain gauge comparisons at the event time step) than for shallow convective and frontal rain systems (Nash coefficients in the 0.6–0.8 range). In comparison with the time-adaptive strategies, the space–time-adaptive strategy yields a very significant reduction in the radar–rain gauge bias while the level of scatter remains essentially unchanged. Because the Z–R relationships have not been optimized in this study, these results are attributed to an improved processing of the spatial variations in the vertical profile of reflectivity. The two main recommendations for future work consist of adapting the rain separation method for radar network operations and documenting Z–R relationships conditional on rainfall type.
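Since the study leaves the Z–R relationships unoptimised, the conversion from reflectivity to rain rate takes the standard power-law form; the sketch below uses Marshall–Palmer coefficients purely as placeholder values.

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Standard Z-R conversion R = (Z/a)**(1/b). The Marshall-Palmer
    coefficients a=200, b=1.6 are placeholder values, since the paper
    deliberately does not optimise the Z-R relationship."""
    z_lin = 10.0 ** (np.asarray(dbz) / 10.0)   # dBZ -> linear mm^6 m^-3
    return (z_lin / a) ** (1.0 / b)            # rain rate in mm h^-1
```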

Relevance: 100.00%

Abstract:

Myostatin plays a fundamental role in regulating the size of skeletal muscles. To date, only a single myostatin gene and no splice variants have been identified in mammals. Here we describe the splicing of a cryptic intron that removes the coding sequence for the receptor-binding moiety of sheep myostatin. The deduced polypeptide sequence of the myostatin splice variant (MSV) contains a 256-amino-acid N-terminal domain, which is common to myostatin, and a unique C-terminus of 65 amino acids. Western immunoblotting demonstrated that MSV mRNA is translated into protein, which is present in skeletal muscles. To determine the biological role of MSV, we developed an MSV-over-expressing C2C12 myoblast line and showed that it proliferated faster than the control line, in association with an increased abundance of the CDK2/Cyclin E complex in the nucleus. Recombinant protein made for the novel C-terminus of MSV also stimulated myoblast proliferation and bound to myostatin with high affinity, as determined by surface plasmon resonance assay. We therefore postulated that MSV functions as a binding protein and antagonist of myostatin. Consistent with our postulate, myostatin protein was co-immunoprecipitated from skeletal muscle extracts with an MSV-specific antibody. MSV over-expression in C2C12 myoblasts blocked myostatin-induced Smad2/3-dependent signaling, confirming that MSV antagonizes the canonical myostatin pathway. Furthermore, MSV over-expression increased the abundance of MyoD, Myogenin and MRF4 proteins (P < 0.05), which indicates that MSV stimulates myogenesis through the induction of myogenic regulatory factors. To help elucidate a possible role in vivo, we observed that MSV protein was more abundant during early post-natal muscle development, while myostatin remained unchanged, suggesting that MSV may promote the growth of skeletal muscles. We conclude that MSV represents a unique example of intra-genic regulation in which a splice variant directly antagonizes the biological activity of the canonical gene product.