762 results for Viterbi-based algorithm


Relevance: 30.00%

Abstract:

Accurate observations of cloud microphysical properties are needed for evaluating and improving the representation of cloud processes in climate models and for better estimates of the Earth's radiative budget. However, large differences are found in current cloud products retrieved from ground-based remote sensing measurements using various retrieval algorithms. Understanding these differences is an important step toward addressing uncertainties in the cloud retrievals. In this study, an in-depth analysis of nine existing ground-based cloud retrievals using ARM remote sensing measurements is carried out. We place emphasis on boundary-layer overcast clouds and high-level ice clouds, which are the focus of many current retrieval development efforts due to their radiative importance and relatively simple structure. Large systematic discrepancies in cloud microphysical properties are found in these two types of clouds among the nine cloud retrieval products, particularly for the cloud liquid and ice particle effective radius. Notably, the differences among some retrieval products are even larger than the uncertainties prescribed by the retrieval algorithm developers. It is shown that most of these large differences have their roots in the retrieval theoretical bases and assumptions, as well as in the input and constraint parameters. This study suggests the need to further validate current retrieval theories and assumptions, and even to develop new retrieval algorithms, with more observations under different cloud regimes.

Relevance: 30.00%

Abstract:

Advances in hardware and software in the past decade have made it possible to capture, record and process fast data streams at a large scale. The research area of data stream mining has emerged as a consequence of these advances in order to cope with the real-time analysis of potentially large and changing data streams. Examples of data streams include Google searches, credit card transactions, telemetric data and data from continuous chemical production processes. In some cases the data can be processed in batches by traditional data mining approaches. However, some applications require the data to be analysed in real time as soon as it is captured, for example if the data stream is infinite, fast-changing, or simply too large to be stored. One of the most important data mining techniques on data streams is classification, which involves training the classifier on the data stream in real time and adapting it to concept drift. Most data stream classifiers are based on decision trees. However, it is well known in the data mining community that there is no single optimal algorithm: an algorithm may work well on one or several datasets but badly on others. This paper introduces eRules, a new rule-based adaptive classifier for data streams built on an evolving set of rules. eRules induces a set of rules that is constantly evaluated and adapted to changes in the data stream by adding new rules and removing old ones. It differs from the more popular decision-tree-based classifiers in that it tends to leave data instances unclassified rather than force a classification that could be wrong. The ongoing development of eRules aims to improve its accuracy further through dynamic parameter setting, which will also address the problem of changing feature domain values.
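
As an informal illustration of this abstaining behaviour, the sketch below (not the authors' code; the rules and data are hypothetical) classifies an instance with the first matching rule and otherwise returns no label at all:

```python
# Minimal sketch of an abstaining rule-based classifier in the spirit of
# eRules (not the authors' code; rules and data are hypothetical).
# A rule is a (conditions, label) pair, with conditions mapping a
# feature index to a required value.
rules = [
    ({0: "sunny", 1: "high"}, "no-play"),
    ({0: "rain"}, "play"),
]

def classify(instance, rules):
    """Return the label of the first matching rule, or None (abstain)."""
    for conditions, label in rules:
        if all(instance[f] == v for f, v in conditions.items()):
            return label
    return None  # leaving the instance unclassified beats a wrong guess

print(classify(["sunny", "high"], rules))    # -> no-play
print(classify(["overcast", "low"], rules))  # -> None (abstained)
```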

Relevance: 30.00%

Abstract:

This contribution introduces a new digital predistorter to compensate for the serious distortions caused by high power amplifiers (HPAs) with memory that exhibit output saturation characteristics. The proposed design is based on direct learning using a data-driven B-spline Wiener system modeling approach. The nonlinear HPA with memory is first identified based on the B-spline neural network model using the Gauss-Newton algorithm, which incorporates the efficient De Boor algorithm with both B-spline curve and first-derivative recursions. The estimated Wiener HPA model is then used to design the Hammerstein predistorter. In particular, the inverse of the amplitude distortion of the HPA's static nonlinearity can be calculated effectively using the Newton-Raphson formula based on the inverse of the De Boor algorithm. A major advantage of this approach is that both the Wiener HPA identification and the Hammerstein predistorter inverse can be achieved very efficiently and accurately. Simulation results are presented to demonstrate the effectiveness of this novel digital predistorter design.
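
The Newton-Raphson inversion step can be illustrated generically. In the sketch below, the saturating function f is a stand-in for the identified static nonlinearity (the paper evaluates its model with B-spline/De Boor recursions instead):

```python
# Generic Newton-Raphson inversion of a saturating static nonlinearity
# (the function f below is a stand-in, not the paper's B-spline model).

def f(a):        # example amplitude distortion with output saturation
    return a / (1.0 + a * a) ** 0.5

def f_prime(a):  # its analytic first derivative
    return 1.0 / (1.0 + a * a) ** 1.5

def invert(y, a0=0.5, tol=1e-10, max_iter=50):
    """Solve f(a) = y for a by Newton-Raphson iteration."""
    a = a0
    for _ in range(max_iter):
        step = (f(a) - y) / f_prime(a)
        a -= step
        if abs(step) < tol:
            break
    return a

print(invert(f(0.8)))  # recovers ~0.8
```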

Relevance: 30.00%

Abstract:

This paper analyzes and studies a pervasive computing system in a mining environment for tracking people based on RFID (radio frequency identification) technology. We first explain the RFID fundamentals and the LANDMARC (location identification based on dynamic active RFID calibration) algorithm; we then present the proposed algorithm, which combines LANDMARC with the trilateration technique to estimate the coordinates of the people inside the mine; next we generalize a pervasive computing system that can be implemented in mining; and finally we present the results and conclusions.
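
For reference, the trilateration step can be written as a small least-squares problem. The sketch below (our illustration, not the paper's implementation) linearises the range equations and solves for the tag position:

```python
# Least-squares trilateration sketch (our illustration, not the paper's
# implementation): given reader positions and estimated tag-to-reader
# distances, linearise the range equations and solve for the tag position.
import numpy as np

def trilaterate(readers, dists):
    """readers: (n, 2) known positions; dists: n estimated ranges."""
    x, y = readers[:, 0], readers[:, 1]
    # Subtracting the first range equation from the others removes the
    # quadratic terms, leaving a linear system A @ [px, py] = b.
    A = 2 * np.column_stack((x[1:] - x[0], y[1:] - y[0]))
    b = (dists[0] ** 2 - dists[1:] ** 2
         + x[1:] ** 2 - x[0] ** 2 + y[1:] ** 2 - y[0] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

readers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(readers - true_pos, axis=1)
print(trilaterate(readers, ranges))  # ~[3. 4.]
```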

Relevance: 30.00%

Abstract:

With the fast development of the Internet, wireless communications and semiconductor devices, home networking has received significant attention. Consumer products can collect and transmit various types of data in the home environment. Typical consumer sensors are often equipped with tiny, irreplaceable batteries, and it is therefore of the utmost importance to design energy-efficient algorithms to prolong the home network lifetime and reduce the number of devices going to landfill. Sink mobility is an important technique for improving home network performance, including energy consumption, lifetime and end-to-end delay; it can also largely mitigate the hot spots near the sink node. Since the selection of an optimal moving trajectory for the sink node(s) is an NP-hard problem, jointly optimizing routing algorithms with the mobile sink moving strategy is a significant and challenging research issue. The influence of multiple static sink nodes on energy consumption under different network scales is first studied, and an Energy-efficient Multi-sink Clustering Algorithm (EMCA) is proposed and tested. Then, the influence of mobile sink velocity, position and number on network performance is studied and a Mobile-sink based Energy-efficient Clustering Algorithm (MECA) is proposed. Simulation results validate the performance of the two proposed algorithms, which can be deployed in a consumer home network environment.

Relevance: 30.00%

Abstract:

The planning of semi-autonomous vehicles in traffic scenarios is a relatively new problem that contributes towards the goal of making road travel free of human drivers. An algorithm needs to ensure optimal real-time planning of multiple vehicles (moving in either direction along a road) in the presence of a complex obstacle network. Unlike other approaches, here we assume that speed lanes are not present and that separate lanes do not need to be maintained for inbound and outbound traffic. Our basic hypothesis is to carry out the planning task so as to ensure that each vehicle maintains a sufficient distance from all other vehicles, obstacles and road boundaries. We present a 4-layer planning algorithm that consists of road selection (for selecting the individual roads of traversal to reach the goal), pathway selection (a strategy to avoid and/or overtake obstacles, road diversions and other blockages), pathway distribution (to select the position of a vehicle at every instant of time within a pathway), and trajectory generation (for generating a curve smooth enough to allow the maximum possible speed). Cooperation between vehicles is handled separately at the different levels, the aim being to maximize the separation between vehicles. Simulation results exhibit smooth, efficient and safe driving of vehicles in multiple scenarios, along with typical vehicle behaviours including following and overtaking.

Relevance: 30.00%

Abstract:

In this paper, we present comprehensive ground-based and space-based in situ geosynchronous observations of a substorm expansion phase onset on 1 October 2005. The Double Star TC-2 and GOES-12 spacecraft were both located within the substorm current wedge during the substorm expansion phase onset, which occurred over the Canadian sector. By extending the AWESOME timing algorithm into space, we find that the onset of ULF waves in space was observed after the onset on the ground. Furthermore, a population of low-energy field-aligned electrons was detected by the TC-2 PEACE instrument contemporaneously with the ULF waves in space. These electrons appear to be associated with an enhancement of field-aligned Poynting flux into the ionosphere that is large enough to power visible auroral displays. The observations are most consistent with a near-Earth initiation of substorm expansion phase onset, such as the Near-Geosynchronous Onset (NGO) substorm scenario. A lack of data from further downtail, however, means that other mechanisms cannot be ruled out.

Relevance: 30.00%

Abstract:

Using a discrete wavelet transform with a Meyer wavelet basis, we present a new quantitative algorithm for determining the onset time of Pi1 and Pi2 ULF waves in the nightside ionosphere with ∼20- to 40-s resolution at substorm expansion phase onset. We validate the algorithm by comparing both the ULF wave onset time and location to the optical onset determined by the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE)–Far Ultraviolet Imager (FUV) instrument. In each of the six events analyzed (five substorm onsets and one pseudobreakup), the ULF onset is observed at a station closely conjugate to the optical onset prior to the global optical onset observed by IMAGE. The observed ULF onset times expand both latitudinally and longitudinally away from an epicenter of ULF wave power in the ionosphere. We further discuss the utility of the algorithm for diagnosing pseudobreakups and the relationship of the ULF onset epicenter to the meridians of the elements of the substorm current wedge. The importance of the technique for establishing the causal sequence of events at substorm onset, especially in support of the multisatellite Time History of Events and Macroscale Interactions During Substorms (THEMIS) mission, is also described.
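
The core of such a detector can be sketched in a few lines. The example below (our illustration, not the authors' code) uses PyWavelets' discrete Meyer wavelet on a synthetic magnetometer trace; the sampling rate, decomposition level and threshold are all assumed for illustration:

```python
# Wavelet onset detection sketch (our illustration, not the authors'
# code): decompose a synthetic magnetometer trace with the discrete
# Meyer wavelet and flag onset where detail-band power first exceeds a
# quiet-time threshold. Sampling rate, level and threshold are assumed.
import numpy as np
import pywt

fs = 1.0                                  # 1 Hz sampling (assumed)
t = np.arange(4096.0)
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.1, t.size)
trace[2048:] += np.sin(2 * np.pi * t[2048:] / 100.0)  # 100-s Pi2-like wave

coeffs = pywt.wavedec(trace, 'dmey', level=6)
d6 = coeffs[1]                            # detail band, ~64-128 s periods
power = d6 ** 2
quiet = power[: d6.size // 4]             # assume the first quarter is quiet
onset_idx = np.argmax(power > 10 * quiet.mean())
print("onset near t = %.0f s" % (onset_idx * trace.size / d6.size / fs))
# prints an estimate near the true onset at t = 2048 s, within the
# coarse time resolution of the level-6 band
```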

Relevance: 30.00%

Abstract:

In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained, comprising a considerably smaller number of parameters than those generated by the conventional PNN algorithm. Three benchmark examples are elaborated, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparisons with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
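
The OLS selection step can be illustrated generically. The sketch below (a generic forward-selection routine, not the paper's exact PNN procedure) greedily picks the candidate regressor giving the largest error-reduction ratio and orthogonalises the remaining candidates against it:

```python
# Generic OLS-style forward selection (not the paper's exact PNN
# procedure): pick the regressor with the largest error-reduction ratio,
# then orthogonalise the output and remaining candidates against it.
import numpy as np

def ols_select(P, y, n_terms):
    """P: (N, M) candidate regressors; returns indices of chosen columns."""
    P = P.astype(float).copy()
    residual = y.astype(float).copy()
    chosen = []
    for _ in range(n_terms):
        energy = (P * P).sum(axis=0)
        energy[energy < 1e-12] = np.inf          # skip exhausted columns
        err_reduction = (P.T @ residual) ** 2 / energy
        best = int(np.argmax(err_reduction))
        chosen.append(best)
        q = P[:, best] / np.linalg.norm(P[:, best])
        residual -= q * (q @ residual)           # deflate the output
        P -= np.outer(q, q @ P)                  # orthogonalise the rest
    return chosen

rng = np.random.default_rng(1)
P = rng.normal(size=(200, 10))
y = 2 * P[:, 3] - P[:, 7] + 0.05 * rng.normal(size=200)
print(ols_select(P, y, 2))  # -> [3, 7]: the truly significant terms
```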

Relevance: 30.00%

Abstract:

Prism is a modular classification rule generation method based on the 'separate and conquer' approach, an alternative to the rule induction approach using decision trees, also known as 'divide and conquer'. Prism often achieves a similar level of classification accuracy to decision trees, but tends to produce a more compact, noise-tolerant set of classification rules. As with other classification rule generation methods, a principal problem arising with Prism is that of overfitting due to over-specialised rules; in addition, over-specialised rules increase the associated computational complexity. These problems can be solved by pruning methods. For the Prism method, two pruning algorithms have recently been introduced for reducing overfitting of classification rules: J-pruning and Jmax-pruning. Both algorithms are based on the J-measure, an information-theoretic means of quantifying the theoretical information content of a rule. Jmax-pruning attempts to exploit the J-measure to its full potential, because J-pruning does not actually achieve this and may even lead to underfitting. A series of experiments has shown that Jmax-pruning may outperform J-pruning in reducing overfitting. However, Jmax-pruning is computationally relatively expensive and may also lead to underfitting. This paper reviews the Prism method and the two existing pruning algorithms above. It also proposes a novel pruning algorithm called Jmid-pruning, which is based on the J-measure and reduces overfitting to a similar level as the other two algorithms but is better at avoiding underfitting and unnecessary computational effort. The authors conduct an experimental study of the performance of the Jmid-pruning algorithm in terms of classification accuracy and computational efficiency. The algorithm is also evaluated comparatively with the J-pruning and Jmax-pruning algorithms.
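
For reference, the J-measure of a rule "IF body THEN class" (Smyth and Goodman's formulation, on which all three pruning methods rely) weights the rule's coverage by the cross-entropy between the posterior and prior class probabilities. A direct transcription:

```python
# The J-measure of a rule "IF body THEN class": the rule's coverage
# p(body) times the cross-entropy between the class probability given
# the body and the prior class probability.
import math

def j_measure(p_body, p_class, p_class_given_body):
    def term(post, prior):
        return post * math.log2(post / prior) if post > 0 else 0.0
    j_content = (term(p_class_given_body, p_class)
                 + term(1 - p_class_given_body, 1 - p_class))
    return p_body * j_content

# A rule covering 30% of the data that raises the class probability
# from a 0.5 prior to 0.9 carries about 0.16 bits of information:
print(j_measure(0.3, 0.5, 0.9))
```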

Relevance: 30.00%

Abstract:

We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations. The modified algorithm runs more than 50 times faster on the CELL's Synergistic Processing Elements than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times compared with the original code on the main CPU. Because the radiation code takes more than 60% of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
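
The scheduling idea is independent of the target hardware and can be sketched compactly. The Python stand-in below (illustrative only; the project's code is in C, and the per-column physics here is a placeholder) shows columns being drawn from a shared pool of work by a thread pool:

```python
# Python stand-in for the task-queue/thread-pool scheduling (the project
# itself is in C; the per-column physics below is a placeholder).
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def radiation_column(column):
    """Placeholder for the per-column radiation computation."""
    return float(np.sum(column ** 2))      # hypothetical stand-in physics

columns = [np.random.rand(60) for _ in range(1024)]  # 60-level columns
with ThreadPoolExecutor(max_workers=8) as pool:      # the thread pool
    fluxes = list(pool.map(radiation_column, columns))
print(len(fluxes))  # one result per air column, in submission order
```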

Relevance: 30.00%

Abstract:

This contribution proposes a novel probability density function (PDF) estimation based over-sampling (PDFOS) approach for two-class imbalanced classification problems. The classical Parzen-window kernel function is adopted to estimate the PDF of the positive class. Then, according to the estimated PDF, synthetic instances are generated as additional training data. The essential concept is to re-balance the class distribution of the original imbalanced data set under the principle that the synthetic data samples follow the same statistical properties. Based on the over-sampled training data, the radial basis function (RBF) classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier's structure and the parameters of the RBF kernels are determined using a particle swarm optimisation algorithm based on the criterion of minimising the leave-one-out misclassification rate. The effectiveness of the proposed PDFOS approach is demonstrated by an empirical study on several imbalanced data sets.
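
The over-sampling step has a simple constructive reading: drawing from a Gaussian Parzen-window density estimate amounts to picking a stored positive sample at random and perturbing it with the kernel. The sketch below follows that reading (our illustration, with an assumed bandwidth h; the paper selects the kernel parameters more carefully):

```python
# Parzen-window over-sampling sketch (our reading of the idea, not the
# authors' code): sampling from a Gaussian kernel density estimate of
# the positive class equals picking a stored positive sample at random
# and adding kernel-shaped noise. The bandwidth h is assumed.
import numpy as np

def pdf_oversample(X_pos, n_new, h, seed=0):
    """X_pos: (n, d) positive-class samples; returns (n_new, d) synthetics."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, X_pos.shape[0], size=n_new)
    cov = (h ** 2) * np.cov(X_pos, rowvar=False)  # smoothed sample covariance
    noise = rng.multivariate_normal(np.zeros(X_pos.shape[1]), cov, size=n_new)
    return X_pos[idx] + noise

X_pos = np.random.default_rng(1).normal(size=(40, 2))
synthetic = pdf_oversample(X_pos, n_new=160, h=0.3)
print(synthetic.shape)  # (160, 2): extra positive-class training data
```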

Relevance: 30.00%

Abstract:

This paper introduces a new adaptive nonlinear equalizer relying on a radial basis function (RBF) model, designed based on the minimum bit error rate (MBER) criterion, in the system setting of an intersymbol interference channel plus co-channel interference. Our proposed algorithm is referred to as the on-line mixture of Gaussians estimator aided MBER (OMG-MBER) equalizer. Specifically, a mixture of Gaussians based probability density function (PDF) estimator is used to model the PDF of the decision variable, for which a novel on-line PDF update algorithm is derived to track the incoming data. With the aid of this novel on-line, sample-by-sample updated mixture of Gaussians PDF estimator, our adaptive nonlinear equalizer is capable of updating its parameters sample by sample so as to directly minimise the RBF nonlinear equalizer's achievable bit error rate (BER). The proposed OMG-MBER equalizer significantly outperforms the existing on-line nonlinear MBER equalizer, known as the least bit error rate equalizer, in terms of both convergence speed and achievable BER, as is confirmed in our simulation study.
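
To give a flavour of on-line PDF tracking, the sketch below shows a generic stochastic-EM style sample-by-sample mixture-of-Gaussians update (illustrative only; the paper derives its own update tailored to the MBER criterion):

```python
# Generic stochastic-EM style sample-by-sample mixture-of-Gaussians
# update (illustrative only; the paper derives its own on-line update).
import numpy as np

def mog_update(w, mu, var, x, lr=0.05):
    # Responsibility of each component for the new sample x.
    dens = w * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / (dens.sum() + 1e-300)
    w = w + lr * (r - w)                       # weights drift toward r
    mu = mu + lr * (r / w) * (x - mu)          # means move toward x
    var = var + lr * (r / w) * ((x - mu) ** 2 - var)
    return w / w.sum(), mu, np.maximum(var, 1e-6)

w, mu, var = np.ones(3) / 3, np.array([-1.0, 0.0, 1.0]), np.ones(3)
for x in np.random.default_rng(0).normal(0.7, 0.3, 1000):
    w, mu, var = mog_update(w, mu, var, x)
print(np.round(mu, 2))  # the component means drift toward the data at ~0.7
```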

Relevance: 30.00%

Abstract:

Reinforcing the Low Voltage (LV) distribution network will become essential to ensure that it remains within its operating constraints as demand on the network increases. The deployment of energy storage in the distribution network provides an alternative to conventional reinforcement. This paper presents a control methodology for energy storage to reduce peak demand in a distribution network based on day-ahead demand forecasts and historical demand data. The control methodology pre-processes the forecast data prior to a planning phase to build in resilience to the inevitable errors between the forecasted and actual demand. The algorithm uses no real-time adjustment, so it has an economic advantage over traditional storage control algorithms. Results show that peak demand on a single phase of a feeder can be reduced even when there are differences between the forecasted and the actual demand. In particular, results are presented demonstrating that, when the algorithm is applied to a large number of single-phase demand aggregations, it is possible to identify which of these aggregations are the most suitable candidates for the control methodology.
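
The planning phase can be illustrated with a simple threshold-based peak-shaving rule: pick the lowest shaving threshold whose above-threshold energy still fits in the store, then discharge above it. The sketch below (our illustration with made-up numbers, not the paper's algorithm; recharge scheduling is omitted) shows the idea:

```python
# Threshold-based peak shaving from a day-ahead forecast (our
# illustration with made-up numbers, not the paper's algorithm).
import numpy as np

def plan_storage(forecast, capacity_kwh, dt_h=0.5):
    """Return the planned discharge power (kW) for each interval."""
    # Lower the threshold until the energy above it would exceed the
    # battery capacity, then keep the last feasible threshold.
    for thresh in np.sort(forecast)[::-1]:
        if np.maximum(forecast - thresh, 0).sum() * dt_h > capacity_kwh:
            break
        best = thresh
    return np.maximum(forecast - best, 0)  # discharge above the threshold

forecast = np.array([2, 2, 3, 5, 8, 9, 7, 4, 3, 2], float)  # kW, half-hourly
print(plan_storage(forecast, capacity_kwh=2.0))  # shaves the 9 kW peak
```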

Relevance: 30.00%

Abstract:

The current state of the art in the planning and coordination of autonomous vehicles is based upon the presence of speed lanes. In a traffic scenario where there is a large diversity between vehicles, the removal of speed lanes can generate a significantly higher traffic bandwidth. We consider vehicle navigation in such unorganized traffic. An evolutionary trajectory planning technique has the advantage of making driving efficient and safe; however, it must also overcome the hurdle of computational cost. In this paper, we propose a real-time genetic algorithm with Bezier curves for trajectory planning. The main contribution is the integration of vehicle-following and overtaking behaviour for general traffic as heuristics for the coordination between vehicles. The resulting coordination strategy is fast and near-optimal. As the vehicles move, uncertainties may arise; these are constantly adapted to, and may even lead to the cancellation of an overtaking procedure or the initiation of one. Higher-level planning is performed by Dijkstra's algorithm, which indicates the route to be followed by the vehicle in a road network. Re-planning is carried out when a road blockage or obstacle is detected. Experimental results confirm the success of the algorithm subject to optimal high- and low-level planning, re-planning and overtaking.
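
The trajectories being optimised are ordinary Bezier curves, which the genetic algorithm parameterises through their control points. A minimal evaluator using De Casteljau's algorithm (our illustration; the control points below are hypothetical):

```python
# Minimal Bezier evaluator via De Casteljau's algorithm (our
# illustration; the control points below are hypothetical).
import numpy as np

def bezier(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1]."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]  # repeated interpolation
    return pts[0]

# A candidate overtaking trajectory: start, two free points, goal. A GA
# would treat the interior control points as its decision variables.
ctrl = [(0, 0), (10, 0), (15, 3), (30, 3)]
path = np.array([bezier(ctrl, u) for u in np.linspace(0, 1, 50)])
print(path[0], path[-1])  # starts at (0, 0), ends at (30, 3)
```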