994 results for deterministic fractals


Relevance: 10.00%

Abstract:

The Operations Research (OR) community has defined many deterministic manufacturing control problems, mainly focused on scheduling. Well-defined benchmark problems provide a mechanism for communicating the effectiveness of different optimization algorithms. Manufacturing problems in industry, however, are stochastic and complex. Common features of these problems include variable demand, machine- and part-specific breakdown patterns, part- and machine-specific process durations, continuous production, Finished Goods Inventory (FGI) buffers, bottleneck machines and limited production capacity. Discrete Event Simulation (DES) is a commonly used tool for studying manufacturing systems of realistic complexity, yet there are few reports of detail-rich benchmark problems for the simulation optimization community that are as complex as those faced by production managers. This work details an algorithm that can be used to create single- and multi-stage production control problems. The reported software implementation of the algorithm generates text files in eXtensible Markup Language (XML) format that are easily edited and understood, as well as cross-platform compatible. The distribution and acceptance of benchmark problems generated with the algorithm would enable researchers working on simulation and optimization of manufacturing problems to communicate results effectively, to the benefit of the field in general.
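
To make the idea concrete, the sketch below shows the general shape such a generated XML problem file might take, written with Python's standard library. All element and attribute names (ProductionControlProblem, FGIBuffer, mtbf, and so on) are hypothetical illustrations, not the schema of the reported implementation.

```python
# A minimal sketch of generating an XML production control problem file.
# Element/attribute names and numeric values are assumed for illustration.
import xml.etree.ElementTree as ET

def write_benchmark(path, n_stages=2, n_machines=3):
    root = ET.Element("ProductionControlProblem")
    # variable demand, one of the common features listed above
    ET.SubElement(root, "Demand", distribution="normal", mean="100", stdev="15")
    for s in range(n_stages):
        stage = ET.SubElement(root, "Stage", id=str(s))
        ET.SubElement(stage, "FGIBuffer", capacity="200")  # finished goods buffer
        for m in range(n_machines):
            ET.SubElement(stage, "Machine", id=str(m),
                          mtbf="480", mttr="30",   # machine-specific breakdowns
                          processTime="4.5")       # part/machine process duration
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

write_benchmark("benchmark.xml")  # cross-platform, human-editable text output
```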

Relevance: 10.00%

Abstract:

We examine efficient computer implementation of one method of deterministic global optimisation, the cutting angle method. In this method the objective function is approximated from below by a piecewise linear auxiliary function, and the global minimum of the objective function is approximated by the sequence of minima of this auxiliary function. Computing the minima of the auxiliary function is a combinatorial problem, and we show that it can be effectively parallelised. We discuss the improvements made to the serial implementation of the cutting angle method, and ways of distributing the computations across multiple processors on parallel and cluster computers.
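
A much-simplified sketch of the parallelisation idea, under stated assumptions: the objective f is IPH (increasing and positively homogeneous) on the unit simplex, the auxiliary underestimate is h(x) = max_k min_i f(x^k) x_i / x^k_i, and the paper's combinatorial minimisation of h is replaced here by parallel evaluation of h over random simplex samples using a process pool.

```python
# Simplified cutting-angle-style iteration; the true method minimises the
# auxiliary function combinatorially, here we only sample it in parallel.
import numpy as np
from multiprocessing import Pool

def f(x):
    return np.max(x)  # toy IPH objective; simplex minimum is the barycentre

def aux_value(args):
    # h(x) = max_k min_i f(x^k) * x_i / x^k_i, a piecewise linear underestimate
    x, support, fvals = args
    return max(np.min(fv * x / xk) for xk, fv in zip(support, fvals))

def sample_simplex(n, dim, rng):
    p = rng.dirichlet(np.ones(dim), size=n) + 1e-9  # stay off the boundary
    return p / p.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 3
    support = list(sample_simplex(5, dim, rng))      # initial evaluation points
    fvals = [f(x) for x in support]
    for it in range(10):
        candidates = sample_simplex(2000, dim, rng)
        tasks = [(x, support, fvals) for x in candidates]
        with Pool() as pool:                          # parallel h evaluation
            hvals = pool.map(aux_value, tasks, chunksize=200)
        best = candidates[int(np.argmin(hvals))]
        support.append(best)                          # refine the underestimate
        fvals.append(f(best))
        print(it, min(hvals), min(fvals))             # lower bound vs best found
```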

Relevance: 10.00%

Abstract:

Many problems in chemistry depend on the ability to identify the global minimum or maximum of a function. Examples include applications in chemometrics, optimization of reaction or operating conditions, and non-linear least-squares analysis. This paper presents the results of applying a new method of deterministic global optimization, the cutting angle method (CAM), to the prediction of molecular geometries. CAM is shown to be competitive with other global optimization techniques on several benchmark molecular conformation problems. CAM is a general method that can also be applied to other computational problems involving global minima, global maxima or the roots of nonlinear equations.

Relevance: 10.00%

Abstract:

This paper provides mobility estimation and prediction for a GSM network variant that resembles an ad hoc wireless mobile network, in which base stations and users are both mobile. We propose using a Robust Extended Kalman Filter (REKF) as a location, heading and altitude estimator of the mobile user for the next node (mobile base station), in order to improve the connection reliability and bandwidth efficiency of the underlying system. Through analysis we demonstrate that our algorithm can successfully track mobile users with low system complexity, as it requires measurements from only the one or two closest mobile base stations. Further, the technique is robust against system uncertainties due to the inherent deterministic nature of the mobility model. Through simulation, we show the accuracy of our prediction algorithm and the simplicity of its implementation.
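
The sketch below illustrates the filtering machinery in a standard extended Kalman filter for planar tracking from range/bearing measurements at a single base station; the robust variant used in the paper adds protection against bounded model uncertainty that is not reproduced here, and the motion model, noise levels and measurement geometry are all assumptions.

```python
# Standard EKF sketch (not the paper's robust variant) for tracking a mobile
# user's position and velocity from one base station's range/bearing readings.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)   # constant-velocity motion model (assumed)
Q = 0.05 * np.eye(4)                  # process noise covariance (assumed)
R = np.diag([25.0, 0.01])             # range/bearing measurement noise (assumed)

def h(x):
    # nonlinear measurement: range and bearing from a base station at the origin
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

def H_jac(x):
    r2 = x[0]**2 + x[1]**2
    r = np.sqrt(r2)
    return np.array([[x[0]/r, x[1]/r, 0, 0],
                     [-x[1]/r2, x[0]/r2, 0, 0]])

def ekf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    H = H_jac(x)
    y = z - h(x)
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(4) - K @ H) @ P     # update

x, P = np.array([100.0, 50.0, -1.0, 0.5]), 100.0 * np.eye(4)
z = np.array([np.hypot(100, 50) + 3.0, np.arctan2(50, 100) - 0.01])  # noisy
x, P = ekf_step(x, P, z)
```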

Relevance: 10.00%

Abstract:

Provisioning of real-time multimedia sessions over a wireless cellular network poses unique challenges due to frequent handoff and rerouting of a connection. For this reason, wireless networks with cellular architecture require efficient user mobility estimation and prediction. This paper proposes using a robust extended Kalman filter (REKF) as a location, heading and altitude estimator of the mobile user for next-cell prediction, in order to improve the connection reliability and bandwidth efficiency of the underlying system. Through analysis we demonstrate that our algorithm reduces system complexity (compared to the existing approach using pattern matching and a Kalman filter), as it requires only two base station measurements, or only the measurement from the closest base station. Further, the technique is robust against system uncertainties due to the inherent deterministic nature of the mobility model, and is more effective than the standard Kalman filter.

Relevance: 10.00%

Abstract:

IP spoofing is a technique used to gain unauthorized access to computers, whereby the intruder sends messages to a computer with an IP address indicating that the message is coming from a trusted host. It causes serious security problems in the cyber world and is currently widely exploited in information warfare. This paper first introduces the IP spoofing attack through examples, technical issues and attack types. Its countermeasures are then analysed in detail, including authentication and encryption, filtering, and IP traceback. In particular, an IP traceback mechanism, Flexible Deterministic Packet Marking (FDPM), is presented. Since the IP spoofing problem cannot be solved by technology alone but also needs social regulation, the legal issues and economic impact are discussed in the later part of the paper.
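
Of the countermeasures listed, filtering is the simplest to illustrate. The sketch below shows ingress filtering at an edge router, which drops packets whose source address lies outside the customer prefix the interface faces; the prefix and addresses are documentation examples, not values from the paper.

```python
# Ingress filtering sketch: reject spoofed sources at the network edge.
import ipaddress

CUSTOMER_PREFIX = ipaddress.ip_network("192.0.2.0/24")  # assumed example prefix

def ingress_ok(src_ip: str) -> bool:
    """Accept only packets whose source address belongs to the known prefix."""
    return ipaddress.ip_address(src_ip) in CUSTOMER_PREFIX

assert ingress_ok("192.0.2.17")       # legitimate customer source
assert not ingress_ok("203.0.113.9")  # spoofed source, dropped at the edge
```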

Relevance: 10.00%

Abstract:

In this paper, we describe SpeedNet, a GSM network variant which resembles an ad hoc wireless mobile network in which base stations (possibly other vehicles in the network) keep track of the velocities of mobile users (cars). SpeedNet is intended to track mobile users and their speed passively, for both speed policing and traffic control. The speed of the vehicle is controlled in a speed-critical zone by means of an electro-mechanical control system, referred to as VVLS (vehicular velocity limiting system). The VVLS is mounted in the vehicle and responds to command signals generated by the base station. It also determines the next base station for handoff, in order to improve the connection reliability and bandwidth efficiency of the underlying network. A robust extended Kalman filter (REKF) is used as a passive velocity estimator for the mobile user, combined with the widely used proportional-integral controller for speed control. We demonstrate through simulation and analysis that our prediction algorithm can successfully estimate the mobile user's velocity with low system complexity, as it requires measurements from only the two closest mobile base stations, and that it is robust against system uncertainties due to the inherent deterministic nature of the mobility model.
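
A minimal sketch of the proportional-integral speed control loop such a VVLS unit might run once the base station commands a zone limit; the first-order vehicle model, gains and actuator limits are illustrative assumptions rather than the paper's design.

```python
# PI speed control sketch: drive vehicle speed down to a commanded zone limit.
def simulate_pi(v0=30.0, v_limit=16.7, kp=0.8, ki=0.1, dt=0.1, steps=400):
    v, integral = v0, 0.0
    for _ in range(steps):
        error = v_limit - v              # negative while over the limit
        integral += error * dt
        u = kp * error + ki * integral   # commanded acceleration
        u = max(min(u, 2.0), -4.0)       # actuator limits (assumed)
        v += u * dt                      # first-order vehicle model (assumed)
    return v

print(round(simulate_pi(), 2))           # settles near the 16.7 m/s zone limit
```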

Relevance: 10.00%

Abstract:

This paper applies sensor fusion to the localization problem of a mobile user. We propose that the use of direction of arrival (DOA) estimates along with received signal strength measurements can increase the accuracy and robustness of location estimation. DOA estimates alone are incapable of providing multi-dimensional positioning, while signal strength methods are prone to high uncertainty. A Robust Extended Kalman Filter (REKF) is used to derive the state estimate of the mobile user's position and successfully track the mobile user with low system complexity, as it requires measurements from only one base station. Localization of mobile users can therefore be performed at a single base station. Furthermore, the technique is robust against system uncertainties caused by the inherent deterministic nature of the mobility model. Through simulation, we show the accuracy of our prediction algorithm and the simplicity of its implementation.
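
The sketch below shows why the fusion yields a fix at a single station: a log-distance path-loss model converts received signal strength into a (noisy) range, and the DOA supplies the direction. The path-loss parameters are assumptions; the paper then refines such raw fixes with the REKF.

```python
# Single-base-station position fix from fused RSS (range) and DOA (bearing).
import numpy as np

P0, N_EXP = -40.0, 3.0   # dBm at 1 m and path-loss exponent (assumed values)

def rss_to_range(rss_dbm):
    # invert the log-distance path-loss model: rss = P0 - 10*n*log10(d)
    return 10 ** ((P0 - rss_dbm) / (10 * N_EXP))

def position_fix(base_xy, rss_dbm, doa_rad):
    r = rss_to_range(rss_dbm)
    return np.asarray(base_xy) + r * np.array([np.cos(doa_rad), np.sin(doa_rad)])

print(position_fix((0.0, 0.0), rss_dbm=-70.0, doa_rad=np.pi / 4))
# -> roughly 10 m from the station along the 45-degree bearing
```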

Relevance: 10.00%

Abstract:

This paper provides a location-estimation-based power control strategy for cellular radio systems via a location-based interference management scheme. Our approach treats the carrier-to-interference ratio as dependent on the transmitter-receiver separation distance, so an accurate estimate of the precise locations allows the power-critical mobile user to control its transmission power accordingly. In this fully distributed algorithm, we propose using a Robust Extended Kalman Filter (REKF) to derive an estimate of the mobile user's closest mobile base station from the user's location, heading and altitude. Our analysis demonstrates that this algorithm can successfully track mobile users with low system complexity, as it requires measurements from only the one or two closest mobile base stations, and hence enables the user to transmit at a rate sufficient for interference management. Our power control algorithm based on this estimate converges to the desired power trajectory. Further, the technique is robust against system uncertainties caused by the inherent deterministic nature of the mobility model. Through simulation, we show the accuracy of our prediction algorithm and the simplicity of its implementation.
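
The convergence claim can be illustrated with the classic distributed power control iteration of the Foschini-Miljanic type, in which each user scales its power by the ratio of target to measured carrier-to-interference ratio (CIR). The gain matrix and target below are assumed examples, and this generic loop stands in for (rather than reproduces) the paper's REKF-driven control law.

```python
# Distributed power control sketch: powers converge to the desired trajectory.
import numpy as np

G = np.array([[1.0, 0.1, 0.1],
              [0.2, 1.0, 0.1],
              [0.1, 0.2, 1.0]])    # link gain matrix (assumed example)
noise, target = 0.01, 2.0          # receiver noise and desired CIR

def cir(p):
    interference = G @ p - np.diag(G) * p + noise
    return np.diag(G) * p / interference

p = np.ones(3)
for _ in range(100):
    p = p * target / cir(p)        # each user needs only its own measured CIR
print(np.round(cir(p), 3))         # -> approximately [2. 2. 2.]
```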

Relevance: 10.00%

Abstract:

In this paper, we present a new approach, called Flexible Deterministic Packet Marking (FDPM), to perform large-scale IP traceback to defend against Distributed Denial of Service (DDoS) attacks. In a DDoS attack the victim host or network is usually attacked by a large number of spoofed IP packets coming from multiple sources. IP traceback is the ability to trace IP packets to their sources without relying on the source address field of the IP header. FDPM provides many flexible features to trace IP packets and can achieve better tracing capability than current IP traceback mechanisms such as Probabilistic Packet Marking (PPM) and Deterministic Packet Marking (DPM). FDPM is flexible in two ways: it can adjust the length of the marking field according to the network protocols deployed, and it can adjust the marking rate according to the load of participating routers. The implementation and evaluation demonstrate that FDPM needs only a moderately small number of packets to complete the traceback process, and can successfully perform large-scale IP traceback, for example tracing up to 110,000 sources in a single incident response. It has a built-in overload prevention mechanism, so the scheme can perform a good traceback process even when it is heavily loaded.
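
The adjustable-length idea can be illustrated as follows: the 32-bit ingress router address is split into fragments sized to whatever marking field the deployed protocols permit, and the victim reassembles the fragments from multiple packets. This is a hypothetical sketch of the principle, not the published FDPM encoding.

```python
# Hypothetical variable-length marking sketch (not the actual FDPM format).
import ipaddress

def make_marks(router_ip: str, field_bits: int):
    addr = int(ipaddress.ip_address(router_ip))
    n_frags = -(-32 // field_bits)            # ceil(32 / field_bits)
    return [(i, (addr >> (i * field_bits)) & ((1 << field_bits) - 1))
            for i in range(n_frags)]          # (offset, fragment) pairs

def reassemble(marks, field_bits: int):
    addr = 0
    for offset, frag in marks:
        addr |= frag << (offset * field_bits)
    return str(ipaddress.ip_address(addr & 0xFFFFFFFF))

marks = make_marks("198.51.100.7", field_bits=8)  # longer field, fewer packets
assert reassemble(marks, 8) == "198.51.100.7"
```

A longer marking field needs fewer packets to reconstruct each source, which is the trade-off a participating router adjusts against the protocols in use and its load.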

Relevance: 10.00%

Abstract:

Distributed Denial of Service (DDoS) attacks have been identified as one of the most serious problems on the Internet today. The aim of DDoS attacks is to prevent legitimate users from accessing desired resources, such as network bandwidth. Hence the immediate task of DDoS defense is to provide as many resources as possible to legitimate users when there is an attack. Unfortunately, most current defense approaches cannot efficiently detect and filter out attack traffic. Our approach is to find network anomalies using a neural network, deploy the system at distributed routers, identify attack packets, and then filter them. The marks in the IP header that are generated by a group of IP traceback schemes, Deterministic Packet Marking (DPM) and Flexible Deterministic Packet Marking (FDPM), assist this process of identifying attack packets. The experimental results show that this approach can be used to defend against both intensive and subtle DDoS attacks, and can capture the characteristic DDoS pattern of traffic converging from multiple sources on a single victim. The results also show that the marks in IP headers enhance the sensitivity and accuracy of detection, thus improving legitimate traffic throughput and reducing attack traffic throughput. The approach can therefore filter DDoS attack traffic precisely and effectively.

Relevance: 10.00%

Abstract:

The molecular geometry, the three-dimensional arrangement of atoms in space, is a major factor determining the properties and reactivity of molecules, biomolecules and macromolecules. Computation of stable molecular conformations can be done by locating minima on the potential energy surface (PES). This is a very challenging global optimization problem because of the extremely large number of shallow local minima and the complicated landscape of the PES. This paper illustrates the mathematical and computational challenges of one important instance of the problem, computation of the molecular geometry of oligopeptides, and proposes the use of the Extended Cutting Angle Method (ECAM) to solve it.

ECAM is a deterministic global optimization technique which computes tight lower bounds on the values of the objective function and fathoms those parts of the domain where the global minimum cannot reside. As with any domain partitioning scheme, its challenge is the extremely large partition of the domain required for accurate lower bounds. We address this challenge by providing an efficient combinatorial algorithm for calculating the lower bounds, and by combining ECAM with a local optimization method while preserving the deterministic character of ECAM.


Relevance: 10.00%

Abstract:

Clustering of multivariate data is a commonly used technique in ecology, and many approaches to clustering are available. The results from a clustering algorithm are uncertain, but few clustering approaches explicitly acknowledge this uncertainty. One exception is Bayesian mixture modelling, which treats all results probabilistically, and allows comparison of multiple plausible classifications of the same data set. We used this method, implemented in the AutoClass program, to classify catchments (watersheds) in the Murray Darling Basin (MDB), Australia, based on their physiographic characteristics (e.g. slope, rainfall, lithology). The most likely classification found nine classes of catchments. Members of each class were aggregated geographically within the MDB. Rainfall and slope were the two most important variables that defined classes. The second-most likely classification was very similar to the first, but had one fewer class. Increasing the nominal uncertainty of continuous data resulted in a most likely classification with five classes, which were again aggregated geographically. Membership probabilities suggested that a small number of cases could be members of either of two classes. Such cases were located on the edges of groups of catchments that belonged to one class, with a group belonging to the second-most likely class adjacent. A comparison of the Bayesian approach to a distance-based deterministic method showed that the Bayesian mixture model produced solutions that were more spatially cohesive and intuitively appealing. The probabilistic presentation of results from the Bayesian classification allows richer interpretation, including decisions on how to treat cases that are intermediate between two or more classes, and whether to consider more than one classification. The explicit consideration and presentation of uncertainty makes this approach useful for ecological investigations, where both data and expectations are often highly uncertain.
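
A generic finite-mixture sketch (using scikit-learn's GaussianMixture rather than AutoClass) shows the key output the abstract relies on: per-case membership probabilities instead of hard labels. The two synthetic features below are illustrative stand-ins for variables such as slope and rainfall.

```python
# Probabilistic clustering sketch: membership probabilities, not hard labels.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([2.0, 900.0], [0.5, 80.0], (40, 2)),   # steep, wet
               rng.normal([0.5, 350.0], [0.2, 60.0], (40, 2))])  # flat, dry

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
probs = gm.predict_proba(X)                    # membership probabilities
borderline = int(np.sum(np.max(probs, axis=1) < 0.9))
print(f"{borderline} catchments could plausibly belong to either class")
```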

Relevance: 10.00%

Abstract:

Data streams are usually generated in an online fashion characterized by huge volume, rapid and unpredictable rates, and fast-changing data characteristics. It has hence been recognized that mining streaming data requires the problem of limited computational resources to be adequately addressed. Since the arrival rate of a data stream can significantly increase and exceed the CPU capacity, the machinery must adapt to this change to guarantee the timeliness of the results. We present an online algorithm to approximate a set of frequent patterns from a sliding window over the underlying data stream, given an a priori CPU capacity. The algorithm automatically detects overload situations and can adaptively shed unprocessed data to guarantee timely results. We prove theoretically, using probabilistic and deterministic techniques, that the error on the output results is bounded within a pre-specified threshold. Empirical results on various datasets also confirm the feasibility of our proposal.
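
A much-reduced sketch of the adaptive idea, under stated assumptions: frequent items stand in for the paper's frequent patterns, the per-tick CPU budget is a fixed constant, and overload is handled by randomly shedding the excess while rescaling the surviving counts by the kept fraction so the estimates remain approximately unbiased.

```python
# Load-shedding sliding-window frequency sketch (items, not full patterns).
from collections import Counter, deque
import random

WINDOW, BUDGET = 10_000, 500          # window size and per-tick CPU budget

window, counts = deque(), Counter()

def process_tick(batch, rng=random.Random(0)):
    keep_frac = min(1.0, BUDGET / max(len(batch), 1))
    kept = [x for x in batch if rng.random() < keep_frac]   # shed the overload
    weight = 1.0 / keep_frac                                # rescale estimates
    for x in kept:
        window.append((x, weight))
        counts[x] += weight
    while len(window) > WINDOW:                             # expire old items
        old, w = window.popleft()
        counts[old] -= w
        if counts[old] <= 0:
            del counts[old]

def frequent(theta):
    total = sum(counts.values())
    return {x for x, c in counts.items() if c >= theta * total}

rnd = random.Random(1)
for _ in range(20):                    # simulate bursty arrivals
    process_tick([rnd.choice("aabbc") for _ in range(rnd.randint(100, 2000))])
print(sorted(frequent(0.25)))          # -> ['a', 'b'], the dominant items
```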

Relevance: 10.00%

Abstract:

A retrospective assessment of exposure to benzene was carried out for a nested case-control study of lympho-haematopoietic cancers, including leukaemia, in the Australian petroleum industry. Each job or task in the industry was assigned a Base Estimate (BE) of exposure derived from task-based personal exposure assessments carried out by the company occupational hygienists. The BEs corresponded to the estimated arithmetic mean exposure to benzene for each job or task and were used in a deterministic algorithm to estimate the exposure of subjects in the study. Nearly all of the data sets underlying the BEs were found to contain some values below the limit of detection (LOD) of the sampling and analytical methods, and some were very heavily censored: up to 95% of the data were below the LOD in some data sets. It was therefore necessary to use a method of calculating the arithmetic mean exposures that took the censored data into account. Three different methods were employed in an attempt to select the most appropriate method for the particular data in the study. A common method is to replace the missing (censored) values with half the detection limit. This method has been recommended for data sets where much of the data are below the limit of detection, or where the data are highly skewed, with a geometric standard deviation of 3 or more. Another method, replacing the censored data with the limit of detection divided by the square root of 2, has been recommended when relatively few data are below the detection limit or when the data are not highly skewed. A third method examined was Cohen's method, which involves mathematical extrapolation of the left-hand tail of the distribution, based on the distribution of the uncensored data, and calculation of the maximum likelihood estimate of the arithmetic mean. When these three methods were applied to the data in this study it was found that the first two simple methods gave similar results in most cases. Cohen's method, on the other hand, gave results that were generally, but not always, higher than those of the simpler methods, and in some cases gave extremely high and even implausible estimates of the mean. It appears that if the data deviate substantially from a simple log-normal distribution, particularly if high outliers are present, then Cohen's method produces erratic and unreliable estimates. After examining these results, together with the distributions and proportions of censored data, it was decided that the half limit of detection method was the most suitable in this particular study.
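
The two substitution methods are easy to state in code; the sketch below computes the arithmetic mean under each of them for an illustrative censored data set (Cohen's maximum-likelihood extrapolation is omitted, and all values here are invented for illustration).

```python
# Arithmetic means from left-censored data via the two substitution methods.
import numpy as np

LOD = 0.5                                    # detection limit (assumed units)
detects = np.array([0.8, 1.2, 0.6, 2.5, 0.7])
n_censored = 12                              # results reported as "< LOD"

def censored_mean(substitute):
    values = np.concatenate([detects, np.full(n_censored, substitute)])
    return values.mean()

print("half-LOD:   ", round(censored_mean(LOD / 2), 3))
print("LOD/sqrt(2):", round(censored_mean(LOD / np.sqrt(2)), 3))
```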