903 results for Synchronization algorithms
The statistic inversion algorithms of water constituents for the Huanghai Sea and the East China Sea
Abstract:
A group of statistical algorithms is proposed for the inversion of the three major components of Case-II waters in the coastal area of the Huanghai Sea and the East China Sea. The algorithms are based on in situ data collected in the spring of 2003 under strict quality assurance according to NASA ocean bio-optical protocols, and they are the first for this area with quantitative confidence estimates. The average relative errors between the inverted and in situ measured concentrations are about 37% for Chl-a and about 25% for total suspended matter (TSM). This preliminary result is quite satisfactory for Case-II waters, although some aspects of the model need further study. The sensitivity to a 5% input error in remote sensing reflectance (Rrs) is also analyzed and shows that the algorithms are quite stable. Except for the Chl-a algorithm, the algorithms differ substantially from Tassan's local SeaWiFS algorithms developed for other waters.
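For illustration, the following is a minimal sketch of the kind of empirical band-ratio regression often used to invert Chl-a from Rrs in coastal waters; the band choices (490 and 555 nm), the coefficients, and the 5% perturbation example are placeholders, not the algorithms or values fitted in this study.

import math

# Hypothetical band-ratio regression for Chl-a (mg/m^3) from remote sensing
# reflectance Rrs; the bands (490 and 555 nm) and coefficients a0, a1 are
# placeholders, NOT the values fitted in the Huanghai/East China Sea study.
def chl_from_rrs(rrs_490, rrs_555, a0=0.3, a1=-2.5):
    ratio = math.log10(rrs_490 / rrs_555)   # blue/green band ratio
    return 10.0 ** (a0 + a1 * ratio)        # empirical power-law inversion

# Example: perturb Rrs by 5% to probe sensitivity, as the abstract describes.
chl = chl_from_rrs(0.004, 0.006)
chl_perturbed = chl_from_rrs(0.004 * 1.05, 0.006)
print(chl, chl_perturbed)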
Abstract:
Early and intermediate vision algorithms, such as smoothing and discontinuity detection, are often implemented on general-purpose serial, and more recently, parallel computers. Special-purpose hardware implementations of low-level vision algorithms may be needed to achieve real-time processing. This memo reviews and analyzes some hardware implementations of low-level vision algorithms. Two types of hardware implementations are considered: the digital signal processing chips of Ruetz (and Broderson) and the analog VLSI circuits of Carver Mead. The advantages and disadvantages of these two approaches for producing a general, real-time vision system are considered.
Abstract:
This thesis investigates a new approach to lattice basis reduction suggested by M. Seysen. Seysen's algorithm attempts to globally reduce a lattice basis, whereas the Lenstra, Lenstra, Lovasz (LLL) family of reduction algorithms concentrates on local reductions. We show that Seysen's algorithm is well suited for reducing certain classes of lattice bases, and often requires much less time in practice than the LLL algorithm. We also demonstrate how Seysen's algorithm for basis reduction may be applied to subset sum problems. Seysen's technique, used in combination with the LLL algorithm, and other heuristics, enables us to solve a much larger class of subset sum problems than was previously possible.
Abstract:
Objective: To develop sedation, pain, and agitation quality measures using process control methodology and evaluate their properties in clinical practice. Design: A Sedation Quality Assessment Tool was developed and validated to capture data for 12-hour periods of nursing care. Domains included pain/discomfort and sedation-agitation behaviors; sedative, analgesic, and neuromuscular blocking drug administration; ventilation status; and conditions potentially justifying deep sedation. Predefined sedation-related adverse events were recorded daily. Using an iterative process, algorithms were developed to describe the proportion of care periods with poor limb relaxation, poor ventilator synchronization, unnecessary deep sedation, agitation, and an overall optimum sedation metric. Proportion charts described processes over time (2-monthly intervals) for each ICU. The numbers of patients treated between sedation-related adverse events were described with G charts. Automated algorithms generated charts for 12 months of sequential data. Mean values for each process were calculated, and variation within and between ICUs was explored qualitatively. Setting: Eight Scottish ICUs over a 12-month period. Patients: Mechanically ventilated patients. Interventions: None. Measurements and Main Results: The Sedation Quality Assessment Tool agitation-sedation domains correlated with the Richmond Agitation-Sedation Scale score (Spearman ρ = 0.75) and were reliable in clinician-clinician (weighted κ = 0.66) and clinician-researcher (κ = 0.82) comparisons. The limb movement domain had fair correlation with the Behavioral Pain Scale (ρ = 0.24) and was reliable in clinician-clinician (κ = 0.58) and clinician-researcher (κ = 0.45) comparisons. Ventilator synchronization correlated with the Behavioral Pain Scale (ρ = 0.54), and reliability in clinician-clinician (κ = 0.29) and clinician-researcher (κ = 0.42) comparisons was fair to moderate. Eight hundred twenty-five patients were enrolled (range, 59-235 across ICUs), providing 12,385 care periods for evaluation (range, 655-3,481 across ICUs). The mean proportion of care periods with each quality metric varied between ICUs: excessive sedation 12-38%; agitation 4-17%; poor limb relaxation 13-21%; poor ventilator synchronization 8-17%; and overall optimum sedation 45-70%. Mean adverse event intervals ranged from 1.5 to 10.3 patients treated. The quality measures appeared relatively stable during the observation period. Conclusions: Process control methodology can be used to simultaneously monitor multiple aspects of pain-sedation-agitation management within ICUs. Variation within and between ICUs could be used as triggers to explore practice variation, improve quality, and monitor this over time.
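For illustration only, here is a minimal sketch of how proportion-chart (p-chart) control limits of the kind described above could be computed for a series of care-period intervals; the function name, example counts, and the three-sigma limit convention are assumptions, not taken from the study's tooling.

import math

# Hypothetical p-chart limits for the proportion of 12-hour care periods flagged
# with a quality issue (e.g., unnecessary deep sedation), one point per interval.
def p_chart_limits(flagged, totals):
    """flagged/totals: counts per interval; returns the centre line and per-interval 3-sigma limits."""
    p_bar = sum(flagged) / sum(totals)               # overall proportion (centre line)
    limits = []
    for n in totals:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)   # binomial standard error for this interval
        limits.append((max(0.0, p_bar - 3 * sigma),  # lower control limit
                       min(1.0, p_bar + 3 * sigma))) # upper control limit
    return p_bar, limits

# Example with made-up counts of flagged care periods per 2-monthly interval.
centre, limits = p_chart_limits([30, 22, 41], [200, 180, 240])
print(centre, limits)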
Abstract:
Neal M J, Boyce D, Rowland J J, Lee M H, and Olivier P L. Robotic grasping by showing: an experimental comparison of two novel algorithms. In Proceedings of IFAC - SICICA'97, pages 345-350, Annecy, France, 1997.
Abstract:
M. Galea and Q. Shen. Simultaneous ant colony optimisation algorithms for learning linguistic fuzzy rules. In A. Abraham, C. Grosan and V. Ramos (Eds.), Swarm Intelligence in Data Mining, pages 75-99.
Abstract:
The Google AdSense Program is a successful internet advertisement program where Google places contextual adverts on third-party websites and shares the resulting revenue with each publisher. Advertisers have budgets and bid on ad slots while publishers set reserve prices for the ad slots on their websites. Following previous modelling efforts, we model the program as a two-sided market with advertisers on one side and publishers on the other. We show a reduction from the Generalised Assignment Problem (GAP) to the problem of computing the revenue-maximising allocation and pricing of publisher slots under a first-price auction. GAP is APX-hard, but a (1-1/e) approximation is known. We compute truthful and revenue-maximising prices and allocation of ad slots to advertisers under a second-price auction. The auctioneer's revenue is within a (1-1/e) factor of the second-price optimal.
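As a toy illustration of the pricing rule the abstract builds on, here is a minimal single-slot second-price auction with a reserve price; the function name and example bids are hypothetical, and the paper's actual mechanism handles budgets and many slots, which this sketch does not.

# Minimal single-slot second-price auction with a reserve price; the function
# name and example bids are hypothetical, and budgets and multi-slot allocation
# are omitted.
def second_price_auction(bids, reserve=0.0):
    """bids: dict advertiser -> bid. Returns (winner, price), or (None, None) if no bid meets the reserve."""
    eligible = {a: b for a, b in bids.items() if b >= reserve}
    if not eligible:
        return None, None
    winner = max(eligible, key=eligible.get)
    others = [b for a, b in eligible.items() if a != winner]
    price = max(others) if others else reserve   # pay the second-highest bid, or the reserve
    return winner, price

print(second_price_auction({"adv1": 3.0, "adv2": 5.0, "adv3": 4.5}, reserve=2.0))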
Abstract:
We consider the general problem of synchronizing the data on two devices using a minimum amount of communication, a core infrastructural requirement for a large variety of distributed systems. Our approach considers the interactive synchronization of prioritized data, where, for example, certain information is more time-sensitive than other information. We propose and analyze a new scheme for efficient priority-based synchronization, which promises benefits over conventional synchronization.
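As a toy illustration of the general idea (not the paper's scheme, which is interactive and communication-efficient), the sketch below reconciles two key-value stores bucket by bucket in decreasing priority so the most time-sensitive data converges first; the names and the bucketing are assumptions.

# Toy priority-ordered synchronization between two key stores: higher-priority
# buckets are reconciled first so time-sensitive data converges earliest. The
# bucketing and full-key exchange are illustrative only; the paper's scheme is
# interactive and communication-efficient, which this sketch is not.
def sync_by_priority(local, remote):
    """local/remote: dict key -> (priority, value). Copies missing keys in both directions."""
    priorities = sorted({p for p, _ in local.values()} |
                        {p for p, _ in remote.values()}, reverse=True)
    for prio in priorities:
        local_keys = {k for k, (p, _) in local.items() if p == prio}
        remote_keys = {k for k, (p, _) in remote.items() if p == prio}
        for k in remote_keys - local_keys:
            local[k] = remote[k]    # pull keys this side is missing
        for k in local_keys - remote_keys:
            remote[k] = local[k]    # push keys the peer is missing

a = {"alert": (9, "fire"), "log1": (1, "x")}
b = {"alert2": (9, "flood"), "log2": (1, "y")}
sync_by_priority(a, b)
print(sorted(a), sorted(b))   # both stores now hold all four keys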
Abstract:
For communication-intensive parallel applications, the maximum degree of concurrency achievable is limited by the communication throughput made available by the network. In previous work [HPS94], we showed experimentally that the performance of certain parallel applications running on a workstation network can be improved significantly if a congestion control protocol is used to enhance network performance. In this paper, we characterize and analyze the communication requirements of a large class of supercomputing applications that fall under the category of fixed-point problems, amenable to solution by parallel iterative methods. This results in a set of interface and architectural features sufficient for the efficient implementation of the applications over a large-scale distributed system. In particular, we propose a direct link between the application and network layer, supporting congestion control actions at both ends. This in turn enhances the system's responsiveness to network congestion, improving performance. Measurements are given showing the efficacy of our scheme to support large-scale parallel computations.
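For context, here is a minimal example of the class of fixed-point computations the abstract targets, a generic Jacobi relaxation whose global convergence test is the communication-intensive step in a distributed setting; the matrix, tolerance, and sequential structure are illustrative, not taken from the paper.

import numpy as np

# Generic Jacobi fixed-point iteration x <- (b - R x) / diag(A), the kind of
# parallel iterative method the abstract's applications rely on.
def jacobi(A, b, tol=1e-8, max_iters=10000):
    D = np.diag(A)                 # diagonal entries
    R = A - np.diagflat(D)         # off-diagonal part
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iters):
        x_new = (b - R @ x) / D    # each component could be updated in parallel
        # The global convergence test is the step that requires communication
        # across all workers in a distributed implementation.
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(jacobi(A, b))   # converges to the solution of A x = b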
Abstract:
This paper investigates the power of genetic algorithms at solving the MAX-CLIQUE problem. We measure the performance of a standard genetic algorithm on an elementary set of problem instances consisting of embedded cliques in random graphs. We indicate the need for improvement, and introduce a new genetic algorithm, the multi-phase annealed GA, which exhibits superior performance on the same problem set. As we scale up the problem size and test on "hard" benchmark instances, we notice a degraded performance in the algorithm caused by premature convergence to local minima. To alleviate this problem, a sequence of modifications is implemented, ranging from changes in input representation to systematic local search. The most recent version, called union GA, incorporates the features of union cross-over, greedy replacement, and diversity enhancement. It shows a marked speed-up in the number of iterations required to find a given solution, as well as some improvement in the clique size found. We discuss issues related to the SIMD implementation of the genetic algorithms on a Thinking Machines CM-5, which was necessitated by the intrinsically high time complexity (O(n³)) of the serial algorithm for computing one iteration. Our preliminary conclusions are: (1) a genetic algorithm needs to be heavily customized to work "well" for the clique problem; (2) a GA is computationally very expensive, and its use is only recommended if it is known to find larger cliques than other algorithms; (3) although our customization effort is bringing forth continued improvements, there is no clear evidence, at this time, that a GA will have better success in circumventing local minima.
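For illustration, a compact genetic-algorithm sketch in the spirit of the union GA described above, combining a union-style crossover, greedy repair to a clique, and greedy replacement; the operators, parameters, and the tiny example graph are assumptions rather than the paper's exact algorithm.

import random

# Illustrative GA for MAX-CLIQUE: union crossover, greedy repair, greedy replacement.
def greedy_repair(vertices, adj):
    """Trim an arbitrary vertex set down to a clique, preferring high-degree vertices."""
    clique = []
    for v in sorted(vertices, key=lambda u: -len(adj[u])):
        if all(v in adj[u] for u in clique):
            clique.append(v)
    return set(clique)

def union_ga(adj, pop_size=30, generations=100, mutation_rate=0.1):
    nodes = list(adj)
    pop = [greedy_repair(random.sample(nodes, max(1, len(nodes) // 4)), adj)
           for _ in range(pop_size)]
    best = max(pop, key=len)
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            p1, p2 = random.sample(pop, 2)
            child = set(p1) | set(p2)                # union crossover
            if random.random() < mutation_rate:
                child.add(random.choice(nodes))      # mutation: inject a random vertex
            offspring.append(greedy_repair(child, adj))
        # Greedy replacement: keep the largest cliques among parents and offspring.
        pop = sorted(pop + offspring, key=len, reverse=True)[:pop_size]
        best = max(best, pop[0], key=len)
    return best

# Tiny example graph as an adjacency dict (vertices 0-3 form a 4-clique).
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}, 4: {0}}
print(union_ga(adj))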
Abstract:
The performance of a randomized version of the subgraph-exclusion algorithm (called Ramsey) for CLIQUE by Boppana and Halldorsson is studied on very large graphs. We compare the performance of this algorithm with the performance of two common heuristic algorithms, the greedy heuristic and a version of simulated annealing. These algorithms are tested on graphs with up to 10,000 vertices on a workstation and graphs as large as 70,000 vertices on a Connection Machine. Our implementations establish the ability to run clique approximation algorithms on very large graphs. We test our implementations on a variety of different graphs. Our conclusions indicate that on randomly generated graphs minor changes to the distribution can cause dramatic changes in the performance of the heuristic algorithms. The Ramsey algorithm, while not as good as the others for the most common distributions, seems more robust and provides a more even overall performance. In general, and especially on deterministically generated graphs, a combination of simulated annealing with either the Ramsey algorithm or the greedy heuristic seems to perform best. This combined algorithm works particularly well on large Keller and Hamming graphs and has a competitive overall performance on the DIMACS benchmark graphs.
Abstract:
The increased diversity of Internet application requirements has spurred recent interests in transport protocols with flexible transmission controls. In window-based congestion control schemes, increase rules determine how to probe available bandwidth, whereas decrease rules determine how to back off when losses due to congestion are detected. The parameterization of these control rules is done so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and loss rate. In this paper, we define a new spectrum of window-based congestion control algorithms that are TCP-friendly as well as TCP-compatible under RED. Contrary to previous memory-less controls, our algorithms utilize history information in their control rules. Our proposed algorithms have two salient features: (1) They enable a wider region of TCP-friendliness, and thus more flexibility in trading off among smoothness, aggressiveness, and responsiveness; and (2) they ensure a faster convergence to fairness under a wide range of system conditions. We demonstrate analytically and through extensive ns simulations the steady-state and transient behaviors of several instances of this new spectrum of algorithms. In particular, SIMD is one instance in which the congestion window is increased super-linearly with time since the detection of the last loss. Compared to recently proposed TCP-friendly AIMD and binomial algorithms, we demonstrate the superiority of SIMD in: (1) adapting to sudden increases in available bandwidth, while maintaining competitive smoothness and responsiveness; and (2) rapidly converging to fairness and efficiency.
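As an illustration of the idea of a history-based, super-linear window increase (not the paper's exact SIMD rules or constants), the sketch below backs off multiplicatively on loss and grows the window roughly quadratically in the number of loss-free round trips since the last loss.

# Sketch of a history-based window control in the spirit of SIMD: multiplicative
# decrease on loss, and a per-RTT increment that grows with the time since the
# last loss, so the window grows super-linearly (roughly quadratically) in that
# time. The exact SIMD rules and constants differ; this only illustrates the idea.
class HistoryBasedWindow:
    def __init__(self, cwnd=10.0, alpha=0.2, beta=0.5):
        self.cwnd = cwnd
        self.alpha = alpha        # scales how quickly the increase accelerates
        self.beta = beta          # multiplicative decrease factor
        self.rtts_since_loss = 0  # the history the control remembers

    def on_rtt_no_loss(self):
        self.rtts_since_loss += 1
        self.cwnd += self.alpha * self.rtts_since_loss   # linear increment -> quadratic growth

    def on_loss(self):
        self.cwnd *= (1 - self.beta)   # back off multiplicatively
        self.rtts_since_loss = 0       # reset the history after a congestion event

w = HistoryBasedWindow()
for _ in range(10):
    w.on_rtt_no_loss()
print(round(w.cwnd, 2))   # grew super-linearly over 10 loss-free RTTs
w.on_loss()
print(round(w.cwnd, 2))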
Abstract:
The increasing diversity of Internet application requirements has spurred recent interest in transport protocols with flexible transmission controls. In window-based congestion control schemes, increase rules determine how to probe available bandwidth, whereas decrease rules determine how to back off when losses due to congestion are detected. The control rules are parameterized so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and loss rate. This paper presents a comprehensive study of a new spectrum of window-based congestion controls, which are TCP-friendly as well as TCP-compatible under RED. Our controls utilize history information in their control rules. By doing so, they improve the transient behavior, compared to recently proposed slowly-responsive congestion controls such as general AIMD and binomial controls. Our controls can achieve better tradeoffs among smoothness, aggressiveness, and responsiveness, and they can achieve faster convergence. We demonstrate analytically and through extensive ns simulations the steady-state and transient behavior of several instances of this new spectrum.