830 results for Computation time delay


Relevance:

80.00%

Publisher:

Abstract:

Bundle adjustment is one of the essential components of the computer vision toolbox. This paper revisits the resection-intersection approach, which has previously been shown to have inferior convergence properties. Modifications are proposed that greatly improve the performance of this method, resulting in a fast and accurate approach. Firstly, a linear triangulation step is added to the intersection stage, yielding higher accuracy and improved convergence rate. Secondly, the effect of parameter updates is tracked in order to reduce wasteful computation; only variables coupled to significantly changing variables are updated. This leads to significant improvements in computation time, at the cost of a small, controllable increase in error. Loop closures are handled effectively without the need for additional network modelling. The proposed approach is shown experimentally to yield comparable accuracy to a full sparse bundle adjustment (20% error increase) while computation time scales much better with the number of variables. Experiments on a progressive reconstruction system show the proposed method to be more efficient by a factor of 65 to 177, and 4.5 times more accurate (increasing over time) than a localised sparse bundle adjustment approach.
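The linear triangulation step added to the intersection stage is, in its standard textbook form, a direct linear transform (DLT). A minimal two-view sketch (the function name and two-view restriction are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def triangulate_linear(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image points.
    """
    # Each view contributes two homogeneous linear constraints on X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest
    # singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

In a resection-intersection loop this replaces the iterative intersection update with a closed-form estimate, which is what yields the reported accuracy and convergence gains.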

Relevance:

80.00%

Publisher:

Abstract:

The placement of the mappers and reducers on machines directly affects the performance and cost of a MapReduce computation in cloud computing. From a computational point of view, the mapper/reducer placement problem is a generalisation of the classical bin packing problem, which is NP-complete. In this paper we therefore propose a new heuristic algorithm for the mapper/reducer placement problem in cloud computing and evaluate it by comparing it with several other heuristics, on solution quality and computation time, over a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the others. We also verify its effectiveness by comparing the mapper/reducer placement it generates for a benchmark problem with a conventional mapper/reducer placement. The comparison shows that the computation using our placement is much cheaper while still satisfying the computation deadline.
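Since the placement problem generalises bin packing, the flavour of such a heuristic can be illustrated with the classical first-fit-decreasing rule (this is the textbook bin-packing heuristic, not the algorithm proposed in the paper):

```python
def first_fit_decreasing(task_sizes, machine_capacity):
    """Classical first-fit-decreasing bin packing heuristic.

    Returns a list of machines, each a list of assigned task sizes.
    """
    machines = []  # each entry: [remaining_capacity, [assigned tasks]]
    for size in sorted(task_sizes, reverse=True):
        # Place the task on the first machine with enough spare capacity.
        for m in machines:
            if m[0] >= size:
                m[0] -= size
                m[1].append(size)
                break
        else:
            # No machine fits: open a new one.
            machines.append([machine_capacity - size, [size]])
    return [m[1] for m in machines]
```

A real mapper/reducer placement heuristic must additionally weigh data locality and deadline constraints, which is where the paper's algorithm departs from this classical scheme.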

Relevance:

80.00%

Publisher:

Abstract:

In a paper published at FSE 2007, a way of obtaining near-collisions, and in theory also collisions, for the FORK-256 hash function was presented [8]. That paper contained examples of near-collisions for the compression function, but in practice the attack could not be extended to the full function owing to large memory requirements and computation time. In this paper we improve the attack and show that it is possible to find near-collisions in practice for any given IV. In particular, this means that the full hash function with the prespecified IV is vulnerable in practice, not just in theory. We exhibit an example near-collision for the complete hash function.

Relevance:

80.00%

Publisher:

Abstract:

Background

Supine imaging modalities provide valuable 3D information on scoliotic anatomy, but the altered spine geometry between the supine and standing positions affects the Cobb angle measurement. Previous studies report a mean 7°-10° Cobb angle increase from supine to standing, but none has reported the effect of endplate pre-selection or whether other parameters affect this Cobb angle difference.

Methods

Cobb angles from existing coronal radiographs were compared to those on existing low-dose CT scans taken within three months of the reference radiograph for a group of females with adolescent idiopathic scoliosis. Reformatted coronal CT images were used to measure supine Cobb angles with and without endplate pre-selection (endplates selected from the radiographs) by two observers on three separate occasions. Inter- and intra-observer measurement variability was assessed. Multi-linear regression was used to investigate whether there was a relationship between the supine to standing Cobb angle change and eight variables: patient age, mass, standing Cobb angle, Risser sign, ligament laxity, Lenke type, fulcrum flexibility and the time delay between radiograph and CT scan.

Results

Fifty-two patients with right thoracic Lenke Type 1 curves and mean age 14.6 years (SD 1.8) were included. The mean Cobb angle on standing radiographs was 51.9° (SD 6.7). The mean Cobb angle on supine CT images was 41.1° (SD 6.4) without pre-selection of endplates and 40.5° (SD 6.6) with it. Pre-selecting vertebral endplates increased the mean Cobb change by 0.6° (SD 2.3, range −9° to 6°). When free to do so, observers chose different levels for the end vertebrae in 39% of cases. Multi-linear regression revealed a statistically significant relationship between the supine to standing Cobb change and fulcrum flexibility (p = 0.001), age (p = 0.027) and standing Cobb angle (p < 0.001). The 95% confidence intervals for intra-observer and inter-observer measurement variability were 3.1° and 3.6°, respectively.

Conclusions

Pre-selecting vertebral endplates causes only minor changes to the mean supine to standing Cobb change. There is a statistically significant relationship between the supine to standing Cobb change and fulcrum flexibility, such that this difference can be considered a potential alternative measure of spinal flexibility.
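The multi-linear regression step can be sketched as an ordinary least-squares fit with an intercept. All numbers below are synthetic and purely illustrative (the study's actual measurements are not reproduced here, and the assumed coefficients are invented for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 52  # cohort size as in the study; all values below are synthetic
flexibility = rng.uniform(20.0, 60.0, n)    # fulcrum flexibility (%)
age = rng.uniform(11.0, 18.0, n)            # years
standing_cobb = rng.uniform(40.0, 65.0, n)  # degrees

# Hypothetical linear model with noise, for demonstration only.
cobb_change = (0.15 * flexibility - 0.4 * age
               + 0.1 * standing_cobb + rng.normal(0.0, 1.0, n))

# Design matrix with an intercept column; fit by least squares.
X = np.column_stack([np.ones(n), flexibility, age, standing_cobb])
coef, *_ = np.linalg.lstsq(X, cobb_change, rcond=None)
fitted = X @ coef
```

The fitted coefficients play the role of the per-variable effects tested for significance in the study.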

Relevance:

80.00%

Publisher:

Abstract:

In this paper we have compiled and reviewed the most recent literature, published from January 2010 to December 2012, relating to the human exposure, environmental distribution, behaviour, fate and concentration time trends of polybrominated diphenyl ether (PBDE) and hexabromocyclododecane (HBCD) flame retardants, in order to establish current trends and priorities for future study. Owing to the large volume of literature included, we provide full details of the reviewed studies as Electronic Supplementary Information and summarise here the most relevant findings. Decreasing time trends for penta-mix PBDE congeners were seen for soils in northern Europe, sewage sludge in Sweden and the USA, carp from a US river, trout from three of the Great Lakes, and Arctic and UK marine mammals and many birds, but increasing time trends continue in Arctic polar bears and some birds at high trophic levels in northern Europe. This is a result of the time delay inherent in long-range atmospheric transport processes. In general, concentrations of BDE209 (the major component of the deca-mix PBDE product) are continuing to increase. Of major concern is the possible, even likely, debromination of the large reservoir of BDE209 in soils and sediments worldwide to yield lower brominated congeners which are both more mobile and more toxic, and we have compiled the most recent evidence for the occurrence of this degradation process. Numerous studies reported here reinforce the importance of this future concern. Time trends for HBCDs are mixed, with both increases and decreases evident in different matrices and locations and, notably, with increasing occurrence in birds of prey.

Relevance:

80.00%

Publisher:

Abstract:

Smart card automated fare collection (AFC) data have been extensively exploited to understand passenger behaviour, passenger segments and trip purposes, and to improve transit planning through spatial travel-pattern analysis. The literature has evolved from simple to more sophisticated methods, for example from aggregated to individual travel-pattern analysis and from stop-to-stop to flexible stop aggregation. However, high computational complexity has limited these methods in practical applications. This paper proposes a new algorithm, the Weighted Stop Density Based Scanning Algorithm with Noise (WS-DBSCAN), based on the classical Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, to detect and update daily changes in travel patterns. WS-DBSCAN reduces the classical DBSCAN's quadratic computational complexity to a sub-quadratic one. A numerical experiment using real AFC data from South East Queensland, Australia, shows that the algorithm requires only 0.45% of the computation time of classical DBSCAN while producing the same clustering results.
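For reference, the classical DBSCAN that WS-DBSCAN accelerates can be sketched in a few lines. This quadratic version computes the full pairwise distance matrix up front, which is exactly the cost that motivates the sub-quadratic reformulation (a minimal illustrative sketch, not the authors' code):

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal O(n^2) DBSCAN. Returns a cluster id per point; -1 = noise."""
    n = len(points)
    labels = np.full(n, -1)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbours = [np.flatnonzero(dists[i] <= eps) for i in range(n)]
    cluster = 0
    for i in range(n):
        # Start a new cluster only from an unassigned core point.
        if labels[i] != -1 or len(neighbours[i]) < min_pts:
            continue
        labels[i] = cluster
        frontier = list(neighbours[i])
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbours[j]) >= min_pts:
                    frontier.extend(neighbours[j])  # expand from core points
        cluster += 1
    return labels
```

The distance-matrix line is the quadratic bottleneck; WS-DBSCAN's weighting of stops is what lets it avoid repeating this full scan for daily updates.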

Relevance:

80.00%

Publisher:

Abstract:

As computational models in fields such as medicine and engineering become more refined, their resource requirements increase. In the first instance, these needs have been met using parallel computing and HPC clusters. However, such systems are often costly and lack flexibility, so HPC users are tempted to move to elastic HPC using cloud services. One difficulty in making this transition is that HPC and cloud systems differ, and performance may vary. The purpose of this study is to evaluate cloud services as a means to minimise both cost and computation time for large-scale simulations, and to identify which system properties have the most significant impact on performance. Our simulation results show that, while virtual CPU (VCPU) performance is satisfactory, network throughput may cause difficulties.

Relevance:

80.00%

Publisher:

Abstract:

This thesis investigated the complexity of busway operation with stopping and non-stopping buses using field data and microscopic simulation modelling. The proposed approach made significant recommendations to transit authorities to achieve the most practicable system capacity for existing and new busways. The empirical equations developed in this research and newly introduced analysis methods will be ideal tools for transit planners to achieve optimal reliability of busways.

Relevance:

80.00%

Publisher:

Abstract:

In this study, a non-linear excitation controller using inverse filtering is proposed to damp inter-area oscillations. The proposed controller is based on determining the generator flux value for the next sampling time, obtained by maximising the rate of reduction of the kinetic energy of the system after the fault. The desired flux for the next time interval is obtained using wide-area measurements, and the equivalent area rotor angles and velocities are predicted using a non-linear Kalman filter. A supplementary control input for the excitation system, using the inverse filtering approach to track the desired flux, is implemented. The inverse filtering approach ensures that the non-linearity introduced by saturation is well compensated. The efficacy of the proposed controller, with and without communication time delay, is evaluated on different IEEE benchmark systems, including Kundur's two-area, the Western System Coordinating Council three-area, and the 16-machine, 68-bus test systems.

Relevance:

80.00%

Publisher:

Abstract:

Based on trial interchanges, this paper develops three algorithms for the solution of the placement problem of logic modules in a circuit. A significant decrease in the computation time of such placement algorithms can be achieved by restricting the trial interchanges to only a subset of all the modules in a circuit. The three algorithms are simulated in Pascal on a DEC 1090 system, and their performance in terms of total wirelength and computation time is compared with the results obtained by Steinberg for the 34-module backboard wiring problem. Performance analysis of the first two algorithms reveals that algorithms based on pairwise trial interchanges (2-interchanges) reach a desired placement faster than algorithms based on trial N-interchanges. The first two algorithms do not perform better than Steinberg's algorithm, whereas the third, based on trial pairwise interchange among unconnected pairs of modules (UPM) and connected pairs of modules (CPM), outperforms Steinberg's algorithm in both total wirelength (TWL) and computation time.
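The pairwise trial-interchange idea can be sketched as a greedy loop that swaps two module positions whenever the swap reduces total wirelength (a generic sketch; the module names, slot coordinates and Manhattan wirelength metric are illustrative assumptions, not the paper's formulation):

```python
import itertools

def wirelength(placement, nets):
    """Total Manhattan wirelength over two-terminal nets."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in nets)

def pairwise_interchange(placement, nets):
    """Greedy improvement: accept any pairwise swap that shortens wiring."""
    placement = dict(placement)
    improved = True
    while improved:
        improved = False
        for a, b in itertools.combinations(list(placement), 2):
            before = wirelength(placement, nets)
            placement[a], placement[b] = placement[b], placement[a]
            if wirelength(placement, nets) < before:
                improved = True  # keep the swap
            else:
                # Revert swaps that do not strictly help.
                placement[a], placement[b] = placement[b], placement[a]
    return placement
```

Restricting the candidate pairs (e.g. to UPM or CPM subsets, as in the paper's third algorithm) shrinks the inner loop and hence the computation time.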

Relevance:

80.00%

Publisher:

Abstract:

An on-line algorithm is developed for the location of single cross-point faults in a PLA (FPLA). The main feature of the algorithm is the determination of a fault set corresponding to the response obtained for a failed test. For the apparently small number of faults in this set, all other tests are generated and a fault table is formed. Subsequently, an adaptive procedure is used to diagnose the fault. A functional equivalence test is carried out to determine the actual fault class if the adaptive testing results in a set of faults with identical tests. The large amounts of computation time and storage required to determine all the fault equivalence classes a priori, or to construct a fault dictionary, are not needed here. A brief study of functional equivalence among the cross-point faults is also included.

Relevance:

80.00%

Publisher:

Abstract:

A computationally efficient agglomerative clustering algorithm based on multilevel theory is presented. The data set is divided randomly into a number of partitions, and the samples of each partition are clustered separately using a hierarchical agglomerative clustering algorithm to form sub-clusters. These are merged at higher levels to obtain the final classification. This algorithm leads to the same classification as the hierarchical agglomerative clustering algorithm when the clusters are well separated. Its advantages are short run time and a small storage requirement, and it is observed that the savings in storage space and computation time increase nonlinearly with sample size.
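The multilevel idea, clustering random partitions separately and then merging the resulting sub-clusters, can be sketched as follows (a simplified centroid-linkage version with assumed parameters, not the paper's algorithm):

```python
import numpy as np

def merge_to_k(clusters, k):
    """Agglomerative merging: repeatedly merge the two clusters whose
    centroids are closest, until k clusters remain."""
    clusters = [list(c) for c in clusters]
    while len(clusters) > k:
        cents = [np.mean(c, axis=0) for c in clusters]
        _, i, j = min(
            ((np.linalg.norm(cents[i] - cents[j]), i, j)
             for i in range(len(clusters))
             for j in range(i + 1, len(clusters))),
            key=lambda t: t[0])
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters

def multilevel_cluster(points, k, n_parts=4, seed=0):
    """Level 1: cluster each random partition into k sub-clusters.
    Level 2: merge all sub-clusters down to the final k clusters."""
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(len(points)), n_parts)
    subs = []
    for part in parts:
        subs.extend(merge_to_k([[points[i]] for i in part], k))
    return merge_to_k(subs, k)
```

Because level 1 only ever compares points within a partition, the distance computations per level stay small, which is the source of the storage and run-time savings.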

Relevance:

80.00%

Publisher:

Abstract:

We have derived a versatile gene-based test for genome-wide association studies (GWAS). Our approach, called VEGAS (versatile gene-based association study), is applicable to all GWAS designs, including family-based GWAS, meta-analyses of GWAS on the basis of summary data, and DNA-pooling-based GWAS, where existing approaches based on permutation are not possible, as well as singleton data, where they are. The test incorporates information from a full set of markers (or a defined subset) within a gene and accounts for linkage disequilibrium between markers by using simulations from the multivariate normal distribution. We show that for an association study using singletons, our approach produces results equivalent to those obtained via permutation in a fraction of the computation time. We demonstrate proof-of-principle by using the gene-based test to replicate several genes known to be associated on the basis of results from a family-based GWAS for height in 11,536 individuals and a DNA-pooling-based GWAS for melanoma in approximately 1300 cases and controls. Our method has the potential to identify novel associated genes; provide a basis for selecting SNPs for replication; and be directly used in network (pathway) approaches that require per-gene association test statistics. We have implemented the approach in both an easy-to-use web interface, which only requires the uploading of markers with their association p-values, and a separate downloadable application.
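The core of such a test, summing per-SNP association statistics and approximating the null distribution by simulation from a multivariate normal with the LD correlation matrix, can be sketched as follows (taking per-SNP z-scores as input; the function name and empirical p-value form are illustrative assumptions, not the VEGAS implementation):

```python
import numpy as np

def gene_based_p(z_scores, ld_corr, n_sims=200000, seed=0):
    """Gene-level p-value for a sum-of-chi-squares statistic.

    z_scores: per-SNP association z-scores for one gene.
    ld_corr:  SNP-by-SNP LD correlation matrix.
    """
    rng = np.random.default_rng(seed)
    observed = float(np.sum(np.square(z_scores)))
    # Null z-score vectors share the LD correlation but carry no signal.
    sims = rng.multivariate_normal(np.zeros(len(z_scores)), ld_corr,
                                   size=n_sims)
    null_stats = np.sum(sims ** 2, axis=1)
    # Empirical p-value with the standard +1 correction.
    return (1 + np.sum(null_stats >= observed)) / (n_sims + 1)
```

Because the null distribution comes from simulation rather than permutation of genotypes, only summary statistics and an LD estimate are needed, which is why the approach extends to meta-analysis and DNA-pooling designs.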

Relevance:

80.00%

Publisher:

Abstract:

A simple instrument is described that can provide a sequence of timed pulses, first to initiate a transient process and then to enable periodic sampling and recording during the course of the transient event. The time delay between the first of these sampling pulses and the start of the transient event is adjustable. The sequence generator has additional features that make it ideal for acquiring waveforms when a storage oscilloscope is used as the recording device: to avoid the clutter caused by many waveforms being recorded at the same place on the oscilloscope screen, features such as displacement of waveforms in the X and Y directions and trace blanking where the waveform is not required have been incorporated. The sequence generator has been employed to acquire a sequence of Raman scattered radiation signals from an adiabatically expanding saturated vapour probed by a flashlamp-pumped dye laser.