996 results for parallel technique
Abstract:
Large read-only or read-write transactions with a large read set and a small write set constitute an important class of transactions used in applications such as data mining, data warehousing, statistical applications, and report generators. Such transactions are best supported with optimistic concurrency, because locking large amounts of data for extended periods is not an acceptable solution. The abort rate of regular optimistic concurrency algorithms increases exponentially with the size of the transaction. The algorithm proposed in this dissertation solves this problem with a new transaction scheduling technique that allows a large transaction to commit safely with a probability that can exceed that of regular optimistic concurrency algorithms by several orders of magnitude. A performance simulation study and a formal proof of serializability and external consistency of the proposed algorithm are also presented. This dissertation also proposes a new query optimization technique (lazy queries). Lazy queries are an adaptive query execution scheme that optimizes itself as the query runs. They can be used to find an intersection of sub-queries very efficiently, without requiring full execution of large sub-queries or any statistical knowledge about the data. An efficient optimistic concurrency control algorithm used in a massively parallel B-tree with variable-length keys is introduced. B-trees with variable-length keys can be used effectively in a variety of database types; in particular, we show how such a B-tree was used in our implementation of a semantic object-oriented DBMS. The concurrency control algorithm uses semantically safe optimistic virtual "locks" that achieve very fine granularity in conflict detection. This algorithm ensures serializability and external consistency by using logical clocks and backward validation of transactional queries. A formal proof of correctness of the proposed algorithm is also presented.
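The backward-validation scheme with logical clocks that this abstract describes can be illustrated with a minimal sketch; all names and data structures below are our own, not the dissertation's:

```python
# Minimal sketch of backward validation for optimistic concurrency
# control: a committing transaction's read set is checked against the
# write sets of transactions that committed after it started.

class Transaction:
    def __init__(self, start_ts):
        self.start_ts = start_ts  # logical-clock value when the transaction began
        self.read_set = set()
        self.write_set = set()

def backward_validate(txn, committed_log):
    """Backward validation: txn may commit only if no item it read was
    overwritten by a transaction that committed after txn started.
    committed_log is a list of (commit_ts, write_set) pairs."""
    for commit_ts, write_set in committed_log:
        if commit_ts > txn.start_ts and txn.read_set & write_set:
            return False  # a value txn read is stale: abort
    return True
```

A transaction that started at logical time 6 and read only `x` passes validation against a log in which `x` was last written at time 5; a transaction that started at time 3 fails against the same log.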
Abstract:
Orthogonal Frequency-Division Multiplexing (OFDM) has proved to be a promising technology that enables transmission at higher data rates. Multicarrier Code-Division Multiple Access (MC-CDMA) is a transmission technique that combines the advantages of both OFDM and Code-Division Multiple Access (CDMA), allowing high transmission rates over severely time-dispersive multi-path channels without the need for a complex receiver implementation. MC-CDMA also exploits frequency diversity via the different subcarriers, and therefore allows high-code-rate systems to achieve good Bit Error Rate (BER) performance. Furthermore, spreading in the frequency domain makes the time-synchronization requirement much less stringent than in traditional direct-sequence CDMA schemes. Some problems remain when using MC-CDMA. One is the high Peak-to-Average Power Ratio (PAPR) of the transmitted signal: high PAPR leads to nonlinear distortion in the amplifier and results in inter-carrier self-interference plus out-of-band radiation. Suppressing Multiple Access Interference (MAI) is another crucial problem in MC-CDMA systems. Imperfect cross-correlation characteristics of the spreading codes and multipath fading destroy the orthogonality among users and thus cause MAI, which produces serious BER degradation. Moreover, in uplink systems the signals received at a base station are always asynchronous; this also destroys the orthogonality among users and hence generates MAI that degrades system performance. Beyond these two problems, interference must always be considered seriously in any communication system. In this dissertation, we design a novel MC-CDMA system with low PAPR and mitigated MAI. New semi-blind channel estimation and multi-user data detection based on Parallel Interference Cancellation (PIC) are applied in the system.
Low-Density Parity-Check (LDPC) codes have also been introduced into the system to improve performance. Different interference models are analyzed in multi-carrier communication systems, and effective interference suppression for MC-CDMA systems is then employed. The experimental results indicate that our system not only significantly reduces PAPR and MAI but also effectively suppresses outside interference with low complexity. Finally, we present a practical cognitive application of the proposed system on a software-defined radio platform.
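The PAPR metric central to this abstract is simple to compute for one OFDM symbol: the transmitted waveform is the inverse DFT of the subcarrier symbols, and PAPR is the ratio of peak to mean instantaneous power. A generic dependency-free sketch (a naive inverse DFT, not the dissertation's system):

```python
# Illustrative PAPR computation for one OFDM symbol, using a naive
# O(n^2) inverse DFT so the sketch needs no external libraries.

import cmath
import math

def papr_db(freq_symbols):
    """PAPR in dB of the time-domain signal given by the inverse DFT
    of the frequency-domain subcarrier symbols."""
    n = len(freq_symbols)
    powers = []
    for t in range(n):
        # inverse DFT sample at time t
        x = sum(s * cmath.exp(2j * math.pi * k * t / n)
                for k, s in enumerate(freq_symbols)) / n
        powers.append(abs(x) ** 2)
    return 10 * math.log10(max(powers) / (sum(powers) / n))
```

The two extremes show why PAPR matters: identical symbols on all subcarriers add coherently into a single time-domain spike (PAPR of 10·log10(n) dB), while a single active subcarrier yields a constant-envelope signal with 0 dB PAPR.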
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
A parallel method for the dynamic partitioning of unstructured meshes is outlined. The method includes diffusive load-balancing techniques and an iterative optimisation technique known as relative gain optimisation, which both balances the workload and attempts to minimise the interprocessor communications overhead. It can also optionally include a multilevel strategy. Experiments on a series of adaptively refined meshes indicate that the algorithm provides partitions of an equivalent or higher quality to static partitioners (which do not reuse the existing partition), and much more rapidly. Perhaps more importantly, the algorithm results in only a small fraction of the amount of data migration compared to the static partitioners.
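The diffusive load-balancing idea mentioned in this abstract can be sketched in a few lines: each processor repeatedly exchanges a fraction of its load imbalance with its neighbours, so load spreads like heat while the total is conserved. The topology and diffusion coefficient below are illustrative choices, not the paper's:

```python
# Minimal diffusive load-balancing sketch: load[i] moves toward the
# average of its neighbourhood; total load is conserved at every step.

def diffuse(load, neighbours, alpha=0.5, iterations=50):
    """neighbours[i] lists the processors adjacent to processor i."""
    load = list(load)
    for _ in range(iterations):
        flow = [0.0] * len(load)
        for i, nbrs in enumerate(neighbours):
            for j in nbrs:
                if j > i:  # handle each edge once
                    f = alpha * (load[i] - load[j]) / 2
                    flow[i] -= f
                    flow[j] += f
        load = [l + f for l, f in zip(load, flow)]
    return load

# ring of 4 processors, all load initially on processor 0
nbrs = [[1, 3], [0, 2], [1, 3], [0, 2]]
balanced = diffuse([8, 0, 0, 0], nbrs)
```

On the ring, the iteration converges geometrically toward the uniform load of 2 per processor; in a real mesh partitioner the "load" would be mesh elements, and the flow values tell each processor how many elements to migrate.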
Abstract:
This chapter describes a parallel optimization technique that incorporates a distributed load-balancing algorithm and provides an extremely fast solution to the problem of load-balancing adaptive unstructured meshes. Moreover, a parallel graph contraction technique can be employed to enhance the partition quality, and the resulting strategy outperforms or matches results from existing state-of-the-art static mesh partitioning algorithms. The strategy can also be applied to static partitioning problems. Dynamic procedures have been found to be much faster than static techniques, to provide partitions of similar or higher quality and, in comparison, to involve the migration of only a fraction of the data. The method employs a new iterative optimization technique that balances the workload and attempts to minimize the interprocessor communications overhead. Experiments on a series of adaptively refined meshes indicate that the algorithm provides partitions of an equivalent or higher quality to static partitioners (which do not reuse the existing partition), and much more quickly. The dynamic evolution of load has three major influences on possible partitioning techniques: cost, reuse, and parallelism. The unstructured mesh may be modified every few time-steps, so the load-balancing must have a low cost relative to that of the solution algorithm between remeshings.
Abstract:
A parallel method for dynamic partitioning of unstructured meshes is described. The method employs a new iterative optimisation technique which both balances the workload and attempts to minimise the interprocessor communications overhead. Experiments on a series of adaptively refined meshes indicate that the algorithm provides partitions of an equivalent or higher quality to static partitioners (which do not reuse the existing partition) and much more quickly. Perhaps more importantly, the algorithm results in only a small fraction of the amount of data migration compared to the static partitioners.
Abstract:
A method is outlined for optimising graph partitions which arise in mapping unstructured mesh calculations to parallel computers. The method employs a combination of iterative techniques to both evenly balance the workload and minimise the number and volume of interprocessor communications. These techniques are designed to work efficiently in parallel as well as sequentially, and when combined with a fast direct partitioning technique (such as the Greedy algorithm) to give an initial partition, the resulting two-stage process proves itself to be both a powerful and flexible solution to the static graph-partitioning problem. The algorithms can also be used for dynamic load-balancing, and a clustering technique can additionally be employed to speed up the whole process. Experiments indicate that the resulting parallel code can provide high quality partitions, independent of the initial partition, within a few seconds.
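A direct partitioner of the kind this abstract calls a "Greedy algorithm" can be sketched as growing each part by breadth-first search until it holds its share of the vertices; the details below are our illustration, not the paper's exact algorithm:

```python
# Rough sketch of a greedy direct graph partitioner: grow each part by
# BFS from an unassigned seed until it reaches its target size.

from collections import deque

def greedy_partition(adj, nparts):
    """adj: list of neighbour lists. Returns a part id per vertex."""
    n = len(adj)
    target = n // nparts
    part = [-1] * n
    seed = 0
    for p in range(nparts):
        while seed < n and part[seed] != -1:
            seed += 1  # find an unassigned seed vertex
        if seed >= n:
            break
        queue, size = deque([seed]), 0
        cap = target if p < nparts - 1 else n  # last part takes the rest
        while queue and size < cap:
            v = queue.popleft()
            if part[v] != -1:
                continue
            part[v] = p
            size += 1
            queue.extend(u for u in adj[v] if part[u] == -1)
    for v in range(n):
        if part[v] == -1:  # sweep up vertices the BFS never reached
            part[v] = nparts - 1
    return part
```

On a 6-vertex path graph split two ways, the BFS growth yields the two contiguous halves; such a partition is balanced but not necessarily optimal in cut size, which is exactly why the abstract follows it with an iterative optimisation stage.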
Abstract:
A method is outlined for optimising graph partitions which arise in mapping unstructured mesh calculations to parallel computers. The method employs a relative gain iterative technique to both evenly balance the workload and minimise the number and volume of interprocessor communications. A parallel graph reduction technique is also briefly described and can be used to give a global perspective to the optimisation. The algorithms work efficiently in parallel as well as sequentially, and when combined with a fast direct partitioning technique (such as the Greedy algorithm) to give an initial partition, the resulting two-stage process proves itself to be both a powerful and flexible solution to the static graph-partitioning problem. Experiments indicate that the resulting parallel code can provide high quality partitions, independent of the initial partition, within a few seconds. The algorithms can also be used for dynamic load-balancing, reusing existing partitions; in this case the procedures are much faster than static techniques, provide partitions of similar or higher quality and, in comparison, involve the migration of only a fraction of the data.
Abstract:
A parallel method for the dynamic partitioning of unstructured meshes is described. The method introduces a new iterative optimisation technique known as relative gain optimisation which both balances the workload and attempts to minimise the interprocessor communications overhead. Experiments on a series of adaptively refined meshes indicate that the algorithm provides partitions of an equivalent or higher quality to static partitioners (which do not reuse the existing partition) and much more rapidly. Perhaps more importantly, the algorithm results in only a small fraction of the amount of data migration compared to the static partitioners.
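Iterative refinement schemes such as the relative gain optimisation named in this abstract move boundary vertices between parts when the move reduces the number of cut edges. A toy sketch of the basic gain computation underlying such methods (our own illustration, not the paper's exact formulation):

```python
# Gain of moving vertex v to part p: cut edges removed minus cut edges
# created. Positive gain means the move reduces the edge cut.

def move_gain(v, p, part, adj):
    """part: current part id per vertex; adj: neighbour lists."""
    internal = sum(1 for u in adj[v] if part[u] == part[v])
    external = sum(1 for u in adj[v] if part[u] == p)
    return external - internal
```

For a star graph whose centre sits alone in part 0 while its three leaves sit in part 1, moving the centre to part 1 has gain 3 (all three cut edges disappear); a refinement pass would repeatedly apply the highest-gain moves subject to the load-balance constraint.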
Abstract:
Underactuated cable-driven parallel robots (UACDPRs) move a 6-degree-of-freedom end-effector (EE) with fewer than 6 cables. This thesis proposes a new automatic calibration technique applicable to underactuated cable-driven parallel robots. The purpose of this work is to develop a method that uses free motion as an exciting trajectory for the acquisition of calibration data. The key point of this approach is to find a relationship between the unknown parameters to be calibrated (the lengths of the cables) and the parameters that can be measured by sensors (the swivel-pulley angles measured by encoders and the roll and pitch angles measured by inclinometers on the platform). The equations involved are the geometrical-closure equations and the finite-difference velocity equations, solved using the least-squares algorithm. Simulations are performed on a parallel robot driven by 4 cables for validation. The final purpose of the calibration method remains the determination of the platform's initial pose. As a consequence of underactuation, the EE is underconstrained and, for assigned cable lengths, the EE pose cannot be obtained by means of forward kinematics alone. Hence, a direct-kinematics algorithm for a 4-cable UACDPR using redundant sensor measurements is proposed. The proposed method measures two orientation parameters of the EE besides the cable lengths in order to determine the other four pose variables, namely 3 position coordinates and one additional orientation parameter. We then study the performance of the direct-kinematics algorithm by computing the sensitivity of the direct-kinematics solution to measurement errors. Furthermore, upper limits on the position and orientation errors are computed for bounded cable-length errors resulting from the calibration procedure and for roll and pitch angle errors due to inclinometer inaccuracies.
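The calibration described above ultimately solves an overdetermined system in a least-squares sense. As a generic illustration of that step only (a toy 2-unknown linear system solved via the normal equations, not the thesis's actual closure equations):

```python
# Least-squares solution of an overdetermined m x 2 linear system
# A x = b via the normal equations A^T A x = A^T b (pure Python).

def lstsq2(rows, b):
    """rows: list of (a, c) coefficient pairs; b: observations."""
    s11 = sum(a * a for a, _ in rows)
    s12 = sum(a * c for a, c in rows)
    s22 = sum(c * c for _, c in rows)
    t1 = sum(a * y for (a, _), y in zip(rows, b))
    t2 = sum(c * y for (_, c), y in zip(rows, b))
    det = s11 * s22 - s12 * s12  # assumed nonzero: columns independent
    return ((s22 * t1 - s12 * t2) / det,
            (s11 * t2 - s12 * t1) / det)
```

Fitting the line y = m·x + q through the points (0, 1), (1, 3), (2, 5) recovers (m, q) = (2, 1) exactly; in the calibration problem the rows would instead come from the geometrical-closure and finite-difference velocity equations evaluated along the free-motion trajectory.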
Abstract:
To evaluate the outcomes in patients treated for distal-third humerus fractures with the MIPO technique and visualization of the radial nerve through an accessory approach, in patients without radial palsy before surgery. The patients were treated with the MIPO technique. Visualization and isolation of the radial nerve were done through an approach between the brachialis and the brachioradialis, with an oblique incision on the lateral side of the arm. MEPS was used to evaluate elbow function. Seven patients were evaluated, with a mean age of 29.8 years. The average follow-up was 29.85 months. Radial neuropraxia after surgery occurred in three patients. Sensory recovery occurred after 3.16 months on average, and motor function recovered after 5.33 months on average, in all patients. We achieved fracture consolidation in all patients (M = 4.22 months). The averages for flexion-extension and pronation-supination were 112.85° and 145°, respectively. The average MEPS score was 86.42. There was no case of infection. This approach allowed excluding radial nerve interposition at the fracture site and/or under the plate, showing a high rate of fracture consolidation and good evolution of the elbow's range of movement. Level of Evidence IV, Case Series.
Abstract:
Objective. The aim of this study was to evaluate the alteration of human enamel bleached with high concentrations of hydrogen peroxide associated with different activators. Materials and methods. Fifty enamel/dentin blocks (4 × 4 mm) were obtained from human third molars and randomly divided according to the bleaching procedure (n = 10): G1 = 35% hydrogen peroxide (HP - Whiteness HP Maxx); G2 = HP + halogen lamp (HL); G3 = HP + 7% sodium bicarbonate (SB); G4 = HP + 20% sodium hydroxide (SH); and G5 = 38% hydrogen peroxide (OXB - Opalescence Xtra Boost). The bleaching treatments were performed in three sessions with a 7-day interval between them. The enamel content, before (baseline) and after bleaching, was determined using an FT-Raman spectrometer and was based on the concentration of phosphate, carbonate, and organic matrix. Statistical analysis was performed using two-way ANOVA for repeated measures and Tukey's test. Results. The results showed no significant differences between times of analysis (p = 0.5175) for most treatments and peak areas analyzed, or among bleaching treatments (p = 0.4184). The comparisons during and after bleaching revealed a significant difference in the HP group for the peak areas of carbonate and organic matrix, and for the organic matrix in the OXB and HP+SH groups. Tukey's analysis determined that the differences in peak areas and the interaction among treatment, time and peak were statistically significant (p < 0.05). Conclusion. The association of activators with hydrogen peroxide was effective in the alteration of enamel, mainly with regard to the organic matrix.
Abstract:
Context. The possibility of cephalic venous hypertension with the resultant facial edema and elevated cerebrospinal fluid pressure continues to challenge head and neck surgeons who perform bilateral radical neck dissections during simultaneous or staged procedures. Case Report. The staged procedure in patients who require bilateral neck dissections allows collateral venous drainage to develop, mainly through the internal and external vertebral plexuses, thereby minimizing the risks of deleterious consequences. Nevertheless, this procedure has disadvantages, such as a delay in definitive therapy, the need for a second hospitalization and anesthesia, and the risk of cutting lymphatic vessels and spreading viable cancer cells. In this paper, we discuss the rationale and feasibility of preserving the external jugular vein (EJV). Considering the limited number of similar reports in the literature, two cases in which this procedure was accomplished are described. The relevant anatomy and technique are reviewed and the patients' outcomes are discussed. Conclusion. Preservation of the EJV during bilateral neck dissections is technically feasible, fast, and safe, with clinically and radiologically demonstrated patency.
Abstract:
Objective To assess the prevalence of insulin resistance (IR) and associated factors in contraceptive users. Methods A total of 47 women aged 18 to 40 years with a body mass index (kg/m(2)) < 30, fasting glucose levels < 100 mg/dl and a 2-hour glucose level < 140 mg/dl after a 75-g oral glucose load underwent a hyperinsulinemic-euglycemic clamp. The women were distributed into tertiles according to M-values. The analysed variables were use of combined hormonal/non-hormonal contraception, duration of use, body composition, lipid profile, glucose levels and blood pressure. Results IR was detected in 19% of the participants. The women with low M-values presented significantly higher body fat mass, waist-to-hip ratio, fasting insulin and HOMA-IR; they were nulligravida, had used contraception for > 1 year and had higher triglyceride levels. IR was more frequent among combined oral contraceptive users; however, no association was observed after regression analysis. Conclusions The prevalence of IR was high among healthy women attending a family planning clinic, independent of the contraceptive method used, with possible long-term negative consequences for their metabolic and cardiovascular health. Although an association between hormonal contraception and IR could not be found, this needs further research. Family planning professionals should be proactive in counselling healthy women about the importance of healthy habits.