6 results for Parallelism
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Magnetic fabric and rock-magnetism studies were performed on the four units of the 578 ± 3 Ma old Piracaia pluton (NW of São Paulo State, southern Brazil). This roughly elliptical intrusion (~32 km²) is composed of (i) coarse-grained monzodiorite (MZD-c), (ii) fine-grained monzodiorite (MZD-f), which is predominant in the pluton, (iii) heterogeneous monzonite (MZN-het), and (iv) quartz syenite (Qz-Sy). Magnetic fabrics were determined by applying both anisotropy of low-field magnetic susceptibility (AMS) and anisotropy of anhysteretic remanent magnetization (AARM). The two fabrics are coaxial, and this parallelism between the AMS and AARM tensors excludes a single-domain (SD) effect on the AMS fabric of the units. Several rock-magnetism experiments performed on one specimen from each sampled unit show that, in all of them, the magnetic susceptibility and magnetic fabrics are carried by magnetite grains, which were also observed in thin sections. Foliations and lineations in the units were successfully determined by magnetic methods. Most magnetic foliations are steeply dipping or vertical in all units and are roughly parallel to the foliation measured in the field and in the country rocks. In contrast, the magnetic lineations mostly have low plunges throughout the pluton; at eight sites, however, they are steep to vertical. Thin-section analyses show that rocks from the Piracaia pluton were affected by the regional strain during and after emplacement, since the magmatic foliation evolves to a solid-state fabric in the north of the pluton, indicating that the magnetic fabrics in this area are related to this strain. In contrast, the lack of solid-state deformation at outcrop scale and in thin sections rules out deformation in the SW of the pluton. This evidence allows us to interpret the magnetic fabrics observed there as primary (magmatic) in origin, acquired as the rocks solidified from flowing magma; the steeply plunging magnetic lineations suggest that a feeder zone could underlie this area.
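The coaxiality claim above reduces to comparing the principal directions of two symmetric second-rank tensors. The sketch below is a minimal illustration of that comparison, assuming hypothetical AMS and AARM tensors and helper names of our own; it is not the workflow used in the study.

```python
# Minimal sketch (not the authors' code): quantify how parallel the principal
# axes of two hypothetical anisotropy tensors are.
import numpy as np

def principal_axes(tensor):
    """Return eigenvectors sorted from maximum (k1) to minimum (k3) axis."""
    eigvals, eigvecs = np.linalg.eigh(tensor)      # symmetric 3x3 tensor
    order = np.argsort(eigvals)[::-1]              # k1 >= k2 >= k3
    return eigvecs[:, order]

def angular_misfit(axes_a, axes_b):
    """Angle (degrees) between corresponding principal axes of two fabrics."""
    cosines = np.abs(np.sum(axes_a * axes_b, axis=0))  # |dot| per axis pair
    return np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))

# Hypothetical, illustrative tensors (arbitrary susceptibility units)
ams = np.array([[1.05, 0.02, 0.00],
                [0.02, 1.00, 0.01],
                [0.00, 0.01, 0.95]])
aarm = np.array([[1.06, 0.03, 0.01],
                 [0.03, 0.99, 0.01],
                 [0.01, 0.01, 0.94]])

print(angular_misfit(principal_axes(ams), principal_axes(aarm)))
# Small angles for all three axes indicate coaxial (parallel) fabrics.
```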
Abstract:
Current scientific applications produce large amounts of data. Processing, handling, and analyzing such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems have adopted techniques such as data replication, migration, distribution, and access parallelism. The main drawback of those studies, however, is that they do not take application behavior into account when optimizing data access. This limitation motivated this paper, which applies strategies to predict application behavior online in order to optimize data access operations on distributed systems without requiring any information on past executions. To accomplish this, the approach organizes application behavior as time series and then analyzes and classifies those series according to their properties. Based on these properties, it selects modeling techniques to represent the series and make predictions, which are later used to optimize data access operations. The approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that the approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
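As a rough illustration of the idea, the sketch below treats a stream of hypothetical per-interval access counts as a time series, classifies it by a single property (trend), and picks a matching predictor to decide whether to prefetch or replicate. The function names, threshold, and data are all made up and far simpler than the paper's approach.

```python
# Illustrative sketch only: classify a series by a simple property and choose
# a predictor accordingly, then use the prediction for a prefetch decision.
import numpy as np

def classify(series):
    """Crude property check: is the series trending or roughly stationary?"""
    slope = np.polyfit(np.arange(len(series)), series, 1)[0]
    return "trending" if abs(slope) > 0.1 * np.std(series) else "stationary"

def predict_next(series):
    """Pick a predictor according to the detected property."""
    if classify(series) == "trending":
        slope, intercept = np.polyfit(np.arange(len(series)), series, 1)
        return slope * len(series) + intercept          # linear extrapolation
    return float(np.mean(series[-5:]))                  # moving average

accesses = [4, 5, 7, 9, 12, 14, 17]      # hypothetical accesses per interval
expected = predict_next(accesses)
if expected > 10:                        # hypothetical threshold
    print(f"expected {expected:.1f} accesses -> replicate/prefetch the file")
```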
Abstract:
Field-Programmable Gate Arrays (FPGAs) are becoming increasingly important in embedded and high-performance computing systems. They allow performance levels close to those obtained with Application-Specific Integrated Circuits while retaining design and implementation flexibility. However, programming FPGAs efficiently requires the expertise of hardware developers in hardware description languages (HDLs) such as VHDL or Verilog. Attempts to provide a high-level compilation flow (e.g., from C programs) must still address open issues before broadly efficient results can be obtained. Bearing in mind the resources available on an FPGA, LALP (Language for Aggressive Loop Pipelining), a novel language for programming FPGA-based accelerators, was developed together with its compilation framework, including mapping capabilities. The main ideas behind LALP are to provide a higher abstraction level than HDLs, to exploit the intrinsic parallelism of hardware resources, and to allow the programmer to control execution stages whenever the compiler techniques are unable to generate efficient implementations. These features are particularly useful for implementing loop pipelining, a well-regarded technique used to accelerate computations in several application domains. This paper describes LALP and shows how it can be used to achieve high-performance computing solutions.
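LALP syntax is not reproduced here; instead, the sketch below illustrates the loop-pipelining idea the language targets, assuming a three-stage loop body and an initiation interval of one cycle. It is plain Python used only to show how iterations overlap.

```python
# Conceptual sketch of loop pipelining (not LALP code): with stages
# load -> mul -> store and an initiation interval (II) of one cycle, a new
# iteration starts every cycle instead of waiting for the previous to finish.
STAGES = ["load", "mul", "store"]
II = 1                      # initiation interval assumed achievable in hardware

def pipeline_schedule(n_iterations):
    """Map cycle -> list of (iteration, stage) for a fully pipelined loop."""
    schedule = {}
    for i in range(n_iterations):
        for s, stage in enumerate(STAGES):
            schedule.setdefault(i * II + s, []).append((i, stage))
    return schedule

for cycle, ops in sorted(pipeline_schedule(4).items()):
    print(cycle, ops)
# Total latency is (n - 1) * II + len(STAGES) cycles instead of n * len(STAGES).
```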
Abstract:
Background: Recent advances in medical and biological technology have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by highly parallel testing and constant procedure changes. Results: This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system able to support constant changes in human genome testing and to provide patients with updated results based on the most recent, validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: (1) process scalability, achieved through the relational database implementation, and (2) process correctness, ensured by process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra. Conclusions: This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have demonstrated the feasibility and shown the usability benefits of a rigorous approach that can specify, validate, and perform genetic testing through easy-to-use end-user interfaces.
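For illustration only, the sketch below mimics the ACP-style composition described above with hypothetical step names: sequential composition fixes the order of steps, parallel composition admits any interleaving, and the traces enumerate the executions a workflow definition would allow. It is a toy model, not the CEGH engine.

```python
# Toy sketch of ACP-style workflow composition (hypothetical step names).
from itertools import permutations

def seq(*steps):
    """Sequential composition: steps must run in the given order."""
    return [list(steps)]

def par(*steps):
    """Parallel composition: any interleaving of the steps is allowed."""
    return [list(p) for p in permutations(steps)]

def then(prefix_traces, suffix_traces):
    """Concatenate every allowed prefix with every allowed suffix."""
    return [p + s for p in prefix_traces for s in suffix_traces]

# Example workflow: extract DNA, then run two assays in parallel, then report.
workflow = then(seq("extract_dna"),
                then(par("sequencing", "pcr_test"), seq("report_result")))
for trace in workflow:
    print(" -> ".join(trace))
```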
Abstract:
INTRODUCTION: This study evaluated changes in the smile characteristics of patients with maxillary constriction submitted to rapid maxillary expansion (RME). METHODS: The sample consisted of 81 extraoral photographs of the maximum smile of 27 patients with a mean age of 10 years, taken before expansion and 3 and 6 months after fixation of the expansion screw. The photographs were analyzed with the software Cef X 2001, obtaining the following measurements: transverse smile area, buccal corridors, exposure of maxillary incisors, gingival exposure of maxillary incisors, smile height, upper and lower lip thickness, smile symmetry, and smile arc. Statistical analysis was performed by analysis of variance (ANOVA) at a significance level of 5%. RESULTS: RME produced a statistically significant increase in the transverse smile dimension and in the exposure of maxillary central and lateral incisors, with maintenance of right- and left-side smile symmetry and of the lack of parallelism between the curvature of the maxillary incisal edges and the lower lip border. CONCLUSIONS: RME was beneficial for smile esthetics, increasing the transverse smile dimension and the exposure of maxillary central and lateral incisors.
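As a hedged illustration of the reported analysis, the sketch below runs a plain one-way ANOVA at the 5% level on made-up transverse smile measurements for the three time points; the study's actual design and data differ, so this only shows the shape of the test.

```python
# Illustration only: one-way ANOVA on hypothetical measurements taken before
# expansion and at 3 and 6 months after screw fixation.
from scipy import stats

transverse_smile_area = {                 # hypothetical values, arbitrary units
    "before":   [410, 395, 402, 388, 415],
    "3_months": [432, 418, 425, 409, 437],
    "6_months": [436, 421, 430, 415, 440],
}

f_stat, p_value = stats.f_oneway(*transverse_smile_area.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference among time points is statistically significant")
```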
Abstract:
The main objective of this work is to present an efficient method for phasor estimation based on a compact Genetic Algorithm (cGA) implemented in a Field Programmable Gate Array (FPGA). To validate the proposed method, an Electrical Power System (EPS) simulated with the Alternative Transients Program (ATP) provides the data used by the cGA. These data are as close as possible to actual data from an EPS; real-life situations such as islanding, sudden load increases, and permanent faults were considered. The implementation takes advantage of the inherent parallelism of Genetic Algorithms in a compact, optimized way, making them an attractive option for practical real-time phasor estimation in Phasor Measurement Units (PMUs).
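The sketch below illustrates, in plain Python rather than FPGA hardware, how a compact GA can estimate a phasor: a probability vector stands in for the population, candidate bit strings encode amplitude and phase, and fitness is the squared error against synthetic 60 Hz samples. All parameters (sampling rate, bit widths, virtual population size) are assumptions made for illustration, not values from the paper.

```python
# Illustrative compact GA for phasor estimation on a synthetic 60 Hz signal.
import math, random

F, FS, N_SAMPLES = 60.0, 960.0, 16                     # assumed sampling setup
t = [n / FS for n in range(N_SAMPLES)]
signal = [100.0 * math.cos(2 * math.pi * F * ti + 0.5) for ti in t]  # synthetic

BITS = 16                                              # 8 bits amplitude, 8 bits phase
POP = 50                                               # virtual population size

def decode(bits):
    amp = int("".join(map(str, bits[:8])), 2) / 255.0 * 200.0        # 0..200
    phase = int("".join(map(str, bits[8:])), 2) / 255.0 * 2 * math.pi
    return amp, phase

def error(bits):
    amp, phase = decode(bits)
    return sum((amp * math.cos(2 * math.pi * F * ti + phase) - s) ** 2
               for ti, s in zip(t, signal))

prob = [0.5] * BITS                                    # the "compact" population
for _ in range(3000):
    a = [1 if random.random() < p else 0 for p in prob]
    b = [1 if random.random() < p else 0 for p in prob]
    winner, loser = (a, b) if error(a) < error(b) else (b, a)
    for i in range(BITS):                              # shift probabilities toward winner
        if winner[i] != loser[i]:
            prob[i] += (1.0 / POP) if winner[i] == 1 else (-1.0 / POP)
            prob[i] = min(1.0, max(0.0, prob[i]))

best = [1 if p > 0.5 else 0 for p in prob]
print("estimated amplitude and phase:", decode(best))
```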