17 results for parallel execution
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
This long-term extension of an 8-week randomized, naturalistic study in patients with panic disorder with or without agoraphobia compared the efficacy and safety of clonazepam (n = 47) and paroxetine (n = 37) over a 3-year total treatment duration. Target doses for all patients were 2 mg/d clonazepam and 40 mg/d paroxetine (both taken at bedtime). This study reports data from the long-term period (34 months), following the initial 8-week treatment phase. Thus, total treatment duration was 36 months. Patients with a good primary outcome during acute treatment continued monotherapy with clonazepam or paroxetine, but patients with partial primary treatment success were switched to the combination therapy. At initiation of the long-term study, the mean doses of clonazepam and paroxetine were 1.9 (SD, 0.30) and 38.4 (SD, 3.74) mg/d, respectively. These doses were maintained until month 36 (clonazepam 1.9 [SD, 0.29] mg/d and paroxetine 38.2 [SD, 3.87] mg/d). Long-term treatment with clonazepam led to a small but significantly better Clinical Global Impression (CGI)-Improvement rating than treatment with paroxetine (mean difference: CGI-Severity scale -3.48 vs -3.24, respectively, P = 0.02; CGI-Improvement scale 1.06 vs 1.11, respectively, P = 0.04). Both treatments similarly reduced the number of panic attacks and severity of anxiety. Patients treated with clonazepam had significantly fewer adverse events than those treated with paroxetine (28.9% vs 70.6%, P < 0.001). The efficacy of clonazepam and paroxetine for the treatment of panic disorder was maintained over the long-term course. There was a significant advantage with clonazepam over paroxetine with respect to the frequency and nature of adverse events.
Abstract:
In this article, we introduce two new variants of the Assembly Line Worker Assignment and Balancing Problem (ALWABP) that allow parallelization of, and collaboration between, heterogeneous workers. These new approaches entail an additional level of complexity in the line design and assignment process, but also higher flexibility, which may be particularly useful in practical situations where the aim is to progressively integrate slow or limited workers into conventional assembly lines. We present linear models and heuristic procedures for these two new problems. Computational results show the efficiency of the proposed approaches and the efficacy of the studied layouts in different situations. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Data visualization techniques are powerful for handling and analyzing multivariate systems. One such technique, known as parallel coordinates, was used to support the diagnosis of an event, detected by a neural network-based monitoring system, in a boiler at a Brazilian Kraft pulp mill. Its appeal lies in the ability to visualize several variables simultaneously. The diagnostic procedure was carried out step by step, moving through exploratory, explanatory, confirmatory, and communicative goals. This tool allowed the boiler dynamics to be visualized more easily than with the commonly used univariate trend plots. In addition, it facilitated the analysis of other aspects, namely relationships among process variables, distinct modes of operation, and discrepant data. The analysis revealed, firstly, that the period involving the detected event was associated with a transition between two distinct normal modes of operation and, secondly, the presence of unusual changes in process variables at that time.
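The core of the parallel-coordinates technique is a simple transform: each observation becomes a polyline crossing one vertical axis per variable, with every axis min-max normalized to a shared scale. A minimal sketch of that transform (the variable names and data below are illustrative placeholders, not values from the boiler study):

```python
def parallel_coords(rows, names):
    # min-max normalize each variable so all axes share the range [0, 1]
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    span = [max(c) - min(c) or 1.0 for c in cols]
    # each observation becomes a polyline: one (axis_index, height) per axis
    return [[(i, (r[i] - lo[i]) / span[i]) for i in range(len(names))]
            for r in rows]

# hypothetical boiler-like variables: steam flow, drum pressure, O2 level
rows = [(100.0, 60.0, 3.0), (120.0, 62.0, 2.5), (110.0, 61.0, 2.8)]
lines = parallel_coords(rows, ["steam_flow", "drum_pressure", "o2"])
```

Plotting each polyline (e.g., with matplotlib) then shows all variables of all observations at once, which is what makes correlated shifts between operating modes visible.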
Abstract:
Consider the NP-hard problem of finding, given a simple graph G, a series-parallel subgraph of G with the maximum number of edges. The algorithm that, given a connected graph G, outputs a spanning tree of G is a 1/2-approximation. Indeed, if n is the number of vertices in G, any spanning tree of G has n-1 edges and any series-parallel graph on n vertices has at most 2n-3 edges. We present a 7/12-approximation for this problem and results showing the limits of our approach.
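The 1/2-approximation argument can be illustrated concretely: a spanning tree is itself series-parallel, and its n-1 edges are at least half of the 2n-3 upper bound. A minimal Kruskal-style sketch using union-find (the example graph K4 is illustrative, not from the paper):

```python
def spanning_tree_edges(n, edges):
    # union-find: greedily keep any edge that joins two components
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

# K4 on n = 4 vertices: a series-parallel subgraph has at most 2n-3 = 5
# edges, while any spanning tree has n-1 = 3, so the ratio is 3/5 >= 1/2
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
t = spanning_tree_edges(4, edges)
```

The 7/12 algorithm of the paper must, of course, do more than output a tree; this sketch only demonstrates the baseline guarantee.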
Abstract:
As with most small organic molecules, the electro-oxidation of methanol to CO2 is believed to proceed through a so-called dual-pathway mechanism. The direct pathway proceeds via reactive intermediates such as formaldehyde or formic acid, whereas the indirect pathway occurs in parallel and proceeds via the formation of adsorbed carbon monoxide (COad). Despite the extensive literature on the electro-oxidation of methanol, no study to date has distinguished the production of CO2 from the direct and indirect pathways. Working under far-from-equilibrium, oscillatory conditions, we were able to decouple, for the first time, the direct and indirect pathways that lead to CO2 during the oscillatory electro-oxidation of methanol on platinum. The CO2 production was followed by differential electrochemical mass spectrometry, and the individual contributions of the parallel pathways were identified by a combination of experiments and numerical simulations. We believe that our report opens new perspectives, particularly as a methodology for identifying the role played by surface modifiers in the relative weight of the two pathways, a key issue for the effective development of catalysts for low-temperature fuel cells.
Abstract:
Objective: Gastric development depends directly on the proliferation and differentiation of epithelial cells, and these processes are controlled by multiple elements, such as diet, hormones, and growth factors. Protein restriction affects gastrointestinal functions, but its effects on gastric growth are not fully understood. Methods: The present study evaluated cell proliferation in the gastric epithelia of rats subjected to protein restriction since gestation. Because ghrelin is increasingly expressed from the fetal to the weaning stages and might be part of growth regulation, its distribution in the stomachs of rats was investigated at 14, 30, and 50 d of age. Results: Although the 8% protein restriction increased food intake relative to body weight, body mass was lower (P < 0.05). The stomach and intestine were also smaller but grew proportionately throughout treatment. Cell proliferation was estimated through DNA synthesis and metaphase indices, and lower rates (P < 0.05) were detected at the different ages. The inhibition was concomitant with a larger number of ghrelin-immunolabeled cells at 30 and 50 d postnatally. Conclusion: Protein restriction impairs cell proliferation in the gastric epithelium, and the ghrelin upsurge under this condition parallels the lower gastric and body growth rates. (C) 2012 Elsevier Inc. All rights reserved.
Abstract:
This study investigated the influence of cueing on the performance of untrained and trained complex motor responses. Healthy adults responded to a visual target by performing four sequential movements (complex response) or a single movement (simple response) of their middle finger. A visual cue preceded the target by an interval of 300, 1000, or 2000 ms. In Experiment 1, the complex and simple responses were not previously trained. During the testing session, the complex response pattern varied on a trial-by-trial basis following the indication provided by the visual cue. In Experiment 2, the complex response and the simple response were extensively trained beforehand. During the testing session, the trained complex response pattern was performed in all trials. The latency of the untrained and trained complex responses decreased from the short to the medium and long cue-target intervals. The latency of the complex response was longer than that of the simple response, except in the case of the trained responses and the long cue-target interval. These results suggest that the preparation of untrained complex responses cannot be completed in advance, this being possible, however, for trained complex responses when enough time is available. The duration of the 1st submovement, 1st pause and 2nd submovement of the untrained and the trained complex responses increased from the short to the long cue-target interval, suggesting that there is an increase of online programming of the response possibly related to the degree of certainty about the moment of target appearance.
Abstract:
We study a strongly interacting "quantum dot 1" and a weakly interacting "dot 2" connected in parallel to metallic leads. Gate voltages can drive the system between Kondo-quenched and non-Kondo free-moment phases separated by Kosterlitz-Thouless quantum phase transitions. Away from the immediate vicinity of the quantum phase transitions, the physical properties retain signatures of first-order transitions found previously to arise when dot 2 is strictly noninteracting. As interactions in dot 2 become stronger relative to the dot-lead coupling, the free moment in the non-Kondo phase evolves smoothly from an isolated spin-one-half in dot 1 to a many-body doublet arising from the incomplete Kondo compensation by the leads of a combined dot spin-one. These limits, which feature very different spin correlations between dot and lead electrons, can be distinguished by weak-bias conductance measurements performed at finite temperatures.
Abstract:
Introduction: The saccadic paradigm has been used to investigate specific cortical networks involved in attention. Behavioral and electrophysiological investigations of saccadic eye movements (SEM) contribute significantly to the understanding of the attention patterns presented in neurological and psychiatric disorders and in sports performance. Objective: The current study aimed to investigate absolute alpha-power changes in sensorimotor brain regions and the frontal eye fields during the execution of a saccadic task. Methods: Twelve healthy volunteers (mean age: 26.25; SD: +/- 4.13) performed a saccadic task while the electroencephalographic signal was simultaneously recorded from electrodes over the cerebral cortex. The participants were instructed to follow LEDs with their eyes under two different task conditions: a fixed pattern versus a random pattern. Results: We found a main effect of moment for the C3, C4, F3, and F4 electrodes and a main effect of condition for the F3 electrode. We also found an interaction between condition and the frontal electrodes. Conclusions: We conclude that absolute alpha power in the left frontal cortex discriminates between the execution of the two stimulus presentation patterns during SEM. (C) 2012 Elsevier Ireland Ltd. All rights reserved.
Abstract:
Breakthrough advances in microprocessor technology and efficient power management have altered the course of processor development with the emergence of multi-core technology, bringing a higher level of processing power. Many-core technology has boosted the computing power provided by clusters of workstations or SMPs, delivering large computational power at an affordable cost using solely commodity components. Different implementations of message-passing libraries and system software (including operating systems) are installed in such cluster and multi-cluster computing systems. To guarantee the correct execution of a message-passing parallel application in a computing environment other than the one for which it was originally developed, the application code must be reviewed. In this paper, a hybrid communication interfacing strategy is proposed to execute a parallel application on a group of computing nodes belonging to different clusters or multi-clusters (computing systems that may be running different operating systems and MPI implementations), interconnected with public or private IP addresses, and responding interchangeably to user execution requests. Experimental results demonstrate the feasibility and effectiveness of the proposed strategy through the execution of benchmark parallel applications.
Abstract:
Current scientific applications produce large amounts of data. The processing, handling, and analysis of such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems have considered techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. By knowing these properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that the new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
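The abstract's pipeline is: observe an access series online, classify it by its properties, then pick a matching model to predict the next access. The paper's actual classifiers and models are not given here; a deliberately toy sketch of that classify-then-predict idea (the property test and both predictors are hypothetical stand-ins) might look like:

```python
def predict_next(series):
    # crude property check: strongly monotone trend vs. roughly stationary
    diffs = [b - a for a, b in zip(series, series[1:])]
    rising = sum(d > 0 for d in diffs)
    if rising >= 0.8 * len(diffs):
        # trending series: linear extrapolation from the mean step
        return series[-1] + sum(diffs) / len(diffs)
    # stationary series: predict the running mean
    return sum(series) / len(series)

# hypothetical request sizes (MB) observed so far; the predicted next
# value could drive prefetching or replica placement decisions
trend = [10, 12, 14, 16, 18]
flat = [8, 9, 8, 9, 8]
```

Here `predict_next(trend)` extrapolates to 20.0, while `predict_next(flat)` falls back to the mean; a real system would use proper statistical tests and time-series models rather than these placeholders.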
Abstract:
Field-Programmable Gate Arrays (FPGAs) are becoming increasingly important in embedded and high-performance computing systems. They allow performance levels close to those obtained with Application-Specific Integrated Circuits, while still keeping design and implementation flexibility. However, to program FPGAs efficiently, one needs the expertise of hardware developers to master hardware description languages (HDLs) such as VHDL or Verilog. Attempts to furnish a high-level compilation flow (e.g., from C programs) still have open issues to address before broader efficient results can be obtained. Bearing in mind the resources available on an FPGA, we developed LALP (Language for Aggressive Loop Pipelining), a novel language to program FPGA-based accelerators, and its compilation framework, including mapping capabilities. The main ideas behind LALP are to provide a higher abstraction level than HDLs, to exploit the intrinsic parallelism of hardware resources, and to allow the programmer to control execution stages whenever the compiler techniques are unable to generate efficient implementations. These features are particularly useful for implementing loop pipelining, a well-regarded technique used to accelerate computations in several application domains. This paper describes LALP and shows how it can be used to achieve high-performance computing solutions.
Abstract:
This paper presents a new parallel methodology for calculating the determinant of matrices of order n, with computational complexity O(n), using the Gauss-Jordan elimination method and Chio's rule as references. We present our step-by-step methodology in clear mathematical language, demonstrating how to calculate the determinant of a matrix of order n in an analytical format. We also present a computational model with one sequential algorithm and one parallel algorithm, both given in pseudo-code.
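Chio's rule condenses an n x n determinant to an (n-1) x (n-1) one: with pivot a11 != 0, form b_ij = a11*a(i+1)(j+1) - a(i+1)1*a1(j+1) and divide the smaller determinant by a11^(n-2). A minimal sequential sketch of the rule (the paper's contribution is a parallel formulation, which this serial form only illustrates):

```python
def det_chio(a):
    # Chio condensation: shrink the matrix by one row and column per step,
    # dividing by the pivot a[0][0] raised to the power n-2
    n = len(a)
    if n == 1:
        return a[0][0]
    p = a[0][0]
    assert p != 0, "pivot must be nonzero (otherwise swap rows first)"
    b = [[p * a[i][j] - a[i][0] * a[0][j] for j in range(1, n)]
         for i in range(1, n)]
    return det_chio(b) / p ** (n - 2)

print(det_chio([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # → -3.0
```

The condensation step is a natural parallelization target: every entry of b depends only on four entries of a, so all (n-1)^2 entries can be computed simultaneously, which is where a parallel variant would gain over this serial recursion.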
Abstract:
Cutting and packing problems are found in numerous industries, such as garment making, woodworking, and shipbuilding. The collision-free region concept is presented: it represents all feasible translations for inserting an item into a container with already-placed items. The often-adopted nofit polygon concept and its analogue, the inner fit polygon, are combined through Boolean operations to determine the collision-free region. A new, robust, non-regularized Boolean operations algorithm is proposed for this purpose. The algorithm is capable of dealing with degenerate boundaries. This capability is important because degenerate boundaries often represent locally optimal placements. A parallelized version of the algorithm is also proposed, and tests are performed to determine the execution times of both the serial and parallel versions.
Abstract:
Parallel kinematic structures are considered very adequate architectures for positioning and orienting the tools of robotic mechanisms. However, developing dynamic models for this kind of system is sometimes a difficult task. In fact, the direct application of traditional robotics methods for modelling and analysing such systems usually does not lead to efficient and systematic algorithms. This work addresses this issue: it presents a modular approach to generating the dynamic model and shows how, through some convenient modifications, these methods can be made more applicable to parallel structures as well. Kane's formulation for obtaining the dynamic equations is shown to be one of the easiest ways to deal with redundant coordinates and kinematic constraints, so that a suitable choice of a set of coordinates allows the remainder of the modelling procedure to be computer aided. The advantages of this approach are discussed in the modelling of a 3-dof parallel asymmetric mechanism.