13 results for Parallel Work Experience, Practise, Architecture
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
This paper presents the use of a multiprocessor architecture to improve the performance of tomographic image reconstruction. Image reconstruction in computed tomography (CT) is an intensive task for single-processor systems. We investigate the suitability of filtered image reconstruction on DSPs organized for parallel processing, in comparison with an implementation based on the Message Passing Interface (MPI) library. The experimental results show that the speedups observed on both platforms increased with image resolution. In addition, the execution-time to communication-time ratios (Rt/Rc) as a function of sample size showed a narrower variation for the DSP platform than for the MPI platform, which indicates its better performance for parallel image reconstruction.
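The filtering step of filtered back-projection parallelizes naturally, since each projection can be filtered independently. A minimal sketch of that idea in Python, assuming a toy second-difference kernel in place of a true ramp filter (the paper's DSP and MPI implementations are not reproduced here):

```python
# Sketch: parallelising the filtering step of filtered back-projection
# over projections. Hypothetical toy data; not the paper's DSP/MPI code.
from concurrent.futures import ThreadPoolExecutor

def ramp_filter(projection):
    # Crude spatial-domain high-pass stand-in for the ramp filter:
    # second-difference kernel [-1, 2, -1] with zero padding.
    n = len(projection)
    return [2 * projection[i]
            - (projection[i - 1] if i > 0 else 0)
            - (projection[i + 1] if i < n - 1 else 0)
            for i in range(n)]

def filter_all(projections, workers=4):
    # Each projection is filtered independently, so the work
    # distributes naturally over processors (here: a thread pool).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(ramp_filter, projections))

projections = [[float(i + j) for i in range(8)] for j in range(16)]
parallel = filter_all(projections)
sequential = [ramp_filter(p) for p in projections]
assert parallel == sequential  # same result, work split across workers
```

Because the projections are independent, the parallel and sequential results match exactly; only the wall-clock time changes with the number of workers.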
Abstract:
This paper proposes a parallel hardware architecture for image feature detection based on the Scale Invariant Feature Transform algorithm and applied to the Simultaneous Localization And Mapping problem. The work also proposes specific hardware optimizations considered fundamental to embed such a robotic control system on a chip. The proposed architecture is completely stand-alone; it reads the input data directly from a CMOS image sensor and provides the results via a field-programmable gate array coupled to an embedded processor. The results may either be used directly in an on-chip application or accessed through an Ethernet connection. The system is able to detect features at up to 30 frames per second (320 x 240 pixels) and has accuracy similar to a PC-based implementation. The achieved system performance is at least one order of magnitude better than a PC-based solution, a result achieved by investigating the impact of several hardware-oriented optimizations on performance, area and accuracy.
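The core of SIFT-style detection is finding extrema of a Difference-of-Gaussians (DoG) response. A purely illustrative sketch on a 1-D signal (the paper's design processes 320 x 240 frames in hardware; all sizes and sigmas below are assumptions):

```python
# Illustrative Difference-of-Gaussians keypoint candidate detection in 1-D.
# Sigmas, radius, and the test signal are invented for this sketch.
import math

def gaussian_kernel(sigma, radius):
    k = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]  # normalise to unit sum

def convolve(signal, kernel):
    r = len(kernel) // 2
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), n - 1)  # clamp at the borders
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog_extrema(signal, s1=1.0, s2=2.0, radius=4):
    # Difference of two Gaussian blurs; local extrema = candidate keypoints.
    d = [a - b for a, b in zip(convolve(signal, gaussian_kernel(s1, radius)),
                               convolve(signal, gaussian_kernel(s2, radius)))]
    return [i for i in range(1, len(d) - 1)
            if (d[i] > d[i - 1] and d[i] > d[i + 1])
            or (d[i] < d[i - 1] and d[i] < d[i + 1])]

# A single bump should yield a DoG extremum at its centre.
signal = [0.0] * 20
signal[10] = 1.0
peaks = dog_extrema(signal)
```

In the hardware version, each pixel's blur and extremum test is an independent, fixed-window computation, which is what makes the algorithm amenable to a fully parallel FPGA pipeline.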
Abstract:
The InteGrade middleware intends to exploit the idle time of computing resources in computer laboratories. In this work we investigate the performance of running parallel applications with communication among processors on the InteGrade grid. As costly communication on a grid can be prohibitive, we explore the so-called systolic or wavefront paradigm to design the parallel algorithms in which no global communication is used. To evaluate the InteGrade middleware we considered three parallel algorithms that solve the matrix chain product problem, the 0-1 Knapsack Problem, and the local sequence alignment problem, respectively. We show that these three applications running under the InteGrade middleware and MPI take slightly more time than the same applications running on a cluster with only LAM-MPI support. The results can be considered promising and the time difference between the two is not substantial. The overhead of the InteGrade middleware is acceptable, in view of the benefits obtained to facilitate the use of grid computing by the user. These benefits include job submission, checkpointing, security, job migration, etc. Copyright (C) 2009 John Wiley & Sons, Ltd.
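The wavefront (systolic) paradigm mentioned above exploits the fact that, in dynamic-programming problems such as local sequence alignment, all cells on one anti-diagonal are mutually independent. A minimal serial sketch, assuming Smith-Waterman-style scoring (match +2, mismatch -1, gap -1); on a grid, each node would take a stripe of cells per anti-diagonal and exchange only border values with its neighbours, avoiding global communication:

```python
# Sketch of wavefront dynamic programming for local sequence alignment.
# The anti-diagonal sweep is serial here; cells on one anti-diagonal only
# read from the two previous anti-diagonals, so they could run in parallel.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for d in range(2, n + m + 1):                    # anti-diagonal d = i + j
        for i in range(max(1, d - m), min(n, d - 1) + 1):
            j = d - i
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,       # match/mismatch
                          H[i - 1][j] + gap,         # gap in b
                          H[i][j - 1] + gap)         # gap in a
            best = max(best, H[i][j])
    return best

score = smith_waterman("ACGT", "TACG")  # best local alignment "ACG" scores 6
```

The 0-1 knapsack and matrix chain product problems studied in the paper admit analogous sweeps in which each stage depends only on the immediately preceding one.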
Abstract:
Purpose: To test the association between night work and work ability, and verify whether the type of contractual employment has any influence over this association. Methods: Permanent workers (N = 642) and workers with precarious jobs (temporary contract or outsourced; N = 552) were interviewed and filled out questionnaires concerning work hours and the work ability index. They were classified into: never worked at night, ex-night workers, currently working up to five nights, and currently working at least six nights per 2-week span. Results: After adjusting for socio-demographic and work variables, current night work was significantly associated with inadequate WAI (vs. day work with no experience in night work) only for precarious workers (OR 2.00, CI 1.01-3.95 and OR 1.85, CI 1.09-3.13 for those working up to five nights and those working at least six nights in 2 weeks, respectively). Conclusions: Unequal opportunities at work and little experience in night work among precarious workers may explain their higher susceptibility to night work.
Abstract:
Two case studies are presented to describe the process of public school teachers authoring and creating chemistry simulations. They are part of the Virtual Didactic Laboratory for Chemistry, a project developed by the School of the Future of the University of Sao Paulo. The documental analysis of the material produced by the two groups of teachers reflects different selection processes for both themes and problem-situations when creating simulations. The study demonstrates the potential for chemistry learning with an approach that takes students' everyday lives into account and is based on collaborative work among teachers and researchers. Also, from the teachers' perspective, the possibilities of interaction that a simulation offers for classroom activities are considered.
Abstract:
The control of molecular architectures has been a key factor for the use of Langmuir-Blodgett (LB) films in biosensors, especially because biomolecules can be immobilized with preserved activity. In this paper we investigated the incorporation of tyrosinase (Tyr) in mixed Langmuir films of arachidic acid (AA) and a lutetium bisphthalocyanine (LuPc2), which is confirmed by a large expansion in the surface pressure isotherm. These mixed films of AA-LuPc2 + Tyr could be transferred onto ITO and Pt electrodes, as indicated by FTIR and electrochemical measurements, and there was no need for crosslinking of the enzyme molecules to preserve their activity. Significantly, the activity of the immobilised Tyr was considerably higher than in previous work in the literature, which allowed Tyr-containing LB films to be used as highly sensitive voltammetric sensors to detect pyrogallol. Linear responses have been found up to 400 µM, with a detection limit of 4.87 x 10^-2 µM (n = 4) and a sensitivity of 1.54 µA µM^-1 cm^-2. In addition, the Hill coefficient (h = 1.27) indicates cooperation with LuPc2, which also acts as a catalyst. The enhanced performance of the LB-based biosensor therefore resulted from the preserved activity of Tyr combined with the catalytic activity of LuPc2, in a strategy that can be extended to other enzymes and analytes by varying the LB film architecture.
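Sensitivity and detection limit are typically read off a linear calibration curve; a common convention is LOD = 3·sigma_blank / slope. A sketch with synthetic numbers (chosen to mimic a slope of 1.54 µA µM^-1 cm^-2; the blank noise value is an assumption, not the paper's pyrogallol data):

```python
# Sketch: sensitivity (slope) and detection limit from a linear calibration.
# All data points and sigma_blank are synthetic, for illustration only.
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

conc = [50.0, 100.0, 200.0, 300.0, 400.0]       # µM, within the linear range
current = [77.0, 154.0, 308.0, 462.0, 616.0]    # µA/cm^2, perfectly linear here
slope, intercept = linear_fit(conc, current)     # slope = sensitivity
sigma_blank = 0.05                               # assumed blank noise, µA/cm^2
lod = 3 * sigma_blank / slope                    # 3-sigma detection limit
```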
Abstract:
Information Architecture is one of the initial stages of a website project, so detecting and correcting errors at this stage is easier and less time-consuming than in later stages. However, to minimize errors in information architecture projects, a methodology is needed to organize the professional's work and guarantee the quality of the final product. The profile of professionals working with Information Architecture in Brazil was analyzed (quantitative research by means of an online questionnaire), as well as the difficulties, techniques and methodologies found in their projects (qualitative research by means of in-depth interviews supported by the Sense-Making approach). The study concludes that information architecture project methodologies need to further adopt User-Centered Design approaches and ways to evaluate their results.
Abstract:
The aim of this work is to verify the possibility of correlating specific gravity with wood hardness parallel and perpendicular to the grain. The purpose is to offer one more tool to support the choice of wood species for use in floors and sleepers. To this end, we considered the results of standard tests (NBR 7190:1997, Timber Structures Design, Annex B, Brazilian Association of Technical Standards) to determine hardness parallel and normal to the grain in fourteen tropical high-density wood species (over 850 kg/m^3 at 12% moisture content). For each species twelve determinations were made, based on material obtained at Sao Carlos and its regional wood market. Statistical analysis led to expressions describing the relationships between these properties, with a determination coefficient of about 0.8.
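A determination coefficient of about 0.8 comes from a least-squares fit of hardness against specific gravity. A sketch of such a fit and of computing R^2, with invented data points (not the NBR 7190 test results):

```python
# Sketch: linear least-squares fit and coefficient of determination (R^2).
# The density/hardness pairs below are hypothetical illustration data.
def fit_and_r2(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx                      # slope
    b = my - a * mx                    # intercept
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot   # R^2 = 1 - SS_res / SS_tot

density = [0.86, 0.90, 0.95, 1.00, 1.05, 1.10]   # g/cm^3 (specific gravity)
hardness = [7.1, 8.0, 8.2, 9.6, 9.5, 11.0]        # kN, hypothetical values
a, b, r2 = fit_and_r2(density, hardness)
```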
Abstract:
Every year, the number of discarded electro-electronic products increases, so recycling is needed to avoid wasting non-renewable natural resources. The objective of this work is to study the recycling of materials from parallel wire cable through unit operations of mineral processing. Parallel wire cables are basically composed of polymer and copper. The following unit operations were tested: grinding, size classification, dense-medium separation, electrostatic separation, scrubbing, panning, and elutriation. The operations used yielded copper and PVC concentrates with a low degree of cross contamination. It was concluded that total liberation of the materials was accomplished after grinding to less than 3 mm using a cage mill. Separation using panning and elutriation presented the best results in terms of recovery and cross contamination. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
This work presents a method for predicting resource availability in opportunistic grids by means of use pattern analysis (UPA), a technique based on non-supervised learning methods. The prediction method is based on the assumption that several classes of computational resource use patterns exist, which can be used to predict resource availability. Trace-driven simulations validate this basic assumption and also provide the parameter settings for accurate learning of resource use patterns. Experiments made with an implementation of the UPA method show the feasibility of its use in the scheduling of grid tasks with very little overhead. The experiments also demonstrate the method's superiority over other predictive and non-predictive methods. An adaptive prediction method is suggested to deal with the lack of training data at initialization. Further adaptive behaviour is motivated by experiments which show that, in some special environments, reliable resource use patterns may not always be detected. Copyright (C) 2009 John Wiley & Sons, Ltd.
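Use pattern analysis rests on clustering resource-usage traces without labels. A toy stand-in using a tiny k-means over daily idle-fraction traces (all traces, block divisions, and the choice of k are invented for illustration, not the UPA implementation):

```python
# Toy unsupervised clustering of availability traces with a minimal k-means.
# A predicted availability for a machine would be its cluster's centroid.
def kmeans(points, k, iters=20):
    centroids = points[:k]                       # naive initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        for c in range(k):
            if clusters[c]:                      # recompute centroid
                centroids[c] = [sum(vals) / len(clusters[c])
                                for vals in zip(*clusters[c])]
    return centroids

# Each trace: idle fraction in four blocks of the day
# (night, morning, afternoon, evening) for one lab machine.
traces = [
    [0.90, 0.20, 0.10, 0.80],   # "busy during office hours" machines
    [0.95, 0.15, 0.20, 0.70],
    [0.90, 0.90, 0.85, 0.90],   # "mostly idle" machines
    [0.85, 0.95, 0.90, 0.95],
]
centroids = kmeans(traces, k=2)
```

On this toy data the two centroids separate cleanly into a "busy during office hours" pattern and a "mostly idle" pattern, which is the kind of class structure the prediction method assumes.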
Abstract:
To test the association between night work and work ability, and verify whether the type of contractual employment has any influence over this association. Permanent workers (N = 642) and workers with precarious jobs (temporary contract or outsourced; N = 552) were interviewed and filled out questionnaires concerning work hours and work ability index. They were classified into: never worked at night, ex-night workers, currently working up to five nights, and currently working at least six nights/2-week span. After adjusting for socio-demography and work variables, current night work was significantly associated with inadequate WAI (vs. day work with no experience in night work) only for precarious workers (OR 2.00, CI 1.01-3.95 and OR 1.85, CI 1.09-3.13 for those working up to five nights and those working at least six nights in 2 weeks, respectively). Unequal opportunities at work and little experience in night work among precarious workers may explain their higher susceptibility to night work.
MAGNETOHYDRODYNAMIC SIMULATIONS OF RECONNECTION AND PARTICLE ACCELERATION: THREE-DIMENSIONAL EFFECTS
Abstract:
Magnetic fields can change their topology through a process known as magnetic reconnection. This process is not only important for understanding the origin and evolution of the large-scale magnetic field, but is also seen as a possibly efficient particle accelerator producing cosmic rays, mainly through the first-order Fermi process. In this work we study the acceleration of test particles inserted in reconnection zones and show that, in magnetohydrodynamic (MHD) domains of reconnection without kinetic effects such as pressure anisotropy, the Hall term, or anomalous effects, the velocity component parallel to the magnetic field increases exponentially. Acceleration of the perpendicular component is also always possible in such models. We find that within contracting magnetic islands or current sheets the particles accelerate predominantly through the first-order Fermi process, as previously described, while outside the current sheets and islands the particles experience mostly drift acceleration due to magnetic field gradients. Considering two-dimensional MHD models without a guide field, we find that the parallel acceleration stops at some level. This saturation effect is, however, removed in the presence of an out-of-plane guide field or in three-dimensional models. Therefore, we stress the importance of the guide field and of fully three-dimensional studies for a complete understanding of particle acceleration in astrophysical reconnection environments.
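The first-order Fermi mechanism inside a contracting island can be caricatured as a particle bouncing between converging magnetic mirrors: each head-on, elastic reflection off a mirror approaching at speed U adds 2U to the particle speed (non-relativistic). A toy sketch, not the paper's MHD test-particle code:

```python
# Toy first-order Fermi acceleration: head-on elastic reflections off a
# converging wall moving at mirror_speed, as in a contracting island.
# Purely illustrative; geometry and relativity are ignored.
def fermi_bounces(v0, mirror_speed, n_bounces):
    v = v0
    history = [v]
    for _ in range(n_bounces):
        v += 2.0 * mirror_speed   # elastic bounce off an approaching mirror
        history.append(v)
    return history

speeds = fermi_bounces(v0=1.0, mirror_speed=0.05, n_bounces=10)
```

Since the bounce rate grows as the particle speeds up and the island contracts, the energy gain per unit time compounds, which is consistent with the exponential growth in time described in the abstract.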
Abstract:
Relevant results for (sub-)distribution functions related to parallel systems are discussed. The reverse hazard rate is defined using the product integral. Consequently, the restriction of absolute continuity for the involved distributions can be relaxed. The only restriction is that the sets of discontinuity points of the parallel distributions have to be disjoint. Nonparametric Bayesian estimators of all survival (sub-)distribution functions are derived. Dual to series systems, which use minimum lifetimes as observations, parallel systems record the maximum lifetimes. Dirichlet multivariate processes forming a class of prior distributions are considered for the nonparametric Bayesian estimation of the component distribution functions and the system reliability. For illustration, two striking numerical examples are presented.
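The structural fact behind parallel systems is that the system fails only when all components have failed, so the system lifetime is the maximum of the component lifetimes and F_sys(t) = F_1(t) · ... · F_n(t). A numerical sanity check with independent exponential components (this illustrates only the classical identity, not the paper's product-integral or Bayesian machinery):

```python
# Sanity check: for a parallel system of independent components, the CDF of
# the system lifetime (the maximum) is the product of the component CDFs.
import math
import random

def f_exp(t, rate):
    return 1.0 - math.exp(-rate * t)      # exponential lifetime CDF

def f_parallel(t, rates):
    prod = 1.0
    for r in rates:
        prod *= f_exp(t, r)               # F_sys(t) = product of F_i(t)
    return prod

random.seed(1)
rates = [0.5, 1.0]                        # two components, assumed rates
t = 1.5
n = 20000
# Simulate max lifetimes and compare the empirical CDF at t.
hits = sum(max(random.expovariate(r) for r in rates) <= t for _ in range(n))
empirical = hits / n
theoretical = f_parallel(t, rates)
```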