882 results for "Parallel and distributed systems"
Abstract:
The effects of agricultural-pastoral integration and tillage practices on soil microbial populations and activities have not been systematically investigated. The effects of no-tillage (NT), no-tillage agricultural-pastoral integrated systems (NT-I) and conventional tillage (CT) at soil depths of 0-10, 10-20 and 20-30 cm on microbial populations (bacteria and fungi), biomass-C, potential nitrification, urease and protease activities, total organic matter and total N contents were investigated. The crops used were soybean (in the NT, NT-I and CT systems), corn (in the NT and NT-I systems) and Tanner grass (Brachiaria sp.) (in the NT-I system); a forest system was used as a control. Urease and protease activities, biomass-C and the contents of organic matter and total N were higher (p < 0.05) in the forest soil than in the other soils. Potential nitrification was significantly higher in the NT-I system than in the other systems. Bacterial counts were similar in all systems. Fungal counts were similar in the CT and forest systems, but both were higher than in NT. All of these variables depended on the organic matter content and decreased (p < 0.05) from the upper soil layer to the deeper layers. These results indicate that no-tillage agricultural-pastoral integrated systems may be useful for soil conservation.
Abstract:
Recent years have seen increasing acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been favored mainly by the development of massively parallel processing (MPP) environments and of distributed computing. A point common to distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message-passing environment consists basically of a communication library that, acting as an extension of programming languages such as C, C++ and Fortran, allows parallel applications to be written. In the development of parallel applications, a fundamental aspect is the analysis of their performance. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and scalability of the application with respect to an increase in the number of processors or in the size of the problem instance. Establishing models or mechanisms for this analysis can be quite complicated, given the parameters and degrees of freedom involved in the implementation of a parallel application. One widely adopted alternative is the use of tools for the collection and visualization of performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. For an effective visualization it is necessary to identify and collect data on the execution of the application, a stage called instrumentation.
This work first presents a study of the main techniques used to collect performance data, and then makes a detailed analysis of the main available tools that can be used on Beowulf-type parallel clusters running Linux on the x86 platform with MPI (Message Passing Interface) communication libraries, such as LAM and MPICH. This analysis is validated on parallel applications that train perceptron-type neural networks using backpropagation. The conclusions show the capabilities and ease of use of the analyzed tools.
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research attracts most of the investment in the area. Acquisition, processing and interpretation of seismic data are the stages that make up a seismic study. Seismic processing in particular focuses on the imaging of the geological structures in the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by hardware advances that delivered greater storage and digital processing capacity, enabling more sophisticated processing algorithms such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest such as potential hydrocarbon reservoirs. However, performing a migration with quality and accuracy can be a very time-consuming process, owing to the heuristics of the mathematical algorithms and the extensive amount of input and output data involved; it may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impractical. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique.
Furthermore, analyses such as speedup and efficiency were performed and, ultimately, the degree of algorithmic scalability was identified with respect to the technological advances expected from future processors.
Abstract:
Digital image processing is a field that demands great processing capacity. It is therefore relevant to implement software that distributes the processing across several nodes on computers belonging to the same network. This work specifically discusses distributed algorithms for the compression and expansion of images using the discrete cosine transform. The results show that the savings in processing time obtained by the parallel algorithms, compared with their sequential equivalents, depend on the resolution of the image and on the complexity of the computation involved; that is, efficiency increases as the processing time grows relative to the time spent on communication between the network nodes.
Abstract:
The drying kinetics of tomato were studied using a heat pump dryer (HPD) and electric resistance dryers with parallel and crossed airflow. The performance of both systems was evaluated and compared, and the influence of temperature, air velocity and tomato type on the drying kinetics was analyzed. The HPD proved adequate for the drying of tomatoes, particularly regarding the conversion rate of electric energy into thermal energy. The effective coefficient of performance of the heat pump (COPHT,EF) was between 2.56 and 2.68, with an energy saving of about 40% compared with the electric resistance drying system. The Page model could be used to predict the drying time of tomato, and statistical analysis showed that the model parameters were mainly affected by drying temperature.
Abstract:
In the present work, we propose a model for the statistical distribution of the number of people versus the number of steps acquired by them in a learning process, based on competition, learning and natural selection. We assume that learning ability is normally distributed. We found that the number of people versus the number of steps acquired in a learning process follows a power law. Since competition, learning and selection are also at the core of all economic and social systems, we consider power-law scaling to be a quantitative description of this process in social systems. This offers an alternative way of thinking about holistic properties of complex systems. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
To simplify computer management, many system administrators are adopting advanced techniques to manage software configuration on grids, but the tight coupling between hardware and software makes every PC an individually managed entity, reducing scalability and increasing the cost of managing hundreds or thousands of PCs. This paper discusses the feasibility of a distributed virtual machine environment named FlexLab: a new approach to computer management that combines virtualization and distributed system architectures as the basis of a management system. FlexLab extends the coverage of a computer management solution beyond client operating system limitations and also offers a convenient hardware abstraction, decoupling software from hardware and simplifying computer management. The results obtained in this work indicate that FlexLab is able to overcome the limitations imposed by the coupling between software and hardware, simplifying the management of homogeneous and heterogeneous grids. © 2009 IEEE.
Abstract:
Networked control systems (NCS) are distributed control systems in which sensors, actuators and controllers are physically separated and connected through communication networks. NCS represent the evolution of networked control architectures, providing greater modularity and control decentralization, easier maintenance and diagnosis, and lower implementation cost. A recent trend in this research topic is the development of NCS over wireless networks, which enables interoperability between existing wired and wireless systems. This paper presents a feasibility analysis of using a serial RS-232 to Bluetooth converter as a wireless sensor link in NCS. To support this investigation, performance metrics relevant to wireless control applications, such as jitter, time delay and message loss, are highlighted and calculated to evaluate the converter's capabilities. In addition, the control performance of a motor control system implemented with the converter is analyzed. Experimental results led to the conclusion that serial RS-232 Bluetooth converters can be used to implement wireless networked control systems (WNCS), providing transmission rates and closed control loop times that are acceptable for NCS applications. © 2011 IEEE.
Abstract:
Networked control systems (NCS) are distributed control systems in which the sensors, actuators and controllers are physically separated and connected through communication networks. NCS represent the evolution of networked control architectures, providing greater modularity and control decentralization, easier maintenance and diagnosis, and lower implementation cost. A recent trend in this research topic is the development of NCS over wireless networks (WNCS), enabling interoperability between existing wired and wireless systems. This paper evaluates a serial RS-232 ZigBee device as a wireless sensor link in NCS. To support this investigation, performance metrics relevant to wireless control applications, such as jitter, time delay and message loss, are highlighted and calculated to evaluate the device's capabilities. In addition, the control performance of a motor control system implemented with the device is analyzed. Experimental results led to the conclusion that serial RS-232 ZigBee devices can be used to implement WNCS, and that using the device's delay information in the PID controller discretization can improve the control performance of the system. © 2012 IEEE.
Abstract:
In this paper the dynamics of the ideal and non-ideal Duffing oscillator with chaotic behavior are considered. To suppress the chaotic behavior and control the system, a control signal is introduced into the system dynamics. The control strategy involves the application of two control signals: a nonlinear feedforward control, obtained by the harmonic balance method, to maintain the controlled system in a periodic orbit, and a state feedback control, obtained by the state-dependent Riccati equation, to bring the system trajectory onto the desired periodic orbit. Additionally, the control strategy includes an active magnetorheological (MR) damper to actuate on the system. The control force of the damper is a function of the electric current applied to the damper coil, which is determined from the force commanded by the controller and the velocity of the damper piston. Numerical simulations demonstrate the effectiveness of the control strategy in leading the system from any initial condition to a desired orbit; moreover, by considering the mathematical model of the MR damper, it was possible to control the force of the shock absorber by controlling the electric current applied to its coils. © 2012 Foundation for Scientific Research and Technological Innovation.
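For reference, a standard form of the forced Duffing oscillator with the two-part control described above can be written as (the coefficients and the notation below are generic textbook conventions, not the paper's; the non-ideal variant adds coupling to the energy source):

\ddot{x} + \delta \dot{x} + \alpha x + \beta x^{3} = \gamma \cos(\Omega t) + u(t),
\qquad u(t) = u_{\mathrm{ff}}(t) + u_{\mathrm{fb}}(t),

where u_ff is the feedforward term from harmonic balance that sustains the target periodic orbit and u_fb is the SDRE state feedback that drives the trajectory toward it.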
Abstract:
This paper presents a mixed-integer linear programming model to solve the problem of allocating voltage regulators and fixed or switched capacitors (VRCs) in radial distribution systems. The use of a mixed-integer linear model guarantees convergence to optimality using existing optimization software. In the proposed model, the steady-state operation of the radial distribution system is modeled through linear expressions. The results for one test system and one real distribution system are presented in order to show the accuracy as well as the efficiency of the proposed solution technique. A heuristic to obtain the Pareto front for the multiobjective VRC allocation problem is also presented. © 2012 Elsevier Ltd. All rights reserved.
Abstract:
Autism is a neurodevelopmental disorder characterized by impaired social interaction and communication accompanied by repetitive behavioral patterns and unusual stereotyped interests. Autism is considered a highly heterogeneous disorder with diverse putative causes and associated factors giving rise to variable ranges of symptomatology. Incidence seems to be increasing with time, while the underlying pathophysiological mechanisms remain virtually uncharacterized. By systematic review of the literature and a systems biology approach, our aims were to examine the multifactorial nature of autism with its broad range of severity, to ascertain the predominant biological processes, cellular components and molecular functions integral to the disorder, and finally, to elucidate the most central contributions (genetic and/or environmental) in silico. With this goal, we developed an integrative network model for gene-environment interactions (GENVI model) in which calcium (Ca2+) was shown to be the most relevant node. Moreover, considering the present data from our systems biology approach together with the results from the differential gene expression analysis of cerebellar samples from autistic patients, we believe that RAC1 in particular, and the RHO family of GTPases in general, could play a critical role in the neuropathological events associated with autism. © 2013 Springer Science+Business Media New York.