917 results for "Parallel or distributed processing"
Abstract:
Recent years have seen growing acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been driven mainly by the development of massively parallel processing (MPP) environments and of distributed computing. A point common to distributed systems and MPP architectures is the notion of message passing, which enables communication between processes. A message-passing environment consists basically of a communication library that acts as an extension of the programming languages used to build parallel applications, such as C, C++ and Fortran. In the development of parallel applications, a fundamental aspect is the analysis of their performance. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and scalability of the application with respect to an increasing number of processors or an increasing problem size. Establishing models or mechanisms for this analysis can be quite complicated, given the parameters and degrees of freedom involved in the implementation of a parallel application. One alternative has been the use of tools for collecting and visualizing performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. Efficient visualization requires identifying and collecting data related to the execution of the application, a stage called instrumentation.
This work first presents a study of the main techniques used to collect performance data, followed by a detailed analysis of the main available tools that can be used on Beowulf-cluster architectures running Linux on the x86 platform with MPI (Message Passing Interface) communication libraries such as LAM and MPICH. The analysis is validated on parallel applications that train perceptron neural networks using backpropagation. The conclusions show the potential and ease of use of the analyzed tools.
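The performance metrics mentioned above can be made concrete with a short sketch. The timings below are hypothetical, purely for illustration of how speedup and efficiency are derived from measured execution times:

```python
def speedup(t_serial, t_parallel):
    """S(p) = T(1) / T(p): how many times faster the parallel run is."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """E(p) = S(p) / p: fraction of ideal linear speedup actually achieved."""
    return speedup(t_serial, t_parallel) / p

# Hypothetical wall-clock times (seconds) measured on 1, 2, 4 and 8 processors.
times = {1: 120.0, 2: 64.0, 4: 36.0, 8: 22.0}
report = {p: (speedup(times[1], t), efficiency(times[1], t, p))
          for p, t in times.items()}
```

Falling efficiency as `p` grows is exactly the kind of trend the profiling and visualization tools discussed in the work help to explain.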
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research concentrates most of the investment in the area. Acquisition, processing and interpretation of seismic data are the parts that make up a seismic study. Seismic processing in particular is focused on producing an image that represents the geological structures in the subsurface. Seismic processing has evolved significantly in recent decades, driven by the demands of the oil industry and by hardware advances that brought higher storage and processing capabilities, enabling the development of more sophisticated processing algorithms such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section image that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest such as potential hydrocarbon reservoirs. However, a migration carried out with quality and accuracy can be very time-consuming, due to the heuristics of the mathematical algorithms and the extensive volume of input and output data involved; it may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that can make these methods impractical. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique.
Furthermore, speedup and efficiency analyses were performed and, ultimately, the degree of algorithmic scalability was assessed with respect to the technological advances expected from future processors.
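The kind of kernel such a parallelization targets is a finite-difference time step of the wave equation. Below is a minimal serial sketch in pure Python, illustrative only: real RTM kernels use higher-order stencils, 3D grids and compiled code, and OpenMP would parallelize the outer grid loop:

```python
def wave_step(u_prev, u_curr, c2dt2_over_dx2):
    """One explicit finite-difference time step of the 2-D acoustic wave
    equation -- the kind of inner loop an RTM code repeats thousands of
    times per shot. u_prev/u_curr are the two previous wavefields;
    c2dt2_over_dx2 bundles velocity and grid constants."""
    ny, nx = len(u_curr), len(u_curr[0])
    u_next = [[0.0] * nx for _ in range(ny)]   # boundary rows stay zero
    for j in range(1, ny - 1):        # rows are independent: the natural
        for i in range(1, nx - 1):    # place for '#pragma omp parallel for'
            lap = (u_curr[j][i - 1] + u_curr[j][i + 1] +
                   u_curr[j - 1][i] + u_curr[j + 1][i] - 4.0 * u_curr[j][i])
            u_next[j][i] = (2.0 * u_curr[j][i] - u_prev[j][i]
                            + c2dt2_over_dx2 * lap)
    return u_next
```

Because each grid point of `u_next` depends only on the previous time levels, the row loop has no data races, which is what makes OpenMP's loop-level parallelism effective here.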
Abstract:
Currently, several psychological and non-psychological tests can be found published without standardized procedures across different psychological areas, such as intelligence, emotional states, attitudes, social skills, vocation and preferences. Computerized psychological testing is an extension of traditional psychological testing practice; however, it has psychometric qualities of its own, whether because of its transposition to a computerized environment or because of the extensions that can be developed within it. The present research, motivated by the need to study the validity and reliability of a computerized test, designed a methodological structure to provide parallel applications in several kinds of operational groups, evaluating the influence of time and approach on the computerization process. This validity refers to normative group values, reproducibility in computerized application processes, and data processing. Not every psychological test can be computerized. Therefore, the need for a sound test, with quality and properties suitable for transformation into a computerized application, led us to use the Millon Personality Inventory, created by Theodore Millon. This inventory assesses personality according to 12 bipolarities distributed across 24 factors, grouped into the categories motivational styles, cognitive targets and interpersonal relations. The instrument does not diagnose pathological features, but assesses normal and non-adaptive aspects of human personality in light of Theodore Millon's theory of personality. To place this research in the Brazilian context of psychological testing, we discuss the theme, evaluating the advantages and disadvantages of such practices, as well as current approaches to the computerization of psychological testing and the main criteria specific to this specialized area of psychometrics.
The test was administered on-line, hosted at the site http://www.planetapsi.com during 2007 and 2008, preceded by a questionnaire on the social characteristics of each respondent. A report was generated from each user's data. The test was applied nationwide, across all Brazilian regions, yielding 1508 applications. Nine groups were organized, totaling 180 test-retest subjects, with three time intervals and three retest formats for the study of on-line tests. In parallel, an offline multi-application group was organized, with 20 subjects receiving the test by e-mail. The subjects of this study were distributed across the five Brazilian regions and were informed about the test via the Internet. The performance of the traditional and on-line groups supports the conclusion that on-line application provides significant consistency on all validity criteria studied and justifies its use. The on-line results were not only consistent among themselves but also similar to pencil-and-paper results (0.82). Retest results showed correlations between 0.92 and 1, and multi-session applications also showed good correlations in these comparisons. In addition, we assessed the adequacy of the operational criteria used, such as security, user performance, environmental characteristics, database organization, operational costs and the limitations of this on-line inventory. On all these items performance was excellent, leading also to the conclusion that a self-administered psychometric test is feasible. The results of this work provide a guide for questioning and establishing methodologies for computerized psychological testing software in the country.
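The reliability figures quoted above (0.82 against pencil-and-paper, 0.92 to 1 on retest) are correlation coefficients. A minimal sketch of the usual statistic, with hypothetical scores (not the study's data):

```python
def pearson_r(x, y):
    """Pearson product-moment correlation, the standard statistic behind
    test-retest reliability values like the ones reported above."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical test and retest scores for five subjects.
test_scores = [42, 55, 61, 48, 70]
retest_scores = [44, 54, 63, 47, 69]
r = pearson_r(test_scores, retest_scores)   # close to 1: stable scores
```

Values near 1 indicate that subjects kept their relative standing between applications, which is what test-retest reliability measures.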
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Most architectures proposed for developing Distributed Virtual Environments (DVEs) support only a limited number of users. To support applications on the Internet infrastructure with hundreds or perhaps thousands of users logged in simultaneously on a DVE, several techniques for managing resources, such as bandwidth and processing capacity, must be implemented. The strategy presented in this paper combines methods to attain the required scalability, in particular an application-level multicast protocol.
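One common building block behind DVE scalability is grid-based interest management, where each spatial cell maps to one application-level multicast group and users subscribe only to nearby cells. The sketch below is a generic illustration under that assumption, not necessarily the paper's exact method:

```python
def cell_of(pos, cell_size):
    """Map a 2-D world position to its grid cell; one multicast group
    is assumed per cell."""
    return (int(pos[0] // cell_size), int(pos[1] // cell_size))

def groups_of_interest(pos, cell_size, radius=1):
    """Multicast groups a user subscribes to: its own cell plus the
    surrounding ring. Per-user traffic then scales with local user
    density rather than with the total number of users in the DVE."""
    cx, cy = cell_of(pos, cell_size)
    return {(cx + dx, cy + dy)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)}
```

A user at position (25, 5) with 10-unit cells would join the 9 groups around cell (2, 0), receiving updates only from users whose avatars are nearby.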
Abstract:
In order to simplify computer management, many system administrators are adopting advanced techniques to manage the software configuration of enterprise computer networks, but the tight coupling between hardware and software makes every PC an individually managed entity, lowering scalability and increasing the cost of managing hundreds or thousands of PCs. Virtualization is an established technology; however, its use has been focused on server consolidation and virtual desktop infrastructure rather than on managing computers distributed over a network. This paper discusses the feasibility of the Distributed Virtual Machine Environment, a new approach to enterprise computer management that combines virtualization and a distributed system architecture as the basis of the management architecture. © 2008 IEEE.
Abstract:
In large distributed systems where shared resources are owned by distinct entities, resource ownership must be reflected in resource allocation. An appropriate resource management system should guarantee that resource owners have access to a share of resources proportional to the share they provide. To achieve this, policies can be applied that revoke access to resources currently used by other users. This paper introduces a scheduling policy based on the concept of distributed ownership, called the Owner Share Enforcement Policy (OSEP). OSEP's goal is to guarantee that owners do not have their jobs postponed for long periods of time. We evaluate the results of applying this policy using metrics that describe policy violation, loss of capacity, policy cost and user satisfaction, in environments with and without job checkpointing. We also evaluate and compare OSEP with the Fair-Share policy; from these results it is possible to capture the trade-offs between different ways of achieving fairness based on user satisfaction. © 2009 IEEE.
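As a toy illustration of ownership enforcement (the data model, names and actions below are hypothetical; the paper's actual OSEP is more elaborate and is evaluated with the metrics listed above), a scheduler might revoke foreign jobs running on an owner's machines whenever that owner has jobs waiting:

```python
def osep_preempt(machines, owner, pending_owner_jobs, checkpoint=True):
    """Hypothetical sketch of owner-share enforcement.
    machines: dict machine_id -> (machine_owner, running_job_user or None).
    Returns (machine_id, action) decisions freeing enough of the owner's
    machines for the owner's pending jobs."""
    decisions = []
    need = pending_owner_jobs
    for mid, (m_owner, user) in machines.items():
        if need == 0:
            break
        # Revoke only foreign jobs, and only on machines this owner provides.
        if m_owner == owner and user is not None and user != owner:
            action = "checkpoint_and_preempt" if checkpoint else "kill"
            decisions.append((mid, action))
            need -= 1
    return decisions
```

The `checkpoint` flag mirrors the paper's two evaluation scenarios: with checkpointing the revoked job loses little work, without it the preemption is a pure loss of capacity.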
Abstract:
We evaluated the effect of adding by-products from the processing of oil seeds to the diet of lambs on carcass and meat traits. Twenty-four non-castrated weaned male Santa Inês lambs, approximately 70 days old and with an initial average weight of 19.11 ± 2.12 kg, were distributed in a completely randomized design. Treatments consisted of diets containing by-products, with 70% concentrate and 30% tifton hay (Cynodon spp.), termed SM: control with soybean meal; SC: formulated with soybean cake; SUC: formulated with sunflower cake; and PC: formulated with peanut cake. Diets had no effect on the carcass traits evaluated. There was no significant effect on the mean values of perirenal, omental and mesenteric fats (0.267, 0.552 and 0.470 kg, respectively), and no influence on the percentages of moisture, ether extract, crude protein or ash in the loin between experimental diets. Diets containing by-products from the processing of oil seeds did not change the fatty acids found in lamb meat. The by-products from oil seeds provided similar carcass and meat traits, so their use can be recommended as alternative protein and energy sources for feedlot lambs.
Abstract:
This long-term extension of an 8-week randomized, naturalistic study in patients with panic disorder with or without agoraphobia compared the efficacy and safety of clonazepam (n = 47) and paroxetine (n = 37) over a 3-year total treatment duration. Target doses for all patients were 2 mg/d clonazepam and 40 mg/d paroxetine (both taken at bedtime). This study reports data from the long-term period (34 months), following the initial 8-week treatment phase. Thus, total treatment duration was 36 months. Patients with a good primary outcome during acute treatment continued monotherapy with clonazepam or paroxetine, but patients with partial primary treatment success were switched to the combination therapy. At initiation of the long-term study, the mean doses of clonazepam and paroxetine were 1.9 (SD, 0.30) and 38.4 (SD, 3.74) mg/d, respectively. These doses were maintained until month 36 (clonazepam 1.9 [ SD, 0.29] mg/d and paroxetine 38.2 [SD, 3.87] mg/d). Long-term treatment with clonazepam led to a small but significantly better Clinical Global Impression (CGI)-Improvement rating than treatment with paroxetine (mean difference: CGI-Severity scale -3.48 vs -3.24, respectively, P = 0.02; CGI-Improvement scale 1.06 vs 1.11, respectively, P = 0.04). Both treatments similarly reduced the number of panic attacks and severity of anxiety. Patients treated with clonazepam had significantly fewer adverse events than those treated with paroxetine (28.9% vs 70.6%, P < 0.001). The efficacy of clonazepam and paroxetine for the treatment of panic disorder was maintained over the long-term course. There was a significant advantage with clonazepam over paroxetine with respect to the frequency and nature of adverse events.
Abstract:
The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. Also, they introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, to be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise, yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, ranging over message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
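The differential-encoding idea described above can be sketched with Python's standard `difflib`: if both sides share a template message, only opcodes and new bytes travel over the wire. This is a generic illustration, not any particular SOAP toolkit's wire format:

```python
import difflib

def encode_diff(template, message):
    """Differential encoding sketch: describe the message as edits
    against a template the receiver already holds."""
    sm = difflib.SequenceMatcher(None, template, message)
    ops = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))            # reuse a template slice
        else:
            ops.append(("insert", message[j1:j2]))  # ship only the new bytes
    return ops

def decode_diff(template, ops):
    """Rebuild the full message from the shared template plus the delta."""
    out = []
    for op in ops:
        out.append(template[op[1]:op[2]] if op[0] == "copy" else op[1])
    return "".join(out)
```

For two SOAP envelopes that differ only in a parameter value, the delta carries just the changed attribute, which is where the bandwidth savings the survey discusses come from.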
Abstract:
Background: Warfarin-dosing pharmacogenetic algorithms have presented different performances across ethnicities, and the impact in admixed populations is not fully known. Aims: To evaluate the CYP2C9 and VKORC1 polymorphisms and warfarin-predicted metabolic phenotypes according to both self-declared ethnicity and genetic ancestry in a Brazilian general population plus Amerindian groups. Methods: Two hundred twenty-two Amerindians (Tupinikin and Guarani) were enrolled and 1038 individuals from the Brazilian general population who were self-declared as White, Intermediate (Brown, Pardo in Portuguese), or Black. Samples of 274 Brazilian subjects from Sao Paulo were analyzed for genetic ancestry using an Affymetrix 6.0 (R) genotyping platform. The CYP2C9*2 (rs1799853), CYP2C9*3 (rs1057910), and VKORC1 g.-1639G>A (rs9923231) polymorphisms were genotyped in all studied individuals. Results: The allelic frequency for the VKORC1 polymorphism was differently distributed according to self-declared ethnicity: White (50.5%), Intermediate (46.0%), Black (39.3%), Tupinikin (40.1%), and Guarani (37.3%) (p < 0.001), respectively. The frequency of intermediate plus poor metabolizers (IM + PM) was higher in White (28.3%) than in Intermediate (22.7%), Black (20.5%), Tupinikin (12.9%), and Guarani (5.3%), (p < 0.001). For the samples with determined ancestry, subjects carrying the GG genotype for the VKORC1 had higher African ancestry and lower European ancestry (0.14 +/- 0.02 and 0.62 +/- 0.02) than in subjects carrying AA (0.05 +/- 0.01 and 0.73 +/- 0.03) (p = 0.009 and 0.03, respectively). Subjects classified as IM + PM had lower African ancestry (0.08 +/- 0.01) than extensive metabolizers (0.12 +/- 0.01) (p = 0.02). Conclusions: The CYP2C9 and VKORC1 polymorphisms are differently distributed according to self-declared ethnicity or genetic ancestry in the Brazilian general population plus Amerindians. 
This information is an initial step toward clinical pharmacogenetic implementation, and it could be very useful in strategic planning aiming at an individual therapeutic approach and an adverse drug effect profile prediction in an admixed population.
Abstract:
Background: Proteinaceous toxins are observed across all levels of inter-organismal and intra-genomic conflicts. These include recently discovered prokaryotic polymorphic toxin systems implicated in intra-specific conflicts. They are characterized by a remarkable diversity of C-terminal toxin domains generated by recombination with standalone toxin-coding cassettes. Prior analysis revealed a striking diversity of nuclease and deaminase domains among the toxin modules. We systematically investigated polymorphic toxin systems using comparative genomics, sequence and structure analysis. Results: Polymorphic toxin systems are distributed across all major bacterial lineages and are delivered by at least eight distinct secretory systems. In addition to type-II, these include type-V, VI, VII (ESX), and the poorly characterized "Photorhabdus virulence cassettes (PVC)", PrsW-dependent and MuF phage-capsid-like systems. We present evidence that trafficking of these toxins is often accompanied by autoproteolytic processing catalyzed by HINT, ZU5, PrsW, caspase-like, papain-like, and a novel metallopeptidase associated with the PVC system. We identified over 150 distinct toxin domains in these systems. These span an extraordinary catalytic spectrum, including 23 distinct clades of peptidases, numerous previously unrecognized versions of nucleases and deaminases, ADP-ribosyltransferases, ADP ribosyl cyclases, RelA/SpoT-like nucleotidyltransferases, glycosyltransferases and other enzymes predicted to modify lipids and carbohydrates, and a pore-forming toxin domain. Several of these toxin domains are shared with host-directed effectors of pathogenic bacteria. Over 90 families of immunity proteins might neutralize anywhere from a single to at least 27 distinct types of toxin domains. In some organisms, multiple tandem immunity genes or immunity protein domains are organized into polyimmunity loci or polyimmunity proteins.
Gene-neighborhood-analysis of polymorphic toxin systems predicts the presence of novel trafficking-related components, and also the organizational logic that allows toxin diversification through recombination. Domain architecture and protein-length analysis revealed that these toxins might be deployed as secreted factors, through directed injection, or via inter-cellular contact facilitated by filamentous structures formed by RHS/YD, filamentous hemagglutinin and other repeats. Phyletic pattern and life-style analysis indicate that polymorphic toxins and polyimmunity loci participate in cooperative behavior and facultative 'cheating' in several ecosystems such as the human oral cavity and soil. Multiple domains from these systems have also been repeatedly transferred to eukaryotes and their viruses, such as the nucleo-cytoplasmic large DNA viruses. Conclusions: Along with a comprehensive inventory of toxins and immunity proteins, we present several testable predictions regarding active sites and catalytic mechanisms of toxins, their processing and trafficking and their role in intra-specific and inter-specific interactions between bacteria. These systems provide insights regarding the emergence of key systems at different points in eukaryotic evolution, such as ADP ribosylation, interaction of myosin VI with cargo proteins, mediation of apoptosis, hyphal heteroincompatibility, hedgehog signaling, arthropod toxins, cell-cell interaction molecules like teneurins and different signaling messengers.
Abstract:
The term "Brain Imaging" identifies a set of techniques for analyzing the structure and/or functional behavior of the brain in normal and/or pathological situations. These techniques are widely used in the study of brain activity. In addition to clinical usage, the analysis of brain activity is gaining popularity in other recent fields, e.g. Brain-Computer Interfaces (BCI) and the study of cognitive processes. In these contexts, the usage of classical solutions (e.g. fMRI, PET-CT) can be unfeasible, due to their low temporal resolution, high cost and limited portability. For these reasons, alternative low-cost techniques are an object of research, typically based on simple recording hardware and on intensive data processing. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), where the electric potential at the patient's scalp is recorded by high-impedance electrodes. In EEG the potentials are generated directly by neuronal activity, while in EIT they result from the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from the measurements, EIT and EEG rely on detailed knowledge of the underlying electrical properties of the body, obtained from numerical models of the electric field distribution therein. The inhomogeneous and anisotropic electrical properties of human tissues make accurate modeling and simulation very challenging, leading to a trade-off between physical accuracy and technical feasibility, which currently severely limits the capabilities of these techniques. Moreover, processing the recorded data requires computationally intensive regularization techniques, which weighs on applications with hard temporal constraints (such as BCI). This work focuses on the parallel implementation of a workflow for EEG and EIT data processing.
The resulting software is accelerated using many-core GPUs, in order to provide solutions in reasonable times and to address the requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
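As a toy illustration of the regularization step mentioned above: inverse problems like EEG/EIT source reconstruction are ill-posed, and Tikhonov regularization is the classic remedy. The sketch below minimizes ||Ax − b||² + λ||x||² by plain gradient descent on a tiny dense system; real head models involve huge systems, which is precisely why GPU acceleration matters:

```python
def tikhonov_gd(A, b, lam, steps=5000, lr=0.01):
    """Minimise ||Ax - b||^2 + lam * ||x||^2 by gradient descent.
    A is a list of rows; lam > 0 trades data fit for solution stability,
    taming the ill-posedness of EEG/EIT-style inverse problems."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # residual r = A x - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        # gradient g_j = 2 * (A^T r + lam * x)_j
        for j in range(n):
            g = 2.0 * (sum(A[i][j] * r[i] for i in range(m)) + lam * x[j])
            x[j] -= lr * g
    return x
```

With λ = 0 the solver reproduces the least-squares solution; raising λ shrinks the solution toward zero (for an identity A, to b / (1 + λ)), which is the stabilizing effect regularization buys at the cost of extra computation.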
Abstract:
Beamforming entails joint processing of multiple signals received or transmitted by an array of antennas. This thesis addresses the implementation of beamforming in two distinct systems, namely a distributed network of independent sensors and a broad-band multi-beam satellite network. With the rising popularity of wireless sensors, scientists are taking advantage of the flexibility of these devices, which come with very low implementation costs. Simplicity, however, is intertwined with scarce power resources, which must be carefully rationed to ensure successful measurement campaigns throughout the whole duration of the application. In this scenario, distributed beamforming is a cooperative communication technique that allows nodes in the network to emulate a virtual antenna array, seeking power gains on the order of the size of the network itself when required to deliver a common message signal to the receiver. To achieve a desired beamforming configuration, however, all nodes in the network must agree upon the same phase reference, which is challenging in a distributed set-up where all devices are independent. The first part of this thesis presents new algorithms for phase alignment, which prove to be more energy efficient than existing solutions. With the ever-growing demand for broad-band connectivity, satellite systems have great potential to guarantee service where terrestrial systems cannot penetrate. In order to satisfy the constantly increasing demand for throughput, satellites are equipped with multi-fed reflector antennas to resolve spatially separated signals. However, increasing the number of feeds on the payload burdens the link between the satellite and the gateway with an extensive amount of signaling, and possibly calls for much more expensive multiple-gateway infrastructures.
This thesis focuses on an on-board non-adaptive signal processing scheme denoted as Coarse Beamforming, whose objective is to reduce the communication load on the link between the ground station and space segment.
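A well-known baseline for the phase-alignment problem described in the first part (one-bit-feedback synchronization, a classic scheme from the distributed beamforming literature, not the thesis's own algorithms) can be sketched as follows: each node randomly perturbs its carrier phase, and the receiver feeds back a single bit saying whether the combined signal got stronger:

```python
import cmath
import random

def one_bit_phase_alignment(offsets, iters=2000, step=0.3, seed=1):
    """One-bit-feedback phase synchronisation sketch for distributed
    beamforming. offsets: each node's unknown channel phase (radians).
    Returns the achieved coherence in [0, 1]; 1.0 means the n unit-power
    signals combine perfectly, i.e. the full n^2 power gain."""
    rng = random.Random(seed)
    n = len(offsets)
    phases = [0.0] * n

    def rss(ph):
        # Received signal strength: magnitude of the coherent sum.
        return abs(sum(cmath.exp(1j * (offsets[i] + ph[i])) for i in range(n)))

    best = rss(phases)
    for _ in range(iters):
        trial = [p + rng.uniform(-step, step) for p in phases]
        r = rss(trial)
        if r > best:            # the one feedback bit: "better" -> keep
            phases, best = trial, r
    return best / n
```

Each node needs only its own random perturbation and the broadcast feedback bit, so no node ever learns the others' phases; the cost is slow convergence, which is one motivation for the more energy-efficient algorithms the thesis proposes.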