904 results for Parallel and Distributed Processing
Abstract:
Following striate cortex damage in monkeys and humans there can be residual function mediated by parallel visual pathways. In humans this can sometimes be associated with a “feeling” that something has happened, especially with rapid movement or abrupt onset. For less transient events, discriminative performance may still be well above chance even when the subject reports no conscious awareness of the stimulus. In a previous study we examined parameters that yield good residual visual performance in the “blind” hemifield of a subject with unilateral damage to the primary visual cortex. With appropriate parameters we demonstrated good discriminative performance, both with and without conscious awareness of a visual event. These observations raise the possibility of imaging the brain activity generated in the “aware” and the “unaware” modes, with matched levels of discrimination performance, and hence of revealing patterns of brain activation associated with visual awareness. The intact hemifield also allows a comparison with normal vision. Here we report the results of a functional magnetic resonance imaging study on the same subject carried out under aware and unaware stimulus conditions. The results point to a shift in the pattern of activity from neocortex in the aware mode, to subcortical structures in the unaware mode. In the aware mode prestriate and dorsolateral prefrontal cortices (area 46) are active. In the unaware mode the superior colliculus is active, together with medial and orbital prefrontal cortical sites.
Abstract:
The current trend in the evolution of sensor systems seeks to provide more accuracy and resolution while decreasing size and power consumption. Field Programmable Gate Arrays (FPGAs) provide specific reprogrammable hardware technology that can be properly exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfigurability at very low power consumption. For highly demanding tasks, FPGAs have been favoured for the efficiency afforded by their architectural flexibility (parallelism, on-chip memory, etc.), their reconfigurability and their performance in implementing algorithms. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable and lower-power sensors based on FPGAs is being developed in Spain. In this paper, a review of these developments is presented, also describing the FPGA technologies employed by the different research groups and providing an overview of future research within this field.
Abstract:
"UILU-ENG 78 1745."
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
A parallel computing environment to support optimization of large-scale engineering systems is designed and implemented on Windows-based personal computer networks, using the master-worker model and the Parallel Virtual Machine (PVM). It involves decomposing a large engineering system into a number of smaller subsystems that are optimized in parallel on worker nodes, and coordinating the subsystem optimization results on the master node. The environment consists of six functional modules: the master control, the optimization model generator, the optimizer, the data manager, the monitor, and the post-processor. An object-oriented design of these modules is presented. The environment supports every step from the generation of optimization models to their solution and the visualization of results on networks of computers. User-friendly graphical interfaces make it easy to define the problem and to monitor and steer the optimization process. The environment has been verified with an example of large space truss optimization. (C) 2004 Elsevier Ltd. All rights reserved.
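The master-worker decomposition described above can be illustrated with a minimal sketch. The sketch below uses Python's multiprocessing pool rather than the PVM environment of the paper, and the subsystem objective and coordination step are hypothetical placeholders, not the environment's actual modules.

```python
from multiprocessing import Pool

def optimize_subsystem(subsystem):
    """Hypothetical worker task: optimize one decomposed subsystem.

    In the environment described above this would invoke the optimizer
    module on a worker node via PVM; here it is a stub that returns a
    (subsystem id, local optimum) pair.
    """
    local_optimum = sum(subsystem["design_vars"])  # placeholder objective
    return subsystem["id"], local_optimum

def coordinate(results):
    """Hypothetical master-side coordination of the subsystem results."""
    return {sid: value for sid, value in results}

if __name__ == "__main__":
    # The master decomposes the large system into smaller subsystems ...
    subsystems = [{"id": i, "design_vars": [i, i + 1.0]} for i in range(4)]
    # ... farms them out to worker processes (worker nodes in PVM) ...
    with Pool(processes=4) as pool:
        results = pool.map(optimize_subsystem, subsystems)
    # ... and coordinates the subsystem optima on the master node.
    print(coordinate(results))
```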
Abstract:
As massive data sets become increasingly available, people face the problem of how to process and understand these data effectively. Traditional sequential computing models are giving way to parallel and distributed computing models such as MapReduce, both because of the large size of the data sets and because of their high dimensionality. This dissertation, in the same direction as other research based on MapReduce, develops effective techniques and applications using MapReduce that can help solve large-scale problems. Three different problems are tackled. The first deals with processing terabytes of raster data in a spatial data management system: aerial imagery files are broken into tiles to enable data-parallel computation. The second and third problems deal with dimension-reduction techniques for data sets of high dimensionality. Three variants of the nonnegative matrix factorization technique are scaled up in MapReduce to factorize matrices with dimensions on the order of millions, based on different matrix multiplication implementations. Two algorithms, which compute CANDECOMP/PARAFAC and Tucker tensor decompositions respectively, are parallelized in MapReduce by carefully partitioning the data and arranging the computation to maximize data locality and parallelism.
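As a rough illustration of the tiling idea behind the first problem, the following sketch expresses a tile-based map step and a per-tile reduce step in plain Python. The tile key, block layout and per-tile aggregate are assumptions made for illustration, not the dissertation's Hadoop implementation.

```python
from collections import defaultdict

def map_tile(image_id, pixels, tile_size=2):
    """Map step: break a raster (a list of rows) into fixed-size tiles and
    emit (tile_key, pixel_block) pairs, mirroring the tiling idea above."""
    for r in range(0, len(pixels), tile_size):
        for c in range(0, len(pixels[0]), tile_size):
            block = [row[c:c + tile_size] for row in pixels[r:r + tile_size]]
            yield (image_id, r // tile_size, c // tile_size), block

def reduce_tile(tile_key, blocks):
    """Reduce step: a per-tile aggregate (mean pixel value) as a stand-in
    for whatever spatial processing is applied to each tile."""
    values = [v for block in blocks for row in block for v in row]
    return tile_key, sum(values) / len(values)

if __name__ == "__main__":
    raster = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
    grouped = defaultdict(list)
    for key, block in map_tile("scene-001", raster):
        grouped[key].append(block)          # shuffle/group by tile key
    print([reduce_tile(k, v) for k, v in grouped.items()])
```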
Abstract:
This study addressed the use of conventional and vegetable-origin polyurethane foams to extract C. I. Acid Orange 61 dye. The quantitative determination of the residual dye was carried out with a UV/Vis absorption spectrophotometer. The extraction of the dye was found to depend on various factors, such as the pH of the solution, the foam cell structure, the contact time, and dye-foam interactions. After 45 days, better results were obtained with the conventional foam than with the vegetable foam. Despite presenting a lower extraction percentage, the vegetable foam is advantageous because it is considered a polymer with biodegradable characteristics.
Abstract:
Maltose-binding protein is the periplasmic component of the ABC transporter responsible for the uptake of maltose/maltodextrins. The Xanthomonas axonopodis pv. citri maltose-binding protein MalE has been crystallized at 293 K using the hanging-drop vapour-diffusion method. The crystal belonged to the primitive hexagonal space group P6(1)22, with unit-cell parameters a = 123.59, b = 123.59, c = 304.20 angstrom, and contained two molecules in the asymmetric unit. It diffracted to 2.24 angstrom resolution.
Abstract:
The aim of this work is to verify the possibility of correlating specific gravity with wood hardness parallel and perpendicular to the grain. The purpose is to offer one more tool to help in deciding which wood species to use in floors and sleepers. To reach this goal, we considered the results of standard tests (NBR 7190:1997, Timber Structures Design, Annex B, Brazilian Association of Technical Standards) to determine hardness parallel and normal to the grain in fourteen tropical high-density wood species (over 850 kg/m(3) at 12% moisture content). For each species twelve determinations were made, based on material obtained at Sao Carlos and its regional wood market. Statistical analysis led to expressions describing the relationships between the cited properties, with a coefficient of determination of about 0.8.
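A short sketch of how such an expression and its coefficient of determination can be obtained by least squares is given below. The numerical values are synthetic placeholders and the linear form is an assumption; they are not the measured data or the fitted expressions from the study.

```python
import numpy as np

# Synthetic illustrative values only (NOT the measured data from the study):
# specific gravity (dimensionless) vs. hardness parallel to the grain.
specific_gravity = np.array([0.86, 0.90, 0.95, 1.00, 1.05, 1.10])
hardness = np.array([9.5, 10.4, 11.8, 12.9, 14.1, 15.4])

# Least-squares fit of a linear expression hardness = a * SG + b.
a, b = np.polyfit(specific_gravity, hardness, 1)
predicted = a * specific_gravity + b

# Coefficient of determination R^2, the statistic reported as about 0.8 above.
ss_res = np.sum((hardness - predicted) ** 2)
ss_tot = np.sum((hardness - hardness.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"hardness ~ {a:.2f}*SG + {b:.2f}, R^2 = {r_squared:.3f}")
```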
Abstract:
Inhibitors of proteolytic enzymes (proteases) are emerging as prospective treatments for diseases such as AIDS and viral infections, cancers, inflammatory disorders, and Alzheimer's disease. Generic approaches to the design of protease inhibitors are limited by the unpredictability of interactions between, and structural changes to, inhibitor and protease during binding. A computer analysis of superimposed crystal structures for 266 small molecule inhibitors bound to 48 proteases (16 aspartic, 17 serine, 8 cysteine, and 7 metallo) provides the first conclusive proof that inhibitors, including substrate analogues, commonly bind in an extended beta-strand conformation at the active sites of all these proteases. Representative superimposed structures are shown for (a) multiple inhibitors bound to a protease of each class, (b) single inhibitors each bound to multiple proteases, and (c) conformationally constrained inhibitors bound to proteases. Thus inhibitor/substrate conformation, rather than sequence/composition alone, influences protease recognition, and this has profound implications for inhibitor design. This conclusion is supported by NMR, CD, and binding studies for HIV-1 protease inhibitors/substrates which, when preorganized in an extended conformation, have significantly higher protease affinity. Recognition is dependent upon conformational equilibria since helical and turn peptide conformations are not processed by proteases. Conformational selection explains the resistance of folded/structured regions of proteins to proteolytic degradation, the susceptibility of denatured proteins to processing, and the higher affinity of conformationally constrained 'extended' inhibitors/substrates for proteases. Other approaches to extended inhibitor conformations should similarly lead to high-affinity binding to a protease.
Abstract:
Although the effects of cannabis on perception are well documented, little is known about their neural basis or how these may contribute to the formation of psychotic symptoms. We used functional magnetic resonance imaging (fMRI) to assess the effects of Delta-9-tetrahydrocannabinol (THC) and cannabidiol (CBD) during visual and auditory processing in healthy volunteers. In total, 14 healthy volunteers were scanned on three occasions. Identical 10 mg THC, 600 mg CBD, and placebo capsules were allocated in a balanced double-blinded pseudo-randomized crossover design. Plasma levels of each substance, physiological parameters, and measures of psychopathology were taken at baseline and at regular intervals following ingestion of substances. Volunteers listened passively to words read and viewed a radial visual checkerboard in alternating blocks during fMRI scanning. Administration of THC was associated with increases in anxiety, intoxication, and positive psychotic symptoms, whereas CBD had no significant symptomatic effects. THC decreased activation relative to placebo in bilateral temporal cortices during auditory processing, and increased and decreased activation in different visual areas during visual processing. CBD was associated with activation in right temporal cortex during auditory processing, and when contrasted, THC and CBD had opposite effects in the right posterior superior temporal gyrus, the right-sided homolog to Wernicke's area. Moreover, the attenuation of activation in this area (maximum 61, -15, -2) by THC during auditory processing was correlated with its acute effect on psychotic symptoms. Single doses of THC and CBD differently modulate brain function in areas that process auditory and visual stimuli and relate to induced psychotic symptoms. Neuropsychopharmacology (2011) 36, 1340-1348; doi:10.1038/npp.2011.17; published online 16 March 2011
Abstract:
Two hazard risk assessment matrices for the ranking of occupational health risks are described. The qualitative matrix uses qualitative measures of probability and consequence to determine risk assessment codes for hazard-disease combinations. A walk-through survey of an underground metalliferous mine and concentrator is used to demonstrate how the qualitative matrix can be applied to determine priorities for the control of occupational health hazards. The semi-quantitative matrix uses attributable risk as a quantitative measure of probability and uses qualitative measures of consequence. A practical application of this matrix is the determination of occupational health priorities using existing epidemiological studies. Calculated attributable risks from epidemiological studies of hazard-disease combinations in mining and minerals processing are used as examples. These historic response data do not reflect the risks associated with current exposures. A method using current exposure data, known exposure-response relationships and the semi-quantitative matrix is proposed for more accurate and current risk rankings.
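A minimal sketch of how a qualitative probability-consequence matrix can be turned into risk assessment codes is shown below. The category labels, the multiplicative scoring and the code thresholds are illustrative assumptions, not the scales used in the study.

```python
# Hypothetical category labels and thresholds -- illustrative only, not the
# specific qualitative scales described in the study above.
PROBABILITY = ["rare", "unlikely", "possible", "likely", "almost certain"]
CONSEQUENCE = ["negligible", "minor", "moderate", "major", "catastrophic"]

def risk_assessment_code(probability: str, consequence: str) -> int:
    """Combine qualitative probability and consequence ratings into a
    risk assessment code (1 = highest priority, 4 = lowest)."""
    p = PROBABILITY.index(probability)
    c = CONSEQUENCE.index(consequence)
    score = (p + 1) * (c + 1)          # simple multiplicative matrix
    if score >= 15:
        return 1
    if score >= 8:
        return 2
    if score >= 4:
        return 3
    return 4

if __name__ == "__main__":
    # Example hazard-disease combination, e.g. respirable dust -> lung disease.
    print(risk_assessment_code("likely", "major"))   # -> 1
```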
Abstract:
A two-terminal optically addressed image processing device based on two stacked sensing/switching p-i-n a-SiC:H diodes is presented. The charge packets are injected optically into the p-i-n sensing photodiode and confined at the illuminated regions, locally changing the electric field profile across the p-i-n switching diode. A red scanner is used for charge readout. The various design parameters and addressing architecture trade-offs are discussed. The influence on the transfer functions of an a-SiC:H sensing absorber optimized for red transmittance and blue collection, or of a floating anode in between, is analysed. Results show that the thin a-SiC:H sensing absorber confines the readout to the switching diode and filters the light, allowing full colour detection at two appropriate voltages. When the floating anode is used the spectral response broadens, allowing B&W image recognition with improved light-to-dark sensitivity. A physical model supports the image and colour recognition process.
Abstract:
Coronary artery disease (CAD) is currently one of the most prevalent diseases in the world population, and calcium deposits in coronary arteries are one direct risk factor. These can be assessed by the calcium score (CS) application, available via a computed tomography (CT) scan, which gives an accurate indication of the development of the disease. However, the ionising radiation applied to patients is high. This study aimed to optimise the acquisition protocol in order to reduce the radiation dose and to explain the flow of procedures to quantify CAD. The main differences in the clinical results when automated or semi-automated post-processing is used will be shown, and the epidemiology, imaging, risk factors and prognosis of the disease described. The software steps and the values that allow the risk of developing CAD to be predicted will be presented. A 64-row multidetector CT scan with dual source and two phantoms (pig hearts) were used to demonstrate the advantages and disadvantages of the Agatston method. The tube energy was balanced. Two measurements were obtained in each of the three experimental protocols (64, 128, 256 mAs). Considerable changes appeared between the CS values under the different protocols. The predefined standard protocol provided the lowest dose of radiation (0.43 mGy). This study found that the variation in the radiation dose between protocols, taking into consideration the dose control systems attached to the CT equipment and image quality, was not sufficient to justify changing the default protocol provided by the manufacturer.
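For readers unfamiliar with the Agatston method referred to above, a minimal sketch of the usual scoring rule follows: each calcified lesion contributes its area multiplied by a density weight derived from its peak attenuation (1 for 130-199 HU, 2 for 200-299 HU, 3 for 300-399 HU, 4 for 400 HU and above). The lesion values in the example are hypothetical and the sketch does not reproduce the study's software workflow.

```python
def density_weight(peak_hu: float) -> int:
    """Standard Agatston density weighting from a lesion's peak attenuation (HU)."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0  # below the calcium threshold, not scored

def agatston_score(lesions):
    """Sum of (lesion area in mm^2) * (density weight) over all lesions."""
    return sum(area_mm2 * density_weight(peak_hu) for area_mm2, peak_hu in lesions)

if __name__ == "__main__":
    # Hypothetical lesions: (area in mm^2, peak attenuation in HU).
    lesions = [(4.2, 180.0), (7.5, 320.0), (2.0, 450.0)]
    print(agatston_score(lesions))   # 4.2*1 + 7.5*3 + 2.0*4 = 34.7
```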
Abstract:
In recent years, computers equipped with multiple processors and multiple cores have become commonplace. In order to exploit the new characteristics of this hardware efficiently, tools began to emerge to ease the development of parallel software, through languages and frameworks adapted to different languages. With the wide spread of high-speed networks, such as Gigabit Ethernet and the latest generation of Wi-Fi networks, the opportunity arises not only to parallelize processing across processors and cores, but also to parallelize it simultaneously across different machines. The model that allows processing to be parallelized locally and simultaneously distributed to machines that can themselves parallelize it was named the "distributed parallel model". This dissertation analyses the techniques and tools used for parallel programming and the work done in the area of parallel and distributed programming. Taking these two factors into consideration, a framework is proposed that tries to apply the simplicity of parallel programming to the distributed parallel concept. The proposal is based on providing a Java framework with a simple, easy-to-learn and readable programming interface that is able, transparently, to parallelize and distribute the processing. Although simple, an effort was made to make it configurable so that it can adapt to as many situations as possible. This dissertation explores in particular the questions related to the execution and distribution of work, and the way code is sent automatically over the network to other cooperating nodes, thus avoiding the manual installation of the applications on every node of the network. To confirm the validity of this concept and of the ideas defended in this dissertation, this framework, named DPF4j (Distributed Parallel Framework for JAVA), was implemented, and tests were carried out and metrics collected to verify the existence of performance gains over existing solutions.
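The two-level idea, local parallelization combined with distribution to cooperating nodes, can be sketched generically. The sketch below uses Python's concurrent.futures for the local level only, simulates the remote nodes as sequential calls on one machine, and does not reproduce DPF4j's API or its automatic code shipping.

```python
from concurrent.futures import ProcessPoolExecutor

def work_unit(x: int) -> int:
    """A CPU-bound unit of work (placeholder computation)."""
    return x * x

def run_on_node(chunk):
    """Local parallelization on one 'node': spread the chunk across cores.
    In a distributed parallel framework this function would run on a remote
    machine after the code had been shipped to it automatically."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(work_unit, chunk))

if __name__ == "__main__":
    tasks = list(range(16))
    # First level: distribute chunks of work across cooperating nodes
    # (simulated here as sequential calls on a single machine).
    chunks = [tasks[i::4] for i in range(4)]
    results = [run_on_node(chunk) for chunk in chunks]
    print(results)
```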