962 results for Parallel track model


Relevance:

80.00%

Publisher:

Abstract:

Whether different brain networks are involved in generating unimanual responses to a simple visual stimulus presented in the ipsilateral versus contralateral hemifield remains a controversial issue. Visuo-motor routing was investigated with event-related functional magnetic resonance imaging (fMRI) using the Poffenberger reaction time task. A 2 hemifield x 2 response hand design generated the "crossed" and "uncrossed" conditions, describing the spatial relation between these factors. Both conditions, with responses executed by the left or right hand, showed a similar spatial pattern of activated areas, including striate and extrastriate areas bilaterally, the SMA, and M1 contralateral to the responding hand. These results demonstrate that visual information is processed bilaterally in striate and extrastriate visual areas, even in the "uncrossed" condition. Additional analyses based on sorting data according to subjects' reaction times revealed differential crossed versus uncrossed activity only for the slowest trials, with response strength in infero-temporal cortices significantly correlating with crossed-uncrossed differences (CUD) in reaction times. Collectively, the data favor a parallel, distributed model of brain activation. The presence of interhemispheric interactions and the consequent bilateral activity are not determined by the crossed anatomic projections of the primary visual and motor pathways, and distinct visuo-motor networks need not be engaged to mediate behavioral responses in the crossed visual field/response hand condition. While anatomical connectivity heavily influences the spatial pattern of activated visuo-motor pathways, behavioral and functional parameters also appear to affect the strength and dynamics of responses within these pathways.
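As a rough illustration of the CUD measure referred to above, the sketch below computes it from per-trial reaction times. This is a minimal example with made-up trial data, not the study's analysis pipeline:

```python
# A minimal sketch (not from the paper) of the crossed-uncrossed
# difference (CUD) computed from per-trial reaction times in the
# Poffenberger paradigm. The trial lists below are hypothetical.
import statistics

crossed_rt   = [312, 298, 305, 321, 290]   # stimulus hemifield contralateral to hand (ms)
uncrossed_rt = [301, 289, 297, 310, 284]   # stimulus hemifield ipsilateral to hand (ms)

# CUD = mean crossed RT - mean uncrossed RT; classically interpreted
# as an estimate of interhemispheric transfer time.
cud = statistics.mean(crossed_rt) - statistics.mean(uncrossed_rt)
print(f"CUD = {cud:.1f} ms")
```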

Relevance:

80.00%

Publisher:

Abstract:

Neural networks have emerged as the topic of the day, with applications ranging from ECG noise filtering to seismic data analysis, and from elementary particle detection to electronic music composition. The focal point of the proposed work is the application of a massively parallel connectionist network model to the detection of a sonar target. This task is segmented into: (i) generation of training patterns, from sea noise that contains the radiated noise of a target, for teaching the network; (ii) selection of a suitable network topology and learning algorithm; and (iii) training of the network and its subsequent testing, in which the network detects, in unknown patterns applied to it, the presence of the features it has already learned. A three-layer perceptron using backpropagation learning is initially subjected to recursive training with example patterns (derived from sea ambient noise with and without the radiated noise of a target). On every presentation, the error in the output of the network is propagated back, and the weights and the bias associated with each neuron in the network are modified in proportion to this error measure. During this iterative process, the network converges and extracts the target features, which become encoded in its generalized weights and biases. In every unknown pattern that the converged network subsequently confronts, it searches for the features already learned and outputs an indication of their presence or absence. This capability for target detection is exhibited by the response of the network to various test patterns presented to it. Three network topologies are tried with two variants of backpropagation learning, and the performance of each combination is subsequently graded.
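For readers unfamiliar with the training loop the abstract describes, here is a minimal sketch of a three-layer perceptron trained by backpropagation; the patterns and labels are random placeholders standing in for sea-noise spectra, and none of the paper's topologies or parameters are reproduced:

```python
# A minimal sketch (not the paper's implementation) of a three-layer
# perceptron trained with backpropagation. X and y are synthetic
# stand-ins for noise patterns with/without a target signature.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                                 # 200 hypothetical patterns
y = (X[:, :4].sum(axis=1) > 0).astype(float).reshape(-1, 1)    # toy target labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(16, 8)); b1 = np.zeros(8)     # hidden layer
W2 = rng.normal(scale=0.5, size=(8, 1));  b2 = np.zeros(1)     # output layer
lr = 0.5
for epoch in range(500):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - y                            # output error, propagated back
    d2 = err * out * (1 - out)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d2 / len(X); b2 -= lr * d2.mean(axis=0)   # weight/bias updates
    W1 -= lr * X.T @ d1 / len(X); b1 -= lr * d1.mean(axis=0)   # proportional to error
print("training accuracy:", ((out > 0.5) == y).mean())
```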

Relevance:

80.00%

Publisher:

Abstract:

Colombia regulates the coal mining sector in its legislation; however, these strategies are considered insufficient for the identification, prevention, and control of occupational accidents and disease. In 2013 the fatality index was 1.59, and statistics from 2004 show that pneumoconioses were the leading causes of disability of occupational origin. Objective: To categorize intervention activities for the promotion of health and the prevention of occupational accidents and disease among coal mining workers. Methodology: A literature review on coal mining and health was carried out using the PUBMED, ScienceDirect, VHL, and SINAB databases, covering literature published without year limits in English, Spanish, or Portuguese. The search used controlled-vocabulary terms (MeSH terms) and peer review of titles and abstracts, and publications were selected for full-text review under inclusion and exclusion criteria. The codes considered for this review were: a) country where the intervention took place, b) occupational health, c) accident prevention, d) promotion programs, e) technologies, f) results obtained. Results: From the total of 2,500 articles selected by the principal authors, the first 300 were reviewed; 32 address occupational health and coal mining, and 10 contain interventions considered relevant for this literature review. Interventions are presented that are statistically significant (p<0.05) and have shown a positive impact on the promotion of health and the prevention of occupational accidents and disease in coal mining. Conclusions: Four types of intervention were identified: 1) educational interventions, including participatory training, training by means of "degraded image" exercises, self-management and feedback on the use of personal protective equipment (PPE), and application of the Extended Parallel Process Model; 2) preventive interventions, such as breathalyzer testing before shifts, the presence of nursing staff in coal mines, and recognition of disease predictors to optimize primary prevention; 3) surveillance interventions, such as the methodology promoted by the European Statistics on Accidents at Work (ESAW) for investigating occupational accidents and the application of World Health Organization (WHO) recommendations for detecting pneumoconiosis; and 4) technological interventions, consisting of task-level interventions based on the results of software developed by the National Institute for Occupational Safety and Health (NIOSH). These interventions have proven effective in the promotion of health and the prevention of occupational accidents and disease, and their application in Colombia is therefore recommended, following a cost-effectiveness analysis.

Relevance:

80.00%

Publisher:

Abstract:

This paper investigates the impact of aerosol forcing uncertainty on the robustness of estimates of the twentieth-century warming attributable to anthropogenic greenhouse gas emissions. Attribution analyses are carried out on three coupled climate models with very different sensitivities and aerosol forcing. The Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3), Parallel Climate Model (PCM), and GFDL R30 models all provide good simulations of twentieth-century global mean temperature changes when they include both anthropogenic and natural forcings. Such good agreement could result from a fortuitous cancellation of errors, for example, by balancing too much (or too little) greenhouse warming with too much (or too little) aerosol cooling. Despite a very large uncertainty in estimates of the possible range of sulfate aerosol forcing obtained from measurement campaigns, results show that the spatial and temporal nature of observed twentieth-century temperature change constrains the component of past warming attributable to anthropogenic greenhouse gases to be significantly greater (at the 5% level) than the observed warming over the twentieth century. The cooling effects of aerosols are detected in all three models. Both spatial and temporal aspects of observed temperature change are responsible for constraining the relative roles of greenhouse warming and sulfate cooling over the twentieth century, because there are distinctive temporal structures in differential warming rates between the hemispheres, between land and ocean, and between mid- and low latitudes. As a result, consistent estimates of warming attributable to greenhouse gas emissions are obtained from all three models, and predictions are relatively robust to the use of more or less sensitive models. The transient climate response following a 1% yr⁻¹ increase in CO2 is estimated to lie between 2.2 and 4 K century⁻¹ (5-95 percentiles).
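Attribution analyses of this kind rest on fingerprint regression: observations are regressed onto model-simulated response patterns, and the fitted scaling factors say how much each forcing's amplitude must be scaled to match the observed record. The following sketch uses entirely synthetic fingerprints and "observations" to show the least-squares version of that idea, under those stated assumptions:

```python
# A minimal sketch (not the paper's analysis) of the fingerprint-scaling
# idea behind attribution studies. All arrays are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 100                                              # hypothetical space-time points
ghg = rng.normal(size=n)                             # greenhouse-gas fingerprint
aer = -0.4 * ghg + rng.normal(scale=0.5, size=n)     # aerosol fingerprint
nat = rng.normal(size=n)                             # natural-forcing fingerprint
X = np.column_stack([ghg, aer, nat])
beta_true = np.array([1.0, 0.8, 0.3])
y = X @ beta_true + rng.normal(scale=0.2, size=n)    # synthetic "observations"

# Ordinary least squares: beta = (X^T X)^{-1} X^T y. A scaling factor
# consistent with 1 means the model's response amplitude matches the
# observations; real studies derive uncertainty ranges from control runs.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("scaling factors (GHG, aerosol, natural):", beta.round(2))
```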

Relevance:

80.00%

Publisher:

Abstract:

In this paper we consider the programming of job rotation in the assembly line worker assignment and balancing problem. The motivation for this study comes from the design of assembly lines in sheltered work centers for the disabled, where workers have different task execution times. In this context, the well-known training benefits associated with job rotation are particularly desired. We propose a metric, along with a mixed integer linear model and a heuristic decomposition method, to solve this new job rotation problem. Computational results show the efficacy of the proposed heuristics. (C) 2009 Elsevier B.V. All rights reserved.
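As a toy illustration of the underlying combinatorics (not the authors' MILP or heuristic): with heterogeneous execution times, one assignment of workers to stations is chosen per rotation period so that no worker repeats a station, while the worst period cycle time is minimized. Brute force suffices at this hypothetical size:

```python
# A minimal sketch (illustrative only) of job rotation in worker
# assignment and balancing. Task times below are made up.
from itertools import permutations

# exec_time[w][s]: time worker w needs at station s (heterogeneous workers)
exec_time = [[4, 6, 9],
             [7, 3, 5],
             [8, 8, 4]]
workers = range(3)

best_cost, best_plan = float("inf"), None
for p1 in permutations(workers):          # assignment in period 1
    for p2 in permutations(workers):      # assignment in period 2
        if any(a == b for a, b in zip(p1, p2)):
            continue                      # a worker would repeat a station
        periods = [p1, p2]
        # cycle time of a period = slowest station in that period
        cost = max(max(exec_time[p[s]][s] for s in range(3)) for p in periods)
        if cost < best_cost:
            best_cost, best_plan = cost, periods
print("min-max cycle time:", best_cost, "plan:", best_plan)
```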

Relevance:

80.00%

Publisher:

Abstract:

The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research attracts most of the investment in the area. The acquisition, processing, and interpretation of seismic data are the parts that constitute a seismic study, and seismic processing in particular is focused on imaging the geological structures in the subsurface. Seismic processing has evolved significantly in recent decades due to the demands of the oil industry and to technological advances in hardware, which brought higher storage and digital processing capabilities and enabled the development of more sophisticated processing algorithms, such as those that make use of parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section image that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be very time-consuming, due to the heuristics of the mathematical algorithms and the extensive amount of data input and output involved; it may take days, weeks, or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could derail the application of these methods. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, speedup and efficiency analyses were performed, and, ultimately, the degree of algorithmic scalability was identified with respect to the technological advancement expected of future processors.
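The speedup and efficiency metrics mentioned at the end are standard; a minimal sketch of how they are computed from wall-clock timings follows, with purely illustrative numbers rather than the work's measured results:

```python
# A minimal sketch (not the work's data) of the speedup and efficiency
# metrics used to evaluate a parallelized kernel such as RTM.

def speedup(t_serial: float, t_parallel: float) -> float:
    """Speedup S(p) = T(1) / T(p)."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, p: int) -> float:
    """Efficiency E(p) = S(p) / p; 1.0 means ideal linear scaling."""
    return speedup(t_serial, t_parallel) / p

# Hypothetical wall-clock times (seconds) for 1, 2, 4, and 8 threads.
timings = {1: 1000.0, 2: 520.0, 4: 275.0, 8: 150.0}
t1 = timings[1]
for p, tp in timings.items():
    print(f"p={p}: S={speedup(t1, tp):.2f}, E={efficiency(t1, tp, p):.2f}")
```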

Relevance:

80.00%

Publisher:

Abstract:

Includes bibliography

Relevance:

80.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

80.00%

Publisher:

Abstract:

Three structural typologies have been evaluated by nonlinear dynamic analysis (i.e., Newmark's method for MDOF systems: the average acceleration method with modified Newton-Raphson iteration). These typologies differ from one another only in the presence and placement of infills. The BARE FRAME model of the structure has two identical frames arranged in parallel; it constitutes the basis for the generation of the other two typologies through the addition of non-bearing walls. The INFILLED FRAME model is obtained by adding twelve infill panels, all placed in the same frame. Finally, the PILOTIS model represents structures whose first floor has no walls; the infills are therefore positioned in only one frame, on its three upper floors. All three models have been subjected to ten accelerograms using the software DRAIN 2000.
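For reference, the integrator named above is Newmark's method with gamma = 1/2 and beta = 1/4 (average acceleration). The sketch below implements the standard incremental formulation for a linear SDOF oscillator; the MDOF, nonlinear, Newton-Raphson machinery of the actual analyses is not reproduced, and the demo parameters are made up:

```python
# A minimal sketch of Newmark's average acceleration method
# (gamma = 1/2, beta = 1/4) for a linear SDOF system, using the
# standard incremental constants. Demo inputs are hypothetical.
import numpy as np

def newmark_avg_accel(m, c, k, p, dt, u0=0.0, v0=0.0):
    gamma, beta = 0.5, 0.25
    n = len(p)
    u = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
    u[0], v[0] = u0, v0
    a[0] = (p[0] - c*v0 - k*u0) / m                    # initial acceleration
    khat = k + gamma/(beta*dt)*c + m/(beta*dt**2)      # effective stiffness
    A = m/(beta*dt) + gamma/beta*c
    B = m/(2*beta) + dt*(gamma/(2*beta) - 1.0)*c
    for i in range(n - 1):
        dp = (p[i+1] - p[i]) + A*v[i] + B*a[i]         # effective load increment
        du = dp / khat
        dv = gamma/(beta*dt)*du - gamma/beta*v[i] + dt*(1 - gamma/(2*beta))*a[i]
        da = du/(beta*dt**2) - v[i]/(beta*dt) - a[i]/(2*beta)
        u[i+1], v[i+1], a[i+1] = u[i]+du, v[i]+dv, a[i]+da
    return u, v, a

# Hypothetical use: SDOF with m=1, c=0.1, k=10 under a sinusoidal force.
u, v, a = newmark_avg_accel(1.0, 0.1, 10.0, np.sin(0.5*np.arange(0, 40, 0.02)), 0.02)
```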

Relevance:

80.00%

Publisher:

Abstract:

Five different methods were critically examined to characterize the pore structure of silica monoliths. The mesopore characterization was performed using: a) the classical BJH method on nitrogen sorption data, which gave overestimated values in the mesopore distribution and was improved by using the NLDFT method; b) the ISEC method implementing the PPM and PNM models, developed especially for monolithic silicas, which, contrary to particulate supports, show two inflection points in the ISEC curve, enabling calculation of the pore connectivity, a measure of the mass transfer kinetics in the mesopore network; and c) mercury porosimetry using newly recommended mercury contact angle values.

The results of the mesopore characterization of monolithic silica columns by the three methods indicated that all methods were useful with respect to the pore size distribution by volume, but only the ISEC method with the implemented PPM and PNM models gave the average pore size and distribution based on the number average, as well as the pore connectivity values.

The characterization of the flow-through pores was performed by two different methods: a) mercury porosimetry, used not only to estimate the average flow-through pore value but also to assess entrapment; it was found that mass transfer from the flow-through pores to the mesopores was not hindered in the case of small flow-through pores with a narrow distribution; and b) liquid permeability, where the average flow-through pore values were obtained via existing equations and improved by additional methods developed according to Hagen-Poiseuille rules. The result was that it is not the flow-through pore size that influences the column back pressure; rather, the surface area to volume ratio of the silica skeleton is most decisive. Thus the monolith with the lowest ratio values will be the most permeable.

The flow-through pore characterization results obtained by mercury porosimetry and liquid permeability were compared with those from imaging and image analysis. All named methods enable a reliable characterization of the flow-through pore diameters of monolithic silica columns, but special care should be taken with the chosen theoretical model.

The measured pore characterization parameters were then linked with the mass transfer properties of monolithic silica columns. As indicated by the ISEC results, no restrictions in mass transfer resistance were noticed in the mesopores due to their high connectivity. The mercury porosimetry results also gave evidence that no restrictions on mass transfer from flow-through pores to mesopores occur in small-scaled silica monoliths with a narrow distribution.

The optimum regimes of the pore structural parameters for given target parameters in HPLC separations were predicted. It was found that a low mass transfer resistance in the mesopore volume is achieved when the nominal diameter of the number-average size distribution of the mesopores is approximately an order of magnitude larger than the molecular radius of the analyte. The effective diffusion coefficient of an analyte molecule in the mesopore volume is strongly dependent on the value of the nominal pore diameter of the number-averaged pore size distribution. The mesopore size has to be adapted to the molecular size of the analyte, in particular for peptides and proteins.

The study on the flow-through pores of silica monoliths demonstrated that the surface to volume ratio of the skeletons and the external porosity are decisive for the column efficiency; the latter is independent of the flow-through pore diameter. The flow-through pore characteristics were assessed by direct and indirect approaches, and theoretical column efficiency curves were derived. The study showed that, next to the surface to volume ratio, the total porosity and its distribution between flow-through pores and mesopores have a substantial effect on the column plate number, especially as the extent of adsorption increases. The column efficiency increases with decreasing flow-through pore diameter, decreases with external porosity, and increases with total porosity, though this tendency has a limit due to the heterogeneity of the studied monolithic samples. We found that the maximum efficiency of the studied monolithic research columns could be reached at a skeleton diameter of ~0.5 µm. Furthermore, when the intention is to maximize column efficiency, more homogeneous monoliths should be prepared.
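The Hagen-Poiseuille route mentioned above can be sketched with a simple capillary-bundle approximation: treating the flow-through pores as parallel capillaries of diameter d gives a superficial velocity u = (ε d² / 32 μ)(ΔP/L), which can be inverted for d. This is a minimal sketch under that stated assumption, not the authors' refined method, and all numbers are illustrative placeholders:

```python
# A minimal sketch (capillary-bundle Hagen-Poiseuille approximation)
# of estimating an average flow-through pore diameter from column
# permeability data. All inputs are hypothetical.
import math

def flow_through_pore_diameter(flow_rate_m3s, viscosity_pas, length_m,
                               radius_m, porosity_ext, dp_pa):
    """u = (eps * d^2 / (32 * mu)) * (dP / L)  =>
    d = sqrt(32 * mu * L * u / (eps * dP))."""
    area = math.pi * radius_m**2
    u = flow_rate_m3s / area                   # superficial velocity (m/s)
    return math.sqrt(32 * viscosity_pas * length_m * u / (porosity_ext * dp_pa))

# Hypothetical 4.6 mm i.d. x 100 mm monolith run at 1 mL/min in water.
d = flow_through_pore_diameter(
    flow_rate_m3s=1e-6 / 60,    # 1 mL/min
    viscosity_pas=1e-3,         # water at ~20 degC
    length_m=0.1,
    radius_m=2.3e-3,
    porosity_ext=0.7,
    dp_pa=2e5,                  # 2 bar pressure drop
)
print(f"estimated flow-through pore diameter: {d*1e6:.2f} um")
```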

Relevance:

80.00%

Publisher:

Abstract:

An Independent And-Parallel Prolog model and implementation, &-Prolog, are described. The description includes a summary of the system's architecture, some details of its execution model (based on the RAP-WAM model), and, most importantly, its performance on sequential workstations and shared memory multiprocessors as compared with state-of-the-art Prolog systems. Speedup curves are provided for a collection of benchmark programs which demonstrate significant speed advantages over state-of-the-art sequential systems.
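The "independent" in independent and-parallelism refers to a runtime condition: two goals may run in parallel when they share no unbound variables. The sketch below illustrates only that disjointness check, with a deliberately naive term representation; it is not &-Prolog's actual machinery:

```python
# A minimal sketch (illustrative only) of the strict independence
# condition that licenses and-parallel execution of two goals.
# Variables are represented as strings starting with an uppercase letter.

def free_vars(goal_args, bindings):
    """Collect the still-unbound variables in a goal's arguments."""
    out = set()
    for a in goal_args:
        if isinstance(a, str) and a[:1].isupper() and a not in bindings:
            out.add(a)
    return out

def independent(goal1_args, goal2_args, bindings):
    """Goals are strictly independent if their free variables are disjoint."""
    return free_vars(goal1_args, bindings).isdisjoint(free_vars(goal2_args, bindings))

# p(X, a) and q(Y): independent; p(X, a) and q(X): not.
print(independent(["X", "a"], ["Y"], bindings={}))   # True  -> run in parallel
print(independent(["X", "a"], ["X"], bindings={}))   # False -> run sequentially
```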

Relevance:

80.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

80.00%

Publisher:

Abstract:

Modern geographical databases, which are at the core of geographic information systems (GIS), store a rich set of aspatial attributes in addition to geographic data. Typically, aspatial information comes in textual and numeric format. Retrieving information constrained on both spatial and aspatial data from geodatabases gives GIS users the ability to perform more interesting spatial analyses and lets applications support composite location-aware searches; for example, in a real estate database: "Find the homes for sale nearest to my current location that have a backyard and whose prices are between $50,000 and $80,000". Efficient processing of such queries requires combined indexing strategies over multiple types of data. Existing spatial query engines commonly apply a two-filter approach (a spatial filter followed by a nonspatial filter, or vice versa), which can incur large performance overheads. At the same time, the amount of geolocation data in databases has grown rapidly, due in part to advances in geolocation technologies (e.g., GPS-enabled smartphones) that allow users to associate location data with objects or events; this poses potential challenges in ingesting large data volumes into practical GIS databases. In this dissertation, we first show how indexing spatial data with R-trees (a typical data pre-processing task) can be scaled with MapReduce, a widely adopted parallel programming model for data-intensive problems. The evaluation of our algorithms in a Hadoop cluster showed close to linear scalability in building R-tree indexes. Subsequently, we develop efficient algorithms for processing spatial queries with aspatial conditions, including novel techniques for simultaneously indexing spatial data together with textual and numeric data. Experimental evaluations with real-world, large spatial datasets measured query response times within the sub-second range for most cases, and up to a few seconds for a small number of cases, which is reasonable for interactive applications. Overall, these results show that the MapReduce parallel model is suitable for indexing tasks in spatial databases, and that an adequate combination of spatial and aspatial attribute indexes can attain acceptable response times for interactive spatial queries with constraints on aspatial data.
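To make the combined spatial + aspatial filtering concrete, here is a minimal sketch using a uniform grid as a stand-in for an R-tree, evaluating both predicate types in a single pass; the listings and the grid index are hypothetical, not the dissertation's structures:

```python
# A minimal sketch (illustrative only) of a composite location-aware
# query over a toy grid index: spatial probe plus aspatial predicates.
from collections import defaultdict

CELL = 1.0  # grid cell size in coordinate units

def cell(x, y):
    return (int(x // CELL), int(y // CELL))

# Hypothetical (id, x, y, price, has_backyard) real estate records.
homes = [(1, 0.2, 0.3, 55000, True), (2, 0.4, 0.1, 95000, True),
         (3, 2.7, 2.9, 62000, False), (4, 0.9, 0.8, 70000, True)]
grid = defaultdict(list)
for rec in homes:
    grid[cell(rec[1], rec[2])].append(rec)

def search(x, y, lo, hi, backyard=True):
    """Probe the query cell and its 8 neighbors (spatial filter), and
    apply the aspatial predicates in the same pass."""
    cx, cy = cell(x, y)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for hid, hx, hy, price, yard in grid[(cx + dx, cy + dy)]:
                if lo <= price <= hi and yard == backyard:
                    yield hid, ((hx - x)**2 + (hy - y)**2) ** 0.5

# "Homes near (0.5, 0.5) with backyard priced $50,000-$80,000", by distance.
print(sorted(search(0.5, 0.5, 50000, 80000), key=lambda r: r[1]))
```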

Relevance:

80.00%

Publisher:

Abstract:

The work presented in this dissertation is focused on applying engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in US Navy divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. Chapters 1 and 2 present general background information relevant to the development of probabilistic models for predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed in an effort to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.

The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique, and model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to those previously published. Results favored a novel hazard function definition that included both ambient pressure scaling and individually fitted compartment exponent scaling terms.
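The probabilistic framework these hazard variants build on can be sketched in a few lines: tissue gas kinetics drive an instantaneous hazard h(t), and P(DCS) = 1 - exp(-∫h dt). The sketch below uses a single compartment and made-up parameters, not any of the seventeen fitted variants:

```python
# A minimal sketch (one compartment, simplified) of the standard
# probabilistic DCS survival framework. Parameters are hypothetical.
import math

def p_dcs(p_amb, dt=1.0, k=0.05, gain=1e-3):
    """p_amb: ambient pressure per time step (atm). Exponential gas
    uptake/washout; hazard is the scaled positive supersaturation."""
    p_tissue = p_amb[0]
    integral = 0.0
    for pa in p_amb:
        p_tissue += k * (pa - p_tissue) * dt                 # compartment kinetics
        hazard = gain * max(0.0, (p_tissue - pa) / pa)       # instantaneous risk
        integral += hazard * dt
    return 1.0 - math.exp(-integral)                         # survival model

# Hypothetical profile: 30 steps at depth (2.8 atm), then surfacing (1 atm).
profile = [2.8] * 30 + [1.0] * 60
print(f"P(DCS) = {p_dcs(profile):.4f}")
```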

We developed ten pharmacokinetic compartmental models that included explicit delay mechanics to determine whether predictive quality could be improved through the inclusion of material transfer lags. A fitted discrete delay parameter augmented the inflow to the compartment systems from the environment. Based on the observation that, for many of our models, symptoms are often reported after risk accumulation begins, we hypothesized that the inclusion of delays might improve correlation between the model predictions and the observed data. Model selection techniques identified two models as having the best overall performance, but comparison with the best-performing no-delay model, together with model selection applied to our best identified no-delay pharmacokinetic model, indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.
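A discrete transfer lag of the kind described amounts to feeding the compartment the ambient pressure from a fixed number of steps earlier; a minimal, purely illustrative sketch:

```python
# A minimal sketch (illustrative only) of a discrete delay on the
# environmental inflow to a compartment. Parameters are made up.

def tissue_with_delay(p_amb, k=0.05, dt=1.0, lag=5):
    p_tissue = p_amb[0]
    trace = []
    for i in range(len(p_amb)):
        p_in = p_amb[max(0, i - lag)]        # delayed environmental inflow
        p_tissue += k * (p_in - p_tissue) * dt
        trace.append(p_tissue)
    return trace

profile = [2.8] * 30 + [1.0] * 60            # same hypothetical dive profile
print(tissue_with_delay(profile)[28:36])     # lag visible around surfacing
```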

Our final investigation explored parameter bounding techniques to identify parameter regions in which statistical model failure will not occur. Statistical model failure occurs when a model predicts zero probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms. Using a metric related to the instantaneous risk, we successfully identify regions where model failure will not occur, and we locate the boundaries of those regions using a root bounding technique. Several models are used to demonstrate the techniques, which may be employed to reduce the difficulty of model optimization in future investigations.
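One way to picture the root bounding step: along a single parameter axis, bisect for the value at which the predicted risk for a known-positive exposure first drops to zero. The sketch below does exactly that for a stand-in risk metric with a threshold parameter; it is illustrative only and not the dissertation's formulation:

```python
# A minimal sketch (illustrative only) of root bounding: bisect along a
# parameter axis to locate the boundary between parameter values that
# keep predicted risk positive and values that zero it out (failure).

def accumulated_risk(threshold, profile):
    """Stand-in risk metric: total supersaturation above a threshold.
    Returns 0.0 when the threshold is set too high (model failure)."""
    k, p_tissue, total = 0.05, profile[0], 0.0
    for pa in profile:
        p_tissue += k * (pa - p_tissue)
        total += max(0.0, (p_tissue - pa) / pa - threshold)
    return total

def failure_boundary(profile, lo=0.0, hi=5.0, tol=1e-6):
    """Bisect for the threshold at which risk first becomes zero."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if accumulated_risk(mid, profile) > 0.0:
            lo = mid            # still predicts risk: boundary is higher
        else:
            hi = mid            # model failure: boundary is lower
    return 0.5 * (lo + hi)

profile = [2.8] * 30 + [1.0] * 60        # known-positive hypothetical dive
print(f"failure boundary at threshold ~ {failure_boundary(profile):.4f}")
```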