874 results for Federal High Performance Computing Program (U.S.)
Abstract:
The Scheme86 and the HP Precision Architectures represent different trends in computer processor design. The former uses wide micro-instructions, parallel hardware, and a low-latency memory interface. The latter encourages pipelined implementation and visible interlocks. To compare the merits of these approaches, algorithms frequently encountered in numerical and symbolic computation were hand-coded for each architecture. Timings were done in simulators, and the results were evaluated to determine the speed of each design. Based on these measurements, conclusions were drawn as to which aspects of each architecture are suitable for a high-performance computer.
Abstract:
Williams-Beuren syndrome (WBS) is defined as a genetic condition whose cognitive pattern is characterized mainly by mild to moderate intellectual disability, low performance on tasks involving visuospatial functions, and high performance on language functions. Nevertheless, there is currently no general agreement on the specific neuropsychological profile of this condition, given the heterogeneous character of the clinical cases studied in previous research. The aim of the present study is to carry out a neuropsychological assessment of a young woman diagnosed with WBS, in order to explore her neuropsychological profile and gain a better understanding of the cognitive manifestations of this condition, taking into account the new paradigms of intellectual disability and describing both the weaknesses and the strengths of people with this condition. The results obtained from the neuropsychological assessment consisted essentially of preserved auditory attentional processes, anterograde explicit declarative memory within the normal range, preserved receptive and motor language, an intelligence quotient (IQ) of 72, in the lower range, denoting borderline intelligence, impairment of visuospatial abilities, and limitations in executive functions, mainly in planning and abstract reasoning. These findings confirm some of the cognitive aspects reported in previous studies.
Abstract:
This monograph takes a descriptive look at corporate culture and its relationship to organizational performance from the perspective of the complexity sciences. It first presents an overview of the definition of culture and characterizes complex systems, and then examines how certain phenomena of complexity are reflected in culture, reviewing the proposal of Dolan et al., who posit values as attractors in performance. It also examines different forms and definitions of organizational performance and identifies several studies that point to a correlation between strong cultures and performance. However, Gordon & DiTomaso conclude that how the relationship works, beyond mere correlation, is not well understood. The monograph concludes that complexity offers one way to explain how the relationship between culture and performance may work, with values acting as a cultural element that leads to emergence. Open questions remain regarding the applicability of strategies for implementing these findings in organizations and the use of simulation tools to deepen the research.
Abstract:
In line with the article on the organizational description of Helm Bank, this article is an additional research contribution to the general project "Description of the organizational structures of Corporate Social Responsibility areas" led by researcher Rafael Piñeros of the Universidad del Rosario. The article assesses and explains the perspective of a multinational company that operates in nearly every country in the world in the food and beverage sector, and that must adapt its business and sustainable development model to each environment in which it operates, maintaining global guidelines while adjusting them to the needs of local communities. It is particularly interesting to identify the importance that Corporate Social Responsibility holds for PepsiCo and its evolution toward ensuring long-term returns and sustainability, which allows the company and the communities where it operates to keep growing, create shared value, and obtain benefits over time.
Abstract:
In this paper I investigate the optimal level of decentralization of tasks for the provision of a local public good. I enrich the well-known trade-off between internalization of spillovers (which favors centralization) and accountability (which favors decentralization) by considering that public goods are produced through multiple tasks. This adds an additional institutional setting, partial decentralization, to the classical choice between full decentralization and full centralization. The main finding is that partial decentralization is optimal when both conditions hold: the variance of exogenous shocks to the electorate's utility is large, and the electorate expects high performance from politicians. I also show that the optimal institutional setting depends on the degree of substitutability or complementarity between tasks. In particular, a large degree of substitutability between tasks makes favoritism more likely, which increases the desirability of partial decentralization as a safeguard against favoritism.
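One standard way to make "substitutability between tasks" concrete is a CES aggregator. The display below is only an illustrative sketch with hypothetical symbols, not the paper's actual model:

    \[
      g(e_1, e_2) = \bigl(\theta_1 e_1^{\rho} + \theta_2 e_2^{\rho}\bigr)^{1/\rho},
      \qquad
      \sigma = \frac{1}{1-\rho},
    \]

where g is the public good, e1 and e2 are the efforts devoted to the two tasks, and the elasticity of substitution σ grows with ρ. As ρ → 1 the tasks become perfect substitutes, so a politician can shift all effort toward a favored task, consistent with the abstract's observation that substitutability facilitates favoritism.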
Abstract:
Introduction: The use of chemical substances as protection against pests is a common practice in flower farming; applying integrated pest management concepts safeguards the crops, ensuring the quantity and quality of the net product required to meet international demand. Biomonitoring has been used in a variety of occupational and environmental studies to determine pesticide exposures. Materials and Methods: A cross-sectional study was carried out on 300 workers from 15 companies, with the aim of characterizing exposure to the pesticide methamidophos among workers of flower companies in the Bogotá savanna (Cundinamarca) and Rionegro (Antioquia). A survey covering sociodemographic, occupational, and toxicological variables was administered, and urine samples were collected to determine methamidophos levels. Results: The workforce came overwhelmingly from urban areas (88.5%), with a notable share of women aged between 19 and 58 years and a mean age of 36.7 years. The most common job was cultivation and cutting, at 52.5%. Reported time working in the flower-growing sector ranged from a minimum of 12 years (3.3%) to a maximum of 22 years (6.1%). As for personal protective equipment, 96.1% (344) stated that they use it at work. The mean methamidophos level was 29.12 μg/l at the start of the shift and 15.70 μg/l at the end. Conclusions: This work provides an overview of exposure to the pesticide methamidophos and is a contribution toward further research on the subject, as well as toward following up on the workers through their inclusion in epidemiological surveillance programs.
Abstract:
In real-world applications, sequential data mining and data exploration algorithms are often unsuitable for datasets of enormous size, high dimensionality, and complex structure. Grid computing promises unprecedented opportunities for unlimited computing and storage resources. In this context there is a need to develop high-performance distributed data mining algorithms. However, the computational complexity of the problem and the large amount of data to be explored often make the design of large-scale applications particularly challenging. In this paper we present the first distributed formulation of a frequent subgraph mining algorithm for discriminative fragments of molecular compounds. Two distributed approaches have been developed and compared on the well-known National Cancer Institute HIV-screening dataset. We present experimental results on a small-scale computing environment.
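A minimal master-worker sketch of how such a distributed formulation can farm candidate fragments out to workers and aggregate their supports. The dataset, the fragment representation, and all names are assumptions; substring matching stands in for real subgraph isomorphism:

    from multiprocessing import Pool

    # Toy dataset: molecules as SMILES-like strings. A real miner would parse
    # molecular graphs; substring matching stands in for subgraph isomorphism.
    MOLECULES = ["CCO", "CCN", "CCOC", "CNC", "CCCO"]

    def support(fragment):
        # Count in how many molecules the candidate fragment occurs.
        return fragment, sum(fragment in m for m in MOLECULES)

    def mine(candidates, min_support=2, workers=4):
        # Master: scatter candidate fragments to workers, gather supports,
        # keep the frequent ones.
        with Pool(workers) as pool:
            counts = pool.map(support, candidates)
        return {frag: c for frag, c in counts if c >= min_support}

    if __name__ == "__main__":
        print(mine(["CC", "CO", "CN", "OC"]))  # {'CC': 4, 'CO': 3, 'CN': 2}

Discriminative-fragment mining would additionally compare supports between active and inactive compound classes; the scatter/gather structure is the same.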
Abstract:
Frequent pattern discovery in structured data is receiving increasing attention in many scientific application areas. However, the computational complexity and the large amount of data to be explored often make sequential algorithms unsuitable. In this context, high-performance distributed computing becomes a very interesting and promising approach. In this paper we present a parallel formulation of the frequent subgraph mining problem to discover interesting patterns in molecular compounds. The application is characterized by a highly irregular tree-structured computation. No estimate is available for task workloads, which follow a power-law distribution over a wide range. The proposed approach allows dynamic resource aggregation and provides fault and latency tolerance. These features make the distributed application suitable for multi-domain heterogeneous environments, such as computational Grids. The distributed application has been evaluated on the well-known National Cancer Institute HIV-screening dataset.
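With workloads that are irregular and unknown in advance, static partitioning leaves some processors idle; a pull-based work queue is the usual remedy. The sketch below illustrates that idea under a heavy-tailed task-cost assumption; the Pareto distribution, thread count, and scaling constant are illustrative choices, not values from the paper:

    import queue, random, threading, time

    random.seed(0)

    # Task costs drawn from a heavy-tailed distribution, mimicking the highly
    # irregular, power-law workloads of the subgraph-mining search tree.
    costs = [random.paretovariate(1.5) for _ in range(100)]

    work = queue.Queue()
    for c in costs:
        work.put(c)

    done = [0.0] * 4  # total cost processed by each worker

    def worker(i):
        # Pull-based scheduling: each worker takes the next task when it is
        # free, so load balances even though costs are unknown in advance.
        while True:
            try:
                c = work.get_nowait()
            except queue.Empty:
                return
            time.sleep(c / 1000.0)  # simulate doing c units of work
            done[i] += c

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print([round(d, 1) for d in done])  # roughly equal totals per worker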
Abstract:
In the Biodiversity World (BDW) project we have created a flexible and extensible Web Services-based Grid environment for biodiversity researchers to solve problems in biodiversity and analyse biodiversity patterns. In this environment, heterogeneous and globally distributed biodiversity-related resources such as data sets and analytical tools are made available to be accessed and assembled by users into workflows to perform complex scientific experiments. One such experiment is bioclimatic modelling of the geographical distribution of individual species using climate variables, in order to predict past and future climate-related changes in species distribution. Data sources and analytical tools required for such analysis of species distribution are widely dispersed, available on heterogeneous platforms, present data in different formats, and lack interoperability. The BDW system brings all these disparate units together so that the user can combine tools with little thought as to their availability, data formats, and interoperability. The current Web Services-based Grid environment enables execution of the BDW workflow tasks in remote nodes, but with a limited scope. The next step in the evolution of the BDW architecture is to enable workflow tasks to utilise computational resources available within and outside the BDW domain. We describe the present BDW architecture and its transition to a new framework which provides a distributed computational environment for mapping and executing workflows, in addition to bringing together heterogeneous resources and analytical tools.
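The workflow idea reduces to chaining service invocations so that each step's output feeds the next. The sketch below is a deliberately simplified illustration; the function names are hypothetical stand-ins for BDW resources, not actual BDW services:

    def run_workflow(steps, data):
        # Each step stands in for a remote Web Service invocation; the output
        # of one step is handed to the next, as in a BDW workflow.
        for step in steps:
            data = step(data)
        return data

    # Hypothetical stand-ins for distributed resources (illustrative only).
    def fetch_occurrences(species):
        return [{"species": species, "lat": 4.6, "lon": -74.1}]

    def join_climate(records):
        return [dict(r, mean_temp_c=14.0) for r in records]

    def fit_bioclimatic_model(records):
        return {"species": records[0]["species"], "n_records": len(records)}

    print(run_workflow(
        [fetch_occurrences, join_climate, fit_bioclimatic_model],
        "Puma concolor"))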
Abstract:
This paper addresses the numerical solution of the rendering equation in realistic image creation. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model; the chosen illumination model determines the kernel of the equation under consideration. Monte Carlo methods are now widely used for solving the rendering equation in order to create photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to a stratified Monte Carlo integration method for parallel solution of the rendering equation. The integration domain of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles; this domain partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum to within a desired truncation error (systematic error), which yields a fixed number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach. At each iteration we evaluate multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained for the stratified Monte Carlo method. This domain partitioning allows easy parallel realization and improves the convergence of the Monte Carlo method. The high-performance and Grid computing aspects of the corresponding Monte Carlo scheme are discussed.
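The ingredients the abstract names can be written out explicitly. The displays below use generic notation and are a sketch, since the paper's exact kernel and norm assumptions are not given here:

    \[
      L(x,\omega) = L_e(x,\omega)
        + \int_{\Omega} k(x,\omega,\omega')\, L(x',\omega')\, d\omega'
      \;\equiv\; L_e + \mathcal{T}L ,
    \]
    \[
      L = \sum_{i=0}^{\infty} \mathcal{T}^{i} L_e
        \approx \sum_{i=0}^{N} \mathcal{T}^{i} L_e ,
      \qquad
      \text{truncation error} \le \frac{q^{\,N+1}}{1-q}\,\|L_e\|
      \quad \text{if } \|\mathcal{T}\| \le q < 1 ,
    \]
    \[
      \int_{\Omega} f(\omega)\, d\omega \approx
        \sum_{j=1}^{M} \frac{|\Omega_j|}{n_j} \sum_{s=1}^{n_j} f(\omega_{j,s}),
      \qquad \Omega = \bigcup_{j=1}^{M} \Omega_j ,
    \]

where the Ω_j are the equal spherical-triangle sub-domains of the hemisphere, each of which can be sampled by a separate processor; the geometric tail of the Neumann series fixes the number of iterations N for a prescribed systematic error.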
Abstract:
OBJECTIVE To investigate the relation between the serum concentration of 25-hydroxyvitamin D [25(OH)D] and insulin action and secretion. RESEARCH DESIGN AND METHODS In a cross-sectional study of 446 Pan-European subjects with the metabolic syndrome, insulin action and secretion were assessed by homeostasis model assessment (HOMA) indexes and an intravenous glucose tolerance test to calculate the acute insulin response, insulin sensitivity, and disposition index. Serum 25(OH)D was measured by high-performance liquid chromatography/mass spectrometry. RESULTS The 25(OH)D3 concentration was 57.1 ± 26.0 nmol/l (mean ± SD), and only 20% of the subjects had 25(OH)D3 levels ≥75 nmol/l. In multiple linear analyses, 25(OH)D3 concentrations were not associated with parameters of insulin action or secretion after adjustment for BMI and other covariates. CONCLUSIONS In a large sample of subjects with the metabolic syndrome, serum concentrations of 25(OH)D3 did not predict insulin action or secretion. Clear evidence that vitamin D status directly influences insulin secretion or action is still lacking.
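For reference, "HOMA indexes" conventionally refers to the Matthews et al. formulas below; whether this study used these or the updated HOMA2 computer model is an assumption to check against the paper:

    \[
      \mathrm{HOMA\text{-}IR} = \frac{G_0 \, I_0}{22.5},
      \qquad
      \mathrm{HOMA\text{-}\beta} = \frac{20\, I_0}{G_0 - 3.5},
    \]

with fasting plasma glucose G0 in mmol/l and fasting insulin I0 in µU/ml.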
Abstract:
The introduction of non-toxic fluoride compounds as direct replacements for Thorium Fluoride (ThF4) has renewed interest in the use of low-index fluoride compounds in high-performance infrared filters. This paper reports the results of an investigation into the effects of combining these low-index materials, particularly Barium Fluoride (BaF2), with the high-index material Lead Telluride (PbTe) in bandpass and edge filters. Infrared filter designs using the conventional and the new material combinations are compared, and infrared filters using these material combinations have been manufactured and have been shown to suffer from residual stress. A possible solution to this problem, utilising Zinc Sulphide (ZnS) layers with compensating compressive stress, is discussed.
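Such bandpass and edge filters are typically built from quarter-wave layers, where the index contrast between PbTe and the fluoride determines how few layers a given rejection requires. As a rough sketch of the layer geometry, with indices and design wavelength assumed as round mid-infrared numbers rather than the paper's values:

    # Quarter-wave physical thickness d = lambda0 / (4 n) for candidate layers.
    # The design wavelength and refractive indices below are illustrative
    # round numbers for the mid-infrared, not values taken from the paper.
    def quarter_wave_um(design_wavelength_um, refractive_index):
        return design_wavelength_um / (4.0 * refractive_index)

    for material, n in [("PbTe", 5.6), ("BaF2", 1.45), ("ZnS", 2.2)]:
        d = quarter_wave_um(10.0, n)
        print(f"{material} (n={n}): d = {d:.3f} um at a 10 um design wavelength")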
Abstract:
With the growing prevalence of smartphones and 4G LTE networks, the demand for faster-better-cheaper mobile services anytime and anywhere is ever increasing. The Dynamic Network Optimization (DNO) concept emerged as a solution that optimally and continuously tunes network settings in response to varying network conditions and subscriber needs. Yet the realization of DNO is still in its infancy, largely hindered by the bottleneck of lengthy optimization runtimes. This paper presents the design and prototype of a novel cloud-based parallel solution that further enhances the scalability of our prior work on parallel solutions for accelerating network optimization algorithms. The solution aims to satisfy the high performance required by DNO, initially on a sub-hourly basis. The paper then presents a design for a full cycle of a DNO system. A set of potential solutions for large-network and real-time DNO is also proposed. Overall, this work constitutes a breakthrough towards the realization of DNO.
Abstract:
Very large-scale computations are now routinely used as a methodology for scientific research. In this context, 'provenance systems' are regarded as the equivalent of the scientist's logbook for in silico experimentation: provenance captures the documentation of the process that led to some result. Using a protein compressibility analysis application, we derive a set of generic use cases for a provenance system. In order to support these, we address the following fundamental questions: what is provenance? how do we record it? what is the performance impact for grid execution? what is the performance of reasoning? In doing so, we define a technology-independent notion of provenance that captures interactions between components, internal component information, and grouping of interactions, so as to allow us to analyse and reason about the execution of scientific processes. In order to support persistent provenance in heterogeneous applications, we introduce a separate provenance store, in which provenance documentation can be stored, archived, and queried independently of the technology used to run the application. Through a series of practical tests, we evaluate the performance impact of such a provenance system. In summary, we demonstrate that the provenance recording overhead of our prototype system remains under 10% of execution time, and we show that the recorded information successfully supports our use cases in a performant manner.
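A minimal sketch of the "separate provenance store" idea: interaction records are written to a store that can be archived and queried independently of the application. The class and field names here are hypothetical, and the actual system's provenance model is far richer:

    import json, time

    class ProvenanceStore:
        # Stand-alone store: documentation of the process is kept apart from
        # the technology that ran the application (in-memory stand-in here).
        def __init__(self):
            self.records = []

        def record(self, actor, interaction, payload):
            self.records.append({"actor": actor, "interaction": interaction,
                                 "payload": payload, "time": time.time()})

        def query(self, **filters):
            return [r for r in self.records
                    if all(r.get(k) == v for k, v in filters.items())]

    store = ProvenanceStore()
    store.record("compressor", "invoke", {"input": "protein.fasta"})
    store.record("compressor", "result", {"ratio": 0.42})
    print(json.dumps(store.query(actor="compressor"), indent=2))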
Abstract:
This study aimed to verify the extent to which the decentralization process adopted by the Department of Technical Police of Bahia (DPT-BA) was efficient in meeting the demand for Forensic Computing examinations generated by the Regional Technical Police Coordination Offices (CRPTs) in the interior of the state. The DPT-BA was restructured according to the principles of administrative decentralization, following the progressive school. With decentralization, it assumed the commitment to coordinate actions to give autonomy to the units in the interior of the state, creating minimal structures in all the spheres involved, with broad capacity for coordination among them and with service delivery oriented toward a high-performance public organization model. In addressing the relationship between decentralization and efficiency in meeting the demand for forensic examinations from the interior of the state of Bahia, the study, owing to instrumental limitations, remained restricted to the field of Forensic Computing examinations, which expressively reflects and illustrates the situation in the other forensic areas. Theoretical approaches to decentralization were first identified, highlighting the distinct dimensions of the concept, followed by approaches to Forensic Computing. Documentary research was carried out at the Afrânio Peixoto Institute of Criminalistics (Icap), along with field research through semi-structured interviews with judges assigned to the criminal courts of districts related to the research setting and with forensic experts from the Regional Coordination Offices, the CRPTs, and Icap's Forensic Computing Coordination. Correlating the turnaround times that meet the definition of efficiency given by the interviewed judges, the end clients of the forensic work, with the actual times obtained through the documentary research, the data revealed a high degree of inefficiency, slow turnaround, and unmet demand, as well as discrepant realities between the capital and the interior. The analysis of the interviews with the forensic experts revealed a scenario of widespread dissatisfaction and demotivation, with near-absolute centralization of decision-making power, demonstrating that the decentralization process as practiced served, paradoxically, as a tool for enabling and camouflaging centralization.