877 results for High-performance computing hyperspectral imaging


Relevance: 100.00%

Abstract:

The Scheme86 and HP Precision architectures represent different trends in computer processor design. The former uses wide micro-instructions, parallel hardware, and a low-latency memory interface; the latter encourages pipelined implementation and visible interlocks. To compare the merits of these approaches, algorithms frequently encountered in numerical and symbolic computation were hand-coded for each architecture. Timings were done in simulators, and the results were evaluated to determine the speed of each design. Based on these measurements, conclusions were drawn as to which aspects of each architecture are suitable for a high-performance computer.
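
As a concrete analogue of this methodology, the sketch below times the same kernel implemented two ways and compares the results. It is a minimal illustration under stated assumptions: the polynomial-evaluation kernels are invented stand-ins, not the paper's hand-coded numerical and symbolic benchmarks.

```python
# Minimal sketch: time two implementations of the same kernel and compare,
# echoing the paper's hand-code-and-time methodology. Kernels are stand-ins.
import timeit

def poly_eval_naive(coeffs, x):
    # O(n^2) multiplications via repeated exponentiation.
    return sum(c * x ** i for i, c in enumerate(coeffs))

def poly_eval_horner(coeffs, x):
    # Horner's rule: O(n) multiply-adds.
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

coeffs = list(range(50))
for fn in (poly_eval_naive, poly_eval_horner):
    t = timeit.timeit(lambda: fn(coeffs, 1.01), number=10_000)
    print(fn.__name__, round(t, 3), "s")
```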

Relevance: 100.00%

Abstract:

In real-world applications, sequential data mining and data exploration algorithms are often unsuitable for datasets of enormous size, high dimensionality and complex structure. Grid computing promises unprecedented opportunities for unlimited computing and storage resources. In this context there is a need to develop high-performance distributed data mining algorithms. However, the computational complexity of the problem and the large amount of data to be explored often make the design of large-scale applications particularly challenging. In this paper we present the first distributed formulation of a frequent subgraph mining algorithm for discriminative fragments of molecular compounds. Two distributed approaches have been developed and compared on the well-known National Cancer Institute HIV-screening dataset. We present experimental results on a small-scale computing environment.
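
A minimal sketch of the distribution idea follows: fragment-support counting farmed out to parallel workers. The molecule encoding and the is_subgraph test are hypothetical stand-ins for a real subgraph-isomorphism routine; this is an illustration, not the paper's algorithm.

```python
# Minimal sketch of distributing fragment-support counting across workers.
from multiprocessing import Pool

def is_subgraph(fragment, molecule):
    # Hypothetical stand-in: a fragment "matches" if its atom set is
    # contained in the molecule's atom set (a real test would run
    # subgraph isomorphism on labelled molecular graphs).
    return set(fragment) <= set(molecule)

def support(args):
    fragment, molecules = args
    return sum(is_subgraph(fragment, m) for m in molecules)

def discriminative(fragments, actives, inactives, min_act, max_inact):
    # Discriminative fragments: frequent in actives, rare in inactives.
    with Pool() as pool:
        act = pool.map(support, [(f, actives) for f in fragments])
        inact = pool.map(support, [(f, inactives) for f in fragments])
    return [f for f, a, i in zip(fragments, act, inact)
            if a >= min_act and i <= max_inact]

if __name__ == "__main__":
    actives = [["C", "N", "O"], ["C", "N"], ["C", "O"]]
    inactives = [["C"], ["O"]]
    print(discriminative([["C", "N"], ["O"]], actives, inactives, 2, 1))
```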

Relevance: 100.00%

Abstract:

Frequent pattern discovery in structured data is receiving increasing attention in many areas of science. However, the computational complexity and the large amount of data to be explored often make sequential algorithms unsuitable. In this context, high-performance distributed computing becomes a very interesting and promising approach. In this paper we present a parallel formulation of the frequent subgraph mining problem to discover interesting patterns in molecular compounds. The application is characterized by a highly irregular tree-structured computation. No estimate is available for task workloads, which follow a power-law distribution over a wide range. The proposed approach allows dynamic resource aggregation and provides fault and latency tolerance. These features make the distributed application suitable for multi-domain heterogeneous environments, such as computational Grids. The distributed application has been evaluated on the well-known National Cancer Institute HIV-screening dataset.
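
A minimal sketch of the load-balancing problem this raises: when task sizes follow a power law and cannot be estimated in advance, a shared task queue that workers drain on demand keeps all processors busy. Everything below is illustrative, not the paper's implementation.

```python
# Minimal sketch of dynamic load balancing for tasks with unknown,
# power-law-distributed workloads: workers pull from a shared queue,
# so one huge task cannot stall the assignment of the rest.
import queue, random, threading

tasks = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    while True:
        try:
            n = tasks.get_nowait()
        except queue.Empty:
            return
        r = sum(i * i for i in range(n))  # stand-in for exploring one subtree
        with lock:
            results.append(r)

# Pareto-distributed task sizes mimic the observed power-law workloads.
for _ in range(200):
    tasks.put(int(1000 * random.paretovariate(1.5)))

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(len(results), "tasks completed")
```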

Relevance: 100.00%

Abstract:

In the Biodiversity World (BDW) project we have created a flexible and extensible Web Services-based Grid environment for biodiversity researchers to solve problems in biodiversity and analyse biodiversity patterns. In this environment, heterogeneous and globally distributed biodiversity-related resources, such as data sets and analytical tools, are made available to be accessed and assembled by users into workflows to perform complex scientific experiments. One such experiment is bioclimatic modelling of the geographical distribution of individual species using climate variables, in order to predict past and future climate-related changes in species distribution. The data sources and analytical tools required for such analysis of species distribution are widely dispersed, available on heterogeneous platforms, present data in different formats, and lack interoperability. The BDW system brings all these disparate units together so that the user can combine tools with little thought as to their availability, data formats and interoperability. The current Web Services-based Grid environment enables execution of the BDW workflow tasks on remote nodes, but with a limited scope. The next step in the evolution of the BDW architecture is to enable workflow tasks to utilise computational resources available within and outside the BDW domain. We describe the present BDW architecture and its transition to a new framework which provides a distributed computational environment for mapping and executing workflows, in addition to bringing together heterogeneous resources and analytical tools.
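
A minimal sketch of the workflow idea: resources chained into a pipeline whose steps a user composes without worrying about where each one runs. The task names, data and model below are hypothetical stand-ins; in BDW these steps would be remote Web Service calls.

```python
# Minimal sketch of composing distributed resources into a workflow.
def fetch_occurrences(species):
    # Stand-in for a distributed species-occurrence data set.
    return [{"lat": -3.1, "lon": -60.0}, {"lat": -10.2, "lon": -55.4}]

def fetch_climate(points):
    # Stand-in for a climate-variable layer joined onto each point.
    return [{**p, "temp": 27.4 - 0.2 * abs(p["lat"])} for p in points]

def fit_bioclimatic_model(records):
    # Stand-in bioclimatic envelope: the temperature range of occurrences.
    temps = [r["temp"] for r in records]
    return {"suitable_temp_range": (min(temps), max(temps))}

workflow = [fetch_occurrences, fetch_climate, fit_bioclimatic_model]

data = "Inga edulis"   # hypothetical study species
for step in workflow:  # each step could execute on a different remote node
    data = step(data)
print(data)
```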

Relevance: 100.00%

Abstract:

This paper addresses the numerical solution of the rendering equation in realistic image creation. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model; the illumination model determines the kernel of the equation under consideration. Monte Carlo methods are now widely used to solve the rendering equation in order to create photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to a stratified Monte Carlo integration method for solving the rendering equation in parallel. The domain of integration of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles; this partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum with a desired truncation error (systematic error), which yields a fixed number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach. At each iteration we evaluate multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained for the stratified Monte Carlo method. This domain partitioning allows easy parallel realization and improves the convergence of the Monte Carlo method. The high-performance and Grid computing aspects of the corresponding Monte Carlo scheme are discussed.
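
A minimal sketch of stratified Monte Carlo integration over the hemisphere. The strata below are equal cells in (cos θ, φ) rather than the paper's orthogonal spherical triangles, and the test integrand cos θ is chosen because its hemispherical integral is known to be π; each stratum could be assigned to a different processor.

```python
# Minimal sketch of stratified Monte Carlo integration over the hemisphere.
import math, random

def stratified_hemisphere(f, n_theta=8, n_phi=8, per_stratum=64):
    total = 0.0
    for i in range(n_theta):
        for j in range(n_phi):
            acc = 0.0
            for _ in range(per_stratum):
                # Uniform direction within stratum (i, j): sampling
                # uniformly in (cos(theta), phi) is uniform in solid angle.
                ct = (i + random.random()) / n_theta
                phi = 2 * math.pi * (j + random.random()) / n_phi
                acc += f(math.acos(ct), phi)
            # Each stratum covers solid angle 2*pi / (n_theta * n_phi).
            total += (2 * math.pi / (n_theta * n_phi)) * acc / per_stratum
    return total

# Integrate cos(theta) over the hemisphere; the exact value is pi.
print(stratified_hemisphere(lambda th, ph: math.cos(th)), math.pi)
```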

Relevance: 100.00%

Abstract:

The introduction of non-toxic fluoride compounds as direct replacements for Thorium Fluoride (ThF4) has renewed interest in the use of low-index fluoride compounds in high-performance infrared filters. This paper reports the results of an investigation into the effects of combining these low-index materials, particularly Barium Fluoride (BaF2), with the high-index material Lead Telluride (PbTe) in bandpass and edge filters. Infrared filter designs using the conventional and the new material combinations are compared; infrared filters using these material combinations have been manufactured and have been shown to suffer from residual stress problems. A possible solution to this problem, utilising Zinc Sulphide (ZnS) layers with compensating compressive stress, is discussed.

Relevance: 100.00%

Abstract:

A method has been developed to estimate aerosol optical depth (AOD) over land surfaces using high spatial resolution, hyperspectral, multiangle Compact High Resolution Imaging Spectrometer (CHRIS)/Project for On Board Autonomy (PROBA) images. The CHRIS instrument is mounted aboard the PROBA satellite and provides up to 62 spectral bands. The PROBA satellite allows pointing so as to obtain imagery from five different view angles within a short time interval. The method uses inversion of a coupled surface/atmosphere radiative transfer model and includes a general physical model of angular surface reflectance. An iterative process is used to determine the optimum AOD value, providing the best fit of the corrected reflectance values, over a number of view angles and wavelengths, with those provided by the physical model. This method has previously been demonstrated on data from the Advanced Along-Track Scanning Radiometer and is extended here to the spectral and angular sampling of CHRIS/PROBA. The AOD values obtained from these observations are validated against ground-based sun-photometer measurements. Results from 22 image sets show an RMS error of 0.11 in AOD at 550 nm, which is reduced to 0.06 after an automatic screening procedure.
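
A minimal sketch of the iterative inversion idea: search for the AOD value that best reconciles atmospherically corrected reflectances with an angular surface reflectance model across several view angles. Both models below are crude hypothetical stand-ins, not the coupled radiative transfer model used in the paper.

```python
# Minimal sketch of AOD retrieval by iterative model inversion.
import numpy as np
from scipy.optimize import minimize_scalar

angles = np.radians([0.0, 36.0, 55.0])    # CHRIS-like view zenith angles
observed = np.array([0.16, 0.18, 0.21])   # top-of-atmosphere reflectances

def corrected_surface_reflectance(toa, aod, theta):
    # Stand-in atmospheric correction: remove a path term that grows
    # with AOD and with the slant path (1 / cos(theta)).
    return toa - 0.05 * aod / np.cos(theta)

def surface_model(theta, a, b):
    # Stand-in angular surface reflectance model (linear in view angle).
    return a + b * theta

def misfit(aod):
    rho = corrected_surface_reflectance(observed, aod, angles)
    b, a = np.polyfit(angles, rho, 1)      # best-fit model parameters
    return float(np.sum((rho - surface_model(angles, a, b)) ** 2))

best = minimize_scalar(misfit, bounds=(0.0, 2.0), method="bounded")
print("retrieved AOD:", round(best.x, 3))
```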

Relevance: 100.00%

Abstract:

Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare because different data sets and methodologies were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus how they would perform in clinical practice, where there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. We therefore evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment, and healthy controls. The diagnosis based on clinical criteria was used as the reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used, consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n = 30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative and the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best-performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or on a combination of features including volume, cortical thickness, shape and intensity. The challenge remains open for new submissions via the web-based framework: http://caddementia.grand-challenge.org.
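
A minimal sketch of the evaluation metrics (multi-class accuracy and a one-vs-rest multi-class AUC), computed here on made-up stand-in predictions rather than actual challenge submissions:

```python
# Minimal sketch of multi-class evaluation: accuracy and one-vs-rest AUC
# over three diagnostic groups (AD, MCI, controls). Data are invented.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([0, 1, 2, 0, 2, 1, 0, 2])      # 0=AD, 1=MCI, 2=control
y_prob = np.array([[.7, .2, .1], [.2, .5, .3], [.1, .3, .6], [.6, .3, .1],
                   [.2, .2, .6], [.3, .4, .3], [.4, .4, .2], [.1, .2, .7]])

y_pred = y_prob.argmax(axis=1)                   # hard class decisions
print("accuracy:", accuracy_score(y_true, y_pred))
# One-vs-rest AUC averaged over classes, a common multi-class extension.
print("AUC:", roc_auc_score(y_true, y_prob, multi_class="ovr"))
```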

Relevance: 100.00%

Abstract:

With the growing prevalence of smartphones and 4G LTE networks, the demand for faster, better, cheaper mobile services anytime and anywhere is ever growing. The Dynamic Network Optimization (DNO) concept emerged as a solution that optimally and continuously tunes network settings in response to varying network conditions and subscriber needs. Yet the realization of DNO is still in its infancy, largely hindered by the bottleneck of lengthy optimization runtimes. This paper presents the design and prototype of a novel cloud-based parallel solution that further enhances the scalability of our prior work on parallel solutions for accelerating network optimization algorithms. The solution aims to satisfy the high performance required by DNO, initially on a sub-hourly basis. The paper subsequently sketches a design and a full cycle of a DNO system. A set of potential solutions for large-network and real-time DNO is also proposed. Overall, this work constitutes a breakthrough towards the realization of DNO.
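
A minimal sketch of the parallelization idea: candidate network settings evaluated concurrently, as a cloud back end would do across many nodes. The cost function is a hypothetical stand-in for a real radio-network optimization objective.

```python
# Minimal sketch of a parallel sweep over candidate network settings.
from itertools import product
from multiprocessing import Pool

def network_cost(settings):
    tilt, power = settings
    # Stand-in objective: penalize mismatched tilt/power (coverage holes
    # vs. interference) plus an energy cost for high transmit power.
    return (power - 2 * tilt) ** 2 + 0.1 * power

if __name__ == "__main__":
    candidates = list(product(range(0, 11), range(10, 41)))  # (tilt, power)
    with Pool() as pool:                 # evaluate candidates in parallel
        costs = pool.map(network_cost, candidates)
    best = candidates[costs.index(min(costs))]
    print("best (tilt, power):", best)
```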

Relevance: 100.00%

Abstract:

Here we describe the expression and characterization of the recombinant toxin LTx2, which was previously isolated from a venom gland cDNA library of the Brazilian spider Lasiodora sp. (Mygalomorphae, Theraphosidae). The recombinant toxin, found in both the soluble and insoluble fractions, was purified by reverse-phase high-performance liquid chromatography (HPLC). Ca2+ imaging analysis revealed that recombinant LTx2 acts on the calcium channels of BC3H1 cells, blocking L-type calcium channels.

Relevance: 100.00%

Abstract:

Very large scale computations are now becoming routine as a methodology for undertaking scientific research. In this context, 'provenance systems' are regarded as the equivalent of the scientist's logbook for in silico experimentation: provenance captures the documentation of the process that led to some result. Using a protein compressibility analysis application, we derive a set of generic use cases for a provenance system. In order to support these, we address the following fundamental questions: What is provenance? How should it be recorded? What is the performance impact for Grid execution? What is the performance of reasoning over it? In doing so, we define a technology-independent notion of provenance that captures interactions between components, internal component information and grouping of interactions, so as to allow us to analyse and reason about the execution of scientific processes. In order to support persistent provenance in heterogeneous applications, we introduce a separate provenance store, in which provenance documentation can be stored, archived and queried independently of the technology used to run the application. Through a series of practical tests, we evaluate the performance impact of such a provenance system. In summary, we demonstrate that the provenance recording overhead of our prototype system remains under 10% of execution time, and we show that the recorded information successfully supports our use cases in a performant manner.
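
A minimal sketch of the separate-store idea: components record their interactions in a store that can later be queried to reconstruct the process behind a result. The API below is a hypothetical stand-in, not the paper's actual protocol.

```python
# Minimal sketch of a technology-independent provenance store.
import json, time

class ProvenanceStore:
    def __init__(self):
        self.records = []

    def record(self, source, sink, message, group=None):
        # One interaction between two components, with optional grouping.
        self.records.append({"time": time.time(), "source": source,
                             "sink": sink, "message": message,
                             "group": group})

    def query(self, needle):
        # Find every recorded interaction that mentions a given value.
        return [r for r in self.records if needle in json.dumps(r["message"])]

store = ProvenanceStore()
store.record("compressor", "analyzer",
             {"protein": "P12345", "ratio": 0.42}, group="run-1")
store.record("analyzer", "user",
             {"compressibility": "low", "ratio": 0.42}, group="run-1")
print(store.query("0.42"))   # reconstruct how the 0.42 result came about
```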

Relevance: 100.00%

Abstract:

This study aimed to verify to what extent the decentralization process adopted by the Department of Technical Police of Bahia (DPT-BA) was efficient in meeting the demand for Computer Forensics examinations generated by the Regional Technical Police Coordination Offices (CRPTs) in the interior of the state. The DPT-BA was restructured according to the principles of administrative decentralization, following the progressive current. With decentralization it took on the commitment to coordinate actions to give autonomy to the units in the interior of the state, creating minimal structures in all the spheres involved, with broad capacity for coordination among them and with service delivery oriented towards a model of high-performance public organization. In addressing the relationship between decentralization and efficiency in meeting the demand for forensic examinations originating in the interior of the state of Bahia, the study, owing to instrumental limitations, remained restricted to the field of Computer Forensics examinations, which strikingly reflects and illustrates the scenario found in the other forensic areas. Theoretical approaches to decentralization were identified first, highlighting the distinct dimensions of the concept, followed by those concerning Computer Forensics. Documentary research was carried out at the Afrânio Peixoto Institute of Criminalistics (Icap), along with field research through semi-structured interviews with judges assigned to the criminal courts of the districts related to the research scenario and with forensic experts from the Regional Coordination Offices, the CRPTs, and Icap's Computer Forensics Coordination. Correlating the turnaround times that embody the concept of efficiency (as defined by the interviewed judges, the final clients of the forensic work) with the actual times obtained through the documentary research, the data revealed a high degree of inefficiency, slowness and unmet deadlines, as well as discrepant realities between the capital and the interior. The analysis of the interviews with the forensic experts revealed a scenario of widespread dissatisfaction and demotivation, with almost absolute centralization of decision-making power, demonstrating that the decentralization process as practiced served, paradoxically, as a tool for enabling and camouflaging centralization.

Relevance: 100.00%

Abstract:

Manufacturing strategy has been widely studied and is gaining increasing attention. Its fundamental role is to translate the business strategy to operations by developing the capabilities the company needs in order to achieve the desired performance. More precisely, manufacturing strategy comprises the decisions that managers take over a certain period of time in order to achieve a desired result; these decisions concern which operational practices and resources are implemented. Our goal was to identify the relationship between these two types of decisions and operational performance. We based our arguments on the resource-based view for identifying sources of competitive advantage. Hence, we argued that operational practices and resources positively affect operational performance. Additionally, we proposed that in the presence of certain resources the implementation of operational practices leads to greater performance. We used previously validated scales for measuring operational practices and performance, and developed new constructs for resources. The data used are part of the High Performance Manufacturing project, and the sample comprises 291 plants. Through confirmatory factor analysis and multiple regressions we found that operational practices are, to a certain extent, positively related to operational performance. More specifically, the results show that JIT and customer-orientation practices have a positive relationship with quality, delivery, flexibility and cost performance. Moreover, we found that resources such as technology and people explain a large share of the variance in operational performance.
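
A minimal sketch of the regression step: an operational performance measure regressed on practice and resource scores. The scale scores and effect sizes below are simulated stand-ins, not the study's data.

```python
# Minimal sketch: multiple regression of performance on practices/resources.
import numpy as np

rng = np.random.default_rng(0)
n = 291                                  # plants, matching the sample size
jit = rng.normal(size=n)                 # practice scale scores (invented)
customer = rng.normal(size=n)
technology = rng.normal(size=n)          # resource scale score (invented)
quality = (0.4 * jit + 0.3 * customer + 0.5 * technology
           + rng.normal(0.0, 0.5, n))    # simulated performance outcome

X = np.column_stack([np.ones(n), jit, customer, technology])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)
print("intercept, JIT, customer, technology:", beta.round(2))
```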

Relevance: 100.00%

Abstract:

This study sought to verify the influence of supply chain agents on new product development performance when the agents are analyzed jointly. The motivation for this research came from studies that called for treating supply chain integration as a multidimensional construct, encompassing the involvement of manufacturing, suppliers and customers in new product development, and from the lack of information about the individual influence of these agents on new product development. Under these considerations, we set out to build an analytical model based on Social Capital Theory and Absorptive Capacity, to derive hypotheses from the literature review, and to connect constructs such as cooperation, supplier involvement in new product development (NPD), customer involvement in NPD, manufacturing involvement in NPD, anticipation of new technologies, continuous improvement, NPD operational performance, NPD market performance and NPD business performance. To test the hypotheses, three moderating variables were considered: environmental turbulence (low, medium and high), industry (electronics, machinery and transport equipment) and location (America, Europe and Asia). The model was tested with data from the High Performance Manufacturing project, which covers 339 companies from the electronics, machinery and transport equipment industries located in eleven countries. The hypotheses were tested through Confirmatory Factor Analysis (CFA), including multi-group moderation for the three moderating variables mentioned above. The main results showed that the hypotheses related to cooperation were confirmed in medium-turbulence environments, while the hypotheses related to NPD performance were confirmed in low-turbulence environments and in Asian countries. Additionally, under the same conditions, suppliers, customers and manufacturing influence new product performance differently. Supplier involvement directly influences operational performance, and indirectly influences market and business performance, at low levels of environmental turbulence, in the transport equipment industry and in American and European countries. Likewise, customer involvement directly influenced operational performance, and indirectly influenced market and business performance, at medium levels of environmental turbulence, in the machinery industry and in Asian countries. Suppliers and customers do not directly influence market and business performance, nor do they indirectly influence operational performance. Manufacturing involvement did not influence any type of new product development performance in any of the scenarios tested.

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)