69 results for implementation method
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Several antineoplastic drugs have been demonstrated to be carcinogenic or to have mutagenic and teratogenic effects. The greatest protection is achieved with the implementation of administrative and engineering controls and safety procedures. Objective: to evaluate the improvement in pharmacy technicians' work practices after the implementation of operational procedures related to individual protection, biological safety cabinet disinfection and cytotoxic drug preparation. Method: case study in a hospital pharmacy undergoing a certification process. Six pharmacy technicians were observed during their daily activities. Work practices were characterized using a checklist based on ISOPP and PIC guidelines. The variables studied concerned cleaning/disinfection procedures, personal protective equipment and procedures for preparing cytotoxic drugs. The same work practices were evaluated four months after the implementation of the operational procedures. Concordance between work practices and guidelines was used as a quality indicator (number of guideline-concordant practices / total number of practices x 100). Results: improvements were observed after the implementation of the operational procedures. An improvement of 6.25% in personal protective equipment practice was achieved by changing the second pair of gloves every thirty minutes. The greatest progress, 10%, was obtained in the disinfection procedure, where 80% of tasks are now performed according to guidelines. So far, an improvement of only 1% has been achieved in the drug preparation procedure, by placing one cytotoxic drug at a time inside the biological safety cabinet; 85% of these practices now follow the guidelines. Conclusion: before the implementation of the operational procedures, 80.3% of practices followed the guidelines, whereas the figure is now 84.4%. This indicates that procedures should be reviewed frequently in order to reduce the risks associated with handling cytotoxic drugs and to maintain drug specifications.
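The quality indicator defined above (guideline-concordant practices divided by the total number of observed practices, times 100) can be sketched as a small function; the figures in the example are hypothetical, not taken from the study:

```python
def quality_indicator(concordant: int, total: int) -> float:
    """Percentage of observed work practices that follow the guidelines."""
    if total <= 0:
        raise ValueError("total number of practices must be positive")
    return concordant / total * 100.0

# Hypothetical example: 27 of 32 observed practices concordant with guidelines.
score = quality_indicator(27, 32)
```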
Abstract:
International conference with peer review: 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 22-27 July 2012, Munich, Germany
Abstract:
Brain dopamine transporter imaging by Single Photon Emission Computed Tomography (SPECT) with 123I-FP-CIT (DaTScanTM) has become an important tool in the diagnosis and evaluation of Parkinson syndromes. This diagnostic method allows the visualization of a portion of the striatum, where the healthy pattern resembles two symmetric commas, allowing the evaluation of the presynaptic dopaminergic system, in which dopamine transporters are responsible for the reuptake of dopamine from the synaptic cleft into the nigrostriatal nerve terminals, where it is stored or degraded. In daily practice, assessment of DaTScanTM studies commonly relies on visual assessment alone. However, this process is complex and subjective, as it depends on the observer's experience, and it is associated with high intra- and inter-observer variability. Studies have shown that semiquantification can improve the diagnosis of Parkinson syndromes. Semiquantification requires image segmentation methods based on regions of interest (ROIs): ROIs are drawn over specific (striatum) and nonspecific (background) uptake areas, and specific binding ratios are then calculated. The low adherence to semiquantification in the diagnosis of Parkinson syndromes is related not only to the time it requires, but also to the need for a database of reference values adapted to the population concerned and to each department's examination protocol. Studies have concluded that this process increases the reproducibility of semiquantification. The aim of this investigation was to create and validate a database of healthy controls for dopamine transporter imaging with DaTScanTM, named DBRV. The database created was adapted to the Nuclear Medicine Department's protocol and to the population of Infanta Cristina's Hospital, located in Badajoz, Spain.
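The specific binding ratio mentioned above is typically computed from the mean counts of the striatal (specific) and background (nonspecific) ROIs. A minimal sketch, assuming the common (specific − nonspecific) / nonspecific definition; the department's exact protocol may differ:

```python
def specific_binding_ratio(striatal_mean: float, background_mean: float) -> float:
    """Specific binding ratio from ROI mean counts:
    (striatal - background) / background. Assumes the usual definition;
    the exact formula depends on each department's protocol."""
    if background_mean <= 0:
        raise ValueError("background mean counts must be positive")
    return (striatal_mean - background_mean) / background_mean
```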
Abstract:
This paper presents a new parallel implementation of the previously developed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at low level, thus taking full advantage of the computational power of GPUs by using shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different GPU architectures by NVIDIA: GeForce GTX 590 and GeForce GTX TITAN. Experimental results using real data reveal significant speedups with regard to the serial implementation.
Abstract:
Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient implementation of an unsupervised linear unmixing method, simplex identification via split augmented Lagrangian (SISAL), on GPUs using CUDA. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at low level, using shared memory and coalesced memory accesses. The results herein presented indicate that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining the method's accuracy.
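The linear mixture model underlying this kind of unmixing can be illustrated for a single pixel: the observed spectrum is a convex combination of endmember signatures, and abundances are estimated under nonnegativity and sum-to-one constraints. The sketch below uses plain projected gradient descent; it is a didactic illustration of constrained linear unmixing, not the SISAL algorithm itself:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (nonnegative, sums to 1)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(v.size) + 1) > 0)[0][-1]
    return np.maximum(v + (1.0 - css[rho]) / (rho + 1), 0.0)

def unmix_pixel(M, y, iters=2000):
    """Estimate abundances a minimising ||y - M a||^2 with a on the simplex.
    M: bands x endmembers mixing matrix, y: observed pixel spectrum."""
    a = np.full(M.shape[1], 1.0 / M.shape[1])      # start from uniform abundances
    step = 1.0 / np.linalg.norm(M.T @ M, 2)        # 1 / Lipschitz constant of gradient
    for _ in range(iters):
        a = project_simplex(a - step * (M.T @ (M @ a - y)))
    return a
```

With a noiseless mixed pixel the estimated abundances recover the true fractions closely; real data adds noise and model mismatch.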
Abstract:
Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral sensors acquire hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GBs per flight. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems involved in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive, and even highly power-consuming, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Specifically, several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for on-board data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral datasets in which the pure pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at low level, using shared memory and coalesced memory accesses.
Abstract:
Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Since the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an on-board compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPUs) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral datasets, namely the high correlation existing among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral datasets on two different GPU architectures by NVIDIA, GeForce GTX 590 and GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times when compared with the processing time of HYCA running on one core of the Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.
Abstract:
Endmember extraction (EE) is a fundamental and crucial task in hyperspectral unmixing. Among other methods, vertex component analysis (VCA) has become a very popular and useful tool to unmix hyperspectral data. VCA is a geometry-based method that extracts endmember signatures from large hyperspectral datasets without the use of any a priori knowledge about the constituent spectra. Many hyperspectral imagery applications require a response in real time or near-real time. To meet this requirement, this paper proposes a parallel implementation of VCA developed for graphics processing units. The impact on the complexity and on the accuracy of the proposed parallel implementation of VCA is examined using both simulated and real hyperspectral datasets.
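The geometrical idea behind this family of extraction methods can be illustrated with a greedy orthogonal-projection search: repeatedly project the data onto the orthogonal complement of the endmembers found so far and take the most extreme pixel. This is a simplified didactic sketch in the spirit of VCA, not the full algorithm (which also involves random projection directions and SNR-dependent preprocessing):

```python
import numpy as np

def extract_endmembers(Y, p):
    """Greedy orthogonal-projection endmember search (didactic sketch).
    Y: bands x pixels data matrix; returns indices of p extreme pixels."""
    idx = [int(np.argmax(np.linalg.norm(Y, axis=0)))]   # most extreme pixel first
    for _ in range(p - 1):
        E = Y[:, idx]                                   # endmembers found so far
        # projector onto the orthogonal complement of span(E)
        P = np.eye(Y.shape[0]) - E @ np.linalg.pinv(E)
        idx.append(int(np.argmax(np.linalg.norm(P @ Y, axis=0))))
    return idx
```

On data that actually contains pure pixels, this search returns them, since convex mixtures can never be more extreme than the endmembers themselves.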
Abstract:
One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at low level using the Compute Unified Device Architecture. SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral datasets in which the pure pixel assumption is violated. The proposed implementation is performed in a pixel-by-pixel fashion, using coalesced memory accesses and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, therefore achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral datasets reveal speedups up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining the method's accuracy.
Abstract:
The parallel hyperspectral unmixing problem is considered in this paper. A semisupervised approach is developed under the linear mixture model, where the physical constraints on the abundances are taken into account. The proposed approach relies on the increasing availability of spectral libraries of materials measured on the ground, instead of resorting to endmember extraction methods. Since libraries are potentially very large and hyperspectral datasets are of high dimensionality, a parallel implementation in a pixel-by-pixel fashion is derived to properly exploit the graphics processing unit (GPU) architecture at low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for real hyperspectral datasets reveal significant speedup factors, up to 164 times, with regard to an optimized serial implementation.
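The library-based, semisupervised formulation can be sketched as a sparse regression: each pixel is fitted against the whole spectral library with a sparsity-inducing penalty and a nonnegativity constraint on the abundances, so that only a few library materials end up active. A minimal single-pixel sketch using nonnegative ISTA; the paper's exact solver and constraint handling may differ:

```python
import numpy as np

def sparse_unmix(A, y, lam=1e-3, iters=1500):
    """Nonnegative ISTA sketch for min_x ||y - A x||^2 / 2 + lam * sum(x), x >= 0.
    A: bands x materials spectral library, y: observed pixel spectrum."""
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        # gradient step plus soft threshold, clipped at zero (nonnegativity)
        x = np.maximum(x - step * (grad + lam), 0.0)
    return x
```

With an incoherent library and a pixel built from two materials, the recovered abundance vector is sparse with mass on the correct library entries.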
Abstract:
A previously developed model is used to numerically simulate real clinical cases of the surgical correction of scoliosis. This model consists of one-dimensional finite elements with spatial deformation in which (i) the column is represented by its axis; (ii) the vertebrae are assumed to be rigid; and (iii) the deformability of the column is concentrated in springs that connect the successive rigid elements. The metallic rods used for the surgical correction are modeled by beam elements with linear elastic behavior. To obtain the forces at the connections between the metallic rods and the vertebrae, geometrically non-linear finite element analyses are performed. The tightening sequence determines the magnitude of the forces applied to the patient's column, and it is desirable to keep those forces as small as possible. In this study, a Genetic Algorithm optimization is applied to this model in order to determine the sequence that minimizes the corrective forces applied during the surgery. This amounts to finding the optimal permutation of the integers 1, ..., n, n being the number of vertebrae involved. As such, we are faced with a combinatorial optimization problem isomorphic to the Traveling Salesman Problem. The fitness evaluation requires one computationally intensive finite element analysis per candidate solution, and thus a parallel implementation of the Genetic Algorithm is developed.
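A permutation-coded genetic algorithm of the kind described can be sketched as follows. The cost function here is a toy stand-in (in the paper, each evaluation is one finite element analysis of the corrective forces), and all parameter values are illustrative:

```python
import random

def order_crossover(p1, p2):
    """OX crossover: copy a slice from p1, fill the rest in p2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    hole = set(p1[a:b])
    filler = [g for g in p2 if g not in hole]
    return filler[:a] + p1[a:b] + filler[a:]

def ga_permutation(cost, n, pop_size=60, gens=200, mut=0.2):
    """Minimise cost(perm) over permutations of 0..n-1 (GA sketch).
    In the paper, each cost evaluation would be one FE analysis, run in parallel."""
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]               # elitist truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            c = order_crossover(*random.sample(elite, 2))
            if random.random() < mut:              # swap mutation
                i, j = random.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = elite + children
    return min(pop, key=cost)
```

Because the fitness evaluations of a generation are independent, they parallelize naturally, which is what motivates the parallel implementation in the paper.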
Abstract:
The construction industry is responsible for a large share of the waste produced, namely on construction sites, in building demolitions or collapses, and in maintenance, restoration, remodeling and rehabilitation works. The management of this sector's waste, referred to in short as construction and demolition waste (CDW), is now regulated through the CDW management operations regime. Among other things, this statute defines the responsibilities of the various parties involved in the waste management process, across the design, execution, transport and reception phases. With the population's growing environmental concerns and the greater involvement of companies in contributing to integrated waste management, there is an increasing number of studies characterizing the quantities and types of waste produced by the sector. In this context, and because an economy integrated with waste management is important, the main challenges lie in the planning and preparation of works from the design phase to the execution phase, with a view to the prevention, reduction, reuse and recovery of CDW. The present work aims to contribute to this development of the sector, more specifically by obtaining indicators for construction waste (CW) and demolition waste (DW) and characterizing their typology. To that end, existing studies on the characterization of waste types and CW and DW indicators were evaluated as a comparative method. The indicators of this study were obtained from the analysis of case-study data: for CW, structural construction works; for DW, buildings demolished by selective demolition. The final part of this study presents some conclusions and recommendations.
Abstract:
The objective of this work is to present the tools of Lean Thinking and to carry out a case study in an organization where this system is used. In a first phase, a literature review of "Lean Thinking" is carried out; Lean Thinking is a business system, a way of specifying value and outlining the best sequence of value-creating actions. Next, a case study is carried out at a company's Engines Division, in the aeronautics sector, with a long and respected tradition, with the objective of reducing the TAT (turnaround time), i.e., the time from the arrival of an engine at the division until its delivery to the customer. First, the failures throughout the engine process are analyzed: repair times for parts removed during engine disassembly that must be available for reassembly, parts requested from other departments of the company that are not available when needed, and the layout of the division. Finally, the results achieved so far in the Engines Division are analyzed and the Lean Thinking tools are applied with a view to their implementation. It is important to note that successful implementation requires, first and foremost, a firm commitment from management and full adherence to a culture of seeking out and eliminating waste. To conclude the work, the importance of this system is highlighted, along with the improvements that can be achieved with its deployment.
Abstract:
This work arises from the need of the company Helisuporte to have a reliability perspective on its aircraft. To this end, the study objectives were: to create a database of anomalies; to identify problematic systems and components; to characterize them; to assess their failure condition; and, with this, to propose anomaly-control solutions. A methodology was therefore developed for data processing based on non-parametric analysis, with sample statistics chosen as the approach. This allows the identification of the problematic systems and their anomalous components. After the data processing, the reliability characterization of those components is carried out, taking into account the operating time and the specific service life of each one. This was achieved by calculating the reliability level, MTBF, MTBUR and failure rate. In order to identify the different anomalies and characterize the maintenance team's know-how, a failure-condition analysis was implemented, specifically a failure modes and effects analysis. With this in mind, a simple, clear and effective logical chain was built for a complex fleet. With the methodology implemented and the results analyzed, we can state that the objectives were achieved, concluding that the reliability values characterizing some components of the aircraft in the fleet under study do not match what was expected and idealized as their performance reference. Changes to the maintenance manual were therefore suggested in order to improve these indices. This made it possible to develop what might be called "reliability from the operator's point of view".
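The reliability figures mentioned above (MTBF, MTBUR, failure rate) follow from simple ratios of accumulated operating time to event counts; a minimal sketch with hypothetical inputs:

```python
def mtbf(operating_hours: float, failures: int) -> float:
    """Mean Time Between Failures of a repairable component."""
    if failures <= 0:
        raise ValueError("needs at least one failure")
    return operating_hours / failures

def mtbur(operating_hours: float, unscheduled_removals: int) -> float:
    """Mean Time Between Unscheduled Removals."""
    if unscheduled_removals <= 0:
        raise ValueError("needs at least one removal")
    return operating_hours / unscheduled_removals

def failure_rate(operating_hours: float, failures: int) -> float:
    """Failures per operating hour (the reciprocal of MTBF)."""
    return failures / operating_hours

# Hypothetical example: 1000 fleet operating hours, 4 failures, 5 removals.
```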
Abstract:
This paper addresses the problem of short-term hydro scheduling, particularly concerning head-dependent reservoirs in a competitive environment. We propose a new nonlinear optimization method that considers hydroelectric power generation as a function of water discharge and also of the head. Head-dependency is considered in short-term hydro scheduling in order to obtain more realistic and feasible results. The proposed method has been applied successfully to solve a case study based on one of the main Portuguese cascaded hydro systems, providing a higher profit at a negligible additional computation time in comparison with a linear optimization method that ignores head-dependency.
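The head-dependency at the core of the proposed method enters through the hydro production function, where power depends on both discharge and head (P = η·ρ·g·Q·H); in head-dependent scheduling, H varies with reservoir storage instead of being fixed. A minimal sketch of that relation; the efficiency value is illustrative:

```python
RHO, G = 1000.0, 9.81   # water density (kg/m^3), gravitational acceleration (m/s^2)

def hydro_power_mw(discharge_m3s: float, head_m: float,
                   efficiency: float = 0.9) -> float:
    """Hydroelectric power P = eta * rho * g * Q * H, returned in MW.
    The efficiency of 0.9 is an illustrative assumption, not from the paper."""
    return efficiency * RHO * G * discharge_m3s * head_m / 1e6
```

For example, a plant discharging 100 m³/s under a 50 m head produces about 44 MW; a linear model that fixes the head would miss the profit impact of storage decisions on H.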