969 results for implementation method
Abstract:
Although the Navigation Satellite Timing and Ranging (NAVSTAR) Global Positioning System (GPS) is the de facto standard positioning system for outdoor navigation, it does not provide, by itself, all the features required for many outdoor navigational tasks. The accuracy of GPS measurements is the most critical issue. The quest for more accurate position readings led to the development, in the late nineties, of the Differential Global Positioning System (DGPS). The differential GPS method detects the range errors of the GPS satellites in view and broadcasts corrections for them. DGPS/GPS receivers correlate the DGPS correction data with the GPS satellite data they receive, giving users increased accuracy. DGPS data is broadcast using terrestrial radio beacons, satellites and, more recently, the Internet. Our goal is to have access, within the ISEP campus, to DGPS correction data. To achieve this objective, we designed and implemented a distributed system composed of two interconnected modules: a distributed application responsible for establishing the data link over the Internet between the remote DGPS stations and the campus, and the campus-wide DGPS data server application. The DGPS data Internet link is provided by a two-tier client/server distributed application in which the server side is connected to the DGPS station and the client side is located at the campus. The second unit, the campus DGPS data server application, disseminates the DGPS data received at the campus via the Intranet and via a wireless data link. The wireless broadcast is intended for DGPS/GPS portable receivers equipped with an air interface, whereas the Intranet link serves DGPS/GPS receivers with only an RS-232 DGPS data interface. The DGPS data Internet link servers receive the DGPS data from the DGPS base stations and forward it to the DGPS data Internet link client, which in turn outputs the received DGPS data to the campus DGPS data server application. The distributed system is expected to provide adequate support for accurate (sub-metric) outdoor campus navigation tasks. This paper describes the overall distributed application in detail.
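A minimal Python sketch of the two-tier DGPS data link described above, assuming a TCP transport and the pyserial library; the port number, baud rate, and serial devices are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of the two-tier DGPS data link: the server side reads
# the correction stream from the DGPS base station's serial port and
# forwards it over the Internet; the client side receives it on campus.
# Port, baud rate, and device names are illustrative assumptions.
import socket
import serial  # pyserial

def dgps_link_server(host="0.0.0.0", port=2101, device="/dev/ttyS0"):
    """Server side: forward raw DGPS correction bytes to one client."""
    station = serial.Serial(device, baudrate=4800, timeout=1)
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while True:
                data = station.read(256)  # raw correction stream
                if data:
                    conn.sendall(data)

def dgps_link_client(server_host, port=2101, out_device="/dev/ttyS1"):
    """Client side (campus): receive the correction stream over the
    Internet and hand it to the campus DGPS data server via RS-232."""
    out = serial.Serial(out_device, baudrate=4800)
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((server_host, port))
        while True:
            data = cli.recv(256)
            if not data:
                break
            out.write(data)
```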
The impact of the implementation of the Basel III recommendations on the capital of Portuguese banks
Abstract:
This paper analyses the impact of the implementation of the Basel III recommendations, using the standardised approach, in Portugal. For our study, we used the annual reports as of 31 December 2012 and found that, of the fourteen banks that published annual reports, only six satisfied the minimum ratios laid out by the BCBS. Until 2012, Portuguese banks used an internal-ratings method based on the Basel II recommendations, set out in Notice 6/2010 of the Portuguese central bank, Banco de Portugal. As the implementation of the Basel III recommendations in the EU via the Capital Requirements Directive IV is scheduled for 2014 and later years, Portuguese banks may severely contract credit upon implementation, as that is the easiest, fastest and cheapest way for banks to satisfy the minimum ratio requirements, compared with increasing capital or credit spreads.
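For reference, the minimum ratios laid out by the BCBS relate eligible capital to risk-weighted assets (RWA); the headline Basel III floors are, in standard form,

\[
\frac{\text{CET1}}{\text{RWA}} \ge 4.5\%, \qquad
\frac{\text{Tier 1}}{\text{RWA}} \ge 6\%, \qquad
\frac{\text{Tier 1} + \text{Tier 2}}{\text{RWA}} \ge 8\%,
\]

plus a 2.5% capital conservation buffer phased in after 2014. Contracting credit shrinks the denominator of these ratios, whereas raising capital grows the numerator, which is why the former is the faster route to compliance.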
Abstract:
This paper presents a new parallel implementation of a previously developed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at low level, thus taking full advantage of the computational power of GPUs by using shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different GPU architectures by NVIDIA: GeForce GTX 590 and GeForce GTX TITAN. Experimental results using real data reveal significant speedups with regard to the serial implementation.
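The abstract does not give HYCA's measurement operator; as a generic sketch of the per-pixel compressive-sensing model it builds on, together with the low-endmember assumption that makes few measurements sufficient,

\[
\mathbf{y}_i = \boldsymbol{\Phi}\,\mathbf{x}_i \in \mathbb{R}^{m}, \quad m \ll n_b,
\qquad
\mathbf{x}_i \approx \mathbf{E}\,\boldsymbol{\alpha}_i, \quad \mathbf{E} \in \mathbb{R}^{n_b \times p}, \; p \ll n_b,
\]

where \(\mathbf{x}_i\) is the \(n_b\)-band spectrum of pixel \(i\), \(\boldsymbol{\Phi}\) the measurement matrix, and the columns of \(\mathbf{E}\) the \(p\) endmember signatures.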
Abstract:
Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach that aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient implementation of an unsupervised linear unmixing method, simplex identification via split augmented Lagrangian (SISAL), on GPUs using CUDA. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at low level, using shared memory and coalesced memory accesses. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution on big datasets while maintaining the method's accuracy.
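In generic form (a sketch, not SISAL's exact subproblem), variable splitting turns a composite objective into a constrained one, to which the augmented Lagrangian is applied:

\[
\min_{\mathbf{x}} \; f(\mathbf{x}) + g(\mathbf{x})
\;\;\Longleftrightarrow\;\;
\min_{\mathbf{x},\,\mathbf{z}} \; f(\mathbf{x}) + g(\mathbf{z})
\;\;\text{s.t.}\;\; \mathbf{x} = \mathbf{z},
\]
\[
\mathcal{L}_{\mu}(\mathbf{x}, \mathbf{z}, \boldsymbol{\lambda})
= f(\mathbf{x}) + g(\mathbf{z})
+ \boldsymbol{\lambda}^{\top}(\mathbf{x} - \mathbf{z})
+ \frac{\mu}{2}\,\lVert \mathbf{x} - \mathbf{z} \rVert_2^2,
\]

which is minimized alternately in \(\mathbf{x}\) and \(\mathbf{z}\), followed by a multiplier update.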
Abstract:
Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral sensors acquire hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several gigabytes per flight. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks in hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-hungry, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for onboard data processing. In this paper, we propose a parallel implementation of an augmented-Lagrangian-based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at low level, using shared memory and coalesced memory accesses.
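The linear mixture model underlying this discussion, in its standard form (notation is ours):

\[
\mathbf{x}_i = \mathbf{M}\,\boldsymbol{\alpha}_i + \mathbf{n}_i,
\qquad
\boldsymbol{\alpha}_i \ge \mathbf{0}, \quad \mathbf{1}^{\top}\boldsymbol{\alpha}_i = 1,
\]

where the columns of \(\mathbf{M}\) are the endmember signatures, \(\boldsymbol{\alpha}_i\) the abundance fractions of pixel \(i\), and \(\mathbf{n}_i\) noise; unmixing estimates \(\mathbf{M}\) and the \(\boldsymbol{\alpha}_i\) from the data under the nonnegativity and sum-to-one constraints.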
Abstract:
Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Because the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPUs) using the compute unified device architecture (CUDA). The method exploits two main properties of hyperspectral datasets, namely the high correlation existing among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral datasets on two different GPU architectures by NVIDIA, GeForce GTX 590 and GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times compared with the processing time of HYCA running on one core of an Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.
Abstract:
A fourteen-year schistosomiasis control program in Peri-Peri (Capim Branco, MG) reduced prevalence from 43.5% to 4.4%, incidence from 19.0% to 2.9%, the geometric mean egg count from 281 to 87, and the proportion of hepatosplenic cases from 5.9% to 0.0%. In 1991, three years after the interruption of the program, prevalence had risen to 19.6%. The district consists of Barbosa (a rural area) and Peri-Peri itself (an urban area); in 1991, the prevalence in the two areas was 28.4% and 16.0%, respectively. A multivariate analysis of risk factors for schistosomiasis identified domestic agricultural activity (population attributable risk, PAR = 29.82%), a distance of less than 10 m from home to the water source (PAR = 25.93%) and weekly fishing (PAR = 17.21%) as responsible for infections in the rural area. The recommended control measures for this area are non-manual irrigation and relocating homes to more than ten meters from irrigation ditches. In the urban area, weekly swimming (PAR = 20.71%), daily domestic agricultural activity (PAR = 4.07%) and the absence of drinking water in the home (PAR = 4.29%) were responsible for infections. Thus, in the urban area the recommended control measures are the substitution of manual irrigation by an irrigation method that avoids contact with water, the creation of leisure options for the population, and the provision of a domestic water supply. The authors call attention to the need to evaluate the efficacy of multivariate analysis of risk factors for schistosomiasis before its large-scale use as an indicator of the control measures to be implemented.
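The abstract does not state which PAR estimator was used; the standard (Levin) formula for a factor with exposure prevalence \(p_e\) and relative risk \(RR\) is

\[
\mathrm{PAR} = \frac{p_e\,(RR - 1)}{1 + p_e\,(RR - 1)} \times 100\%,
\]

i.e., the fraction of cases in the whole population that would be avoided if the exposure were removed.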
Abstract:
Endmember extraction (EE) is a fundamental and crucial task in hyperspectral unmixing. Among other methods, vertex component analysis (VCA) has become a very popular and useful tool for unmixing hyperspectral data. VCA is a geometry-based method that extracts endmember signatures from large hyperspectral datasets without using any a priori knowledge about the constituent spectra. Many hyperspectral imaging applications require a response in real time or near real time. To meet this requirement, this paper proposes a parallel implementation of VCA developed for graphics processing units. The impact of the proposed parallel implementation of VCA on complexity and accuracy is examined using both simulated and real hyperspectral datasets.
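A simplified numpy sketch of VCA's geometric projection loop, for intuition only (it omits the SNR-dependent subspace projection of the full algorithm; function and variable names are ours):

```python
# Illustrative sketch of VCA's core idea: repeatedly project the data
# onto a direction orthogonal to the endmembers found so far and keep
# the pixel with the most extreme projection (a simplex vertex).
import numpy as np

def vca_sketch(X, p, seed=0):
    """X: (bands, pixels) data matrix; p: number of endmembers.
    Returns the indices of the selected (extreme) pixels."""
    rng = np.random.default_rng(seed)
    n_bands, _ = X.shape
    E = np.zeros((n_bands, p))          # endmembers found so far
    indices = []
    for k in range(p):
        w = rng.standard_normal(n_bands)       # random direction
        if k > 0:
            # project w onto the orthogonal complement of span(E[:, :k])
            A = E[:, :k]
            w = w - A @ np.linalg.lstsq(A, w, rcond=None)[0]
        f = w / np.linalg.norm(w)
        proj = f @ X                           # project every pixel
        idx = int(np.argmax(np.abs(proj)))     # most extreme pixel
        E[:, k] = X[:, idx]
        indices.append(idx)
    return indices
```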
Abstract:
One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at low level using the Compute Unified Device Architecture. SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The proposed implementation works in a pixel-by-pixel fashion, using coalesced memory accesses and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, thereby achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution on big data sets while maintaining the method's accuracy.
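A minimal numba.cuda sketch of the pixel-per-thread pattern the letter describes, with a band-major layout for coalesced reads and a small matrix staged in shared memory; this illustrates the access pattern only, not SISAL's actual kernels:

```python
# One thread per pixel; adjacent threads read adjacent addresses of the
# band-major data matrix (coalesced), and a small per-block matrix is
# staged in shared memory. Sizes are illustrative compile-time constants.
import numpy as np
from numba import cuda, float32

BANDS, P = 32, 8  # compile-time sizes for the shared-memory array

@cuda.jit
def per_pixel_projection(X, M, Y):
    # X: (BANDS, pixels) band-major; M: (BANDS, P); Y: (P, pixels)
    sM = cuda.shared.array(shape=(BANDS, P), dtype=float32)
    # cooperatively stage M into shared memory once per block
    for i in range(cuda.threadIdx.x, BANDS * P, cuda.blockDim.x):
        sM[i // P, i % P] = M[i // P, i % P]
    cuda.syncthreads()
    pix = cuda.grid(1)  # absolute thread index = pixel index
    if pix < X.shape[1]:
        for j in range(P):
            acc = 0.0
            for b in range(BANDS):
                acc += X[b, pix] * sM[b, j]
            Y[j, pix] = acc

# illustrative launch
n_pixels = 4096
X = cuda.to_device(np.random.rand(BANDS, n_pixels).astype(np.float32))
M = cuda.to_device(np.random.rand(BANDS, P).astype(np.float32))
Y = cuda.device_array((P, n_pixels), dtype=np.float32)
per_pixel_projection[(n_pixels + 127) // 128, 128](X, M, Y)
```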
Abstract:
The parallel hyperspectral unmixing problem is considered in this paper. A semisupervised approach is developed under the linear mixture model, in which the physical constraints on the abundances are taken into account. The proposed approach relies on the increasing availability of spectral libraries of materials measured on the ground, instead of resorting to endmember extraction methods. Since libraries are potentially very large and hyperspectral datasets are of high dimensionality, a parallel implementation operating in a pixel-by-pixel fashion is derived to properly exploit the graphics processing unit (GPU) architecture at low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for real hyperspectral datasets reveal significant speedup factors, up to 164 times, with regard to an optimized serial implementation.
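On the CPU, the per-pixel regression against a library that such approaches parallelize can be sketched with nonnegative least squares; the paper's exact constraint set and solver are not specified in the abstract, so this enforces nonnegativity only:

```python
# Serial CPU sketch of library-based (semisupervised) unmixing:
# per-pixel nonnegative least squares against a spectral library.
import numpy as np
from scipy.optimize import nnls

def unmix_with_library(X, A):
    """X: (bands, pixels) spectra; A: (bands, library_size) library.
    Returns (library_size, pixels) nonnegative abundance estimates."""
    fractions = np.zeros((A.shape[1], X.shape[1]))
    for i in range(X.shape[1]):   # this loop is what the GPU parallelizes
        fractions[:, i], _ = nnls(A, X[:, i])
    return fractions
```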
Abstract:
Background: An evaluation of substance abuse services in Barbados identified the need for programmes and services that are specifically designed for children and adolescents. Aim: To conduct an evidence-based programme to reduce the incidence of substance abuse among children and adolescents by strengthening the family unit through positive parenting, enhanced family functioning and youth resilience. Method: Two pilot projects were conducted based on the 'Strengthening Families for Parents and Youths 12-16' (SFPY) programme. The nine-week programme was employed as an intervention to create stronger family connections, increase youth resiliency and reduce drug abuse among children and adolescents between the ages of 11 and 16. The decision was made to include participants from age 11, since children may be in the first year of secondary school at this age. Results: Fifteen families participated in the two pilot projects, and an evaluation conducted at the conclusion showed that the youths were generally more positive about their perceived place in the family unit and felt that being in the programme was beneficial. The parents similarly reported that they had a more positive relationship with their youths, a better understanding of their needs, and an awareness of their developmental changes. This affirmed that the programme had achieved its desired outcome of creating stronger family units. Conclusion: The SFPY Pilot Project was successful in making parents and youths more aware of their individual needs and responsibilities within the family unit. As a result, relationships within their respective families were strengthened. Evidence-based studies have shown that enhanced family functioning decreases the incidence of substance use and abuse in the adolescent population by increasing protective factors and decreasing risk factors. The implementation of the programme, which was developed and tested in the North American environment, demonstrated that it was transferable to Barbadian society. However, its full impact can only be determined through a comparative study involving a control group and/or an alternative substance abuse intervention. It is therefore recommended that a comparative study of the SFPY intervention be delivered to a representative sample of adolescents who are at an earlier developmental stage, since evidence has shown that the programme is more effective, with longer-lasting impact, on youths who participate at a younger age.
Abstract:
The Electromagnetism-like (EM) algorithm is a population-based stochastic global optimization algorithm that uses an attraction-repulsion mechanism to move sample points toward the optimum. In this paper, an implementation of the EM algorithm in the Matlab environment is proposed, as a useful function for practitioners and for those who want to experiment with a new global optimization solver. A set of benchmark problems is solved in order to evaluate the performance of the implemented method compared with other stochastic methods available in the Matlab environment. The results confirm that our implementation is a competitive alternative in terms of both numerical results and performance. Finally, a case study based on a parameter estimation problem for a biological system shows that the EM implementation can be applied with promising results in the control optimization area.
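An illustrative numpy sketch of the attraction-repulsion step, with charges and forces in the style of Birbil and Fang's EM heuristic; this simplified version is ours, not the paper's Matlab code:

```python
# Attraction-repulsion sketch: each point carries a charge derived from
# its objective value; better points attract, worse points repel.
import numpy as np

def em_forces(points, fvals):
    """points: (m, n) sample points; fvals: (m,) objective values.
    Returns the (m, n) resultant force on each point."""
    m, n = points.shape
    best = fvals.min()
    denom = (fvals - best).sum() + 1e-12   # guard against all-equal fvals
    q = np.exp(-n * (fvals - best) / denom)  # higher charge = better point
    F = np.zeros_like(points)
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            d = points[j] - points[i]
            dist2 = d @ d
            if dist2 == 0.0:
                continue
            mag = q[i] * q[j] / dist2
            # attraction toward better points, repulsion from worse ones
            F[i] += mag * d if fvals[j] < fvals[i] else -mag * d
    return F
```

Each point is then moved a random step along its normalized resultant force, with the best point typically held fixed.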
Abstract:
The article provides a method for the long-term forecasting of frame alignment losses, based on bit-error-rate monitoring, for structure-agnostic circuit emulation service over Ethernet in a mobile backhaul network. The developed method, with its corresponding algorithm, detects the instants of probable frame alignment losses over a long-term horizon, giving engineering personnel extra time to take measures aimed at loss prevention. Moreover, the long-term forecast of frame alignment losses supports decisions about the volume of TDM data encapsulated into a circuit emulation frame, in order to increase the utilization of the emulated circuit. The developed long-term forecasting method, formalized with the corresponding algorithm, is recognized as cognitive and can act as part of a network predictive monitoring system.
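A hedged sketch of the forecasting idea: fit a trend to the monitored bit-error-rate series and extrapolate the instant at which it would cross a loss-of-alignment threshold. The threshold value and the log-linear trend model are illustrative assumptions, not the article's algorithm:

```python
# Extrapolate the time at which a rising bit-error rate would cross a
# frame-alignment-loss threshold, using a log-linear trend fit.
import numpy as np

def forecast_alignment_loss(t, ber, threshold=1e-3):
    """t: sample times (s); ber: measured bit-error rates at those times.
    Returns the predicted crossing time, or None if BER is not rising."""
    slope, intercept = np.polyfit(t, np.log10(ber), 1)  # trend in log-BER
    if slope <= 0:
        return None                    # no degradation trend detected
    return (np.log10(threshold) - intercept) / slope
```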
Abstract:
MOTIVATION: Microarray results accumulated in public repositories are widely reused in meta-analytical studies and secondary databases. The quality of the data obtained with this technology varies from experiment to experiment, and an efficient method for quality assessment is necessary to ensure their reliability. RESULTS: The lack of a good benchmark has hampered the evaluation of existing methods for quality control. In this study, we propose a new independent quality metric that is based on the evolutionary conservation of expression profiles. We show, using 11 large organ-specific datasets, that IQRray, a new quality metric developed by us, exhibits the highest correlation with this reference metric among the 14 metrics tested. IQRray outperforms other methods in the identification of poor-quality arrays in datasets composed of arrays from many independent experiments. In contrast, the performance of methods designed for detecting outliers within a single experiment, such as Normalized Unscaled Standard Error and Relative Log Expression, was low, because these methods cannot detect datasets containing only low-quality arrays and their scores cannot be directly compared between experiments. AVAILABILITY AND IMPLEMENTATION: The R implementation of IQRray is available at ftp://lausanne.isb-sib.ch/pub/databases/Bgee/general/IQRray.R. CONTACT: Marta.Rosikiewicz@unil.ch SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
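As a generic illustration of dataset-level array scoring (not the IQRray metric itself, which is defined in the R script linked above), one can correlate each array's expression ranks with a dataset consensus profile:

```python
# Generic rank-based quality score: arrays whose expression ranks
# deviate strongly from the dataset consensus get low scores.
# This is an illustrative stand-in, not the IQRray metric.
import numpy as np
from scipy.stats import spearmanr

def array_quality_scores(expr):
    """expr: (genes, arrays) expression matrix.
    Returns one score per array; low scores flag outlier arrays."""
    consensus = np.median(expr, axis=1)    # dataset consensus profile
    scores = []
    for j in range(expr.shape[1]):
        rho, _ = spearmanr(expr[:, j], consensus)
        scores.append(rho)
    return np.array(scores)
```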
Abstract:
We present a method for segmenting white matter tracts from high angular resolution diffusion MR images by representing the data in a five-dimensional (5D) space of position and orientation. Whereas crossing fiber tracts cannot be separated in 3D position space, they clearly disentangle in 5D position-orientation space. The segmentation is done using a 5D level set method applied to hypersurfaces evolving in 5D position-orientation space. In this paper we present a methodology for constructing the position-orientation space. We then show how to implement the standard level set method in such a non-Euclidean, high-dimensional space. Level set theory is defined for N dimensions, but there are several practical implementation details to consider, such as the computation of mean curvature. Finally, we show results from a synthetic model and a few preliminary results on real data of a human brain acquired by high angular resolution diffusion MRI.
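For reference, the standard level set evolution that the paper adapts to 5D position-orientation space (the non-Euclidean curvature computation is the paper's contribution and is not reproduced here):

\[
\frac{\partial \phi}{\partial t} + F\,\lvert \nabla \phi \rvert = 0,
\qquad
\kappa = \nabla \cdot \frac{\nabla \phi}{\lvert \nabla \phi \rvert},
\]

where the evolving hypersurface is the zero level set \(\{\phi = 0\}\) and a mean-curvature speed \(F \propto \kappa\) regularizes it.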