990 results for Domain elimination method


Relevance:

30.00%

Publisher:

Abstract:

Over recent decades, work on infrared sensor applications has advanced considerably worldwide. One difficulty remains, however: objects are not always sufficiently clear, or cannot always be easily distinguished, in the image obtained of the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision, image processing, non-destructive testing, and related technologies. This thesis addresses infrared image enhancement techniques in two respects: the processing of a single infrared image in the hybrid space-frequency domain, and the fusion of infrared and visible images using the nonsubsampled contourlet transform (NSCT). Image fusion can be regarded as a continuation of the single infrared image enhancement model, since it combines infrared and visible images into one image that represents and enhances all the useful information and features of the source images; a single image cannot contain all the relevant or available information, owing to the limitations of any single imaging sensor. We first review the development of infrared image enhancement techniques, then focus on single infrared image enhancement and propose a hybrid-domain enhancement scheme with an improved threshold fuzzy evaluation method, which yields higher image quality and improves human visual perception. The infrared and visible image fusion techniques rely on accurate registration of the source images acquired by the different sensors. The SURF-RANSAC algorithm is applied for registration throughout this research, producing very precisely registered images and increased benefits for the fusion processing. For the fusion of infrared and visible images, a series of advanced and efficient approaches is proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the subsequent proposed fusion approaches. A joint fusion approach combining the Adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, which leads to fusion results better than those obtained with general non-adaptive methods. An NSCT-based fusion approach is proposed that employs compressed sensing (CS) and total variation (TV) to sample coefficients sparsely and reconstruct the fused coefficients accurately; it achieves much better fusion results through pre-enhancement of the infrared image and by reducing the redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients during the fusion process, leading to better results more quickly and efficiently.
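
The registration step lends itself to a compact illustration. Below is a minimal sketch of SURF feature matching followed by RANSAC homography estimation using OpenCV; the function names are OpenCV's (SURF requires an opencv-contrib build with the nonfree modules enabled), while the thresholds and the final warping step are illustrative assumptions rather than the thesis's exact procedure.

```python
# Sketch: SURF feature matching + RANSAC homography for infrared/visible
# image registration (requires opencv-contrib-python with nonfree enabled).
import cv2
import numpy as np

def register_surf_ransac(infrared, visible):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(infrared, None)
    kp2, des2 = surf.detectAndCompute(visible, None)

    # Match descriptors and keep the best correspondences (Lowe ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects outlier matches while estimating the homography.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = visible.shape[:2]
    return cv2.warpPerspective(infrared, H, (w, h))
```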

Relevance:

30.00%

Publisher:

Abstract:

Multilevel algorithms are a successful class of optimisation techniques which address the mesh partitioning problem for mapping meshes onto parallel computers. They usually combine a graph contraction algorithm with a local optimisation method which refines the partition at each graph level. To date, these algorithms have been used almost exclusively to minimise the cut-edge weight in the graph, with the aim of minimising the parallel communication overhead. However, it has been shown that for certain classes of problem, the convergence of the underlying solution algorithm is strongly influenced by the shape or aspect ratio of the subdomains. In this paper, therefore, we modify the multilevel algorithms in order to optimise a cost function based on aspect ratio. Several variants of the algorithms are tested and shown to provide excellent results.
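
As a rough illustration of what such a cost function can look like, here is a minimal sketch of a perimeter-based aspect-ratio cost for 2-D subdomains. The particular definition (perimeter squared over sixteen times area, so a square scores 1.0) is one common choice and an assumption here, not necessarily the exact function used in the paper.

```python
# Sketch: a perimeter-based aspect-ratio cost for 2-D subdomains, one of
# several definitions used in shape-optimising partitioners (an assumption,
# not necessarily the paper's exact cost function).
import math

def aspect_ratio(area, perimeter):
    # 1.0 for a square; grows as the subdomain becomes less compact.
    return perimeter ** 2 / (16.0 * area)

def partition_cost(subdomains):
    # subdomains: list of (area, perimeter) pairs, one per processor's part.
    return sum(aspect_ratio(a, p) for a, p in subdomains)

# A refinement step would move boundary elements between subdomains whenever
# the move lowers partition_cost, mirroring cut-edge-weight refinement.
print(partition_cost([(4.0, 8.0), (4.0, 10.0)]))  # square vs. elongated part
```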

Relevance:

30.00%

Publisher:

Abstract:

Ethernet connections, which are widely used in many computer networks, can suffer from electromagnetic interference. Typically, a degradation of the data transmission rate can be perceived, as electromagnetic disturbances lead to corruption of data frames on the network medium. In this paper, a software-based measuring method is presented which allows a direct assessment of the effects on the link layer. The results can be linked directly to the physical interaction, without the influence of software-related effects on higher protocol layers. This provides a simple tool for a quantitative analysis of the disturbance of an Ethernet connection based on time-domain data. An example shows how the data can be used for further investigation of mechanisms and for detection of intentional electromagnetic attacks. © 2015 Author(s).
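
One plausible way to realize such a software-based, time-domain measurement on Linux is to sample the kernel's link-layer error counters. The sysfs counter paths below are standard Linux; the interface name, sampling rate, and binning are illustrative assumptions, not the authors' exact tool.

```python
# Sketch: sampling Linux link-layer error counters over time to obtain a
# time-domain record of frame corruption on an Ethernet interface.
import time

IFACE = "eth0"  # hypothetical interface name

def read_counter(name):
    with open(f"/sys/class/net/{IFACE}/statistics/{name}") as f:
        return int(f.read())

samples = []
prev = read_counter("rx_crc_errors")
for _ in range(600):            # 60 s at 100 ms resolution
    time.sleep(0.1)
    cur = read_counter("rx_crc_errors")
    samples.append((time.time(), cur - prev))  # corrupted frames per bin
    prev = cur

# Bursts in `samples` can then be correlated with the timing of the applied
# electromagnetic disturbance, e.g. to detect intentional attacks.
```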

Relevance:

30.00%

Publisher:

Abstract:

We analyze the behavior of solutions of the Poisson equation with homogeneous Neumann boundary conditions in a two-dimensional thin domain which presents locally periodic oscillations at the boundary. The oscillations are such that both the amplitude and period of the oscillations may vary in space. We obtain the homogenized limit problem and a corrector result by extending the unfolding operator method to the case of locally periodic media.
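
A plausible statement of the model problem, consistent with the abstract (the notation and the precise scaling of the oscillations are assumptions):

```latex
% Sketch of the model problem: a thin 2-D domain whose upper boundary
% oscillates with locally periodic amplitude and period encoded in g.
\[
R^\varepsilon = \left\{ (x_1, x_2) \in \mathbb{R}^2 :
  0 < x_1 < 1,\;
  0 < x_2 < \varepsilon\, g\!\left(x_1, \tfrac{x_1}{\varepsilon}\right) \right\},
\]
\[
\begin{cases}
-\Delta u^\varepsilon + u^\varepsilon = f^\varepsilon & \text{in } R^\varepsilon,\\[2pt]
\dfrac{\partial u^\varepsilon}{\partial \nu} = 0 & \text{on } \partial R^\varepsilon.
\end{cases}
\]
% The unfolding operator maps u^\varepsilon to a fixed reference domain,
% so that the homogenized limit problem and a corrector can be identified.
```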

Relevance:

30.00%

Publisher:

Abstract:

A simple method has recently been proposed to assess acute hydration status in humans; however, several questions remain regarding its reliability, validity, and practicality. Objective: To establish the reliability of a simple method to assess euhydration, that is, to analyze whether this method can be used as a consistent indicator of a person's hydration status. In addition, the study sought to assess the effect exercise has on urine volume when euhydration is maintained and a standardized volume of water is ingested. Methods: Five healthy, physically active men and five healthy, physically active women, 22.5 ± 2.3 years of age (mean ± standard deviation), reported to the laboratory after fasting for 10 hours or more on three occasions, each one week apart. During the two identical resting euhydration conditions (EuA and EuB), participants remained seated for 45 minutes. During the exercise condition (EuExer), participants exercised intermittently in an environmental chamber (average temperature and relative humidity = 32 ± 3°C and 65 ± 7%, respectively) for 45 minutes and drank water to offset loss due to sweating. The order of treatments was randomized. Upon finishing the treatment period, participants ingested a volume of water equivalent to 1.43% of body mass (BM) within 30 minutes. Urine was then collected and measured every 30 minutes for 3 hours. Results: Urine volume eliminated during EuExer (1205 ± 399.5 ml) was not different from EuB (1072.2 ± 413.1 ml) or EuA (1068 ± 382.87 ml) (p = 0.44). The two resting conditions were practically identical (p = 0.98) and showed a strong intraclass correlation (r = 0.849, p = 0.001). Conclusions: The method, besides being simple, proved consistent across all conditions; it can therefore be used with confidence that its measurements are valid and reliable.
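
For concreteness, the standardized ingestion volume is simple to compute; the sketch below assumes 1 g of water occupies 1 ml.

```python
# Sketch: the standardized water ingestion used in the protocol,
# 1.43% of body mass (BM), assuming 1 g of water occupies 1 ml.
def ingestion_volume_ml(body_mass_kg):
    return body_mass_kg * 1000 * 0.0143

print(ingestion_volume_ml(70.0))  # a 70 kg participant drinks ~1001 ml
```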

Relevance:

30.00%

Publisher:

Abstract:

Extracting knowledge from the transaction records and personal data of credit card holders has great profit potential for the banking industry. The challenge is to detect and predict bankrupts and to keep and recruit profitable customers. However, grouping and targeting credit card customers by traditional data-driven mining often does not directly meet the needs of the banking industry, because data-driven mining automatically generates classification outputs that are imprecise, meaningless, and beyond users' control. In this paper, we provide a novel domain-driven classification method that takes advantage of multiple-criteria and multiple-constraint-level programming for intelligent credit scoring. The method produces a set of customer scores that makes the classification results actionable and controllable through human interaction during the scoring process. Domain knowledge and experts' experience parameters are built into the criteria and constraint functions of the mathematical programming, and human-machine conversation is employed to generate an efficient and precise solution. Experiments on various data sets validated the effectiveness and efficiency of the proposed methods. © 2006 IEEE.
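
To give a flavor of this family of models, the sketch below poses a simplified multiple-criteria linear program for scoring, in the spirit of MCLP-style classifiers. The boundary value, trade-off weight, and caps are illustrative assumptions; this is a stand-in, not the paper's multiple-constraint-level formulation.

```python
# Sketch: a compromise multiple-criteria linear program for credit scoring.
# alpha_i measures overlap past the boundary b (misclassification), beta_i
# the separation from it; we trade off sum(alpha) against sum(beta).
import numpy as np
from scipy.optimize import linprog

def mclp_scores(X, y, b=1.0, lam=0.5, beta_cap=10.0):
    """X: (n, d) applicant features; y: +1 for good, -1 for bad customers."""
    n, d = X.shape
    # Decision vector: [w (d) | alpha (n) | beta (n)].
    c = np.concatenate([np.zeros(d), np.ones(n), -lam * np.ones(n)])
    # For each record i: x_i . w + y_i*alpha_i - y_i*beta_i = b.
    A_eq = np.hstack([X, np.diag(y.astype(float)), -np.diag(y.astype(float))])
    b_eq = np.full(n, b)
    bounds = [(None, None)] * d + [(0, None)] * n + [(0, beta_cap)] * n
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    w = res.x[:d]
    return X @ w  # credit scores; cut-offs on these stay user-adjustable
```

Keeping the scores continuous, rather than emitting hard class labels, is what leaves the final classification under the analyst's control, which is the point the abstract emphasizes.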

Relevance:

30.00%

Publisher:

Abstract:

The complex architecture of many fibre-reinforced composites makes the generation of finite element meshes a labour-intensive process. The embedded element method, which allows the matrix and fibre reinforcement to be meshed separately, offers a computationally efficient approach to reduce the time and cost of meshing. In this paper we present a new approach of introducing cohesive elements into the matrix domain to enable the prediction of matrix cracking using the embedded element method. To validate this approach, experiments were carried out using a modified Double Cantilever Beam with ply drops, with the results being compared with model predictions. Crack deflection was observed at the ply drop region, due to the differences in stiffness, strength and toughness at the bi-material interface. The new modelling technique yields accurate predictions of the failure process in composites, including fracture loads and crack deflection path.
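
The behavior of such cohesive elements is governed by a traction-separation law. The sketch below shows a generic bilinear law with illustrative parameter values; these are assumptions, not the calibrated values or the exact law used in the paper.

```python
# Sketch: a bilinear traction-separation law of the kind assigned to
# cohesive elements for matrix cracking (illustrative parameters).
def bilinear_traction(delta, K=1e6, t_max=60.0, G_c=0.5):
    """delta: opening (mm); K: stiffness (MPa/mm); t_max: strength (MPa);
    G_c: fracture toughness (N/mm). Returns traction (MPa)."""
    delta_0 = t_max / K              # damage-initiation opening
    delta_f = 2.0 * G_c / t_max      # complete-failure opening
    if delta <= delta_0:
        return K * delta             # linear elastic branch
    if delta >= delta_f:
        return 0.0                   # fully cracked, traction-free
    # Linear softening between initiation and failure.
    return t_max * (delta_f - delta) / (delta_f - delta_0)
```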

Relevance:

30.00%

Publisher:

Abstract:

Neuroimaging studies have shown neuromuscular electrical stimulation (NMES)-evoked movements activate regions of the cortical sensorimotor network, including the primary sensorimotor cortex (SMC), premotor cortex (PMC), supplementary motor area (SMA), and secondary somatosensory area (S2), as well as regions of the prefrontal cortex (PFC) known to be involved in pain processing. The aim of this study, on nine healthy subjects, was to compare the cortical network activation profile and pain ratings during NMES of the right forearm wrist extensor muscles at increasing current intensities up to and slightly over the individual maximal tolerated intensity (MTI), and with reference to voluntary (VOL) wrist extension movements. By exploiting the capability of the multi-channel time domain functional near-infrared spectroscopy technique to relate depth information to the photon time-of-flight, the cortical and superficial oxygenated (O2Hb) and deoxygenated (HHb) hemoglobin concentrations were estimated. The O2Hb and HHb maps obtained using the General Linear Model (NIRS-SPM) analysis method, showed that the VOL and NMES-evoked movements significantly increased activation (i.e., increase in O2Hb and corresponding decrease in HHb) in the cortical layer of the contralateral sensorimotor network (SMC, PMC/SMA, and S2). However, the level and area of contralateral sensorimotor network (including PFC) activation was significantly greater for NMES than VOL. Furthermore, there was greater bilateral sensorimotor network activation with the high NMES current intensities which corresponded with increased pain ratings. In conclusion, our findings suggest that greater bilateral sensorimotor network activation profile with high NMES current intensities could be in part attributable to increased attentional/pain processing and to increased bilateral sensorimotor integration in these cortical regions.
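
At its core, a GLM activation estimate of the kind NIRS-SPM produces regresses each channel's O2Hb time course on a task regressor. The sketch below uses a plain boxcar design and ordinary least squares as a simplified stand-in for the package's full pipeline; the regressor shape and names are assumptions.

```python
# Sketch: a GLM fit of one channel's O2Hb time course against a task
# regressor, the core operation behind NIRS-SPM-style activation maps.
import numpy as np

def glm_activation(o2hb, onsets, fs, dur):
    """o2hb: (T,) channel time series; onsets: block start times (s);
    fs: sampling rate (Hz); dur: block duration (s)."""
    T = o2hb.size
    boxcar = np.zeros(T)
    for t0 in onsets:                           # 1 during NMES/VOL blocks
        boxcar[int(t0 * fs):int((t0 + dur) * fs)] = 1.0
    # Convolving the boxcar with a hemodynamic response function would
    # refine this; omitted here for brevity.
    X = np.column_stack([boxcar, np.ones(T)])   # task regressor + baseline
    beta, *_ = np.linalg.lstsq(X, o2hb, rcond=None)
    return beta[0]   # positive beta ~ task-related increase in O2Hb
```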

Relevance:

30.00%

Publisher:

Abstract:

We introduce Neural Choice by Elimination, a new framework that integrates deep neural networks into probabilistic sequential choice models for learning to rank. Given a set of items to choose from, the elimination strategy starts with the whole item set and iteratively eliminates the least worthy item from the remaining subset. We prove that choice by elimination is equivalent to marginalizing out random Gompertz latent utilities. Coupled with the choice model are the recently introduced Neural Highway Networks for approximating arbitrarily complex rank functions. We evaluate the proposed framework on a large-scale public dataset with over 425K items, drawn from the Yahoo! learning-to-rank challenge. The proposed method is demonstrated to be competitive against state-of-the-art learning-to-rank methods.
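
Read as a reverse Plackett-Luce model, the elimination likelihood is easy to sketch: at each step, the probability of removing item j from the remaining set is proportional to exp(-utility_j). The code below is such a sketch; the neural network that would produce the utilities, and any exact correspondence to the authors' implementation, are assumptions.

```python
# Sketch: the sequential elimination log-likelihood, where at each step the
# least worthy remaining item is removed with probability proportional to
# exp(-utility).
import numpy as np

def elimination_log_likelihood(utilities, elimination_order):
    """utilities: (n,) scores from the rank function (e.g. a highway net);
    elimination_order: item indices, least worthy first."""
    remaining = list(range(len(utilities)))
    ll = 0.0
    for j in elimination_order:
        w = np.exp(-utilities[remaining])       # elimination weights
        ll += -utilities[j] - np.log(w.sum())   # log P(eliminate j | S)
        remaining.remove(j)
    return ll
```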