839 results for Polynomial Classifier
Abstract:
All-optical label swapping (AOLS) is a key technology towards the implementation of all-optical packet switching (AOPS) nodes for the future optical Internet. The capital expenditure of deploying AOLS increases with the size of the label space (i.e., the number of labels in use), since a special optical device is needed for each label recognized at every node. Label space sizes are affected by the way in which demands are routed: for instance, shortest-path routing leads to the usage of fewer labels but high link utilization, whereas minimum-interference routing leads to the opposite. This paper studies all-optical label stacking (AOLStack), an extension of the AOLS architecture. AOLStack aims at reducing label spaces while easing the trade-off with link utilization. An integer linear program is proposed with the objective of analyzing the softening of the aforementioned trade-off due to AOLStack, and a heuristic aiming at finding good solutions in polynomial time is proposed as well. Simulation results show that AOLStack either (a) reduces the label spaces with only a low increase in link utilization or, equivalently, (b) makes better use of the residual bandwidth to decrease the number of labels even further.
Abstract:
Remote sensing and geographical information technologies were used to discriminate areas of high and low risk for contracting kala-azar, or visceral leishmaniasis. Satellite data were digitally processed to generate maps of land cover and spectral indices, such as the normalised difference vegetation index and the wetness index. To map estimated vector abundance and indoor climate data, local polynomial interpolations based on weighted values were used. Attribute layers were prepared from illiteracy rates and the unemployed proportion of the population and associated with village boundaries. Pearson's correlation coefficient was used to estimate the relationship between environmental variables and disease incidence across the study area. The cells of each input raster in the analysis were assigned values from a common evaluation scale, and simple weightings/ratings based on the degree of favourable conditions for kala-azar transmission were applied to all the variables, leading to a geo-environmental risk model. Variables such as land use/land cover, vegetation conditions, surface dampness, the indoor climate, illiteracy rates and the size of the unemployed population were considered for inclusion in the geo-environmental kala-azar risk model. The risk model was stratified into areas of "risk" and "non-risk" for the disease, based on calculation of risk indices. The described approach constitutes a promising tool for microlevel kala-azar surveillance and aids in directing control efforts.
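A weighted overlay of this kind is straightforward to express in code. The sketch below is a minimal numpy illustration; the layer names, weights, rating scale and risk threshold are assumptions for demonstration, not values from the study.

```python
import numpy as np

# Hypothetical input rasters, each already rated on a common 1-5
# favourability scale for transmission (5 = most favourable).
shape = (100, 100)
rng = np.random.default_rng(0)
layers = {
    "land_cover":     rng.integers(1, 6, shape),
    "ndvi":           rng.integers(1, 6, shape),
    "wetness":        rng.integers(1, 6, shape),
    "indoor_climate": rng.integers(1, 6, shape),
    "illiteracy":     rng.integers(1, 6, shape),
    "unemployment":   rng.integers(1, 6, shape),
}

# Assumed relative weights for each variable (sum to 1).
weights = {
    "land_cover": 0.25, "ndvi": 0.20, "wetness": 0.20,
    "indoor_climate": 0.15, "illiteracy": 0.10, "unemployment": 0.10,
}

# Weighted sum of the rated layers gives a continuous risk index.
risk_index = sum(weights[name] * layers[name].astype(float)
                 for name in layers)

# Stratify into "risk" / "non-risk" with an assumed threshold.
risk_mask = risk_index >= 3.0
print(f"cells classified as risk: {risk_mask.mean():.1%}")
```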
Abstract:
BACKGROUND Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in Alzheimer's Disease (AD) diagnosis. However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) systems. METHODS A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to lie within a predefined brain activation mask. In order to address the small sample-size problem, the dimension of the feature space is further reduced by Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the latter two also analysed with an LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics are compared. RESULTS Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) a linear transformation of the PLS- or PCA-reduced data, ii) a feature reduction technique, and iii) a classifier (with Euclidean, Mahalanobis or Energy-based methodology). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when the NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. CONCLUSIONS All the proposed methods turned out to be valid solutions for the presented problem. One advance is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also (in combination with NMSE and PLS) makes this rate more stable. Another is its generalization ability, demonstrated by experiments on two image modalities (SPECT and PET).
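As an illustration of the core of such a pipeline, the sketch below combines PLS-based feature reduction with a kernel SVM under k-fold cross-validation using scikit-learn; the data are synthetic, and the LMNN step is omitted since it has no standard scikit-learn implementation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

# Hypothetical data: rows = subjects, columns = NMSE features from ROIs.
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 500))        # 90 subjects, 500 ROI features
y = rng.integers(0, 2, size=90)       # 0 = control, 1 = AD

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
accs = []
for train, test in cv.split(X, y):
    # PLS projects the features onto a few components that covary
    # with the class label (mitigating the small-sample-size problem).
    pls = PLSRegression(n_components=10)
    pls.fit(X[train], y[train])
    Xtr, Xte = pls.transform(X[train]), pls.transform(X[test])

    # Kernel SVM on the reduced representation.
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Xtr, y[train])
    accs.append(accuracy_score(y[test], clf.predict(Xte)))

print(f"mean CV accuracy: {np.mean(accs):.3f}")
```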
Process and control technologies for reducing the salt content of dry-cured ham and dry sausages
Abstract:
In some European countries, processed meat products can account for nearly 20% of daily sodium intake. The meat industry is therefore trying to reduce the salt content of meat products in order to meet both consumer expectations and the demands of health authorities. The Quick-Dry-Slice process (QDS®), combined with the use of salts substituting for sodium chloride (NaCl), has made it possible to successfully manufacture low-salt fermented sausages with a shorter production cycle and without adding extra NaCl. Non-destructive on-line measurement technologies, such as X-rays and electromagnetic induction, make it possible to classify fresh hams according to their fat content, a crucial parameter for adjusting the duration of the salting stage. X-ray technology can also be used to estimate the amount of salt taken up during salting. Information on salt and fat content is important for optimizing the dry-cured ham production process, both by reducing the variability of the salt content between and within batches and by lowering the salt content of the final product. Other technologies such as near-infrared spectroscopy (NIRS) or microwave spectroscopy are also useful for controlling the production process and for characterizing and classifying processed meat products according to their salt content. Most of these technologies can easily be applied on-line in industry to control the manufacturing process and thus obtain meat products with the desired characteristics.
Abstract:
Exposure to fine particles and noise has been linked to cardiovascular diseases and elevated cardiovascular mortality in populations worldwide. Residence and/or work in proximity to emission sources such as road traffic leads to elevated exposure and a higher risk of adverse health effects. Highway maintenance workers spend most of their work time in traffic and are regularly exposed to particles and noise. The aims of this thesis were to provide a better understanding of the workers' mixed exposure to particles and noise and to assess short-term cardiopulmonary health effects in relation to this exposure. Exposure and health data were collected in collaboration with 8 maintenance centers of the Swiss Road Maintenance Services located in the cantons of Bern, Fribourg and Vaud in western Switzerland. Repeated measurements with 18 subjects were conducted during 50 non-consecutive work shifts between May 2010 and February 2012, distributed equally over all seasons. In the first part of this thesis we tested and validated measurements of ultrafine particles with a miniature diffusion size classifier (miniDiSC), a novel particle counting device that was used for the exposure assessment during highway maintenance work. We found that particle numbers and average particle size measured by the miniDiSC were highly correlated with data from the P-TRAK, a condensation particle counter (CPC), as well as from a scanning mobility particle sizer (SMPS). However, the miniDiSC counted significantly more particles than the P-TRAK and significantly fewer than the SMPS over its full size range. Our data suggest that instrument-specific cutoffs were the main reason for the differing particle counts. The first main objective of this thesis was to investigate the exposure of highway maintenance workers to air pollutants and noise in relation to the different maintenance activities. We found that the workers are regularly exposed to high particle and noise levels. This was a consequence of close proximity to highway traffic and the use of motorized working equipment such as brush cutters, chain saws, generators and pneumatic hammers, during which the highest exposure levels occurred. Although exposure to air pollutants was not critical compared with occupational exposure limits, the elevated exposure to particles and noise may lead to a higher risk of cardiovascular disease in this worker population. The second main objective was to investigate short-term cardiopulmonary health effects in relation to particle and noise exposure during highway maintenance work. We observed a PM2.5-related increase of the acute-phase inflammation markers C-reactive protein and serum amyloid A and a decrease of TNF-α. Heart rate variability increased as a consequence of both particle and noise exposure. Increased high-frequency power indicated a stronger parasympathetic influence on the heart. Elevated noise levels during recreational time after work were related to increased blood pressure. Our data confirmed that highway maintenance workers are exposed to elevated levels of particles and noise compared with the general population. This exposure poses a cardiovascular health risk, and it is therefore important to make efforts to better protect the workers' health. The use of cleaner machines during maintenance work would be a major step towards improving the workers' situation. Furthermore, regulatory policies aimed at reducing combustion and non-combustion emissions from road traffic are important for the protection of workers in traffic environments and of the entire population.
Abstract:
This contribution compares existing and newly developed techniques for geometrically representing mean-variance-skewness portfolio frontiers, based on the rather widely adopted methodology of polynomial goal programming (PGP) on the one hand and the more recent approach based on the shortage function on the other. Moreover, we explain the working of these different methodologies in detail and provide graphical illustrations. Inspired by these illustrations, we prove a generalization of the well-known two-fund separation theorem from traditional mean-variance portfolio theory.
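For reference, a generic polynomial goal programming model for the mean-variance-skewness problem can be written as below; the goal levels M*, V*, S* are the separately optimized objectives and the exponents p, q, r encode investor preferences. This is a textbook-style sketch, not necessarily the exact formulation compared in the paper.

```latex
% Generic mean-variance-skewness PGP sketch (assumed form).
% x : portfolio weights;  M*, V*, S* : aspired mean, variance, skewness;
% d1, d2, d3 : nonnegative goal deviations;  p, q, r : preference exponents.
\begin{aligned}
\min_{x,\,d}\quad & d_1^{\,p} + d_2^{\,q} + d_3^{\,r} \\
\text{s.t.}\quad  & \mathbb{E}\!\left[R(x)\right] + d_1 = M^{*}, \\
                  & \operatorname{Var}\!\left[R(x)\right] - d_2 = V^{*}, \\
                  & \operatorname{Skew}\!\left[R(x)\right] + d_3 = S^{*}, \\
                  & \mathbf{1}^{\top} x = 1, \qquad d_1, d_2, d_3 \ge 0 .
\end{aligned}
```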
Abstract:
We implement the GA-SEFS algorithm by Tsymbal and experimentally analyse its performance depending on the classifier algorithm used in the fitness function (NB, MNge, SMO). We also study the effect of adding to the fitness function a measure that controls the complexity of the base classifiers.
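One way to picture such a fitness function is as the cross-validated accuracy of the base classifier on the selected feature subset minus a complexity penalty. The sketch below illustrates this idea with a Naive Bayes base learner on synthetic data; the penalty form and its weight are assumptions, not Tsymbal's actual definition.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y, lam=0.01):
    """Fitness of one GA individual: a boolean mask over features.

    Cross-validated accuracy of the base classifier on the selected
    features, minus an assumed complexity penalty proportional to the
    number of features kept (lam is a free parameter).
    """
    if not mask.any():
        return 0.0
    acc = cross_val_score(GaussianNB(), X[:, mask], y, cv=5).mean()
    return acc - lam * mask.sum()

# Toy usage with random data and a random population of masks.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.integers(0, 2, size=200)
population = rng.random((20, 30)) < 0.5
scores = [fitness(ind, X, y) for ind in population]
print(f"best fitness in initial population: {max(scores):.3f}")
```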
Abstract:
OBJECTIVE: Previous research suggested that proper blood pressure (BP) management in acute stroke may need to take into account the underlying etiology. METHODS: All patients with acute ischemic stroke registered in the ASTRAL registry between 2003 and 2009 were analyzed. Unfavorable outcome was defined as a modified Rankin Scale score >2. A local polynomial surface algorithm was used to assess the effect of baseline and 24- to 48-hour systolic BP (SBP) and mean arterial pressure (MAP) on outcome in patients with lacunar, atherosclerotic, and cardioembolic stroke. RESULTS: A total of 791 patients were included in the analysis. For lacunar and atherosclerotic strokes, there was no difference in the predicted probability of unfavorable outcome between patients with an admission BP of <140 mm Hg, 140-160 mm Hg, or >160 mm Hg (15.3% vs 12.1% vs 20.8%, respectively, for lacunar, p = 0.15; 41.0% vs 41.5% vs 45.5%, respectively, for atherosclerotic, p = 0.75), or between patients with a BP increase vs decrease at 24-48 hours (18.7% vs 18.0%, respectively, for lacunar, p = 0.84; 43.4% vs 43.6%, respectively, for atherosclerotic, p = 0.88). For cardioembolic strokes, an increase of BP at 24-48 hours was associated with a higher probability of unfavorable outcome compared to BP reduction (53.4% vs 42.2%, respectively, p = 0.037). Also, the predicted probability of unfavorable outcome was significantly different between patients with an admission BP of <140 mm Hg, 140-160 mm Hg, and >160 mm Hg (34.8% vs 42.3% vs 52.4%, respectively, p < 0.01). CONCLUSIONS: This study provides evidence that BP management in acute stroke may have to be tailored with respect to the underlying etiopathogenetic mechanism.
Abstract:
A new ambulatory method for monitoring physical activities in Parkinson's disease (PD) patients is proposed, based on a portable data-logger with three body-fixed inertial sensors. A group of ten PD patients treated with subthalamic nucleus deep brain stimulation (STN-DBS) and ten normal control subjects followed a protocol of typical daily activities, and the whole measurement period was recorded on video. Walking periods were recognized using two sensors on the shanks, and lying periods were detected using a sensor on the trunk. By calculating kinematic features of the trunk movements during the transitions between sitting and standing postures and using a statistical classifier, sit-to-stand (SiSt) and stand-to-sit (StSi) transitions were detected and separated from other body movements. Finally, a fuzzy classifier used this information to detect periods of sitting and standing. The proposed method showed high sensitivity and specificity for the detection of the basic body postures sitting, standing, lying, and walking, both in PD patients and in healthy subjects. We found significant differences in parameters related to SiSt and StSi transitions between PD patients and controls, and also between PD patients with and without STN-DBS turned on. We conclude that our method provides a simple, accurate, and effective means to objectively quantify physical activities in both normal subjects and PD patients and may prove useful to assess the level of motor function in the latter.
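As a rough illustration of the kind of kinematic logic involved, the sketch below labels lying versus upright posture from a trunk-inclination signal and flags candidate SiSt/StSi transitions from its rate of change; the signal, thresholds and sampling rate are illustrative assumptions, not the paper's calibrated method.

```python
import numpy as np

FS = 40  # assumed sampling rate of the trunk sensor, Hz

def classify_trunk(incl_deg):
    """Toy posture logic on a trunk-inclination signal (degrees from
    vertical). Returns a label per sample plus candidate transition
    indices; all thresholds are illustrative assumptions."""
    labels = np.where(incl_deg > 60, "lying", "upright")
    # A fast, large change in inclination marks a candidate
    # sit-to-stand or stand-to-sit transition.
    rate = np.gradient(incl_deg) * FS            # deg/s
    candidates = np.flatnonzero(np.abs(rate) > 30)
    return labels, candidates

# Synthetic 10 s recording: upright, brief transition, then lying.
t = np.arange(0, 10, 1 / FS)
incl = np.clip((t - 4) * 45, 0, 80)              # ramps 0 deg -> 80 deg
labels, candidates = classify_trunk(incl)
print(labels[0], labels[-1], f"{len(candidates)} transition samples")
```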
Abstract:
Nowadays, the joint exploitation of images acquired daily by remote sensing instruments and of images available from archives allows a detailed monitoring of the transitions occurring at the surface of the Earth. These modifications of the land cover generate spectral discrepancies that can be detected via the analysis of remote sensing images. Independently of the origin of the images and of the type of surface change, correct processing of such data implies the adoption of flexible, robust and possibly nonlinear methods, to correctly account for the complex statistical relationships characterizing the pixels of the images. This thesis deals with the development and application of advanced statistical methods for multi-temporal optical remote sensing image processing tasks. Three different families of machine learning models have been explored, and fundamental solutions for change detection problems are provided. In the first part, change detection with user supervision is considered. In a first application, a nonlinear classifier is applied with the intent of precisely delineating flooded regions from a pair of images. In a second case study, the spatial context of each pixel is injected into another nonlinear classifier to obtain a precise mapping of new urban structures. In both cases, the user provides the classifier with examples of what they believe has or has not changed. In the second part, a completely automatic and unsupervised method for precise binary detection of changes is proposed. The technique allows a very accurate mapping without any user intervention, which is particularly useful when the readiness and reaction time of the system are crucial constraints. In the third part, the problem of statistical distributions shifting between acquisitions is studied. Two approaches are investigated that transform the pair of bi-temporal images so as to reduce their differences unrelated to changes in land cover. The methods align the distributions of the images, so that the pixel-wise comparison can be carried out with higher accuracy. Furthermore, the second method can deal with images from different sensors, regardless of the dimensionality of the data or the spectral information content. This opens the door to possible solutions for a crucial problem in the field: detecting changes when the images have been acquired by two different sensors.
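A baseline example of such distribution alignment is per-band histogram matching, sketched below on synthetic data; this is a common reference technique, not necessarily the transformation developed in the thesis.

```python
import numpy as np

def histogram_match(source, reference):
    """Map `source` values so their empirical distribution matches
    that of `reference` (single band, baseline technique)."""
    src = source.ravel()
    s_vals, s_idx, s_cnt = np.unique(src, return_inverse=True,
                                     return_counts=True)
    r_vals, r_cnt = np.unique(reference, return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    # For each source quantile, take the reference value sitting at
    # the same quantile of its own distribution.
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)

# Two synthetic acquisitions of the same scene; t2 carries a global
# illumination shift unrelated to land-cover change.
rng = np.random.default_rng(0)
img_t1 = rng.normal(100, 20, (64, 64))
img_t2 = img_t1 * 1.3 + 15
aligned = histogram_match(img_t2, img_t1)
print(f"mean gap before: {abs(img_t2.mean() - img_t1.mean()):.1f}, "
      f"after: {abs(aligned.mean() - img_t1.mean()):.1f}")
```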
Abstract:
BACKGROUND & AIMS: The host immune response during the chronic phase of hepatitis C virus infection varies among individuals; some patients have a no interferon (IFN) response in the liver, whereas others have full activation IFN-stimulated genes (ISGs). Preactivation of this endogenous IFN system is associated with nonresponse to pegylated IFN-α (pegIFN-α) and ribavirin. Genome-wide association studies have associated allelic variants near the IL28B (IFNλ3) gene with treatment response. We investigated whether IL28B genotype determines the constitutive expression of ISGs in the liver and compared the abilities of ISG levels and IL28B genotype to predict treatment outcome. METHODS: We genotyped 109 patients with chronic hepatitis C for IL28B allelic variants and quantified the hepatic expression of ISGs and of IL28B. Decision tree ensembles, in the form of a random forest classifier, were used to calculate the relative predictive power of these different variables in a multivariate analysis. RESULTS: The minor IL28B allele was significantly associated with increased expression of ISG. However, stratification of the patients according to treatment response revealed increased ISG expression in nonresponders, irrespective of IL28B genotype. Multivariate analysis of ISG expression, IL28B genotype, and several other factors associated with response to therapy identified ISG expression as the best predictor of treatment response. CONCLUSIONS: IL28B genotype and hepatic expression of ISGs are independent predictors of response to treatment with pegIFN-α and ribavirin in patients with chronic hepatitis C. The most accurate prediction of response was obtained with a 4-gene classifier comprising IFI27, ISG15, RSAD2, and HTATIP2.
Abstract:
Drawing on the notion of ideological prejudice, this work examines the manifestation of a belief that opposes culture to nature in the classification and evaluation of individuals. We propose that this belief manifests itself through the attribution of specific traits to groups (cultural and natural traits) and that its function is to justify the supremacy of the white Western bourgeois man over others. We thus address the perception of several ethnic groups by Swiss individuals. Our work is organized in three parts. The first part presents an exploratory study whose objective is to delineate the phenomena under investigation. The results show that the attribution of positive cultural traits to ethnic groups is relatively independent of the attribution of positive natural traits to them: the groups perceived as the most cultural are also perceived as the most natural. Moreover, the attribution of positive natural traits seems to underlie a favourable attitude towards the groups. The second part revisits the criteria identified by the notions of culture and nature. Studies 2, 3 and 4 showed that there is a continuum in the meaning of traits with respect to the human being and the animal. This led us to select traits attributed only to human beings (culture) and traits attributed more to animals than to human beings (nature). Studies 5 and 6 of the third part show that, where ethnic groups are concerned, the dominant ingroup and its allies are associated with positive culture, whereas specific outgroups are associated with positive nature (outgroups subject to paternalism). Study 7 confirms the results concerning the dominant ingroup and its allies with fictitious groups, and shows that members of the dominant group use the notion of positive culture to rank groups hierarchically. The attribution of positive nature is not taken into account when ranking fictitious groups. To conclude, the studies show that there is no opposition between (positive) culture and nature: members of the dominant ethnic group use the notion of culture to classify and evaluate individuals on a hierarchy of values. The notion of nature is not used to rank groups, but it identifies specific outgroups.
Abstract:
In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Existing polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the best of the highest scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that such a highest scoring gene can be stored and updated; this requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is given externally in the so-called Gene Model, which simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem; in particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
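Under simplifying assumptions (one strand, no frame constraints, compatibility reduced to non-overlap, no Gene Model), the linear-time scan described above can be sketched as follows; this is an illustration of the stored-and-updated running maximum, not the authors' full algorithm.

```python
def best_gene_score(exons):
    """Highest score of a chain of non-overlapping exons.

    exons: list of (acceptor, donor, score) with acceptor <= donor.
    After sorting, a single simultaneous scan by acceptor and by
    donor position keeps one running best, so the core loop is
    linear in the number of exons.
    """
    n = len(exons)
    by_acc = sorted(range(n), key=lambda i: exons[i][0])
    by_don = sorted(range(n), key=lambda i: exons[i][1])

    best_ending = [0.0] * n   # best gene score ending at exon i
    best_closed = 0.0         # best gene whose last donor was passed
    j = 0
    for i in by_acc:
        acc, don, score = exons[i]
        # Exons whose donor lies strictly before this acceptor can
        # legally precede it; fold them into the running maximum.
        while j < n and exons[by_don[j]][1] < acc:
            best_closed = max(best_closed, best_ending[by_don[j]])
            j += 1
        best_ending[i] = best_closed + score
    return max(best_ending, default=0.0)

# Toy example: three exons, the middle one overlaps the first.
exons = [(10, 50, 3.0), (40, 90, 5.0), (60, 120, 4.0)]
print(best_gene_score(exons))   # 7.0: first + third exon
```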
Abstract:
Background: We address the problem of studying recombinational variation in (human) populations. In this paper, our focus is on one computational aspect of the general task: given two networks G1 and G2, with both mutation and recombination events, defined on overlapping sets of extant units, the objective is to compute a consensus network G3 with the minimum number of additional recombinations. We describe a polynomial time algorithm with a guarantee that the number of computed new recombination events is within ϵ = sz(G1, G2) (where sz is a well-behaved function of the sizes and topologies of G1 and G2) of the optimal number of recombinations. To date, this is the best known result for a network consensus problem. Results: Although the network consensus problem can be applied to a variety of domains, here we focus on the structure of human populations. With our preliminary analysis of a segment of the human Chromosome X data, we are able to infer ancient recombinations, population-specific recombinations and more, which also support the widely accepted 'Out of Africa' model. These results have been verified independently using traditional manual procedures. To the best of our knowledge, this is the first recombinations-based characterization of human populations. Conclusion: We show that our mathematical model identifies recombination spots in individual haplotypes; the aggregate of these spots over a set of haplotypes defines a recombinational landscape with enough signal to detect the continental as well as the population divide based on a short segment of Chromosome X. The agreement with mutation-based analysis can be viewed as an indirect validation of our results and the model. Since the model in principle gives us more information embedded in the networks, in our future work we plan to investigate more non-traditional questions via these structures computed by our methodology.