936 results for Satellite selection algorithm
Abstract:
A combined strategy based on the computation of absorption energies, using the ZINDO/S semiempirical method, for a statistically relevant number of thermally sampled configurations extracted from QM/MM trajectories is used to establish a one-to-one correspondence between the structures of the different early intermediates (dark, batho, BSI, lumi) involved in the initial steps of the rhodopsin photoactivation mechanism and their optical spectra. A systematic analysis of the results based on a correlation-based feature selection algorithm shows that the origin of the color shifts among these intermediates can be mainly ascribed to alterations in intrinsic properties of the chromophore structure, which are tuned by several residues located in the protein binding pocket. In addition to the expected electrostatic and dipolar effects caused by the charged residues (Glu113, Glu181) and to strong hydrogen bonding with Glu113, other interactions such as π-stacking with Ala117 and Thr118 backbone atoms, van der Waals contacts with Gly114 and Ala292, and CH/π weak interactions with Tyr268, Ala117, Thr118, and Ser186 side chains are found to make non-negligible contributions to the modulation of the color tuning among the different rhodopsin photointermediates.
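As a rough illustration of the correlation-based feature selection step mentioned above, the Python sketch below implements the standard CFS merit with a greedy forward search. It is not the authors' implementation; the inputs are hypothetical (a matrix X of structural descriptors per sampled configuration and a vector y of computed absorption energies).

```python
import numpy as np

def cfs_merit(X, y, subset):
    """CFS merit: mean feature-target correlation over mean feature-feature correlation."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        r_ff = 0.0
    else:
        r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                        for i, a in enumerate(subset) for b in subset[i + 1:]])
    return (k * r_cf) / np.sqrt(k + k * (k - 1) * r_ff)

def cfs_forward_selection(X, y, max_features=10):
    """Greedily add the feature that most increases the CFS merit; stop when no gain."""
    selected, remaining, best_merit = [], list(range(X.shape[1])), -np.inf
    while remaining and len(selected) < max_features:
        merit, j = max((cfs_merit(X, y, selected + [j]), j) for j in remaining)
        if merit <= best_merit:
            break
        best_merit = merit
        selected.append(j)
        remaining.remove(j)
    return selected
```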
Abstract:
We investigate the relevance of morphological operators for the classification of land use in urban scenes using submetric panchromatic imagery. A support vector machine is used for the classification. Six types of filters were employed: opening and closing, opening and closing by reconstruction, and opening and closing top hat. The type and scale of the filters are discussed, and a feature selection algorithm called recursive feature elimination is applied to decrease the dimensionality of the input data. The analysis performed on two QuickBird panchromatic images showed that simple opening and closing operators are the most relevant for classification at such a high spatial resolution. Moreover, mixed sets combining simple and reconstruction filters provided the best performance. Tests performed on both images, covering areas characterized by different architectural styles, yielded similar results for both feature selection and classification accuracy, suggesting that the highlighted feature sets generalize.
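A minimal sketch of recursive feature elimination wrapped around an SVM, using scikit-learn; the arrays X and y below are placeholders standing in for the morphological-filter features and land-use labels, not the actual QuickBird data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: pixels (or objects) x morphological features at several scales; y: land-use labels.
X = np.random.rand(500, 30)
y = np.random.randint(0, 4, size=500)

# RFE repeatedly drops the features with the smallest linear-SVM weights; the reduced
# feature set is then fed to the (possibly non-linear) classifier.
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=10, step=2)
clf = make_pipeline(StandardScaler(), selector, SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:5]))
```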
Abstract:
Background: To analyse the extent and profile of outpatient regular dispensation of antipsychotics, both in combination and in monotherapy, in the Barcelona Health Region (Spain), focusing on the use of clozapine and long-acting injections (LAI). Methods: Antipsychotic drugs dispensed to people older than 18 and processed by the Catalan Health Service during 2007 were retrospectively reviewed. First- and second-generation antipsychotic drugs (FGA and SGA) under Anatomical Therapeutic Chemical classification (ATC) code N05A (except lithium) were included. A patient selection algorithm was designed to identify prescriptions regularly dispensed. Variables included were age, gender, antipsychotic type, route of administration and number of packages dispensed. Results: A total of 117,811 patients were given any antipsychotic, of whom 71,004 regularly received such drugs. Among the latter, 9,855 (13.9%) corresponded to an antipsychotic combination, 47,386 (66.7%) to monotherapy and 13,763 (19.4%) to unspecified combinations. Of the patients given antipsychotics in combination, 58% were men. Olanzapine (37.1%) and oral risperidone (36.4%) were the most common dispensations. Analysis of the patients dispensed two antipsychotics (57.8%) revealed 198 different combinations, the most frequent being an FGA combined with an SGA (62.0%). Clozapine was dispensed to 2.3% of patients. Of those receiving antipsychotics in combination, 6.6% were given clozapine, with clozapine plus amisulpride being the most frequent combination (22.8%). A total of 3,800 patients (5.4%) were given LAI antipsychotics, and 2,662 of these (70.1%) received them in combination. Risperidone was the most widely used LAI. Conclusions: The scant evidence available regarding the efficacy of combining different antipsychotics contrasts with the high number and variety of combinations prescribed to outpatients, as well as with the limited use of clozapine.
Abstract:
Human beings have always strived to preserve their memories and spread their ideas. In the beginning this was done through human interpretation, such as telling stories and creating sculptures. Later, technological progress made it possible to create a recording of a phenomenon; first as an analogue recording onto a physical object, and later digitally, as a sequence of bits to be interpreted by a computer. By the end of the 20th century, technological advances had made it feasible to distribute media content over a computer network instead of on physical objects, thus enabling the concept of digital media distribution. Many digital media distribution systems already exist, and their continued, and in many cases increasing, usage is an indicator of the high interest in their future enhancement and enrichment. By looking at these digital media distribution systems, we have identified three main areas of possible improvement: network structure and coordination, transport of content over the network, and the encoding used for the content. In this thesis, our aim is to show that improvements in performance, efficiency and availability can be made in conjunction with improvements in software quality and reliability through the use of formal methods: mathematical approaches to reasoning about software that allow us to prove its correctness, together with other desirable properties. We envision a complete media distribution system based on a distributed architecture, such as peer-to-peer networking, in which different parts of the system have been formally modelled and verified. Starting with the network itself, we show how it can be formally constructed and modularised in the Event-B formalism, such that the modelling of one node can be separated from the modelling of the network itself. We also show how the piece selection algorithm in the BitTorrent peer-to-peer transfer protocol can be adapted for on-demand media streaming, and how this can be modelled in Event-B. Furthermore, we show how modelling one peer in Event-B can give results similar to simulating an entire network of peers. Going further, we introduce a formal specification language for content transfer algorithms, and show that having such a language can make these algorithms easier to understand. We also show how generating Event-B code from this language can result in less complexity compared to creating the models from written specifications. We also consider the decoding part of a media distribution system by showing how video decoding can be done in parallel, based on formally defined dependencies between frames and blocks in a video sequence; we show that this step, too, can be performed in a way that is mathematically proven correct. Most of the modelling and proving in this thesis is tool-based. This demonstrates the advance of formal methods as well as their increased reliability, and thus advocates for their more widespread usage in the future.
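As an illustration of how BitTorrent's piece selection can be biased towards on-demand streaming, the sketch below requests pieces inside a playback window in order and falls back to rarest-first elsewhere. It is a simplified assumption, not the Event-B model developed in the thesis.

```python
import random

def select_piece(missing, availability, playback_pos, window=16):
    """Pick the next piece to request.

    Pieces inside the playback window are fetched in order (to meet playback
    deadlines); outside the window we fall back to plain rarest-first, as in BitTorrent.
    """
    urgent = [p for p in missing if playback_pos <= p < playback_pos + window]
    if urgent:
        return min(urgent)                      # earliest deadline first
    if not missing:
        return None
    rarity = min(availability[p] for p in missing)
    rarest = [p for p in missing if availability[p] == rarity]
    return random.choice(rarest)                # break ties randomly, as BitTorrent does
```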
Abstract:
Optical burst switching (OBS) networks are candidates to play an important role in next-generation optical networks. In this thesis, we address adaptive routing and quality-of-service provisioning in this type of network. In the first part of the thesis, we examine the ability of multipath routing and alternative (deflection) routing to improve the performance of OBS networks, proactively for the former and reactively for the latter. In this context, we propose an approach based on reinforcement learning in which agents placed at every node of the network cooperate to continuously learn the optimal routing paths and alternative paths according to the current state of the network. Numerical results show that this approach improves the performance of OBS networks compared with the solutions proposed in the literature. In the second part of the thesis, we address absolute quality-of-service provisioning, in which the worst-case performance of high-priority traffic classes is quantitatively guaranteed. More specifically, our objective is to guarantee loss-free transmission of high-priority bursts inside the OBS network while preserving the statistical multiplexing and efficient resource utilization that characterize OBS networks. We also consider improving the performance of best-effort traffic. We thus propose two approaches: a node-based approach and a path-based approach. In the node-based approach, a set of wavelengths is assigned to each edge node of the OBS network so that it can send its guaranteed traffic. This assignment takes into account the physical distances between edge nodes. In addition, we propose a wavelength selection algorithm to improve the performance of best-effort bursts. In the path-based approach, absolute quality-of-service provisioning is provided at the level of the paths between the edge nodes of the OBS network. To this end, we propose a routing and wavelength assignment approach aimed at reducing the number of wavelengths required to establish contention-free paths. If this objective cannot be met because of the limited number of wavelengths, we propose to synchronize the conflicting paths without the need for additional equipment. Here too, we propose a wavelength selection algorithm for best-effort bursts. Numerical results show that both the node-based and the path-based approaches provide absolute quality-of-service provisioning for guaranteed traffic and improve the performance of best-effort traffic. Moreover, when the number of wavelengths is sufficient, the path-based approach can accommodate more guaranteed traffic and further improve the performance of best-effort traffic compared with the node-based approach.
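A minimal sketch of the kind of per-node reinforcement-learning agent described above, assuming a simple table of candidate (primary and deflection) paths per destination and a delivered/dropped reward signal; the thesis' actual learning scheme and state representation may differ.

```python
import random
from collections import defaultdict

class PathSelector:
    """Q-learning agent choosing among candidate paths to each destination.

    Reward is +1 when the burst is delivered and -1 when it is dropped; epsilon-greedy
    exploration keeps the agent adaptive as the network load changes.
    """
    def __init__(self, paths, alpha=0.1, epsilon=0.05):
        self.paths = paths                       # dict: destination -> list of candidate paths
        self.q = defaultdict(float)              # Q[(destination, path_index)]
        self.alpha, self.epsilon = alpha, epsilon

    def choose(self, dest):
        if random.random() < self.epsilon:
            return random.randrange(len(self.paths[dest]))
        return max(range(len(self.paths[dest])), key=lambda i: self.q[(dest, i)])

    def feedback(self, dest, path_index, delivered):
        reward = 1.0 if delivered else -1.0
        old = self.q[(dest, path_index)]
        self.q[(dest, path_index)] = old + self.alpha * (reward - old)
```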
Abstract:
Propensity scores (PS) are frequently used to adjust for confounding factors related to indication bias. However, they are limited by the fact that they only allow adjustment for known and measured confounders. High-dimensional propensity scores (hdPS), a variant of PS, use a standardized algorithm to select the covariates for which they will adjust. The use of this algorithm could allow adjustment for all types of confounding factors. This thesis aims to evaluate the performance of the hdPS with respect to indication bias in the context of an observational study examining the potential diabetogenic effect of statins. First, we examined whether statin exposure was associated with the risk of diabetes. The results of this first article suggest that statin exposure is associated with an increased risk of diabetes and that this relationship is dose-dependent and reversible over time. Following the identification of this association, we examined in a second article whether the hdPS provided better adjustment for indication bias than the PS; this evaluation was undertaken using two approaches: 1) based on the adjusted measures of association, and 2) based on the ability of the PS and the hdPS, when used as matching criteria, to select matched sub-cohorts of patients with similar profiles across 19 characteristics. According to the results presented in the second article, the evaluation based on the first approach was inconclusive, but the evaluation based on the second approach favoured the hdPS in its adjustment for indication bias. The last article of this thesis examined the performance of the hdPS when known and measured confounders are hidden from the selection algorithm. The results of this last article indicate that the hdPS could, at least partially, adjust for hidden confounders and could therefore potentially adjust for unmeasured confounders. Together, these results indicate that the hdPS is superior to the PS in adjusting for indication bias and support its use in future observational studies based on healthcare administrative data.
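For illustration, the sketch below ranks binary empirical covariates with the Bross bias-multiplier heuristic commonly used in hdPS implementations. It is a simplified stand-in for the standardized algorithm referred to above: the generation of "ever/sporadic/frequent" covariates from claims-code counts and the subsequent propensity-score fitting are omitted, and the variable names are hypothetical.

```python
import numpy as np

def hdps_rank(covariates, exposure, outcome):
    """Rank binary covariates by the absolute log Bross bias multiplier (simplified).

    covariates: (n_patients, n_covariates) 0/1 array; exposure, outcome: 0/1 arrays.
    Assumes each covariate and exposure group is non-empty.
    """
    scores = []
    for j in range(covariates.shape[1]):
        c = covariates[:, j]
        pc1 = c[exposure == 1].mean()            # prevalence among exposed
        pc0 = c[exposure == 0].mean()            # prevalence among unexposed
        p1 = outcome[c == 1].mean()              # outcome risk when covariate present
        p0 = outcome[c == 0].mean()              # outcome risk when covariate absent
        rr = max(p1, 1e-6) / max(p0, 1e-6)
        rr = max(rr, 1.0 / rr)                   # direction-free association with outcome
        bias = (pc1 * (rr - 1) + 1) / (pc0 * (rr - 1) + 1)
        scores.append(abs(np.log(bias)))
    return np.argsort(scores)[::-1]              # strongest potential confounders first
```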
Abstract:
There are numerous text documents available in electronic form. More and more are becoming available every day. Such documents represent a massive amount of information that is easily accessible. Seeking value in this huge collection requires organization; much of the work of organizing documents can be automated through text classification. The accuracy and our understanding of such systems greatly influence their usefulness. In this paper, we seek 1) to advance the understanding of commonly used text classification techniques, and 2), through that understanding, to improve the tools available for text classification. We begin by clarifying the assumptions made in the derivation of Naive Bayes, noting basic properties and proposing ways for its extension and improvement. Next, we investigate the quality of Naive Bayes parameter estimates and their impact on classification. Our analysis leads to a theorem which gives an explanation for the improvements that can be found in multiclass classification with Naive Bayes using Error-Correcting Output Codes. We use experimental evidence on two commonly-used data sets to exhibit an application of the theorem. Finally, we show fundamental flaws in a commonly-used feature selection algorithm and develop a statistics-based framework for text feature selection. Greater understanding of Naive Bayes and the properties of text allows us to make better use of it in text classification.
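A small example of the Naive Bayes plus Error-Correcting Output Codes combination discussed above, using scikit-learn; the corpus and labels are purely illustrative toy data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.multiclass import OutputCodeClassifier

# Toy corpus standing in for a real text collection (labels: 0=travel, 1=politics, 2=tech).
docs = ["cheap flights to rome", "election results tonight",
        "new gpu benchmark released", "senate passes budget bill",
        "hotel deals and travel tips", "open source compiler update"]
labels = [0, 1, 2, 1, 0, 2]

vec = CountVectorizer()
X = vec.fit_transform(docs)

# ECOC turns the multiclass problem into several binary ones; each binary Naive Bayes
# only has to separate two groups of classes, and decoding picks the closest codeword.
ecoc_nb = OutputCodeClassifier(MultinomialNB(), code_size=2.0, random_state=0)
ecoc_nb.fit(X, labels)
print(ecoc_nb.predict(vec.transform(["gpu travel deals"])))
```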
Abstract:
We consider a fully complex-valued radial basis function (RBF) network for regression and classification applications. For regression problems, the locally regularised orthogonal least squares (LROLS) algorithm aided with the D-optimality experimental design, originally derived for constructing parsimonious real-valued RBF models, is extended to the fully complex-valued RBF (CVRBF) network. Like its real-valued counterpart, the proposed algorithm aims to achieve maximised model robustness and sparsity by combining two effective and complementary approaches. The LROLS algorithm alone is capable of producing a very parsimonious model with excellent generalisation performance, while the D-optimality design criterion further enhances the model efficiency and robustness. By specifying an appropriate weighting for the D-optimality cost in the combined model selection criterion, the entire model construction procedure becomes automatic. An example of identifying a complex-valued nonlinear channel is used to illustrate the regression application of the proposed fully CVRBF network. The proposed fully CVRBF network is also applied to four-class classification problems that are typically encountered in communication systems. A complex-valued orthogonal forward selection algorithm based on the multi-class Fisher ratio of class separability measure is derived for constructing sparse CVRBF classifiers that generalise well. The effectiveness of the proposed algorithm is demonstrated using the example of nonlinear beamforming for multiple-antenna aided communication systems that employ a complex-valued quadrature phase shift keying modulation scheme. (C) 2007 Elsevier B.V. All rights reserved.
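A heavily simplified sketch of fitting a fully complex-valued Gaussian RBF model by regularised least squares; it stands in for, but does not reproduce, the LROLS plus D-optimality construction described above, and all data below are synthetic.

```python
import numpy as np

def cvrbf_design(X, centres, width):
    """Gaussian RBF design matrix for complex-valued inputs (|.| is the complex modulus)."""
    d2 = (np.abs(X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2)) + 0j

def fit_cvrbf(X, y, centres, width, ridge=1e-3):
    """Regularised least-squares fit of the complex output weights."""
    P = cvrbf_design(X, centres, width)
    A = P.conj().T @ P + ridge * np.eye(P.shape[1])
    return np.linalg.solve(A, P.conj().T @ y)

# Hypothetical channel-identification style usage: complex inputs and outputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) + 1j * rng.normal(size=(200, 2))
y = X[:, 0] * np.conj(X[:, 1]) + 0.05 * (rng.normal(size=200) + 1j * rng.normal(size=200))
centres = X[:20]                                 # naive centre choice, not D-optimality
w = fit_cvrbf(X, y, centres, width=1.0)
print(np.abs(cvrbf_design(X, centres, 1.0) @ w - y).mean())
```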
Abstract:
A sparse kernel density estimator is derived based on the zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used to achieve mathematical tractability and algorithmic efficiency. Under the mild condition of a positive definite design matrix, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the D-optimality based selection algorithm as a preprocessing step to select a small significant subset design matrix, the proposed zero-norm based approach offers an effective means of constructing very sparse kernel density estimates with excellent generalisation performance.
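The sketch below illustrates the flavour of the approach: the Parzen-window estimate is used as the target response and the kernel weights are updated with a multiplicative non-negative rule. The zero-norm approximation and the D-optimality preselection are omitted, so this is only a simplified stand-in, not the published algorithm.

```python
import numpy as np

def gaussian_gram(X, centres, h):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * h * h)) / ((2 * np.pi * h * h) ** (X.shape[1] / 2))

def sparse_kde_weights(X, h, iters=200, tol=1e-8):
    """Fit kernel weights against the Parzen-window response with a multiplicative
    non-negative update; many weights shrink towards zero, giving a nearly sparse model."""
    K = gaussian_gram(X, X, h)                 # all data points as candidate kernels
    target = K.mean(axis=1)                    # Parzen-window estimate at the data points
    A, b = K.T @ K, K.T @ target
    w = np.full(len(X), 1.0 / len(X))
    for _ in range(iters):
        w_new = w * b / np.maximum(A @ w, 1e-12)
        w_new /= w_new.sum()                   # keep the density normalised
        if np.abs(w_new - w).max() < tol:
            break
        w = w_new
    return w
```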
Abstract:
Accurate single-trial P300 classification lends itself to fast and accurate control of brain-computer interfaces (BCIs). Highly accurate classification of single-trial P300 ERPs is achieved by characterizing the EEG via corresponding stationary and time-varying Wackermann parameters. Subsets of maximally discriminating parameters are then selected using the Network Clustering feature selection algorithm and classified with Naive Bayes and Linear Discriminant Analysis classifiers. The method is assessed on two different data sets from BCI competitions and is shown to produce accuracies between approximately 70% and 85%. This is promising for the use of Wackermann parameters as features in the classification of single-trial ERP responses.
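A minimal sketch of the selection-plus-classification stage, assuming the single-trial features have already been extracted; ANOVA-based selection is used here in place of the Network Clustering selector, and the arrays are random placeholders rather than real EEG features.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# X: one row per single trial, columns are (hypothetical) Wackermann-style descriptors;
# y: 1 = target P300 trial, 0 = non-target.
X = np.random.randn(300, 24)
y = np.random.randint(0, 2, 300)

for clf in (GaussianNB(), LinearDiscriminantAnalysis()):
    pipe = make_pipeline(SelectKBest(f_classif, k=8), clf)
    print(type(clf).__name__, cross_val_score(pipe, X, y, cv=5).mean())
```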
Abstract:
Controllers for feedback substitution schemes demonstrate a trade-off between noise power gain and normalized response time. Using as an example the design of a controller for a radiometric transduction process subjected to arbitrary noise power gain and robustness constraints, a Pareto-front of optimal controller solutions fulfilling a range of time-domain design objectives can be derived. In this work, we consider designs using a loop shaping design procedure (LSDP). The approach uses linear matrix inequalities to specify a range of objectives and a genetic algorithm (GA) to perform a multi-objective optimization for the controller weights (MOGA). A clonal selection algorithm is used to further provide a directed search of the GA towards the Pareto front. We demonstrate that with the proposed methodology, it is possible to design higher order controllers with superior performance in terms of response time, noise power gain and robustness.
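For illustration, a compact single-objective clonal-selection step in the spirit of CLONALG; the actual design flow above combines it with a multi-objective GA and LMI-specified constraints, which this sketch does not reproduce.

```python
import numpy as np

def clonal_selection(fitness, pop, generations=50, clones_per=5, beta=0.2, seed=None):
    """Clone good candidates, mutate clones with a rate that grows for lower-ranked
    parents, and keep the best; a minimal single-objective sketch."""
    rng = np.random.default_rng(seed)
    for _ in range(generations):
        f = np.array([fitness(ind) for ind in pop])
        pop = pop[np.argsort(f)[::-1]]                  # best first (maximisation)
        rates = beta * (np.arange(len(pop)) + 1) / len(pop)
        clones = []
        for ind, rate in zip(pop, rates):
            for _ in range(clones_per):
                clones.append(ind + rate * rng.normal(size=ind.shape))
        clones = np.array(clones)
        cf = np.array([fitness(c) for c in clones])
        pop = clones[np.argsort(cf)[::-1][: len(pop)]]  # survivors of the next generation
    return pop[0]

# Toy usage: maximise a smooth function of two "controller weights".
print(clonal_selection(lambda w: -np.sum((w - 1.5) ** 2),
                       np.random.default_rng(0).normal(size=(20, 2)), seed=1))
```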
Abstract:
This work aims to apply a non-linear model to Brazilian Gross Domestic Product. To this end, the existence of non-linearity in the data-generating process was tested using the methodology suggested by Castle and Henry (2010). The test consists of verifying the persistence of the non-linear regressors in the unrestricted linear model. The series is then modelled with a threshold autoregressive model, using the general-to-specific approach for model selection. The Autometrics algorithm is used to choose the non-linear model. The results indicate that Brazilian GDP is best explained by a non-linear model with three regime changes, which occur in the early 1990s, indeed a highly volatile period. Non-linear modelling has the potential for dating business cycles; however, the results obtained were not sufficient for such an analysis.
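A simplified illustration of fitting a two-regime threshold autoregression by least squares with a grid search over the threshold; the work above relies on Autometrics and a three-regime specification, so this is only a sketch of the model class, with arbitrary default lag choices.

```python
import numpy as np

def fit_setar(y, p=2, d=1):
    """Two-regime threshold AR(p): grid-search the threshold on y[t-d], fit each regime
    by OLS, and return the best (residual sum of squares, threshold) pair."""
    y = np.asarray(y, dtype=float)
    Y = y[p:]
    X = np.column_stack([np.ones(len(Y))] + [y[p - k:-k] for k in range(1, p + 1)])
    z = y[p - d:-d]                                   # threshold variable y[t-d]
    best = None
    for thr in np.quantile(z, np.linspace(0.15, 0.85, 50)):
        lo, hi = z <= thr, z > thr
        if lo.sum() < 3 * X.shape[1] or hi.sum() < 3 * X.shape[1]:
            continue                                  # skip regimes with too few points
        rss = 0.0
        for mask in (lo, hi):
            beta, *_ = np.linalg.lstsq(X[mask], Y[mask], rcond=None)
            rss += np.sum((Y[mask] - X[mask] @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, thr)
    return best
```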
Abstract:
This work aims to verify the existence and relevance of calendar effects in industrial indicators. Linear univariate models are explored for the monthly Brazilian industrial production indicator and some of its components. First, an in-sample analysis is performed using structural state-space models and the Autometrics selection algorithm, which points to a significant effect of most calendar-related variables. Then, using the Diebold-Mariano (1995) procedure and the Model Confidence Set proposed by Hansen, Lunde and Nason (2011), forecasts from models derived with Autometrics are compared with a simple double-difference device over horizons of up to 24 months ahead. In general, the Autometrics models that include the calendar variables prove superior for the one- and two-month-ahead projections and outperform the simple model at all horizons. When the use-category components are aggregated to form the total industrial index, there is evidence of gains in the shorter-term projections.
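For reference, a compact implementation of the Diebold-Mariano test of equal predictive accuracy under squared-error loss, as used in the forecast comparisons above; the long-run variance is estimated with a simple truncation at lag h-1.

```python
import numpy as np
from scipy.stats import norm

def diebold_mariano(e1, e2, h=1):
    """Diebold-Mariano test statistic and two-sided p-value for h-step-ahead forecast
    errors e1, e2 from two competing models, using squared-error loss."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential
    n, d_bar = len(d), np.mean(d)
    gamma = [np.sum((d[k:] - d_bar) * (d[:n - k] - d_bar)) / n for k in range(h)]
    lrv = gamma[0] + 2 * sum(gamma[1:])             # truncated long-run variance
    stat = d_bar / np.sqrt(lrv / n)
    return stat, 2 * (1 - norm.cdf(abs(stat)))
```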
Abstract:
Although the automatic identification of non-technical losses has been extensively studied, the problem of selecting the most representative features in order to boost identification accuracy has not attracted much attention in this context. In this paper, we address this problem by applying a novel feature selection algorithm based on Particle Swarm Optimization and Optimum-Path Forest. The results demonstrated that this method can improve the classification accuracy of possible frauds by up to 49% in some datasets composed of industrial and commercial profiles. © 2011 IEEE.
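A hedged sketch of a binary-PSO feature-selection wrapper; cross-validated k-NN accuracy serves as the fitness function here because Optimum-Path Forest is not available in standard libraries, so this is not the authors' exact method.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def bpso_feature_selection(X, y, n_particles=15, iters=30, seed=0):
    """Binary PSO: each particle encodes a 0/1 feature mask; fitness is the
    cross-validated accuracy of a surrogate classifier on the selected columns."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]

    def fitness(bits):
        mask = bits.astype(bool)
        if not mask.any():
            return 0.0
        return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

    pos = (rng.random((n_particles, n)) < 0.5).astype(float)     # binary positions as 0/1
    vel = rng.normal(scale=0.1, size=(n_particles, n))
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = (rng.random((n_particles, n)) < 1 / (1 + np.exp(-vel))).astype(float)
        f = np.array([fitness(p) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmax(pbest_f)].copy()
    return gbest.astype(bool)                                    # mask of selected features
```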
Abstract:
Graduate Program in Electrical Engineering - FEIS