15 results for Convergent-divergent half-sib selection
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Introduction - No validated protocol exists for the measurement of prism fusion ranges. Many studies report how fusional vergence ranges can be measured using different techniques (rotary prism, prism bar, loose prisms and synoptophore) and stimuli, leading to different ranges being reported in the literature. The repeatability of the available methods, and the equivalence between them, are also important. In addition, the available studies do not agree on the order in which fusional vergence should be measured to provide the essential information on which to base clinical judgements about the compensation of deviations. When performing fusional vergence testing, the most commonly accepted clinical technique is to measure negative fusional vergence first, followed by positive fusional vergence, to avoid affecting the value of vergence recovery through excessive stimulation of convergence. Von Noorden recommends interleaving vertical fusion amplitudes between the horizontal amplitudes (base-out, base-up, base-in, and base-down) to prevent vergence adaptation. Others place the base of the prism in the direction opposite to that used to measure the deviation, to increase the vergence demand. Objectives - The purpose of this review is to assess and compare the accuracy of tests for the measurement of fusional vergence. Secondary objectives are to investigate sources of heterogeneity in diagnostic accuracy, including: age; variation in method of assessment; study design; study size; type of strabismus (convergent, divergent, vertical, cyclo); severity of strabismus (constant/intermittent/latent).
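As an aside, the two measurement orderings discussed above can be written out explicitly; the minimal sketch below encodes only what the abstract states, while the names and data structure are our own illustration.

```python
# Two orderings for fusional vergence measurement, as described above.
# The conventional order measures negative (base-in) before positive
# (base-out) fusional vergence; von Noorden interleaves vertical
# amplitudes between the horizontal ones to limit vergence adaptation.
CONVENTIONAL_ORDER = [
    "base-in (negative fusional vergence)",
    "base-out (positive fusional vergence)",
]
VON_NOORDEN_ORDER = ["base-out", "base-up", "base-in", "base-down"]

for step, prism_base in enumerate(VON_NOORDEN_ORDER, start=1):
    print(f"measurement {step}: {prism_base}")
```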
Abstract:
This research was motivated by the crucial role of the school in society and by the exercise of the teaching profession, with an attentive eye on the shaping of educational policies. Its object of study is the roles played by the directors of state and non-state schools, and its specific objectives are to study the impact of the legislation issued by the ministry on public and private schools and to analyse the convergences and divergences in the conceptions and practices of their directors. The analytical dimensions explored in the study cover the directors' managerial conceptions regarding management models, practices of autonomy, educational service, and accountability. This qualitative study focuses on a restricted group of educational actors, chosen both for the role they play in the educational organization and because the publication of Decreto-Lei 75/2008, of 22 April, brought changes to the public school: the tradition of collegial leadership that prevailed in state educational organizations was broken. The president of the executive council is henceforth replaced by a director who delegates competences, appoints teams, and renders accounts to the ministry and to the educational community, much like the director of a private school. The case study presented here was carried out in three public schools and three private schools, using semi-structured interviews and documentary analysis. The conclusions point to many points of convergence between the opinions of public- and private-school directors. The themes of autonomy, selection of teaching staff, and accountability are viewed from the same perspective: autonomy is seen as "a mirage", a "promised land" (Lima and Afonso, 1995), and accountability is demanded of directors in both state and private education through similar instruments. The main divergences lie in the lesser interest shown by private-school management in offering professional courses and in its smaller investment in strategies to prevent school drop-out, which is considered of little significance in non-state schools. The defence of school choice and of a school-voucher scheme ("cheque ensino") are further points of divergence between these directors.
Abstract:
We start by studying the existence of positive solutions of the differential equation $u'' = a(x)u - g(u)$, with $u'(0) = u(+\infty) = 0$, where $a$ is a positive function and $g$ is a power or a bounded function. In other words, we are concerned with even positive homoclinics of the differential equation. The main motivation is to check that some well-known results concerning the existence of homoclinics for the autonomous case (where $a$ is constant) are also true for the non-autonomous equation. This also motivates us to study the analogous fourth-order boundary value problem
$$u^{(4)} - c\,u'' + a(x)u = |u|^{p-1}u, \qquad u'(0) = u'''(0) = 0, \quad u(+\infty) = u'(+\infty) = 0,$$
for which we also find nontrivial (and, in some instances, positive) solutions.
Abstract:
Motion-compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, it creates SI with rather significant motion compensation errors in some frame regions and rather small errors in others, depending on the video content. In this paper, a low-complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and to help the decoder with some reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra-based and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements of up to 1.2 dB over a solution using the WZ coding mode only.
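To make the per-block mode-selection idea concrete, here is a hedged sketch in Python; the rate estimators, block size, and all function names are our assumptions, not the paper's actual algorithm.

```python
# Sketch of rate-based per-block coding-mode selection: blocks whose side
# information (SI) is poor (large residual) become the 'critical' intra blocks.
import numpy as np

def estimate_intra_rate(block: np.ndarray) -> float:
    """Crude intra-rate proxy: spatial activity (gradient energy) of the block."""
    gy, gx = np.gradient(block.astype(float))
    return float(np.abs(gx).sum() + np.abs(gy).sum())

def estimate_wz_rate(block: np.ndarray, si_block: np.ndarray) -> float:
    """Crude WZ-rate proxy: energy of the residual w.r.t. the side information."""
    return float(np.abs(block.astype(float) - si_block.astype(float)).sum())

def select_modes(frame: np.ndarray, si_frame: np.ndarray, bs: int = 8) -> dict:
    """Return an 'intra' / 'wz' decision per bs x bs block of a grayscale frame."""
    h, w = frame.shape
    modes = {}
    for y in range(0, h - bs + 1, bs):
        for x in range(0, w - bs + 1, bs):
            blk = frame[y:y+bs, x:x+bs]
            si = si_frame[y:y+bs, x:x+bs]
            # Pick the mode with the lower estimated rate for this block.
            modes[(y, x)] = ('intra'
                             if estimate_intra_rate(blk) < estimate_wz_rate(blk, si)
                             else 'wz')
    return modes
```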
Abstract:
Reclaimed water from small wastewater treatment facilities in the rural areas of the Beira Interior region (Portugal) may constitute an alternative water source for aquifer recharge. A 21-month monitoring period in a constructed wetland treatment system has shown that 21,500 m³ year⁻¹ of treated wastewater (reclaimed water) could be used for aquifer recharge. A GIS-based multi-criteria analysis was performed, combining ten thematic maps and economic, environmental and technical criteria, in order to produce a suitability map for the location of sites for reclaimed water infiltration. The areas chosen for aquifer recharge with infiltration basins are mainly composed of anthrosols more than 1 m deep and with a fine sand texture, which allows an average infiltration velocity of up to 1 m d⁻¹. These characteristics will provide a final polishing treatment of the reclaimed water after infiltration (soil aquifer treatment (SAT)), suitable for the removal of the residual load (trace organics, nutrients, heavy metals and pathogens). The risk of groundwater contamination is low, since the water table in the anthrosol areas lies at depths of 10 m to 50 m. On the other hand, these depths guarantee an unsaturated zone suitable for SAT. An area of 13,944 ha was selected for study, but only 1607 ha are suitable for reclaimed water infiltration. Approximately 1280 m² were considered enough to set up 4 infiltration basins working in flooding and drying cycles.
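A back-of-the-envelope check of these figures, assuming steady year-round infiltration at the reported average velocity (both assumptions are ours, not the paper's design method):

```python
# Rough consistency check of the basin sizing from the reported numbers.
annual_volume_m3 = 21_500        # reclaimed water available per year
infiltration_m_per_day = 1.0     # reported average infiltration velocity
basin_area_m2 = 1_280            # total area of the 4 infiltration basins

daily_volume_m3 = annual_volume_m3 / 365                       # ~59 m3/day
min_wetted_area_m2 = daily_volume_m3 / infiltration_m_per_day  # ~59 m2 if flooded daily
margin = basin_area_m2 / min_wetted_area_m2                    # ~22x, room for drying cycles
print(f"{daily_volume_m3:.0f} m3/d needs >= {min_wetted_area_m2:.0f} m2; margin ~{margin:.0f}x")
```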
Abstract:
Dissertation submitted for the degree of Master in Civil Engineering, in the specialization area of Buildings (Edificações)
Abstract:
Research on the problem of feature selection for clustering continues to develop. This is a challenging task, mainly due to the absence of class labels to guide the search for relevant features. Categorical feature selection for clustering has rarely been addressed in the literature, with most of the proposed approaches having focused on numerical data. In this work, we propose an approach to simultaneously cluster categorical data and select a subset of relevant features. Our approach is based on a modification of a finite mixture model (of multinomial distributions), in which a set of latent variables indicates the relevance of each feature. To estimate the model parameters, we implement a variant of the expectation-maximization algorithm that simultaneously selects the subset of relevant features, using a minimum message length criterion. The proposed approach compares favourably with two baseline methods: a filter based on an entropy measure and a wrapper based on mutual information. The results obtained on synthetic data illustrate the ability of the proposed expectation-maximization method to recover the ground truth. An application to real data from official statistics shows its usefulness.
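For concreteness, a simplified sketch of an EM loop of this kind, where each feature mixes a cluster-specific multinomial with a cluster-independent background one, weighted by a per-feature relevance (saliency) parameter. This is our stand-in with a fixed number of clusters and no MML-based pruning, not the authors' exact algorithm; all names and update details are assumptions.

```python
# EM for clustering categorical data with per-feature relevance weights
# (feature-saliency style); simplified sketch, not the paper's estimator.
import numpy as np

def em_categorical(X, K, n_cats, iters=100, seed=0):
    """X: (n, d) integer-coded categorical data; n_cats: categories per feature
    (assumed equal across features for simplicity)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)                       # mixture weights
    rho = np.full(d, 0.5)                          # feature relevance (saliency)
    theta = [rng.dirichlet(np.ones(n_cats), size=K) for _ in range(d)]  # per-cluster
    q = [np.bincount(X[:, j], minlength=n_cats) / n for j in range(d)]  # background

    for _ in range(iters):
        # E-step: responsibilities; each feature mixes a cluster-specific
        # multinomial (weight rho) with a common background one (weight 1-rho).
        log_r = np.log(pi)[None, :].repeat(n, axis=0)
        for j in range(d):
            p = rho[j] * theta[j][:, X[:, j]].T + (1 - rho[j]) * q[j][X[:, j]][:, None]
            log_r += np.log(p + 1e-12)
        r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)

        # M-step: update weights, per-cluster multinomials, and saliencies.
        pi = r.mean(axis=0)
        for j in range(d):
            rel = rho[j] * theta[j][:, X[:, j]].T
            irr = (1 - rho[j]) * q[j][X[:, j]][:, None]
            u = rel / (rel + irr + 1e-12)          # P(feature j relevant | x, cluster)
            w = r * u
            for k in range(K):
                counts = np.bincount(X[:, j], weights=w[:, k], minlength=n_cats)
                theta[j][k] = (counts + 1e-6) / (counts.sum() + 1e-6 * n_cats)
            rho[j] = w.sum() / n                   # expected fraction of relevant uses
    return pi, theta, rho, r
```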
Abstract:
Electrocardiography (ECG) biometrics is emerging as a viable biometric trait. Recent developments at the sensor level have shown the feasibility of performing signal acquisition at the fingers and hand palms, using one-lead sensor technology and dry electrodes. These new locations lead to ECG signals with a lower signal-to-noise ratio that are more prone to noise artifacts; heart rate variability is another of the major challenges of this biometric trait. In this paper we propose a novel approach to ECG biometrics, with the purpose of reducing the computational complexity and increasing the robustness of the recognition process, enabling the fusion of information across sessions. Our approach is based on clustering, grouping individual heartbeats according to their morphology. We study several methods to perform automatic template selection and to account for variations observed in a person's biometric data. This approach allows the identification of different template groupings, taking heart rate variability into account, and the removal of outliers due to noise artifacts. Experimental evaluation on real-world data demonstrates the advantages of our approach.
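A hedged sketch of the clustering-based template selection idea; k-means and the minimum-population rule for discarding outlier clusters are our simplifications, not necessarily the exact methods evaluated in the paper.

```python
# Morphology-based heartbeat clustering for biometric template selection.
import numpy as np
from sklearn.cluster import KMeans

def select_templates(beats: np.ndarray, k: int = 5, min_frac: float = 0.05):
    """beats: (n_beats, n_samples) array of segmented, aligned heartbeats.

    Returns cluster-centroid templates, one per heartbeat morphology,
    discarding sparsely populated clusters, which typically collect
    noisy/outlier beats."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(beats)
    templates = []
    for c in range(k):
        members = beats[km.labels_ == c]
        if len(members) >= min_frac * len(beats):   # drop outlier clusters
            templates.append(members.mean(axis=0))  # centroid as template
    return np.vstack(templates)
```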
Abstract:
In research on Silent Speech Interfaces (SSI), different sources of information (modalities) have been combined, aiming at obtaining better performance than with the individual modalities. However, when combining these modalities, the dimensionality of the feature space rapidly increases, yielding the well-known "curse of dimensionality". As a consequence, in order to extract useful information from this data, one has to resort to feature selection (FS) techniques to lower the dimensionality of the learning space. In this paper, we assess the impact of FS techniques on silent speech data, in a dataset with 4 non-invasive and promising modalities, namely: video, depth, ultrasonic Doppler sensing, and surface electromyography. We consider two supervised (mutual information and Fisher's ratio) and two unsupervised (mean-median and arithmetic mean-geometric mean) FS filters. The evaluation was made by assessing the classification accuracy (word recognition error) of three well-known classifiers (k-nearest neighbors, support vector machines, and dynamic time warping). The key results of this study show that both unsupervised and supervised FS techniques improve the classification accuracy on both individual and combined modalities. For instance, on the video component, we attain relative performance gains of 36.2% in error rates. FS is also useful as pre-processing for feature fusion.
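To illustrate the filter types mentioned above, a sketch of one supervised (Fisher's ratio) and one unsupervised (arithmetic mean-geometric mean, AMGM) score over a generic feature matrix; the formulas are the standard ones, and details may differ from the paper's implementation.

```python
# Per-feature filter scores for feature selection (ranking filters).
import numpy as np

def fisher_ratio(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Supervised: between-class scatter over within-class scatter, per feature."""
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def amgm(X: np.ndarray) -> np.ndarray:
    """Unsupervised: arithmetic mean / geometric mean of |x| per feature.
    Equals 1 only for a constant-magnitude feature; larger values indicate
    greater dispersion."""
    A = np.abs(X) + 1e-12
    return A.mean(axis=0) / np.exp(np.log(A).mean(axis=0))

def top_k(scores: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k best-scoring features."""
    return np.argsort(scores)[::-1][:k]
```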
Abstract:
In cluster analysis, it can be useful to interpret the partition built from the data in the light of external categorical variables which are not directly involved in clustering the data. An approach is proposed, in the model-based clustering context, to select a number of clusters which both fits the data well and takes advantage of the potential illustrative ability of the external variables. This approach makes use of the integrated joint likelihood of the data and the partitions at hand, namely the model-based partition and the partitions associated with the external variables. It is noteworthy that each mixture model is fitted by the maximum likelihood methodology to the data, excluding the external variables, which are used only to select a relevant mixture model. Numerical experiments illustrate the promising behaviour of the derived criterion.
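An illustrative sketch of selecting the number of clusters with a criterion that rewards both model fit and agreement with an external categorical variable; this simplified BIC-plus-agreement score is our stand-in for the integrated joint likelihood criterion, not the authors' formula.

```python
# Choose K by trading off fit (via BIC) against how well the model-based
# partition explains an external, integer-coded categorical variable.
import numpy as np
from sklearn.mixture import GaussianMixture

def joint_score(X: np.ndarray, external: np.ndarray, K: int) -> float:
    gmm = GaussianMixture(n_components=K, random_state=0).fit(X)
    z = gmm.predict(X)   # model-based partition (external variable unused in fitting)
    # log-likelihood of the external partition given the clusters, from the
    # smoothed empirical contingency table P(external | cluster)
    joint = 0.0
    for k in range(K):
        counts = np.bincount(external[z == k], minlength=external.max() + 1)
        probs = (counts + 0.5) / (counts.sum() + 0.5 * len(counts))
        joint += (counts * np.log(probs)).sum()
    # -BIC already trades off fit against complexity; add the external term
    return -gmm.bic(X) + 2 * joint

# usage sketch: best_K = max(range(2, 9), key=lambda K: joint_score(X, ext, K))
```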
Abstract:
Many learning problems require handling high-dimensional datasets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality, and need to address it in order to be effective. Examples of these types of data include the bag-of-words representation in text classification problems and gene expression data for tumor detection/classification. Usually, among the high number of features characterizing the instances, many may be irrelevant (or even detrimental) to the learning task. It is thus clear that there is a need for adequate techniques for feature representation, reduction, and selection, to improve both the classification accuracy and the memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques, suitable for medium- and high-dimensional datasets. The experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.
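A minimal sketch of one such combination: unsupervised equal-frequency discretization followed by an entropy-based relevance filter. The specific quantizer and score used in the paper may differ; this illustrates the pipeline only.

```python
# Unsupervised discretization + feature selection on the discrete data.
import numpy as np

def equal_frequency_discretize(X: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """Map each feature to integer bins with (roughly) equal occupancy."""
    Xd = np.empty_like(X, dtype=int)
    qs = np.linspace(0, 100, n_bins + 1)[1:-1]     # interior percentiles
    for j in range(X.shape[1]):
        edges = np.percentile(X[:, j], qs)
        Xd[:, j] = np.searchsorted(edges, X[:, j])
    return Xd

def select_by_dispersion(Xd: np.ndarray, k: int) -> np.ndarray:
    """Keep the k discrete features with highest entropy: near-constant
    features carry little information for the learning task."""
    n = Xd.shape[0]
    scores = np.empty(Xd.shape[1])
    for j in range(Xd.shape[1]):
        p = np.bincount(Xd[:, j]) / n
        p = p[p > 0]
        scores[j] = -(p * np.log(p)).sum()
    return np.argsort(scores)[::-1][:k]
```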
Abstract:
Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, and are able to act as pre-processors for computationally intensive methods, focusing their attention on smaller subsets of promising features. The experimental results, with up to 10^5 features, show the time efficiency of our methods, with lower generalization error than state-of-the-art techniques, while being dramatically simpler and faster.
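A sketch of a low-complexity relevance/redundancy filter in this spirit: rank features by a cheap dispersion score, then greedily drop candidates too correlated with features already kept. The specific scores and threshold are our assumptions, not the paper's criteria.

```python
# Greedy relevance/redundancy feature ranking filter.
import numpy as np

def rank_and_prune(X: np.ndarray, k: int, max_corr: float = 0.9) -> list:
    """Return up to k feature indices: high relevance, low mutual redundancy."""
    relevance = X.var(axis=0)              # O(nd) unsupervised relevance score
    order = np.argsort(relevance)[::-1]    # most relevant first
    kept = []
    for j in order:
        # redundancy check only against already-kept features: O(|kept| * n)
        redundant = any(
            abs(np.corrcoef(X[:, j], X[:, i])[0, 1]) > max_corr for i in kept
        )
        if not redundant:
            kept.append(j)
        if len(kept) == k:
            break
    return kept
```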
Abstract:
Materials selection is a matter of great importance to engineering design, and software tools are valuable to inform decisions in the early stages of product development. However, when a set of alternative materials is available for the different parts a product is made of, the question of which optimal material mix to choose for a group of parts is not trivial. The engineer/designer typically goes about this in a part-by-part procedure, but optimizing each part per se can lead to a globally sub-optimal solution from the product point of view. An optimization procedure that can deal with products with multiple parts, each with discrete design variables, and determine the optimal solution under different objectives is therefore needed. To solve this multiobjective optimization problem, a new routine based on the Direct MultiSearch (DMS) algorithm is created. Results from the Pareto front can help the designer align his/her materials selection for a complete set of materials with the product attribute objectives, depending on the relative importance of each objective.
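A toy sketch of the underlying discrete multiobjective problem, brute-forcing the Pareto front over made-up per-part material data; the actual routine uses the Direct MultiSearch (DMS) algorithm rather than enumeration, and all data below are invented for illustration.

```python
# Pareto front over discrete material mixes for a multi-part product.
from itertools import product

# (cost per part, mass per part) for each candidate material, per part
CANDIDATES = {
    "housing": [(2.0, 1.5), (3.5, 0.9), (5.0, 0.6)],
    "bracket": [(1.0, 0.8), (1.8, 0.5)],
}

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_mixes():
    mixes = []
    for combo in product(*CANDIDATES.values()):   # one material choice per part
        cost = sum(c for c, _ in combo)
        mass = sum(m for _, m in combo)
        mixes.append(((cost, mass), combo))
    return [(obj, combo) for obj, combo in mixes
            if not any(dominates(other, obj) for other, _ in mixes)]

for (cost, mass), combo in pareto_mixes():
    print(f"cost={cost:.1f}, mass={mass:.1f}, mix={combo}")
```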
Abstract:
In machine learning and pattern recognition tasks, the use of feature discretization techniques may have several advantages. The discretized features may hold enough information for the learning task at hand, while ignoring minor fluctuations that are irrelevant or harmful for that task. The discretized features have more compact representations that may yield both better accuracy and lower training time, as compared to the use of the original features. However, in many cases, mainly with medium and high-dimensional data, the large number of features usually implies that there is some redundancy among them. Thus, we may further apply feature selection (FS) techniques on the discrete data, keeping the most relevant features, while discarding the irrelevant and redundant ones. In this paper, we propose relevance and redundancy criteria for supervised feature selection techniques on discrete data. These criteria are applied to the bin-class histograms of the discrete features. The experimental results, on public benchmark data, show that the proposed criteria can achieve better accuracy than widely used relevance and redundancy criteria, such as mutual information and the Fisher ratio.
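To fix ideas, a sketch of a relevance criterion read off the bin-class histogram of one discrete feature; mutual information is shown here as the familiar baseline the paper compares against, not the proposed criterion itself.

```python
# Relevance of a discrete feature computed from its bin-class histogram.
import numpy as np

def bin_class_histogram(xd: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Counts[b, c] = number of instances with feature bin b and class c."""
    B, C = xd.max() + 1, y.max() + 1
    H = np.zeros((B, C))
    np.add.at(H, (xd, y), 1)
    return H

def mutual_information(H: np.ndarray) -> float:
    """MI(feature; class), read directly off the joint histogram."""
    P = H / H.sum()
    px = P.sum(axis=1, keepdims=True)   # marginal over bins
    py = P.sum(axis=0, keepdims=True)   # marginal over classes
    nz = P > 0
    return float((P[nz] * np.log(P[nz] / (px @ py)[nz])).sum())
```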