203 results for normalisation
Abstract:
Individuals with Williams syndrome (WS) demonstrate impaired visuo-spatial abilities in comparison to their level of verbal ability. In particular, visuo-spatial construction is an area of relative weakness. It has been hypothesised that poor or atypical location coding abilities contribute strongly to the impaired abilities observed on construction and drawing tasks [Farran, E. K., & Jarrold, C. (2005). Evidence for unusual spatial location coding in Williams syndrome: An explanation for the local bias in visuo-spatial construction tasks? Brain and Cognition, 59, 159-172; Hoffman, J. E., Landau, B., & Pagani, B. (2003). Spatial breakdown in spatial construction: Evidence from eye fixations in children with Williams syndrome. Cognitive Psychology, 46, 260-301]. The current experiment investigated location memory in WS. Specifically, the precision of remembered locations was measured, as well as the biases and strategies involved in remembering those locations. A developmental trajectory approach was employed; WS performance was assessed relative to the performance of typically developing (TD) children ranging from 4 to 8 years of age. Results showed differential strategy use in the WS and TD groups. WS performance was most similar to the level of a TD 4-year-old and was further impaired by the addition of physical category boundaries. Despite their low level of ability, the WS group produced a pattern of biases in performance that pointed towards a subdivision effect, as observed in older TD children and adults. In contrast, the TD children showed a different pattern of biases, which appears to be explained by a normalisation strategy. In summary, individuals with WS do not process locations in a typical manner. This may have a negative impact on their visuo-spatial construction and drawing abilities. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Background: Microarray-based comparative genomic hybridisation (CGH) experiments have been used to study numerous biological problems, including understanding genome plasticity in pathogenic bacteria. Typically, such experiments produce large data sets that are difficult for biologists to handle. Although some programmes are available for interpreting bacterial transcriptomics data and CGH microarray data for assessing genetic stability in oncogenes, there are none designed specifically for understanding the mosaic nature of bacterial genomes. Consequently, a bottleneck persists in the accurate processing and mathematical analysis of these data. To address this shortfall we have produced a simple and robust CGH microarray data analysis process, which may be automated in the future, to understand bacterial genomic diversity. Results: The process involves five steps: cleaning, normalisation, estimating gene presence and absence or divergence, validation, and analysis of data from a test strain against three reference strains simultaneously. Each stage of the process is described. We compared a number of methods available for characterising bacterial genomic diversity and for calculating the cut-off between gene presence and absence or divergence, and show that a simple dynamic approach using a kernel density estimator performed better than both established methods and a more sophisticated mixture modelling technique. We also show that methods commonly used for CGH microarray analysis in tumour and cancer cell lines are not appropriate for analysing our data. Conclusion: After carrying out the analysis and validation for three sequenced Escherichia coli strains, CGH microarray data from 19 E. coli O157 pathogenic test strains were used to demonstrate the benefits of applying this simple and robust process to CGH microarray studies using bacterial genomes.
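As a rough illustration of the cut-off step described above (not code from the paper), the following Python sketch estimates a presence/absence threshold from test-versus-reference log-ratios with a kernel density estimator; the function name, the choice of the deepest interior antimode, and the median fall-back are assumptions made for this sketch.

```python
import numpy as np
from scipy.stats import gaussian_kde

def presence_cutoff(log_ratios, grid_size=512):
    """Estimate a presence/absence cut-off from test-vs-reference log-ratios.

    Fits a kernel density estimate to the log-ratio distribution and takes the
    deepest interior local minimum (the antimode between the 'present' and
    'absent/divergent' modes) as a dynamic cut-off. This is a sketch of the
    general idea, not the paper's implementation.
    """
    log_ratios = np.asarray(log_ratios, dtype=float)
    kde = gaussian_kde(log_ratios)
    grid = np.linspace(log_ratios.min(), log_ratios.max(), grid_size)
    density = kde(grid)
    # Interior local minima of the estimated density curve.
    minima = [i for i in range(1, grid_size - 1)
              if density[i] < density[i - 1] and density[i] < density[i + 1]]
    if not minima:                       # unimodal density: no clear separation
        return float(np.median(log_ratios))
    return float(grid[min(minima, key=lambda i: density[i])])

# Genes whose log-ratio falls below the cut-off would be called absent/divergent:
# calls = log_ratios < presence_cutoff(log_ratios)
```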
Abstract:
Recent evidence suggests that an area in the dorsal medial prefrontal cortex (the dorsal nexus) shows dramatic increases in connectivity across a network of brain regions in depressed patients during the resting state [1]; this increase in connectivity has been suggested to represent hotwiring of areas involved in disparate cognitive and emotional functions [1-3]. Sheline et al. [1] concluded that antidepressant action may involve normalisation of the elevated resting-state functional connectivity seen in depressed patients. However, the effects of conventional pharmacotherapy for depression on this resting-state functional connectivity are not known, and the effects of antidepressant treatment in depressed patients may be confounded by change in symptoms following treatment.
Abstract:
Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, design experiments and analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log(2) units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
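The abstract's variability estimates and expression threshold could be approximated along the lines of the toy Python sketch below; this is not the paper's R function, and the function name, the pooled-SD estimate and the background-plus-k-SD rule are assumptions made purely for illustration.

```python
import numpy as np

def expression_summary(log2_signals, background_level, k=2.0):
    """Summarise replicate arrays and make simple expressed/not-expressed calls.

    log2_signals: genes x replicate-arrays matrix of normalised log2 intensities.
    background_level: assumed log2 signal level of non-expressed probes.
    Returns the per-gene mean signal, a pooled inter-array SD, and a boolean
    call for genes whose mean exceeds background by k standard deviations.
    """
    x = np.asarray(log2_signals, dtype=float)
    per_gene_mean = x.mean(axis=1)
    # Inter-array variability: SD across replicate arrays, pooled over genes.
    inter_array_sd = float(x.std(axis=1, ddof=1).mean())
    expressed = per_gene_mean > background_level + k * inter_array_sd
    return per_gene_mean, inter_array_sd, expressed
```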
Abstract:
There is an ongoing debate on the environmental effects of genetically modified crops, to which this paper aims to contribute. First, data on the environmental impacts of genetically modified (GM) and conventional crops are collected from peer-reviewed journals; second, an analysis is conducted to examine which crop type is less harmful to the environment. Published environmental impacts are measured using an array of indicators, and analysing them requires normalisation and aggregation. Drawing on the composite indicators literature, this paper builds composite indicators to measure the impact of GM and conventional crops along three dimensions: (1) non-target key species richness, (2) pesticide use, and (3) aggregated environmental impact. Comparing the three composite indicators for both crop types allows us to establish not only a ranking showing which crop type is more favourable for the environment, but also the probability that one crop type outperforms the other from an environmental perspective. Results show that GM crops tend to cause lower environmental impacts than conventional crops for the analysed indicators.
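The abstract does not specify which normalisation or aggregation scheme is used; as a hedged illustration of how a composite indicator can be built, the sketch below applies min-max scaling and equal-weight linear aggregation, both of which are assumptions for this example rather than the paper's method.

```python
import numpy as np

def composite_indicator(impacts, weights=None):
    """Min-max normalise each impact indicator and aggregate into one score.

    impacts: crops x indicators matrix of raw environmental impact values
             (higher = more harmful).
    weights: optional per-indicator weights; equal weighting by default.
    Returns one composite impact score per crop on a 0-1 scale.
    """
    x = np.asarray(impacts, dtype=float)
    mins, maxs = x.min(axis=0), x.max(axis=0)
    # Scale each indicator to [0, 1]; guard against constant columns.
    normalised = (x - mins) / np.where(maxs > mins, maxs - mins, 1.0)
    if weights is None:
        weights = np.full(x.shape[1], 1.0 / x.shape[1])
    return normalised @ weights   # linear aggregation across indicators
```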
Abstract:
Anti-spoofing is attracting growing interest in biometrics, given the variety of fake materials and new means of attacking biometric recognition systems. New, unseen materials continually challenge state-of-the-art spoofing detectors, calling for additional systematic approaches to anti-spoofing. Incorporating liveness scores into the biometric fusion process can enhance recognition accuracy, but traditional sum-rule-based fusion algorithms are known to be highly sensitive to single spoofed instances. This paper investigates 1-median filtering as a spoofing-resistant generalised alternative to the sum rule, targeting the problem of partial multibiometric spoofing in which m out of n biometric sources to be combined are attacked. Extending previous work, this paper investigates the dynamic detection and rejection of liveness-recognition pair outliers for spoofed samples in a true multi-modal configuration, with its inherent challenge of normalisation. As a further contribution, bootstrap aggregating (bagging) of classifiers for fingerprint spoof detection is presented. Experiments on the latest face video databases (the Idiap Replay-Attack Database and the CASIA Face Anti-Spoofing Database) and a fingerprint spoofing database (Fingerprint Liveness Detection Competition 2013) illustrate the effectiveness of the proposed techniques.
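To make the sum-rule sensitivity concrete: in the scalar case the 1-median reduces to the ordinary median, which minimises the sum of absolute deviations and therefore tolerates a single spoofed outlier. The Python sketch below is a toy comparison with invented score values; the paper's actual formulation operates on liveness-recognition pairs and is not reproduced here.

```python
import numpy as np

def sum_rule_fusion(scores):
    """Sum-rule fusion: mean of the normalised per-source match scores."""
    return float(np.mean(scores))

def median_fusion(scores):
    """Scalar 1-median of the scores, i.e. the median, which minimises the
    sum of absolute deviations and so resists a single spoofed outlier."""
    return float(np.median(scores))

# Three normalised match scores; the third source is inflated by a spoof attack.
genuine = [0.62, 0.58, 0.65]
spoofed = [0.62, 0.58, 0.99]
print(sum_rule_fusion(spoofed) - sum_rule_fusion(genuine))  # shifted by the spoof
print(median_fusion(spoofed) - median_fusion(genuine))      # unchanged (0.0)
```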
Abstract:
This paper investigates the potential of fusion at the normalisation/segmentation level, prior to feature extraction. While there are several biometric fusion methods at the data/feature level, score level and rank/decision level, which combine raw biometric signals, scores, or ranks/decisions, fusion at this very low level is still in its infancy. However, the increasing demand for more relaxed and less invasive recording conditions, especially for on-the-move iris recognition, motivates further investigation of fusion at this level. This paper focuses on multi-segmentation fusion for iris biometric systems, investigating the benefit of combining the segmentation results of multiple normalisation algorithms, using four methods from two public iris toolkits (USIT, OSIRIS) on the public CASIA and IITD iris datasets. Evaluations based on recognition accuracy and ground-truth segmentation data indicate high sensitivity with regard to the type of errors made by the segmentation algorithms.
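The abstract does not state how the segmentation results are combined; one simple possibility, shown below purely as a hedged sketch, is a pixel-wise majority vote over the binary masks produced by the different segmentation algorithms, with the fused mask then passed to a single normalisation (unwrapping) step.

```python
import numpy as np

def fuse_segmentation_masks(masks):
    """Pixel-wise majority vote over binary iris segmentation masks.

    masks: list of equally sized boolean arrays (True = iris pixel), one per
    segmentation algorithm. Returns a fused mask; this is an illustrative
    combination rule, not the fusion operator used in the paper.
    """
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    votes = stack.sum(axis=0)            # number of algorithms voting 'iris'
    return votes * 2 > len(masks)        # keep pixels with a strict majority
```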
Abstract:
This article deals with Faïza Guène's first two novels, Kiffe kiffe demain (2004) and Du rêve pour les oufs (2006), as well as their Swedish translations, Kiffe kiffe imorgon (2006) and Drömmar för dårar (2008). We first present a number of words and expressions in the original texts that carry Maghrebi culture, in order to see how these terms are translated into Swedish. We then discuss some words that carry French culture. The study also examines orality and the slang register, which are characteristic features of Faïza Guène's prose. While orality is, on the whole, transferred well into Swedish, it proved impossible to render it by the same means in the target language; consequently, compensation is frequently used in the translated texts. One conclusion of our study is that the slang and "beur" dimension of the works is somewhat less developed in the translations than in the original texts. For this reason, one can speak of a process of normalisation: the target text sometimes tends to become less distinctive, or, if you will, more "normal" and more "neutral" than the source text.
Abstract:
The article deals with Faïza Guène's first two novels, Kiffe kiffe demain (2004) and Du rêve pour les oufs (2006), as well as their Swedish translations, Kiffe kiffe imorgon (2006) and Drömmar för dårar (2008). We focus on the words and expressions that carry Maghrebi culture, in order to see how these terms are translated into Swedish. We also study orality and the slang register, which are characteristic features of Guène's prose. We find that certain terms of Arabic origin in the source text are translated by words with a different etymology, which makes the Maghrebi presence somewhat less visible in the target text. We also find that the orality of the source text is transferred to the target text, but by other means: a compensation strategy is often used. The slang register appears somewhat more salient in the French novels than in the translated versions. The most striking example is the characters' speech in Du rêve pour les oufs, which has to be rendered in "standard French" through footnotes to ensure comprehension by the implied reader of the original text, a phenomenon that has no equivalent in the Swedish translation. This normalisation process makes the target text more neutral and, perhaps, somewhat less distinctive than the original.
Abstract:
This study problematises the discourses on gender and sexuality produced in a curricular component taught at a school run under an agreement between the Catholic Church and the State. The question driving the research is: which discourses on gender and sexuality are produced in the curricular component Aspectos da Vida Cidadã (AVC) in primary education at a school run under an agreement between the Diocese of Abaetetuba and the Pará State Department of Education (SEDUC)? This central question raises further questions: under what conditions did the AVC curricular component emerge as a site for addressing issues of gender and sexuality? What forms of knowledge anchor and support the discourse on gender and sexuality in this curricular component? Which power games between different discursive fields produce this discourse? Which knowledge-power relations produce and set in motion the discussions of gender and sexuality as a concern of this curricular component? The research uses Foucauldian theoretical-methodological operators, which offer analytical tools for problematising gender and sexuality as historical constructions (SCOTT, 1995; LOURO, 1997; BUTLER, 2003; ALTMANN, 2005) and, above all, for analysing the discourses on gender and sexuality in their enunciative materiality (FOUCAULT, 2002, 2005, 2006). The analysis was based on statements extracted from institutional documents, which made it possible to trace the discourses and to interrogate the knowledge-power relations and the practices of government of subjects with regard to gender and sexuality. In the school, which operates under an agreement between the Pará State Department of Education (SEDUC) and the Diocese of Abaetetuba, two orientations coexist, one secular and one religious, which clash in the composition of forces that constitute the discourses on gender and sexuality. The analysis indicated that these discourses are constructed from different discursive formations, among them critical pedagogy and Catholic doctrine, and that they become normalisation devices set in motion through knowledge-power relations that bear mainly on the individual and collective bodies of the students at the school. Finally, by scrutinising the statements that echo conceptions of gender and sexuality marked by the singularity of a school with both secular and religious orientations, the study investigated the production of subjects based on the principles combined in the motto adopted by the school, the articulation between "faith and science", materialised in exhortations, prescriptions, counselling and proposals aimed at forming Christian subjects and citizens capable of exercising control over their bodies and their sexuality.
Abstract:
The work described in this report was part of a major project to implement TPM at the Biotechnology plant in Huningue; this implementation will serve as a pilot for the company's other sites. The specific goal of my assignment at Novartis was to identify the main activities of the maintenance teams, to study how they worked and how their work was recorded in their work orders (OT), and then to propose a standard workflow and a standard way of filling in the OTs, so as to organise the way of working, make it possible to plan maintenance activities, and build a sound database from which to derive indicators.
Abstract:
This paper observes the particular case of an author's and self-translator's style with respect to the normalisation features present in the self-translation. Our theoretical starting point is Baker's proposal (1993, 1995, 1996, 2000) and Scott's investigation, used here to analyse linguistic choices that show evidence of normalisation. The results point out that, when acting as a self-translator, Ubaldo Ribeiro reveals individual, distinctive and preferred stylistic options that present less lexical variation; in contrast, when acting as an author, Ubaldo Ribeiro shows stylistic choices of higher lexical diversity. The observed normalisation features reveal a conscious or subconscious use of fluency strategies, making the target text easier to read. Given his renowned command of the target language, the results may also suggest that the challenges he faced as a self-translator while re-creating the translated text could have been greater than those he faced as an author while creating the original text.