933 results for "Estatística lexical"
Abstract:
A number of functional neuroimaging studies of skilled readers have consistently shown activation to visual words in the left mid-fusiform cortex in the occipitotemporal sulcus (LMFC-OTS). Neuropsychological studies have likewise shown that lesions of left ventral occipitotemporal areas impair visual word processing. Based on these empirical observations and some theoretical speculation, researchers have postulated that the LMFC-OTS is responsible for instant, parallel, and holistic extraction of the abstract representation of letter strings, and have labeled this piece of cortex the "visual word form area" (VWFA). Nonetheless, functional neuroimaging alone is a correlative rather than causal approach, and the lesions in previous studies were typically not confined to the LMFC-OTS but involved other brain regions as well. Given these limitations, three fundamental questions remain unanswered: Is the LMFC-OTS necessary for visual word processing? Is it selective for visual word processing, i.e., unnecessary for processing stimuli other than visual words? And what are its functional properties in visual word processing? This thesis aimed to address these questions through a series of neuropsychological, anatomical, and functional MRI experiments in four patients with different degrees of impairment of the left fusiform gyrus. Necessity: Detailed analysis of anatomical brain images revealed that the four patients had different foci of infarction. Specifically, the LMFC-OTS was damaged in one patient, while it remained intact in the other three. Neuropsychological experiments showed that the patient with lesions in the LMFC-OTS had severe impairments in reading aloud and recognizing Chinese characters, i.e., pure alexia.
The patient with an intact LMFC-OTS in whom information from the left visual field (LVF) was blocked, owing to lesions of the splenium of the corpus callosum, showed impaired Chinese character recognition when stimuli were presented in the LVF but not in the right visual field (RVF), i.e., left hemialexia. In contrast, the other two patients with an intact LMFC-OTS processed Chinese characters normally. The fMRI experiments showed no significant activation to Chinese characters in the LMFC-OTS of the pure alexic patient, nor in that of the patient with left hemialexia when stimuli were presented in the LVF. By contrast, the hemialexic patient (when characters were presented in the RVF) and the other two patients with an intact LMFC-OTS did show activation in the LMFC-OTS. Together, these results point to the necessity of the LMFC-OTS for Chinese character processing. Selectivity: We tested the selectivity of the LMFC-OTS for visual word processing by systematically examining the patients' ability to process visual vs. auditory words, and word vs. non-word visual stimuli such as faces, objects, and colors. The pure alexic patient could process auditory words normally (expression, understanding, and repetition of orally presented words), as well as non-word visual stimuli (faces, objects, colors, and numbers). Although he showed some impairment in naming faces, objects, and colors, his scores were only slightly lower than, or not significantly different from, those of the patients with an intact LMFC-OTS. These data provide compelling evidence that the LMFC-OTS is not required for processing non-word stimuli and is thus selective for visual word processing.
Functional properties: Using tasks spanning multiple levels and aspects of word processing, including Chinese character reading, phonological judgment, semantic judgment, identity judgment of abstract visual word representations, lexical decision, perceptual judgment of visual word appearance, dictation, copying, and voluntary writing, we attempted to identify the most critical dysfunction caused by damage to the LMFC-OTS, and thus to clarify this region's essential function. In addition to dysfunctions in Chinese character reading and in phonological and semantic judgment, the patient with LMFC-OTS lesions failed to judge correctly whether two characters (both compound and simple) with different surface features (e.g., different fonts; printed vs. handwritten vs. calligraphic styles; simplified vs. traditional forms; different orientations of strokes or whole characters) shared the same abstract representation. The patient initially showed severe impairment in processing both simple and compound characters. He could copy a compound character only stroke by stroke, not radical by radical, let alone character by character. During recovery, namely five months later, he could complete the abstract-representation tasks for simple characters but showed no improvement for compound characters; by then, however, he could copy compound characters radical by radical. Furthermore, the recovery of copying appeared to parallel that of abstract-representation judgment.
These observations indicate that the LMFC-OTS lesions in the pure alexic patient severely damaged the ability to extract abstract representations from lower-level through higher-level units; the patient had particular difficulty extracting the abstract representation of a whole character from its secondary units (e.g., radicals or single characters), and this ability was resistant to recovery. The LMFC-OTS therefore appears to be responsible for multilevel (particularly higher-level) abstract representations of visual word form. Successful extraction seems independent of access to phonological and semantic information, given that the alexic patient showed severe impairments in reading aloud and in semantic processing of simple characters while his judgment of their abstract representations remained intact. However, it is also possible that the interaction between the abstract representation and its related information (e.g., phonological and semantic information) was damaged as well in this patient. Taken together, we conclude that: 1) the LMFC-OTS is necessary for Chinese character processing; 2) it is selective for Chinese character processing; and 3) its critical function is to extract multiple levels of abstract representation of visual words and possibly to transmit them to phonological and semantic systems.
Abstract:
Nowadays many companies pursue branding strategies, because a strong brand provides confidence and reduces risk for consumers. Whether a brand is built on tangible products or on services, it possesses the common attributes of its category as well as its own unique attributes. A brand attribute is defined as a descriptive feature: an intrinsic characteristic, value, or benefit ascribed by users of the product or service (Keller, 1993; Romaniuk, 2003). Multi-attribute brand models are among the most studied areas of consumer psychology (Werbel, 1978), and attribute weight is one of their key concerns. Marketing practitioners also pay close attention to attribute evaluations, because such evaluations bear on a company's competitiveness and on its strategies for promotion and new product development (Green & Krieger, 1995). How, then, do brand attributes correlate with weight judgments? What characterizes the attribute-judgment response? In particular, what characterizes the attribute-weight judgment process of a consumer facing homogeneous brands? Inspired by the lexical hypothesis from research on personality traits in psychology, this study chose search-engine brands as its subject and adopted reaction time, a measure many researchers have introduced into multi-attribute decision making. Research on the independence of affect and cognition, and on the primacy of affect, suggests that brand attributes can be categorized as informative or affective. Meanwhile, Park went further, differentiating representative and experiential attributes from functional ones. This classification reflects the trends of emotional branding and of the brand-consumer relationship. The research comprises three parts: a survey to collect attribute words, experiment one on affective primacy, and experiment two on the correlation between weight judgment and reaction.
The results are as follows. In experiment one, we found: (1) ratings of affective words did not differ significantly from those of cognitive attribute words, but affective words were responded to faster; (2) subjects comprehended and responded differently to functional attribute words than to representative and experiential ones. In experiment two, we found: (1) a significant negative correlation between attribute weight judgment and reaction time; (2) affective attributes elicited faster reactions than cognitive ones; (3) the reaction-time difference between functional and representative or experiential attributes was significant, but there was no difference between representative and experiential attributes. In sum, we conclude that: (1) in word comprehension and weight judgment we observed affective primacy, even when the affective stimulus was presented as a meaningful word; (2) the negative correlation between weight judgment and reaction time suggests that the more important the attribute, the faster the reaction; (3) the reaction-time differences among functional, representative, and experiential attributes reflect the trend of emotional branding.
Abstract:
Research on the form processing of Chinese characters (CC) has mainly concerned the effects of various form properties, and in most cases was conducted after lexical processing was complete. The few studies addressing the early phases of visual perception focused on feature extraction in character recognition. Until now, no one has proposed studying form processing in the early phases of visual perception of CC. We hold that because form processing occurs in these early phases, it should be studied prelexically. Moreover, visual perception of a CC is a course over which the character becomes clear gradually, so the effects of form properties should not be absolute, all-or-none phenomena. In this study we adopted four methods to investigate early-phase form processing systematically through simulation: tachistoscopic repetition, gradually increasing presentation time, gradually enlarging the visual angle, and non-tachistoscopic searching and naming. Under various poor or degraded viewing conditions, the instantaneous course of early-phase processing was slowed and prolonged, laying its time course open to observation. We captured the characteristics of early-phase form processing by analyzing reaction speed and recognition accuracy. As visual angle and presentation time increased, clarity improved, allowing us to relate the effects of form properties to the improvement in visual clarity. The results were as follows: ① in the early phases of visual perception of CC, the various form properties all had effects; ② the magnitude of these effects decreased as viewing conditions improved. We propose the concept of a character's spatial transparency, together with an algorithm for it, to explain these effects of form properties.
Furthermore, a model is discussed to help explain why the magnitude of the effects changed as viewing conditions improved. ③ The early phases of visual perception of CC are not the locus of the frequency effect.
Abstract:
Currently, in Brazil, the most widely used packaging for tomatoes is still the wooden crate that was used to transport kerosene during the Second World War, half a century ago, known as the 'K' crate. The desirable aspects of the 'K' crate include the fact that it is returnable and sturdy. The undesirable aspects include its rough surface; its harboring of pathogens, acting as a source of inoculum; sharp-edged side openings; excessive depth, which allows a large number of layers of produce; and the fact that it is lidded. These characteristics favor mechanical injury and compromise the shelf life and quality of vegetables. Since plant products differ in their protection needs, packaging must be specific to the product it protects. Thus, the objective of this work is to develop packaging appropriate for tomatoes. The prototype was tested against the 'K' crate and a plastic crate already on the market. Immediately after harvest, the same treatments were left in the sun or in the shade for two hours, to observe whether this would affect the fruit. The characteristics evaluated were: variation in fresh weight, measured on a scale; shelf life, as the period during which the produce remained in perfect condition for sale; color, using a four-class scale for bell peppers; variation in firmness, measured with a push-pull gauge; relative water content; and deterioration, as the number and weight of deteriorated fruit. Given the great influence of mechanical damage on post-harvest losses, this is probably the most important factor in evaluating the prototype. There were statistically significant differences among the treatments, with the prototype showing the lowest percentages of mechanical damage, which is desirable. There was also a statistically significant difference in deterioration. For the remaining characteristics, the prototype did not differ statistically from the other treatments.
Abstract:
Accurate and up-to-date information on Brazilian agricultural production is important and strategic, from the standpoint of both economics and food security. The increasing availability of remote-sensing imagery, together with other recent technological advances such as geographic information systems (GIS) and global positioning system (GPS) devices, can facilitate estimates of planted area, a fundamental component of crop forecasting, and provide access to information essential for environmental applications: the spatial location of crops. Although official crop forecasting in Brazil is still carried out subjectively, the Companhia Nacional de Abastecimento (CONAB) is working to improve the forecasting system methodologically, an effort that includes the GeoSafras project. The present research, which aims to contribute to that effort, is based on an approach combining statistical sampling, satellite imagery, GIS, and GPS, developed in preliminary form by Embrapa Meio Ambiente to estimate, objectively and probabilistically, the area planted with a given crop in a municipality.
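As a rough, hypothetical illustration of the sampling idea (not the Embrapa/GeoSafras implementation), the core of a design-based planted-area estimate can be sketched as a simple expansion estimator over randomly sampled map segments; the segment map and the `crop_fraction` classifier below are invented stand-ins for what would come from GIS data and satellite-image classification.

```python
import random

def estimate_planted_area(segment_areas, sample_size, crop_fraction, seed=0):
    """Design-based expansion estimate of planted area under simple random
    sampling of map segments. `segment_areas` maps segment id -> area (km2);
    `crop_fraction(seg_id)` is a hypothetical stand-in for the crop share
    classified from a satellite image of that segment."""
    rng = random.Random(seed)
    ids = sorted(segment_areas)
    sample = rng.sample(ids, sample_size)
    expansion = len(ids) / sample_size        # inverse sampling fraction
    return expansion * sum(segment_areas[s] * crop_fraction(s) for s in sample)

# Toy check: 100 segments of 10 km2 each, all 20% planted -> 200 km2 total.
segments = {i: 10.0 for i in range(100)}
print(estimate_planted_area(segments, 20, lambda s: 0.2))   # 200.0
```

With a constant crop fraction the estimate is exact; in practice the classifier varies by segment and the sampling design controls the estimator's variance.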
Abstract:
This work analyzes the spatial distribution of coffee cultivation in the State of São Paulo. The objective is to support a plan for stratifying the producing municipalities for crop-forecasting purposes. The spatial analysis methods used in this study are based on Moran statistics: the global index of spatial association, the local index of spatial association, and the Moran scatter plot. These methods quantify the spatial dependence of geographic phenomena and determine the statistical significance of spatial clusters. Data from the 2004 Municipal Agricultural Production survey (PAM), produced by the Instituto Brasileiro de Geografia e Estatística (IBGE), were used. The TerraView program was used to compute the indices and to visualize the results. The results indicate that the geographic location of the municipalities matters: coffee production exhibited spatial dependence, and clusters of municipalities with significant spatial-association indices were identified. In the future, it will be possible to compare the spatial distribution of coffee with that of other crops in São Paulo, or with that of coffee in other states, which will assist in planning the stratification, the initial stage of planted-area estimation.
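The global Moran index underlying these methods can be computed directly from a vector of values and a neighborhood weight matrix. The sketch below is a minimal numpy version (not TerraView's implementation); the four toy "areas" and their binary contiguity weights are invented for illustration.

```python
import numpy as np

def morans_i(values, w):
    """Global Moran's I: (n / S0) * (sum_ij w_ij z_i z_j) / (sum_i z_i^2),
    where z are deviations from the mean and S0 is the sum of all weights.
    Positive I indicates that neighbors tend to have similar values."""
    z = np.asarray(values, dtype=float)
    z = z - z.mean()
    w = np.asarray(w, dtype=float)
    n = len(z)
    return (n / w.sum()) * (w * np.outer(z, z)).sum() / (z ** 2).sum()

# Toy (invented) example: four areas on a line; neighboring areas have
# similar values, so I comes out positive.
vals = [10.0, 12.0, 3.0, 2.0]
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
print(round(morans_i(vals, w), 3))   # 0.271
```

Significance testing, as in the study, would then compare the observed I against a permutation distribution of the values over the areas.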
Abstract:
Type-omega DPLs (Denotational Proof Languages) are languages for proof presentation and search that offer strong soundness guarantees. LCF-type systems such as HOL offer similar guarantees, but their soundness relies heavily on static type systems. By contrast, DPLs ensure soundness dynamically, through their evaluation semantics; no type system is necessary. This is possible owing to a novel two-tier syntax that separates deductions from computations, and to the abstraction of assumption bases, which is factored into the semantics of the language and allows for sound evaluation. Every type-omega DPL properly contains a type-alpha DPL, which can be used to present proofs in a lucid and detailed form, exclusively in terms of primitive inference rules. Derived inference rules are expressed as user-defined methods, which are "proof recipes" that take arguments and dynamically perform appropriate deductions. Methods arise naturally via parametric abstraction over type-alpha proofs. In that light, the evaluation of a method call can be viewed as a computation that carries out a type-alpha deduction. The type-alpha proof "unwound" by such a method call is called the "certificate" of the call. Certificates can be checked by exceptionally simple type-alpha interpreters, and thus they are useful whenever we wish to minimize our trusted base. Methods are statically closed over lexical environments, but dynamically scoped over assumption bases. They can take other methods as arguments, they can iterate, and they can branch conditionally. These capabilities, in tandem with the bifurcated syntax of type-omega DPLs and their dynamic assumption-base semantics, allow the user to define methods in a style that is disciplined enough to ensure soundness yet fluid enough to permit succinct and perspicuous expression of arbitrarily sophisticated derived inference rules. 
We demonstrate every major feature of type-omega DPLs by defining and studying NDL-omega, a higher-order, lexically scoped, call-by-value type-omega DPL for classical zero-order natural deduction---a simple choice that allows us to focus on type-omega syntax and semantics rather than on the subtleties of the underlying logic. We start by illustrating how type-alpha DPLs naturally lead to type-omega DPLs by way of abstraction; present the formal syntax and semantics of NDL-omega; prove several results about it, including soundness; give numerous examples of methods; point out connections to the lambda-phi calculus, a very general framework for type-omega DPLs; introduce a notion of computational and deductive cost; define several instrumented interpreters for computing such costs and for generating certificates; explore the use of type-omega DPLs as general programming languages; show that DPLs do not have to be type-less by formulating a static Hindley-Milner polymorphic type system for NDL-omega; discuss some idiosyncrasies of type-omega DPLs such as the potential divergence of proof checking; and compare type-omega DPLs to other approaches to proof presentation and discovery. Finally, a complete implementation of NDL-omega in SML-NJ is given for users who want to run the examples and experiment with the language.
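As a loose, hypothetical illustration of the assumption-base idea (far simpler than NDL-omega itself, and not its actual syntax), one can model an assumption base as a set of established propositions that primitive rules consult at evaluation time: a rule raises an error unless its premises are present, so whatever a derived "method" returns has been derived soundly, with no type system involved.

```python
class ProofError(Exception):
    """Raised when a primitive rule's premises are not in the assumption base."""

def modus_ponens(imp, ab):
    """Primitive rule: from ('if', p, q) and p in assumption base ab, conclude q."""
    tag, p, q = imp
    if tag != "if" or imp not in ab or p not in ab:
        raise ProofError("premises missing for modus-ponens")
    return q

def assume(p, body, ab):
    """Conditional proof: evaluate `body` with p added to the assumption
    base (a hypothetical scope) and discharge it into a conditional."""
    return ("if", p, body(ab | {p}))

def hyp_syll(pq, qr, ab):
    """A derived rule (a 'method'): hypothetical syllogism, which unwinds
    at call time into primitive steps -- its 'certificate'."""
    return assume(pq[1],
                  lambda ab2: modus_ponens(qr, ab2 | {modus_ponens(pq, ab2)}),
                  ab)

ab = {("if", "A", "B"), ("if", "B", "C")}
print(hyp_syll(("if", "A", "B"), ("if", "B", "C"), ab))   # ('if', 'A', 'C')
```

The point of the sketch is the control flow, not the logic: conclusions are only ever produced by rules that dynamically checked their premises, which is the sense in which evaluation itself guarantees soundness.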
Abstract:
The main challenges in classifying enzymes in protein-structure databases are: 1) noise in the data; 2) the large number of variables; and 3) the unbalanced number of members per class. To address these challenges, we present a feature-selection methodology that combines mathematical tools (e.g., the Discrete Cosine Transform) and statistical tools (e.g., variable correlation and sampling with replacement). The methodology was validated against the three main classification methods in the literature, namely decision trees, Bayesian classification, and neural networks. The experiments show that the methodology is simple and efficient, and achieves results comparable to those obtained by the main feature-selection techniques in the literature. Index terms: enzyme classification, protein function prediction, protein structures, protein databases, feature selection, data classification methods.
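A minimal sketch of the flavor of such a pipeline, under stated assumptions: an orthonormal DCT-II compresses each feature vector, and coefficients highly correlated with ones already kept are dropped. This is not the authors' exact procedure; the thresholds and the random toy data are invented.

```python
import numpy as np

def dct2_ortho(x):
    """Orthonormal DCT-II along the last axis (numpy-only stand-in
    for a library DCT routine)."""
    n = x.shape[-1]
    k = np.arange(n)
    basis = np.cos(np.pi * (np.arange(n)[:, None] + 0.5) * k[None, :] / n)
    coef = (x @ basis) * np.sqrt(2.0 / n)
    coef[..., 0] /= np.sqrt(2.0)
    return coef

def select_features(X, n_dct=8, corr_thresh=0.95):
    """Compress each sample to its first n_dct DCT coefficients, then
    greedily drop coefficients nearly duplicated (|corr| >= threshold)
    by ones already kept."""
    Xc = dct2_ortho(np.asarray(X, dtype=float))[:, :n_dct]
    corr = np.corrcoef(Xc, rowvar=False)
    keep = []
    for j in range(Xc.shape[1]):
        if all(abs(corr[j, k]) < corr_thresh for k in keep):
            keep.append(j)
    return Xc[:, keep]

X = np.random.default_rng(0).normal(size=(20, 32))   # 20 toy feature vectors
print(select_features(X).shape)
```

The reduced matrix would then feed any of the three classifiers mentioned above; resampling with replacement could be layered on top to stabilize the selection.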
Abstract:
Requirements specification. Class modeling. Object class model. General level: general class diagram. Dynamic model. Functional model. Data flow diagram (DFD). Process specification. Review of the analyzed models. Object-oriented design. System organization. Object design. Algorithm design. Entity-relationship diagram. Graphical user interface.
Abstract:
The BLAST method for determining similarities between biological sequences. Scores and substitution matrices. Derivation of BLOSUM substitution matrices. Derivation of PAM substitution matrices. Results from the statistical theory of local sequence comparison. The algorithm used by BLAST. NCBI-BLAST. A search example.
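The substitution scores in BLOSUM-style matrices are log-odds values: the observed frequency of an aligned residue pair relative to the frequency expected by chance, scaled (half-bit units for BLOSUM) and rounded to an integer. A minimal sketch with invented toy frequencies:

```python
import math

def log_odds_score(p_ab, q_a, q_b, scale=2.0):
    """BLOSUM-style substitution score: scaled log2 of the observed
    aligned-pair frequency p_ab over the chance expectation q_a * q_b
    (scale=2 gives half-bit units), rounded to the nearest integer."""
    return round(scale * math.log2(p_ab / (q_a * q_b)))

# Toy (invented) frequencies: a pair observed 4x more often than chance
# scores +4 in half-bits; a pair at exactly chance frequency scores 0.
print(log_odds_score(0.01, 0.05, 0.05))     # 4
print(log_odds_score(0.0025, 0.05, 0.05))   # 0
```

An ungapped alignment's score is then just the sum of these pair scores, which is what BLAST's statistics (via the Karlin-Altschul theory mentioned above) assign a significance to.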
Abstract:
For many users, visual programming is an attractive alternative to textual programming languages. One reason is that a visual representation of a problem is much closer to the way its solution is obtained or understood than a textual representation. This work presents a model for visual programming with matrices based on the dataflow and spreadsheet paradigms. Dataflow and the spreadsheet form the semantic basis of the language, while the graphical representations of the directed graph and of a spreadsheet underlie its syntactic basis. The model consists of a set of two-dimensional diagrams and transformation rules. Processes are implemented as dataflow networks, and data are represented by spreadsheets. Spreadsheets can be viewed as matrix-type variables that store two-dimensional data, or as functions that receive and produce values used by other processes. In the latter case, spreadsheets are programmed following the programming-by-demonstration paradigm, which incorporates a powerful iteration construct, significantly reducing the use of resources and repetition. The proposed model can be used in several application domains, chiefly to simplify the construction of mathematical models for simulation and statistical analysis.
Abstract:
The objective of this work is to compare the univariate interpolator used by the Agritempo system, the inverse of the squared distance, with three univariate and multivariate geostatistical interpolators (ordinary kriging, ordinary cokriging, and collocated cokriging) using the mean-squared-error statistic, based on mean annual rainfall observations from 1,027 rain-gauge stations covering the entire State of São Paulo, an area of approximately 248,808.8 km², over the period 1957 to 1997.
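The benchmark's univariate interpolator and error statistic are both simple enough to sketch. Below is a generic inverse-distance-weighting implementation (power 2 gives the inverse-squared-distance variant) plus the mean squared error; this is an illustration, not the Agritempo code, and the station coordinates and rainfall values are invented.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance weighting: estimate z at each query point as a
    weighted mean of the observations, with weights 1 / distance**power
    (power=2 is the inverse-squared-distance interpolator)."""
    xy_known = np.asarray(xy_known, float)
    z_known = np.asarray(z_known, float)
    out = []
    for q in np.asarray(xy_query, float):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):                 # query coincides with a station
            out.append(z_known[d == 0][0])
        else:
            w = 1.0 / d ** power
            out.append(np.sum(w * z_known) / np.sum(w))
    return np.array(out)

def mse(obs, est):
    """Mean squared error, the comparison statistic used in the study."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return float(np.mean((obs - est) ** 2))

# Toy check: four stations at the corners of a unit square; the centre is
# equidistant from all of them, so IDW reduces to the plain mean.
xy = [(0, 0), (0, 1), (1, 0), (1, 1)]
z = [1000.0, 1200.0, 1400.0, 1600.0]
print(idw(xy, z, [(0.5, 0.5)])[0])   # 1300.0
```

In a cross-validation comparison of the kind described, each interpolator would predict held-out stations and the one with the lowest MSE would be preferred.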
Abstract:
This application of the linear discriminant function to the classification of factors that determine rural exodus was based on data collected from small farms in the region of Ouricuri, in the high sertão of Pernambuco. From the complex of problems conditioning rural exodus, a set of variables that might explain the phenomenon was chosen. These variables were studied in two populations (with and without rural exodus) to verify to what degree they explain the phenomenon. The objective of the work was to test technical instruments and statistical methods on socio-economic problems that could later be used by development agencies.
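A two-population linear discriminant function of the kind described can be sketched with Fisher's classic formulation (a generic textbook version, not the authors' exact analysis; the two small toy groups below are invented):

```python
import numpy as np

def fisher_discriminant(X1, X2):
    """Fisher's linear discriminant for two groups: direction
    w = Sw^{-1} (m1 - m2) from the pooled within-group scatter Sw, plus
    the midpoint threshold; classify x to group 1 when w @ x > threshold."""
    X1, X2 = np.asarray(X1, float), np.asarray(X2, float)
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = (np.cov(X1, rowvar=False) * (len(X1) - 1)
          + np.cov(X2, rowvar=False) * (len(X2) - 1))
    w = np.linalg.solve(Sw, m1 - m2)
    threshold = w @ (m1 + m2) / 2.0
    return w, threshold

# Toy (invented) data: two variables per farm, two populations.
X1 = [[2, 3], [3, 5], [4, 4]]       # e.g., farms without exodus
X2 = [[8, 8], [9, 10], [10, 9]]     # e.g., farms with exodus
w, t = fisher_discriminant(X1, X2)
print(w @ np.array([2, 4]) > t)     # classified with group 1: True
```

The sign of each component of `w` also indicates which direction of each variable pushes a farm toward the "exodus" population, which is the interpretive use the study puts the discriminant to.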