141 results for depressogenic schemas
Abstract:
Our research aims to describe and analyze the main processes related to the activation of conceptual domains underlying the comprehension of the discourse genre of cartoons by third-grade high-school students at Professor Antonio Basílio Filho School. Theoretically, we are grounded in the assumptions of Conceptual Linguistics, which analyzes our cognitive apparatus in correlation with our sociocultural and bodily experiences. We intend to examine how meaning construction proceeds and how the various cognitive domains activated during the reading activity are integrated. For this reason, we take the concept of cognitive domains as equivalent to the structures stored in our memory from our sociocultural and bodily experiences, stabilized, respectively, as frames and schemas. The activation of these conceptual domains, as evidenced by our data, supports the assumption that prior knowledge derived from our inclusion in specific sociocultural contexts, together with the functioning of our sensory-motor system, is essential during the activity of meaning construction. With this research, we also intend to present a proposal that confronts the responses produced by the students from the activation of frames and schemas with our predictions.
Abstract:
A significant amount of information stored in different databases around the world can be shared through peer-to-peer databases. This yields a large knowledge base without the need for large investments, because existing databases and infrastructure are reused. However, the structural characteristics of peer-to-peer networks make the process of finding such information complex. Moreover, these databases are often heterogeneous in their schemas but semantically similar in their content. A good peer-to-peer database system should allow the user to access information from databases scattered across the network and to receive only the information that really relates to their topic of interest. This paper proposes using ontologies in peer-to-peer database queries to represent the semantics inherent in the data. The main contributions of this work are enabling integration between heterogeneous databases, improving the performance of such queries, and using the Ant Colony optimization algorithm to solve the problem of locating information on peer-to-peer networks, which shows an 18% improvement in results. © 2011 IEEE.
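The Ant Colony idea behind locating information can be pictured as reinforcing query routes that returned relevant results, so later queries prefer them. A minimal sketch in Python, assuming a toy peer graph and a simple evaporation/deposit rule (the paper's actual algorithm, parameters, and relevance model are not reproduced here):

```python
import random

class AntRouting:
    """Minimal sketch of ant-colony query routing on a peer network.

    The peer graph and the evaporation/deposit constants are hypothetical
    illustrations, not the published algorithm's settings.
    """

    def __init__(self, links, evaporation=0.1, deposit=1.0):
        self.links = links  # peer -> list of neighbouring peers
        self.pheromone = {(a, b): 1.0 for a in links for b in links[a]}
        self.evaporation = evaporation
        self.deposit = deposit

    def next_hop(self, peer):
        # Choose a neighbour with probability proportional to pheromone.
        neighbours = self.links[peer]
        weights = [self.pheromone[(peer, n)] for n in neighbours]
        return random.choices(neighbours, weights=weights, k=1)[0]

    def reinforce(self, path, relevant):
        # Evaporate everywhere, then deposit along paths that found
        # relevant results, biasing future routing toward them.
        for edge in self.pheromone:
            self.pheromone[edge] *= (1.0 - self.evaporation)
        if relevant:
            for a, b in zip(path, path[1:]):
                self.pheromone[(a, b)] += self.deposit
```

After a few reinforced queries, `next_hop` concentrates traffic on links that historically led to semantically relevant peers, which is the intuition behind the reported speed-up.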
Abstract:
Pós-graduação em Linguística e Língua Portuguesa - FCLAR
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Pós-graduação em Educação para a Ciência - FC
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Pós-graduação em Linguística e Língua Portuguesa - FCLAR
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This study aimed to verify the effects of a metatextual intervention program on the elaboration of stories written by students with learning difficulties. The sample included four students of both genders, with ages ranging between eight years and four months and ten years and two months. The program was implemented at the participants' schools, using a multiple-baseline within-subjects design with two conditions: baseline and intervention. Data analysis was based on the classification of the stories produced by the students. The Mann-Whitney test was also applied to analyze whether there were significant changes in these productions. The results indicated that all students improved their performance with respect to the categories of the produced stories, from elementary schemas (33%) to a more elaborate schema (77%), with better structuring of the elements that constitute a story. Statistical analysis also showed that the intervention produced significant results for all variables analyzed. The data obtained showed that the program was effective.
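The Mann-Whitney test used in this kind of design compares baseline and intervention scores without assuming normality. A minimal sketch of the U statistic by pairwise comparison, with made-up score values (the study's actual data are not public):

```python
def mann_whitney_u(sample_a, sample_b):
    """Mann-Whitney U statistic for sample_a, counting ties as 0.5.

    Illustrative implementation only; the scores passed in below are
    invented, not the study's measurements.
    """
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u
```

For samples of sizes m and n, the two one-sided statistics always sum to m*n, so a small U for the baseline sample indicates that intervention scores tend to be higher.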
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Pós-graduação em Ciência da Computação - IBILCE
Abstract:
The aim of this project is to upgrade an existing database management environment to version 11.2 of the Oracle database software and to a latest-generation hardware platform. Several databases scattered across different servers are migrated with zero downtime to a consolidated environment of two nodes arranged in "active-active" high availability with Oracle RAC, backed by a fully independent contingency environment synchronized in real time with Oracle GoldenGate. The current environment is studied and, based on a growth estimate, a minimum hardware and software configuration is proposed to meet the requirements of the database management environment in the short and medium term with guarantees of success. Once the hardware is acquired, the operating system is installed, updated, and configured, along with redundant access from the servers to the storage array. Oracle's cluster software and database software are then installed, and an instance is created to host the required schemas of the databases to be consolidated. The schemas are subsequently migrated to the consolidated environment and replicated in real time to the contingency machine, using Oracle GoldenGate in both cases. Finally, a backup scheme is created and tested that includes logical and physical copies of the database itself and of the cluster configuration files, from which the entire environment can be restored.
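The schema-migration step can be sketched with Oracle Data Pump. The service names, schema, and directory object below are hypothetical placeholders, and the project's actual zero-downtime procedure (GoldenGate-based) is considerably more involved than this one-shot export/import:

```shell
# Hypothetical names (ORCL source service, RACDB consolidated service,
# HR schema, DATA_PUMP_DIR directory object); a sketch, not the project's scripts.

# On the source server: export the schema to a dump file.
expdp system@ORCL schemas=HR directory=DATA_PUMP_DIR \
      dumpfile=hr.dmp logfile=hr_exp.log

# On the consolidated RAC instance: import the same schema.
impdp system@RACDB schemas=HR directory=DATA_PUMP_DIR \
      dumpfile=hr.dmp logfile=hr_imp.log
```

In the zero-downtime scenario described, GoldenGate would capture changes on the source during the copy and apply them on the target until cutover.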
Abstract:
In recent years, nonlinear time series analysis has proven its basic usefulness in laboratory experiments. As a rule, however, these involved selected or specially constructed nonlinear systems. Apart from the monitoring of processes and products, applications to concrete, given dynamical problems in an industrial setting have hardly become known. The goal of this work was to investigate, using two problems from engineering practice, whether applying the canonical scheme of nonlinear time series analysis also yields useful results there, or whether modifications (simplifications or extensions) become necessary. Using the production of optical surfaces by high-precision turning as an example, it was shown that active disturbance compensation in real time is possible with a specially developed nonlinear prediction algorithm. Standard methods of nonlinear time series analysis take the general but very costly route of a phase-space reconstruction that is as complete as possible. The new method dispenses with many of the canonical intermediate steps. This leads to considerable savings in computation time and, in addition, to substantially higher robustness against additive measurement noise. Using the computed predictions of the unwanted machine vibrations, a disturbance compensation was implemented that improved the surface quality of the machined workpiece by 20-30%. The second example concerns the classification of structure-borne sound signals measured for the monitoring of machining processes. Like many other processes in production, these signals show highly non-stationary behavior. Here the standard methods of nonlinear data analysis, which use FT or AAFT surrogates, fail.
Therefore, a new class of surrogate data for testing the null hypothesis of non-stationary linear stochastic processes was developed, which is able to distinguish between deterministic nonlinear chaotic time series and stochastic linear non-stationary time series with change points. With this, it could be shown that the structure-borne sound signals under investigation can, with statistical significance, be assigned to a non-stationary stochastic sequence of simple linear processes, and that an interpretation as a nonlinear chaotic time series is not required.
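The "canonical scheme" the thesis starts from (delay-coordinate phase-space reconstruction followed by local prediction) can be sketched as a nearest-neighbour predictor. A toy Python version, with illustrative `dim` and `tau` defaults; this is the textbook baseline, not the thesis's simplified real-time algorithm:

```python
import math

def delay_embed(series, dim, tau):
    """Delay embedding: x_t -> (x_t, x_{t-tau}, ..., x_{t-(dim-1)*tau})."""
    start = (dim - 1) * tau
    return [tuple(series[t - k * tau] for k in range(dim))
            for t in range(start, len(series))]

def nn_predict(series, dim=3, tau=1):
    """One-step nearest-neighbour prediction in reconstructed phase space.

    A textbook sketch; dim and tau are illustrative defaults, not tuned
    values for any particular machining signal.
    """
    points = delay_embed(series, dim, tau)
    query = points[-1]
    best, best_d = None, float("inf")
    # Search past states (excluding the query itself, whose successor
    # is exactly what we want to predict).
    for i, p in enumerate(points[:-1]):
        d = math.dist(p, query)
        if d < best_d:
            best, best_d = i, d
    # Predict the value that followed the closest past state.
    return series[(dim - 1) * tau + best + 1]
```

On a periodic signal the predictor simply continues the cycle; the thesis's contribution is avoiding the full reconstruction above, which is what makes real-time compensation and noise robustness feasible.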
Abstract:
Ontology design and population, core aspects of semantic technologies, have recently become fields of great interest due to the increasing need for domain-specific knowledge bases that can boost the use of the Semantic Web. For building such knowledge resources, the state-of-the-art tools for ontology design require a lot of human work. Producing meaningful schemas and populating them with domain-specific data is in fact a very difficult and time-consuming task, even more so if the task consists in modelling knowledge at web scale. The primary aim of this work is to investigate a novel and flexible methodology for automatically learning an ontology from textual data, lightening the human workload required for conceptualizing domain-specific knowledge and populating an extracted schema with real data, and speeding up the whole ontology production process. Here computational linguistics plays a fundamental role, from automatically identifying facts in natural language and extracting frames of relations among recognized entities, to producing linked data with which to extend existing knowledge bases or create new ones. In the state of the art, automatic ontology learning systems are mainly based on plainly pipelined linguistic classifiers performing tasks such as Named Entity Recognition, entity resolution, and taxonomy and relation extraction [11]. These approaches present some weaknesses, especially in capturing the structures through which the meaning of complex concepts is expressed [24]. Humans, in fact, tend to organize knowledge in well-defined patterns, which include participant entities and meaningful relations linking entities with each other. In the literature, these structures have been called Semantic Frames by Fillmore [20] or, more recently, Knowledge Patterns [23]. Some NLP studies have recently shown the possibility of performing more accurate deep parsing with the ability to logically understand the structure of discourse [7].
In this work, some of these technologies have been investigated and employed to produce accurate ontology schemas. The long-term goal is to collect large amounts of semantically structured information from the web of crowds through an automated process, in order to identify and investigate the cognitive patterns used by humans to organize their knowledge.
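The step from raw text to frames of relations among entities can be illustrated with a deliberately naive subject-verb-object extractor. Real systems rely on deep parsing rather than patterns like this, and the verb list here is a hypothetical input:

```python
import re

def extract_triples(sentence, verbs):
    """Toy subject-VERB-object triple extractor.

    A naive illustration of the 'facts -> frames -> linked data' idea;
    the verb inventory is supplied by the caller and is hypothetical.
    """
    for verb in verbs:
        # Lazily match a subject phrase, the verb, then an object phrase.
        pattern = r"(\w[\w ]*?)\s+" + re.escape(verb) + r"\s+(\w[\w ]*)"
        m = re.search(pattern, sentence)
        if m:
            return (m.group(1).strip(), verb, m.group(2).strip())
    return None
```

Each returned triple corresponds to one candidate frame instance (participant entities linked by a relation), the unit that a populated ontology schema would store as linked data.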