10 results for "Process-based model"

at the Universidade Federal do Rio Grande do Norte (UFRN)


Relevance: 100.00%

Abstract:

OSAN, R.; TORT, A. B. L.; AMARAL, O. B. A mismatch-based model for memory reconsolidation and extinction in attractor networks. PLoS ONE, v. 6, p. e23113, 2011.


Relevance: 90.00%

Abstract:

This paper investigates the cognitive processes involved in understanding narratives, in this case the novel Macunaíma, by Mário de Andrade. Our work belongs to the field of embodiment-based Cognitive Linguistics and, owing to its interdisciplinary nature, dialogues with theoretical and methodological frameworks from Psycholinguistics, Cognitive Psychology, and the Neurosciences. We adopt an exploratory research design based on adapted recall and cloze tests administered to postgraduate students, all native speakers of Brazilian Portuguese. Macunaíma was chosen as the corpus and initial motivation for this proposal because it is a fantastic narrative, whose events, circumstances, and characters are clearly distant from everyday experience; the novel therefore provides adequate data for investigating the configuration of meaning within an understanding-based model. We thus seek to answer questions that remain largely unexplored in Cognitive Linguistics: to what extent is the activation of mental models (schemas and frames) related to the process of understanding narratives? How are we able to build sense even when words or phrases are not part of our linguistic repertoire? Why do we become emotionally involved when reading a text, even though it is fiction? To answer these questions, we take the theoretical stance that meaning is not in the text but is constructed through language, conceived as the result of the integration between the biological apparatus (which gives rise to abstract image schemas) and the sociocultural apparatus (which gives rise to frames). In this sense, perception, cognitive processing, and the reception and transmission of the information described are directly related to how language comprehension occurs. We believe that our results may contribute to the cognitive studies of language and to the development of language learning and teaching methodologies.

Relevance: 90.00%

Abstract:

Model-oriented strategies have been used to facilitate product customization in the software product line (SPL) context and to generate the source code of the derived products through variability management. Most of these strategies use a UML (Unified Modeling Language)-based model specification. Despite its wide adoption, UML-based specification has limitations: it is essentially graphical, it lacks precision in describing the semantics of the system architecture, and it yields large models that hamper the visualization and comprehension of system elements. In contrast, architecture description languages (ADLs) provide graphical and textual support for the structural representation of architectural elements, their constraints, and their interactions. This thesis introduces ArchSPL-MDD, a model-driven strategy in which models are specified and configured using the LightPL-ACME ADL. The strategy is associated with a generic process whose systematic activities enable customized source code to be generated automatically from the product model. The ArchSPL-MDD strategy integrates aspect-oriented software development (AOSD), model-driven development (MDD), and SPL, thus enabling the explicit modeling and modularization of variabilities and crosscutting concerns. The process is instantiated by the ArchSPL-MDD tool, which supports the specification of domain models (the focus of the development) in LightPL-ACME. ArchSPL-MDD uses the Ginga digital TV middleware as a case study. In order to evaluate the efficiency, applicability, expressiveness, and complexity of the ArchSPL-MDD strategy, a controlled experiment was carried out to compare the ArchSPL-MDD tool with the GingaForAll tool, which instantiates the process of the GingaForAll UML-based strategy. Both tools were used to configure products of the Ginga SPL and to generate the product source code.
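The core of such a strategy, resolving variation points in a product model and generating code from the result, can be illustrated with a short sketch. The Python fragment below is not LightPL-ACME or the ArchSPL-MDD tool; the feature names and code templates are hypothetical, chosen only to show how variability management drives code generation.

```python
# Minimal sketch of SPL product derivation via variability management.
# Not LightPL-ACME or ArchSPL-MDD; all names here are hypothetical.
from string import Template

# Feature model: mandatory features plus variation points with alternatives.
FEATURE_MODEL = {
    "mandatory": ["tuner", "demux"],
    "variation_points": {"decoder": ["h264", "mpeg2"], "gui": ["basic", "full"]},
}

# One code template per feature (illustrative only).
CODE_TEMPLATES = {
    "tuner": "init_tuner();",
    "demux": "init_demux();",
    "h264": "register_decoder(H264);",
    "mpeg2": "register_decoder(MPEG2);",
    "basic": "load_gui(BASIC);",
    "full": "load_gui(FULL);",
}

def derive_product(selection: dict) -> str:
    """Resolve each variation point and emit the product's source code."""
    features = list(FEATURE_MODEL["mandatory"])
    for vp, alternatives in FEATURE_MODEL["variation_points"].items():
        choice = selection[vp]
        if choice not in alternatives:
            raise ValueError(f"{choice!r} is not a valid variant for {vp!r}")
        features.append(choice)
    body = "\n    ".join(CODE_TEMPLATES[f] for f in features)
    return Template("void boot_product() {\n    $body\n}").substitute(body=body)

print(derive_product({"decoder": "h264", "gui": "basic"}))
```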

Relevance: 90.00%

Abstract:

RePART (Reward/Punishment ART) is a neural model that is a variation of the Fuzzy ARTMAP model. The network was proposed in order to minimize problems inherent in ARTMAP-based models, such as category proliferation and misclassification. RePART makes use of additional mechanisms, such as an instance-counting parameter, a reward/punishment process, and a variable vigilance parameter. The instance-counting parameter aims to minimize the misclassification problem, which is a consequence of the sensitivity to noise frequently present in ARTMAP-based models, while the variable vigilance parameter tries to smooth out the category proliferation problem inherent in ARTMAP-based models, decreasing the complexity of the network. RePART was originally proposed to minimize these problems and was shown to outperform ARTMAP-based models (higher accuracy and lower complexity). This work investigates the performance of the RePART model in classifier ensembles, using different ensemble sizes, learning strategies, and structures. The aim of this investigation is to identify the main advantages and drawbacks of the model when used as a component of classifier ensembles, thus providing a broader foundation for the use of RePART in other pattern recognition applications.
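For context, the sketch below shows the standard Fuzzy ART category choice and vigilance test that ARTMAP-family models, including RePART, build on. The variable-vigilance rule at the end is a simplified stand-in for RePART's actual mechanism, and the constants are illustrative.

```python
# Fuzzy ART building blocks used by ARTMAP-family models. The
# variable-vigilance rule below is a simplified stand-in, not
# RePART's published schedule.
import numpy as np

def fuzzy_and(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return np.minimum(a, b)

def choice(input_vec: np.ndarray, weights: np.ndarray, alpha: float = 0.001):
    """Category choice function T_j = |I ^ w_j| / (alpha + |w_j|)."""
    overlap = fuzzy_and(input_vec, weights).sum(axis=1)
    return overlap / (alpha + weights.sum(axis=1))

def vigilance_test(input_vec: np.ndarray, w_j: np.ndarray, rho: float) -> bool:
    """Resonance check: |I ^ w_j| / |I| >= rho."""
    return fuzzy_and(input_vec, w_j).sum() / input_vec.sum() >= rho

def variable_rho(base_rho: float, instance_count: int, k: float = 0.01) -> float:
    """Relax vigilance for well-populated categories to curb proliferation."""
    return max(0.0, base_rho - k * instance_count)

# Tiny demo: two categories, one input pattern.
w = np.array([[0.9, 0.1], [0.2, 0.8]])
I = np.array([0.8, 0.2])
print(choice(I, w))                       # category activations
print(vigilance_test(I, w[0], rho=0.8))   # does category 0 resonate?
```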

Relevance: 90.00%

Abstract:

This work proposes a model-based approach for pointcut management in the presence of evolution in aspect-oriented systems. The proposed approach, called conceptual-views-based pointcuts, is motivated by the shortcomings of traditional approaches to pointcut definition, which generally refer directly to the software structure and/or behavior, thereby creating a strong coupling between the pointcut definitions and the base code. This coupling causes the problem known as the pointcut fragility problem and hinders the evolution of aspect-oriented systems: whenever the software changes or evolves, all the pointcuts of every aspect must be reviewed to ensure that they remain valid. Our approach defines pointcuts on top of a conceptual model, which describes the system structure at a more abstract level. The conceptual model consists of classifications (called conceptual views) over the entities of the business model, based on common characteristics, together with relationships between these views. Pointcut definitions are thus created against the conceptual model rather than by referencing the base model directly. Moreover, the conceptual model contains a set of relationships that allows it to be automatically verified whether the classifications in the conceptual model remain valid after a software change. To this end, all development using the conceptual-views-based pointcuts approach is supported by a conceptual framework called CrossMDA2 and an MDA-based development process, both also proposed in this work. As a proof of concept, we present two versions of a case study, setting up an evolution scenario that shows how conceptual-views-based pointcuts help to detect and minimize pointcut fragility. The proposal is evaluated with the Goal/Question/Metric (GQM) technique, together with metrics for analyzing the efficiency of pointcut definition.
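The decoupling idea can be illustrated with a minimal sketch: pointcuts name conceptual views rather than concrete program elements, so a base-code change only requires updating the view mapping. The names below are hypothetical; this is not the CrossMDA2 API.

```python
# Illustrative sketch of conceptual-view-based pointcut matching.
# All view, class, and method names are invented for the example.

# Conceptual views: named classifications over base-model elements.
CONCEPTUAL_VIEWS = {
    "PersistenceOperation": {"OrderDAO.save", "OrderDAO.delete"},
    "BusinessService": {"OrderService.place", "OrderService.cancel"},
}

# A pointcut references views, never concrete class/method names.
LOGGING_POINTCUT = {"BusinessService"}

def matches(pointcut: set, joinpoint: str) -> bool:
    """True if the joinpoint belongs to any view the pointcut names."""
    return any(joinpoint in CONCEPTUAL_VIEWS[view] for view in pointcut)

# After a refactoring renames OrderService.place to OrderService.submit,
# only the view mapping changes; the pointcut itself stays valid.
CONCEPTUAL_VIEWS["BusinessService"] = {"OrderService.submit", "OrderService.cancel"}
assert matches(LOGGING_POINTCUT, "OrderService.submit")
```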

Relevance: 90.00%

Abstract:

A great challenge of component-based development is the creation of mechanisms that facilitate finding reusable assets that fulfill the requirements of a particular system under development. To answer this need, several component repositories have been proposed. However, repositories need to represent the asset characteristics that consumers take into account when choosing the assets most adequate to their needs. In this context, the literature presents models for describing asset characteristics such as identification, classification, non-functional requirements, usage and deployment information, and component interfaces. Nevertheless, the set of characteristics represented by those models is insufficient to describe information used before, during, and after asset acquisition, such as negotiation, certification, change history, the adopted development process, events, and exceptions. In order to overcome this gap, this work proposes an XML-based model to represent several characteristics, of different asset types, that may be employed in component-based development. Besides representing metadata used by consumers, useful for asset discovery, acquisition, and usage, this model, called X-ARM, also focuses on supporting the activities of asset developers. Since the proposed model represents an expressive amount of information, this work also presents a tool called X-Packager, developed with the goal of helping describe assets with X-ARM.
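As a rough illustration of the kind of descriptor such a model defines, the sketch below emits a small XML asset description from Python. The element and attribute names are invented for the example; they are not the actual X-ARM schema.

```python
# Sketch of an XML asset descriptor in the spirit of X-ARM. The element
# and attribute names are invented; this is not the X-ARM schema.
import xml.etree.ElementTree as ET

def describe_asset(name: str, version: str, classification: str,
                   certification: str) -> str:
    asset = ET.Element("asset", {"name": name, "version": version})
    ET.SubElement(asset, "classification").text = classification
    # Metadata used before/after acquisition, e.g. certification info.
    ET.SubElement(asset, "certification").text = certification
    return ET.tostring(asset, encoding="unicode")

print(describe_asset("PaymentComponent", "1.2", "e-commerce/payment",
                     "certified-2008"))
```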

Relevance: 90.00%

Abstract:

Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. Using multiple cloud platforms avoids the following problems: (i) vendor lock-in, that is, the application's dependency on a given cloud platform, which is harmful in the case of degradation or failure of the platform's services, or even of price increases in service usage; and (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or due to the failure of any service. In a multi-cloud scenario it is possible to replace a failed service, or one with QoS problems, by an equivalent service from another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms able to select which cloud services/platforms should be used in accordance with requirements determined by the programmer/user. In this context, the major challenges in developing such applications include: (i) choosing which underlying cloud services and platforms should be used, based on user-defined requirements for functionality and quality; (ii) continually monitoring the dynamic information related to cloud services (such as response time, availability, and price), in addition to coping with the wide variety of services; and (iii) adapting the application when QoS violations affect user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration would meet them more efficiently. The proposed strategy is composed of two phases. The first phase consists of modeling the application, exploiting the capacity to represent commonalities and variability proposed in the context of the Software Product Line (SPL) paradigm. In this phase an extended feature model is used to specify the cloud service configuration to be used by the application (commonalities) and the different possible providers for each service (variability); furthermore, the non-functional requirements associated with cloud services are specified by properties of this model that describe dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation; in this work we implement it with several techniques, namely aspect-oriented, context-oriented, and component- and service-oriented programming. Based on the proposed steps, we assess: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process presents significant gains compared to a sequential approach; and (iii) which techniques offer the best trade-off between development/modularity effort and performance.
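A minimal sketch of the analyze/plan steps of one MAPE-K iteration is shown below, assuming a toy knowledge base of monitored metrics. Provider names, metrics, and the scoring rule are hypothetical; the thesis's optimal selection is considerably richer than this.

```python
# Minimal sketch of a MAPE-K-style selection step for a multi-cloud
# configuration. Providers, metrics, and scoring are hypothetical.

# Knowledge base: monitored dynamic information per candidate provider.
MONITORED = {
    "storage": {"cloudA": {"latency_ms": 40, "price": 0.10, "up": True},
                "cloudB": {"latency_ms": 25, "price": 0.15, "up": True}},
}
# User-defined non-functional requirement per service.
REQUIREMENTS = {"storage": {"max_latency_ms": 50}}

def analyze(service: str, provider: str) -> bool:
    """Check availability and the user-defined QoS requirement."""
    m = MONITORED[service][provider]
    return m["up"] and m["latency_ms"] <= REQUIREMENTS[service]["max_latency_ms"]

def plan(service: str) -> str:
    """Pick the cheapest provider among those satisfying the requirements."""
    candidates = [p for p in MONITORED[service] if analyze(service, p)]
    if not candidates:
        raise RuntimeError(f"no provider satisfies requirements for {service}")
    return min(candidates, key=lambda p: MONITORED[service][p]["price"])

print(plan("storage"))  # -> cloudA (cheapest provider meeting the QoS bound)
```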

Relevance: 90.00%

Abstract:

Advanced Oxidation Processes (AOPs) are techniques involving the formation of the hydroxyl radical (HO•), which has a high organic matter oxidation rate. The industrial application of these processes has been increasing due to their capacity to degrade recalcitrant substances that cannot be completely removed by traditional effluent treatment processes. In the present work, phenol degradation by the photo-Fenton process, based on the addition of H2O2, Fe2+, and luminous radiation, was studied. An experimental design was developed to analyze the effect of the phenol, H2O2, and Fe2+ concentrations on the fraction of total organic carbon (TOC) degraded. The experiments were performed in a 1.5 L batch photochemical parabolic reactor. Samples of the reaction medium were collected at different reaction times and analyzed in a TOC analyzer from Shimadzu (TOC-VWP). The results showed a negative effect of the phenol concentration and a positive effect of the two other variables on the TOC degraded fraction. A statistical analysis of the experimental design showed that the hydrogen peroxide concentration was the most influential variable on the TOC degraded fraction at 45 minutes; the design generated a model with R² = 0.82, which predicted the experimental data with low precision. The Visual Basic for Applications (VBA) tool was used to generate a neural network model and a photochemical database. This model presented R² = 0.96 and predicted the test response data precisely. These results indicate the potential industrial application of the developed tool, mainly for its simplicity, low cost, and ease of access.
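The two quantities the abstract reports, the degraded TOC fraction and a model's R², can be stated compactly. The sketch below uses made-up numbers purely for illustration; they are not the experimental data.

```python
# Sketch of the degraded TOC fraction and the coefficient of
# determination R^2. All numeric values are hypothetical.
import numpy as np

def toc_degraded_fraction(toc0: float, toc_t: float) -> float:
    """Fraction of total organic carbon removed at time t."""
    return (toc0 - toc_t) / toc0

def r_squared(observed: np.ndarray, predicted: np.ndarray) -> float:
    """R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

obs = np.array([0.35, 0.52, 0.61, 0.70])   # hypothetical TOC fractions
pred = np.array([0.33, 0.55, 0.60, 0.68])  # hypothetical model output
print(toc_degraded_fraction(toc0=100.0, toc_t=38.0))  # -> 0.62
print(round(r_squared(obs, pred), 3))
```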
