47 results for: Migrazione database DBMS processo software dati
Abstract:
The aim of the present study was to extract vegetable oil from brown linseed (Linum usitatissimum L.), determine its fatty acid levels and the antioxidant capacity of the extracted oil, and perform a rapid economic assessment of the SFE process for oil manufacture. The experiments were conducted in a bench-scale extractor capable of operating with carbon dioxide and co-solvents, following a 2³ factorial design with a central point in triplicate, with process yield as the response variable and pressure, temperature and co-solvent percentage as independent variables. The yield (mass of extracted oil/mass of raw material used) ranged from 2.2% to 28.8%, with the best results obtained at 250 bar and 50 ºC using 5% (v/v) ethanol as co-solvent. The influence of the variables on extraction kinetics and on the composition of the linseed oil obtained was investigated. The extraction kinetic curves were fitted with different mathematical models available in the literature; the Martínez et al. (2003) model and the Simple Single Plate (SSP) model discussed by Gaspar et al. (2003) represented the experimental data with the lowest mean square errors (MSE). A manufacturing cost of US$ 17.85/kg of oil was estimated for the production of linseed oil using the TECANALYSIS software and the method of Rosa and Meireles (2005). To establish comparisons with SFE, conventional extraction tests were conducted in a Soxhlet apparatus using petroleum ether, obtaining mean yields of 35.2% for an extraction time of 5 h. All the oil samples were sterilized and characterized in terms of fatty acid (FA) composition using gas chromatography. The main fatty acids detected were palmitic (C16:0), stearic (C18:0), oleic (C18:1), linoleic (C18:2n-6) and α-linolenic (C18:3n-3). The FA contents obtained with Soxhlet differed from those obtained with SFE, with higher percentages of saturated and monounsaturated FA for the Soxhlet technique with petroleum ether. With respect to the α-linolenic content of the samples (the main component of linseed oil), SFE performed better than Soxhlet extraction, yielding percentages between 51.18% and 52.71%, against 47.84% for Soxhlet extraction. The antioxidant activity of the oil was assessed in the β-carotene/linoleic acid system. The inhibition of the oxidative process reached 22.11% for the SFE oil, but only 6.09% for commercial (cold-pressed) oil, suggesting that the SFE technique better preserves the phenolic compounds present in the seed, which are likely responsible for the antioxidant nature of the oil. In vitro tests with the sample displaying the best antioxidant response were conducted in rat liver homogenate to investigate the inhibition of spontaneous lipid peroxidation, or autooxidation, of biological tissue. Linseed oil proved more efficient than fish oil (used as a standard) in decreasing lipid peroxidation in the liver tissue of Wistar rats, yielding results similar to those obtained with BHT (a synthetic antioxidant). This inhibitory capacity may be explained by the presence of phenolic compounds with antioxidant activity in the linseed oil. The results obtained indicate the need for more detailed studies, given the importance of linseed oil as one of the greatest vegetable sources of ω-3.
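As an illustration of the curve-fitting step described above, the sketch below fits a logistic overall-extraction-curve (OEC) model of the general kind used by Martínez et al. (2003) to hypothetical yield data and reports the mean square error. The exact parameterization, the parameter names and the data points are assumptions made for illustration, not equations or values taken from the study.

```python
# Minimal sketch: fit a logistic overall extraction curve (OEC) and report
# the MSE. The functional form is a generic logistic rescaled so that
# m(0) = 0; the time/mass data below are fabricated for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic_oec(t, m_inf, b, t_m):
    """Cumulative extracted mass at time t, saturating at m_inf."""
    s0 = 1.0 / (1.0 + np.exp(b * t_m))        # logistic value at t = 0
    s = 1.0 / (1.0 + np.exp(-b * (t - t_m)))  # logistic value at time t
    return m_inf * (s - s0) / (1.0 - s0)      # rescaled so m(0) = 0

# Hypothetical extraction data: time (min) vs. cumulative oil mass (g)
t_data = np.array([0, 10, 20, 40, 60, 90, 120, 180, 240], dtype=float)
m_data = np.array([0.0, 0.9, 2.1, 4.8, 7.0, 8.9, 9.8, 10.4, 10.6])

popt, _ = curve_fit(logistic_oec, t_data, m_data, p0=[10.0, 0.05, 50.0])
mse = np.mean((m_data - logistic_oec(t_data, *popt)) ** 2)
print(f"m_inf = {popt[0]:.2f} g, b = {popt[1]:.4f} 1/min, "
      f"t_m = {popt[2]:.1f} min, MSE = {mse:.4f}")
```

Competing models (such as the SSP model cited above) would be compared by fitting each and selecting the one with the lowest MSE, as the study reports doing.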
Abstract:
The present study aims to analyze the potentialities and limitations of the GeoGebra software with respect to the teaching and learning of trigonometry. Considering the resources currently available in public schools in the state of Rio Grande do Norte, the research intends to answer the following question: can the current conditions of public schools and the GeoGebra software be used to improve the teaching and learning of trigonometry? To answer this question, a module of investigative activities was created and applied. The methodological intervention was carried out with second-year high school students from a public school in Natal, RN. The theoretical framework of Mathematics Didactics was taken as a basis, adopting the conceptions of Borba and Penteado (2001), Valente (1999) and Zulatto (2002, 2007) about the use of Information Technology (IT) in Mathematics classrooms. The investigative activities helped us understand how the students carry out their constructions and develop visual perception through the process of dragging images on the computer screen. Furthermore, the activities carried out with the resources of the GeoGebra software facilitated the resolution of trigonometry problems.
Abstract:
Infographics have historically accompanied the evolution of journalism, from the incipient handmade models of the eighteenth century to today's computers and sophisticated software. Whether to face the advent of TV and the resulting loss of readers of the printed newspaper, or to depict the Gulf War, where photography was not allowed, infographics reached modern levels of production and publication. Technical devices enabled infographics to evolve into the internet environment, with conditions for manipulation by the reader and the incorporation of video, audio and animation, in what came to be called interactive infographics. These digital models of information visualization have recently arrived at daily newspapers in the Northeast and their respective websites, with regionalized features. This paper therefore proposes to explore and describe the production processes of interactive infographics, taking as an example the Diário do Nordeste, of Fortaleza, Ceará, whose infographics department was created one year ago. It is grounded in aspects that guide the theory of journalism, such as newsmaking, the filters that shape the productive routine (gatekeeping) and the construction stages of the news. The research also draws on the theoretical literature on the subject for the essential concepts and characteristics of infographics, as well as on methodological procedures of systematic empirical observation of the newsroom's production routines, which reveal limitations and/or advances.
Abstract:
Introduction: Falls among older adults are a public health problem, and preventive actions are therefore necessary; adherence, however, is the major problem faced by practitioners and researchers working on falls prevention programs. Objective: To evaluate the variables related to adherence to falls prevention programs among the elderly enrolled in a Basic Health Unit (BHU). Methods: An observational, cross-sectional, analytical study was performed. All elderly people registered at a BHU and able to walk independently were invited to participate in a falls prevention program. The Elderly who Adhered to the Program (EAP) were evaluated at the BHU, while the Elderly who did Not Adhere to the Program (ENAP) were identified and assessed at home. The assessment of both groups used an evaluation form containing personal data, measures and clinical scales assessing cognitive status, balance, mobility, fear of falling and handgrip strength. Data were analyzed with SPSS 20.0. In addition to this assessment, the ENAP underwent a semi-structured interview, analyzed with a qualitative approach based on the Collective Subject Discourse. Results: The study included 222 elderly people, 111 EAP and 111 ENAP, mostly aged between 70 and 79 years (48.2%), female (68.5%), married (52.3%) and illiterate (47.7%). Worse indices of physical activity (p = 0.001), balance (p = 0.010) and cognition (p = 0.007) were consolidated as factors associated with adherence status. The interviews with the ENAP identified two themes, "Local implementation of falls prevention programs" and "Relationship between the BHU and elderly health care", and found that the elderly who did not adhere were unable to travel to the unit and did not associate primary care programs with elderly health care. Conclusions: Elderly people who do not adhere to the program differ from those who adhere in having worse indices of cognition, balance and physical activity, which implies a greater risk of falling; they were unable to participate in the falls prevention program because they were caregivers or had difficulty travelling to the unit.
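By way of illustration only, the sketch below shows the kind of between-group comparison reported above (EAP vs. ENAP) using a non-parametric test in Python rather than SPSS; the balance scores are randomly generated stand-ins, and the study does not state here which specific tests produced its p-values.

```python
# Minimal sketch of an adherent vs. non-adherent group comparison of the
# kind the study reports (it used SPSS 20.0; the exact tests are not
# stated in the abstract). The scores below are simulated, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
eap = rng.normal(52, 4, 111)   # hypothetical balance scores, adherent group
enap = rng.normal(49, 5, 111)  # hypothetical balance scores, non-adherent group

# Non-parametric comparison of the two independent groups
u, p = stats.mannwhitneyu(eap, enap, alternative="two-sided")
print(f"Mann-Whitney U = {u:.0f}, p = {p:.4f}")
```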
Abstract:
The northeastern region accounts for 14.32% of national sugarcane production. This low contribution is due to edaphoclimatic conditions. Flowering is a vital process for the plant; it consumes a great deal of energy and culminates in a process called isoporization, which can cause a decrease of up to 60% in sugar and alcohol production. Considering that cropped sugarcane is a hybrid with an octoploid genome, there are varieties whose flowering patterns range all the way to non-flowering. Exploiting this natural genetic potential across different sugarcane crops, the aim of this work was to understand how this process occurs by means of subtractive approaches. Total RNA was extracted with Trizol from meristematic apices of crops with induced flowering and of others with late flowering. From this total RNA, four subtractive libraries were built (B1: induced early flowering subtracted from non-induced late flowering; B2: non-induced late flowering subtracted from induced early flowering; B3: induced early flowering subtracted from non-induced early flowering; B02: non-induced early flowering subtracted from induced early flowering) using the Super SMART cDNA synthesis and BD Clontech Select cDNA subtraction kits (Clontech). This material was cloned into the pGEM-T Easy vector (Promega) and transformed into competent E. coli DH10B cells. Sequence analysis was carried out with the BLASTn program against the NCBI database and the genomes of Arabidopsis thaliana, rice and maize. Clones were grouped into nine different classes according to function. Some factors already reported as linked to flower induction were identified in the different libraries, and proteins related to the cell cycle and its control were present, mainly protein kinases. Factors related to protein synthesis, metabolism, defence and cell communication also appeared in both libraries. Some identified genes showed no similarity in the databases, or homology only to hypothetical functions, and may represent new genes to be deposited in international databases. These results suggest that some of the genes identified in sugarcane, classified in the factor, cell cycle and cell communication classes, together with the unknown genes, may be linked to the genetic changes underlying the flowering process found in the northeastern region.
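The annotation step described above (BLASTn against the NCBI database) can be sketched with Biopython as below; the clone sequence is a made-up placeholder, a real run requires network access, and the study's actual pipeline and parameters are not specified here.

```python
# Minimal sketch: query one clone sequence against NCBI's nt database with
# BLASTn and print the top hits, mimicking a functional-classification pass.
# The sequence is a placeholder; qblast performs a remote NCBI search and
# may take minutes per query.
from Bio.Blast import NCBIWWW, NCBIXML

clone_seq = "ATGGCTTCTTCAATGCTGTCCTCCGCTACCATGGCTGGA"  # placeholder insert

handle = NCBIWWW.qblast("blastn", "nt", clone_seq)
record = NCBIXML.read(handle)

for alignment in record.alignments[:5]:
    best_hsp = alignment.hsps[0]
    print(f"{alignment.title[:60]}  E-value: {best_hsp.expect:.2e}")
```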
Abstract:
Currently, several psychological and non-psychological tests can be found published without standardized procedures, covering different psychological areas such as intelligence, emotional states, attitudes, social skills, vocation and preferences, among others. Computerized psychological testing is an extension of traditional psychological testing practices; however, it has psychometric qualities of its own, whether arising from its adaptation to a computerized environment or from the extensions that can be developed within it. The present research, motivated by the need to study the validity and reliability of a computerized test, designed a methodological structure providing parallel applications in various kinds of operational groups, evaluating the influence of time and approach on the computerization process. This validation covers group normative values, the reproducibility of the computerized application process, and data processing. Not every psychological test can be computerized; the need for a sound test, with quality and properties amenable to computerized application, therefore led us to use the Millon Personality Inventory, created by Theodore Millon. This inventory assesses personality according to 12 bipolarities distributed across 24 factors, organized into the categories of motivational styles, cognitive targets and interpersonal relations. The instrument does not diagnose pathological features; rather, it tests normal and non-adaptive aspects of human personality against Theodore Millon's theory of personality. To situate this research in the Brazilian context of psychological testing, we discuss the theme, evaluating the advantages and disadvantages of such practices, the current forms of computerization of psychological tests, and the main criteria specific to this specialized area of psychometrics. The test was hosted on-line at the site http://www.planetapsi.com during 2007 and 2008, with a questionnaire on social characteristics made available before the test, and a report was generated from each user's data entry. The test was applied linearly with national coverage across all Brazilian regions, yielding 1,508 applications. Nine groups were organized, reaching 180 test-retest applications, with three time intervals and three retest forms set apart for the study of on-line tests. In parallel, we organized an off-line multi-application group of 20 subjects who received the tests by e-mail. The subjects of this study were distributed across the five Brazilian regions and were informed about the test via the Internet. The performance of the traditionally tested and on-line tested groups allows us to conclude that on-line application provides significant consistency on all the validity criteria studied, which justifies its use. The on-line test results were not only consistent among themselves but also similar to the data from paper-and-pencil tests (0.82). The retest results showed correlations between 0.92 and 1, while the multisession group showed good correlations in these comparisons. Moreover, we assessed the adequacy of the operational criteria adopted, such as security, user performance, environmental characteristics, database organization, operational costs and the limitations of this on-line inventory; all of these items showed excellent performance, leading to the further conclusion that a self-applied psychometric test is feasible. The results of this work serve as a guide for questioning and establishing methodological studies for computerized psychological testing software in the country.
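As a sketch of the test-retest reliability computation behind the correlations reported above (0.92 to 1), the code below computes a Pearson correlation between two hypothetical applications of the same inventory; the scores are simulated, not data from the study's 180 retest applications.

```python
# Minimal sketch: Pearson test-retest correlation between two applications
# of the same inventory. Scores are simulated stand-ins, not study data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
test = rng.normal(60, 10, 180)         # hypothetical factor scores, 1st run
retest = test + rng.normal(0, 3, 180)  # 2nd run with small measurement noise

r, p = pearsonr(test, retest)
print(f"test-retest r = {r:.2f} (p = {p:.1e})")
```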
Abstract:
Model-oriented strategies have been used to facilitate product customization in the software product line (SPL) context and to generate the source code of the derived products through variability management. Most of these strategies use a UML (Unified Modeling Language)-based model specification. Despite its wide application, UML-based model specification has limitations: it is essentially graphical, it falls short of precisely describing the semantic representation of the system architecture, and it produces large models, hampering the visualization and comprehension of system elements. In contrast, architecture description languages (ADLs) provide graphical and textual support for the structural representation of architectural elements, their constraints and their interactions. This thesis introduces ArchSPL-MDD, a model-driven strategy in which models are specified and configured using the LightPL-ACME ADL. The strategy is associated with a generic process whose systematic activities enable customized source code to be generated automatically from the product model. The ArchSPL-MDD strategy integrates aspect-oriented software development (AOSD), model-driven development (MDD) and SPL, enabling the explicit modeling and modularization of variabilities and crosscutting concerns. The process is instantiated by the ArchSPL-MDD tool, which supports the specification of domain models (the focus of the development) in LightPL-ACME. ArchSPL-MDD uses the Ginga Digital TV middleware as a case study. In order to assess the efficiency, applicability, expressiveness and complexity of the ArchSPL-MDD strategy, a controlled experiment was carried out to evaluate and compare the ArchSPL-MDD tool with the GingaForAll tool, which instantiates the process of the UML-based GingaForAll strategy. Both tools were used for configuring the products of the Ginga SPL and generating product source code.
Abstract:
Nowadays, the importance of using software processes is well established and is considered fundamental to the success of software development projects. Large and medium-sized software projects demand the definition and continuous improvement of software processes in order to promote the productive development of high-quality software. Customizing and evolving existing software processes to address the variety of scenarios, technologies, cultures and scales is a recurrent challenge for the software industry. It involves adapting software process models to the reality of the projects, and it must also promote the reuse of past experience in the definition and development of software processes for new projects. Adequate management and execution of software processes can bring better quality and productivity to the software systems produced. This work explores the use and adaptation of consolidated software product line techniques to manage the variabilities of software process families. To achieve this aim: (i) a systematic literature review was conducted to identify and characterize variability management approaches for software processes; (ii) an annotative approach for the variability management of software process lines was proposed and developed; and (iii) empirical studies and a controlled experiment assessed and compared the proposed annotative approach against a compositional one. One study, a comparative qualitative one, analyzed the annotative and compositional approaches from different perspectives: modularity, traceability, error detection, granularity, uniformity, adoption, and systematic variability management. Another, a comparative quantitative study, considered internal attributes of software process line specifications, such as modularity, size and complexity. Finally, a controlled experiment evaluated the usage effort and understandability of the investigated approaches when modeling and evolving software process line specifications. The studies provide evidence of several benefits of the annotative approach, and of its potential for integration with the compositional approach, in supporting the variability management of software process lines.
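To make the annotative idea concrete, the sketch below tags each process element with a presence condition and derives a product process from a feature configuration. The element names, feature names and condition syntax are illustrative assumptions, not the notation of the proposed approach.

```python
# Minimal sketch of annotative variability management for a software
# process line: each process element carries a presence-condition
# annotation, and a product process is derived by evaluating those
# annotations against a feature configuration. Names are hypothetical.

process_line = [
    {"activity": "Elicit requirements", "condition": None},
    {"activity": "Design UI prototype", "condition": "has_gui"},
    {"activity": "Model database schema", "condition": "uses_database"},
    {"activity": "Write acceptance tests", "condition": "agile"},
    {"activity": "Formal code inspection", "condition": "not agile"},
]

def derive_process(elements, features):
    """Keep elements whose annotation holds under the given feature set."""
    product = []
    for e in elements:
        cond = e["condition"]
        if cond is None:                  # mandatory element
            keep = True
        elif cond.startswith("not "):     # simple negated feature
            keep = cond[4:] not in features
        else:
            keep = cond in features
        if keep:
            product.append(e["activity"])
    return product

print(derive_process(process_line, {"has_gui", "agile"}))
# -> ['Elicit requirements', 'Design UI prototype', 'Write acceptance tests']
```

The compositional alternative would instead keep variant fragments in separate modules and compose them per product, which is precisely the trade-off the studies above compare.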
Abstract:
A Software Product Line (SPL) is a software development paradigm whose main focus is to identify the common features and the variability among applications in a specific domain. An SPL is designed to meet the requirements of all products in its product family. These requirements, and the SPL itself, may change over time due to several factors, such as the evolution of product requirements, of the market, of the SPL process, and of the technologies used to develop the products. To handle these changes, the SPL must be modified and must evolve so as not to become obsolete and to adapt itself to the new requirements. Change impact analysis is the activity of understanding and identifying what consequences such changes have on the SPL. Impact analysis on an SPL may be supported by traceability relationships, which link the artefacts created during all phases of software development. Despite the existing traceability-based change impact analysis solutions for software, there is a lack of solutions for traceability-based change impact analysis in SPLs, since the existing ones do not include estimates specific to SPL artefacts. This work therefore proposes a change impact analysis process and a tool for assessing change impact through the traceability of artefacts in SPLs. To this end, we specified a change impact analysis process that considers the artefacts produced during SPL development, and implemented a tool that estimates and identifies the SPL artefacts and products affected by changes in other products, in classes, in features and between SPL releases, as well as the artefacts related to changes in core assets and variability. Finally, the results were evaluated through metrics.
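A minimal sketch of traceability-based impact propagation is shown below: artefacts are nodes, trace links are directed edges, and the impact set of a change is everything reachable from the changed artefact. The artefact names and link structure are hypothetical, not taken from the proposed tool, which additionally computes SPL-specific estimates.

```python
# Minimal sketch of traceability-based change impact analysis in an SPL:
# breadth-first traversal over trace links collects every artefact
# reachable from the changed one. Artefact names are hypothetical.
from collections import deque

# trace links: artefact -> artefacts that depend on it
trace = {
    "feature:Payment": ["class:PaymentService", "testcase:TC-Payment"],
    "class:PaymentService": ["class:CheckoutController", "product:StoreBasic"],
    "class:CheckoutController": ["product:StorePremium"],
    "testcase:TC-Payment": [],
}

def impact_set(changed, links):
    """Collect all artefacts reachable from `changed` via trace links."""
    impacted, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in links.get(node, []):
            if dep not in impacted:
                impacted.add(dep)
                queue.append(dep)
    return impacted

print(impact_set("feature:Payment", trace))
```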
Abstract:
This dissertation presents a model-driven, integrated approach to the variability management, customization and execution of software processes. The approach is founded on the principles and techniques of software product lines and model-driven engineering. Model-driven engineering supports the specification of software processes and their transformation into workflow specifications, while software product line techniques allow the automatic variability management of process elements and fragments. Additionally, workflow technologies enable process execution in workflow engines. In order to evaluate the feasibility of the approach, we implemented it using existing model-driven engineering technologies. The software processes are specified using the Eclipse Process Framework (EPF). The automatic variability management of software processes was implemented as an extension of an existing product derivation tool. Finally, the ATL and Acceleo transformation languages are adopted to transform EPF processes into jPDL workflow language specifications, enabling the deployment and execution of software processes in the JBoss BPM workflow engine. The approach is evaluated through the modeling and modularization of the project management discipline of the Open Unified Process (OpenUP).
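The model-to-text step can be sketched as below: a toy process model is serialized into a jPDL-style XML workflow definition with plain Python. The real approach uses ATL and Acceleo over EPF models; the process name, task list and element layout here are simplified assumptions rather than a complete, deployable jPDL document.

```python
# Minimal sketch of a process-model-to-workflow transformation: a simple
# process description is emitted as jPDL-style XML (start state, a chain
# of task nodes, end state). Schematic only; not the full jPDL schema.
import xml.etree.ElementTree as ET

process = {
    "name": "OpenUP-ProjectManagement",
    "tasks": ["Plan Iteration", "Monitor Project", "Assess Results"],
}

root = ET.Element("process-definition", name=process["name"])
start = ET.SubElement(root, "start-state")
ET.SubElement(start, "transition", to=process["tasks"][0])
for task, nxt in zip(process["tasks"], process["tasks"][1:] + ["end"]):
    node = ET.SubElement(root, "task-node", name=task)
    ET.SubElement(node, "transition", to=nxt)  # chain tasks, then finish
ET.SubElement(root, "end-state", name="end")

print(ET.tostring(root, encoding="unicode"))
```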
Abstract:
Through the adoption of the software product line (SPL) approach, several benefits are achieved compared with conventional development processes based on creating a single software system at a time. The process of developing an SPL differs from traditional software construction in having two essential phases: domain engineering, when the common and variable elements of the SPL are defined and implemented; and application engineering, when one or more applications (specific products) are derived by reusing the artifacts created in domain engineering. The testing activity is also fundamental, aiming to detect defects in the artifacts produced during SPL development. However, the characteristics of an SPL bring new challenges to this activity that must be considered. Several approaches have recently been proposed for the product line testing process, but they have proved limited, providing only general guidelines. There is also a lack of tools to support the variability management and customization of automated test cases for SPLs. In this context, this dissertation proposes a systematic approach to software product line testing. The approach offers: (i) automated SPL test strategies to be applied in domain and application engineering; (ii) explicit guidelines supporting the implementation and reuse of automated test cases at the unit, integration and system levels in domain and application engineering; and (iii) tool support for automating the variability management and customization of test cases. The approach is evaluated through its application to a software product line for web systems. The results show that the proposed approach can help developers deal with the challenges imposed by the characteristics of SPLs during the testing process.
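The reuse idea in item (ii) can be sketched as a domain-level test template that is bound to concrete product configurations during application engineering, as below; the Checkout asset, the feature names and the expected values are illustrative assumptions, not part of the proposed approach or its tooling.

```python
# Minimal sketch of reusing one domain-engineering test across derived
# products: a test template written against a variation point is bound
# to concrete product configurations at application engineering time.
import unittest

class Checkout:
    """Toy product asset whose behaviour varies with selected features."""
    def __init__(self, features):
        self.features = features
    def total(self, amount):
        if "discount" in self.features:  # variation point
            amount *= 0.9
        return round(amount, 2)

def make_checkout_test(features, amount, expected):
    """Domain-level test template, customized per product configuration."""
    class CheckoutTest(unittest.TestCase):
        def test_total(self):
            self.assertEqual(Checkout(features).total(amount), expected)
    return CheckoutTest

# Application engineering: bind the template to two concrete products
BasicTest = make_checkout_test(set(), 100.0, 100.0)
PremiumTest = make_checkout_test({"discount"}, 100.0, 90.0)

if __name__ == "__main__":
    unittest.main(argv=["prog"], exit=False, verbosity=0)
```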
Abstract:
Traceability between the models of the requirements and architecture activities is a strategy that aims to prevent loss of information, reducing the gap between these two initial activities of the software life cycle. In the context of Software Product Lines (SPL), it is important to have this support, allowing correspondence between the two activities together with variability management. To address this issue, this work presents a bidirectional mapping process that defines transformation rules between the elements of a goal-oriented requirements model (described in PL-AOVgraph) and the elements of an architectural description (defined in PL-AspectualACME). These mapping rules are evaluated using a case study: the GingaForAll SPL. To automate the transformation, we developed the MaRiPLA tool (Mapping Requirements to Product Line Architecture) using Model-Driven Development (MDD) techniques, including the Atlas Transformation Language (ATL) with Ecore metamodel specifications, together with Xtext, a DSL definition framework, and Acceleo, a code generation tool, in the Eclipse environment. Finally, the generated models are evaluated against quality attributes such as variability, derivability, reusability, correctness, traceability, completeness, evolvability and maintainability, extracted from the CAFÉ Quality Model.
Abstract:
The Software Product Line (SPL) approach has become very promising, since it allows the large-scale production of customized systems through product families. For modeling these families the feature model is widely used; however, it has a low level of detail and may not be sufficient to guide the SPL development team. It is therefore recommended to complement the feature model with other models that represent the system from other perspectives. The PL-AOVgraph goal model can assume this complementary role, since its language is oriented to the SPL context, allowing detailed requirements modeling and the identification of crosscutting concerns that may arise as a result of variability. In order to insert PL-AOVgraph into SPL development, this work proposes a bidirectional mapping between PL-AOVgraph and the feature model, automated by the ReqSys-MDD tool. This tool uses the Model-Driven Development (MDD) approach, which allows the construction of systems from high-level models through successive transformations, and this enables the integration of ReqSys-MDD with other MDD tools that use its output models as input for further transformations. It is thus possible to keep consistency among the models involved, avoiding loss of information in the transitions between development stages.
Abstract:
A great challenge of component-based development is the creation of mechanisms that facilitate finding reusable assets that fulfil the requirements of a particular system under development. To this end, several component repositories have been proposed. However, repositories need to represent the asset characteristics that consumers take into account when choosing the most adequate assets for their needs. In this context, the literature presents models proposed to describe asset characteristics such as identification, classification, non-functional requirements, usage and deployment information, and component interfaces. Nevertheless, the set of characteristics represented by those models is insufficient to describe information used before, during and after asset acquisition, such as negotiation, certification, change history, the adopted development process, events and exceptions. To overcome this gap, this work proposes an XML-based model to represent several characteristics of the different asset types that may be employed in component-based development. Besides representing metadata used by consumers, useful for asset discovery, acquisition and usage, this model, called X-ARM, also focuses on supporting the activities of asset developers. Since the proposed model represents an expressive amount of information, this work also presents a tool called X-Packager, developed with the goal of supporting asset description with X-ARM.
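Since the actual X-ARM schema is defined by the work itself and not reproduced here, the sketch below only illustrates the kind of XML asset descriptor such a model enables, covering classification, certification, negotiation, change history and interface metadata; every element and attribute name is a hypothetical placeholder.

```python
# Minimal sketch of an XML asset descriptor of the kind X-ARM enables.
# All element and attribute names are hypothetical placeholders, not the
# actual X-ARM schema.
import xml.etree.ElementTree as ET

asset = ET.Element("asset", id="br.ufrn.cart", version="1.2.0")
ET.SubElement(asset, "classification", domain="e-commerce", type="component")
ET.SubElement(asset, "certification", authority="in-house", level="tested")
nego = ET.SubElement(asset, "negotiation")
ET.SubElement(nego, "license", model="per-seat", currency="BRL", price="1500")
hist = ET.SubElement(asset, "change-history")
ET.SubElement(hist, "change", version="1.2.0", summary="Fixed rounding bug")
iface = ET.SubElement(asset, "interface", name="ICart")
ET.SubElement(iface, "operation", signature="addItem(sku: string, qty: int)")

print(ET.tostring(asset, encoding="unicode"))
```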
Abstract:
Mining Software Repositories (MSR) is a research area that analyses software repositories in order to derive information relevant to the research and practice of software engineering. The main goal of repository mining is to turn the static information stored in repositories (e.g. a code repository or a change request system) into valuable information that supports decision making in software projects. Another research area, Process Mining (PM), aims to discover the characteristics of the underlying processes of business organizations, supporting process improvement and documentation. Recent works have carried out several analyses combining MSR and PM techniques: (i) investigating the evolution of software projects; (ii) understanding the real underlying process of a project; and (iii) creating defect prediction models. However, few works have focused on analyzing the contributions of software developers by means of MSR and PM techniques. In this context, this dissertation develops two empirical studies assessing the contribution of software developers to an open-source project and to a commercial project using those techniques. The contributions of developers are assessed from three different perspectives: (i) buggy commits; (ii) the size of commits; and (iii) the most important bugs. For the open-source project, 12,827 commits and 8,410 bugs were analyzed, while 4,663 commits and 1,898 bugs were analyzed for the commercial project. Our results indicate that, for the open-source project, the developers classified as core developers contributed more buggy commits (although they also contributed the majority of commits), more code (commit size) and more important bug fixes, whereas for the commercial project the results could not indicate statistically significant differences between the developer groups.
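The kind of developer-group comparison reported above can be sketched as below: classify developers as core or peripheral by commit volume and compare their buggy-commit rates with a non-parametric test. The commit records, the 20% core threshold and the choice of the Mann-Whitney test are illustrative assumptions, not the dissertation's actual method.

```python
# Minimal sketch of an MSR-style developer comparison: split developers
# into core/peripheral by commit share, then compare buggy-commit rates.
# The commit log below is fabricated, not the analyzed project data.
from collections import Counter
from scipy.stats import mannwhitneyu

commits = [  # (author, commit_is_buggy) -- toy change log
    ("alice", True), ("alice", False), ("alice", True), ("alice", False),
    ("bob", False), ("bob", True), ("carol", False), ("dave", True),
]

total = Counter(a for a, _ in commits)
buggy = Counter(a for a, is_buggy in commits if is_buggy)

# Core developers: authors of at least 20% of all commits (a heuristic)
core = {a for a, n in total.items() if n / len(commits) >= 0.20}
core_rates = [buggy[a] / total[a] for a in total if a in core]
other_rates = [buggy[a] / total[a] for a in total if a not in core]

u, p = mannwhitneyu(core_rates, other_rates, alternative="two-sided")
print(f"core {core_rates} vs others {other_rates}: U = {u}, p = {p:.3f}")
```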