25 results for model base
at the Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
This work proposes a systematic, model-driven and aspect-oriented approach to variability management that combines mechanisms from Aspect-Oriented Software Development (AOSD) and Model-Driven Development (MDD). The main goal of the approach, named CrossMDA-SPL, is to improve the management, modularization and separation of architectural variability in Software Product Lines (SPLs) at a high level of abstraction (model), covering the design and implementation phases and exploiting the synergy between AOSD and MDD. CrossMDA-SPL defines base artifacts that promote a clear separation between the mandatory and optional features of the SPL architecture. These artifacts are represented by two models: (i) the core (base domain) model, responsible for specifying the features common to all members of the SPL, and (ii) the variability model, responsible for representing the variable features of the SPL. In addition, the approach comprises: (i) guidelines for modeling and representing variability, (ii) the CrossMDA-SPL services and process, and (iii) models of the SPL architecture or of a product instance of the SPL. The guidelines exploit the advantages of AOSD and MDD to promote better modularization of the variable features of the SPL architecture during the creation of the core and variability models. The services and sub-processes automatically combine the core and variability models through a transformation process and generate new models that represent the implementation of the SPL architecture or an instance model of a product, providing mechanisms for effective modularization of variability in SPL architectures at the model level. The concepts are described and evaluated through a case study of an SPL for electronic ticketing management systems for public transportation.
Abstract:
Aspect-Oriented Software Development (AOSD) is a technique that complements Object-Oriented Software Development (OOSD) by modularizing several concerns that OOSD approaches do not modularize appropriately. However, the current state of the art in AOSD suffers under software evolution, mainly because aspect definitions may stop working correctly when base elements evolve. A promising way to deal with this problem is the definition of model-based pointcuts, in which pointcuts are defined over a conceptual model; this strategy makes pointcuts less fragile to software evolution than pointcuts defined directly over base elements. Based on this strategy, this work defines a conceptual model at a high level of abstraction in which software patterns and architectures can be specified and, through Model-Driven Development techniques, instantiated and composed in an architecture description language that supports aspect modeling at the architectural level. Our MDD approach allows concepts at the architectural level to be propagated to other abstraction levels (the design level, for example) through MDA transformation rules. This work also presents an Eclipse plug-in, called AOADLwithCM, created to support the proposed development process. The AOADLwithCM plug-in was used to describe a case study based on the MobileMedia system, which shows step by step how the conceptual-model approach can minimize the pointcut fragility problem caused by software evolution. The MobileMedia case study was also used as input to analyze software evolution according to the metrics proposed by Khatchadourian, Greenwood and Rashid, and to analyze how evolution of the base model affects maintenance of the aspectual model with and without the conceptual-model approach.
Abstract:
The aim of this work was to extract and characterize xylan from corn cobs and to prepare xylan-based microcapsules. For that purpose, an alkaline extraction of xylan was carried out, followed by characterization of the polymer with respect to its technological properties, such as angle of repose, Hausner ratio, density, compressibility and compactability. A low-cost and rapid analytical procedure to identify xylan by infrared spectroscopy was also studied. Xylan was characterized as a yellowish fine powder with low density and poor flow properties. After the extraction and characterization of the polymer, xylan-based microcapsules were prepared by interfacial crosslinking polymerization and characterized in order to obtain gastroresistant multiparticulate systems. This work covered the most suitable parameters for the preparation of the microcapsules as well as the study of the process, a scale-up methodology and biological analysis. Magnetic nanoparticles were used as a model system to be encapsulated by the xylan microcapsules. According to the results, the xylan-based microcapsules were resistant to several conditions found along the gastrointestinal tract and were able to prevent the early degradation of the magnetic nanoparticles.
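Two of the flow-related indices mentioned above, the Hausner ratio and Carr's compressibility index, follow directly from bulk and tapped density. A minimal sketch, assuming hypothetical density values rather than the thesis data:

```python
def hausner_ratio(bulk_density: float, tapped_density: float) -> float:
    """Hausner ratio: tapped density divided by bulk density (values above ~1.25 suggest poor flow)."""
    return tapped_density / bulk_density

def carr_index(bulk_density: float, tapped_density: float) -> float:
    """Carr compressibility index (%) computed from the same two densities."""
    return 100.0 * (tapped_density - bulk_density) / tapped_density

# Hypothetical example values (g/mL), not taken from the thesis:
bulk, tapped = 0.32, 0.45
print(f"Hausner ratio: {hausner_ratio(bulk, tapped):.2f}")
print(f"Carr index: {carr_index(bulk, tapped):.1f} %")
```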
Abstract:
Hormone therapy is an important tool in the treatment of breast cancer, and tamoxifen is one of the most important drugs used in this type of treatment. More recently, other drugs based on the inhibition of aromatase, the enzyme responsible for the synthesis of estrogenic steroids from androgenic ones, have been developed. The objective of this study was to develop a quantitative cytological model of the murine estrous cycle that allows the characterization of the effect of different hormonal drugs on the vaginal epithelium. The monochromatic staining technique with Evans blue (C.I. 23860) proved efficient for the qualitative and quantitative classification of the cycle. Differences were observed in the cytological pattern of animals submitted to the studied drugs: tamoxifen lengthened the phases of lower maturation (diestrous), while anastrozole and exemestane increased the duration of the phases of higher maturation (estrous). The data were analyzed through cubic nonlinear (spline) regression, which allowed a better characterization of the drugs, suggesting distinct cytological profiles for estrogen-receptor antagonism (tamoxifen), aromatase competition (anastrozole) and inhibition of the enzyme (exemestane).
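A minimal sketch of the kind of cubic spline regression mentioned above, assuming a hypothetical maturation score over time rather than the actual cytological measurements:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical data: day of treatment vs. a cytological maturation score (not thesis data).
days = np.arange(0, 15)
score = np.array([60, 62, 58, 55, 50, 47, 45, 44, 46, 49, 52, 55, 57, 59, 61], float)

# Cubic smoothing spline (k=3); the smoothing factor s is a tunable assumption.
spline = UnivariateSpline(days, score, k=3, s=50)

grid = np.linspace(0, 14, 200)
smooth = spline(grid)                               # smoothed cycle profile
print(f"Minimum of the fitted profile: {smooth.min():.1f} at day {grid[smooth.argmin()]:.1f}")
```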
Abstract:
Since the last century, the Six Sigma strategy has been the focus of study of many researchers; among its findings is the importance of process data for manufacturing products free of defects. This work therefore focuses on the importance of data quality in an enterprise. A descriptive-exploratory study of seventeen compounding pharmacies in Rio Grande do Norte was undertaken with the objective of creating a base structure model for classifying enterprises according to their databases. Statistical methods such as cluster and discriminant analyses were applied to a questionnaire built specifically for this study. The data collected identified four groups, showing the strong and weak characteristics of each group and how they differ from one another.
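A minimal sketch of combining hierarchical clustering with discriminant analysis on questionnaire data, assuming randomly generated scores in place of the actual survey responses:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical questionnaire scores: 17 pharmacies x 6 items (not the thesis data).
rng = np.random.default_rng(0)
X = rng.normal(size=(17, 6))

# Hierarchical clustering (Ward linkage) cut into four groups, as in the study.
groups = fcluster(linkage(X, method="ward"), t=4, criterion="maxclust")

# Discriminant analysis to check how well the questionnaire items separate the groups.
lda = LinearDiscriminantAnalysis().fit(X, groups)
print("Resubstitution accuracy:", lda.score(X, groups))
```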
Abstract:
The electronic mail service is one of the Internet services that has grown most in the corporate environment. This evolution has brought several problems to organizations, especially regarding the information that circulates inside the corporate network. The lack of proper guidance to people about the usage and the importance of the security of these resources leaves breaches and causes misuse and overuse of the service, for example. The recent literature presents several ideas that have helped organizations to plan and implement information security for electronic mail in the computing environment. However, these ideas are still not put into practice in many companies, public or private. This dissertation presents the results of a research whose goal was to identify the importance that user training has for the information security policy, through a case study in a private higher education institution in this state. The work was also guided by the ISO/IEC 17799 standard, in particular its provisions on personnel security. The study was developed upon a model proposed for this research, which sought to offer conditions to guide the studied institution in better planning an information security policy for electronic mail. The research has an exploratory and descriptive nature and a qualitative approach. First, a questionnaire was applied to the information technology manager, as the best way to obtain general data and to deepen the contact, which until then had been kept through e-mail. After this first contact, eleven interviews were conducted with the same manager, besides interviews with twenty-four users, among employees and students. After collecting and transcribing the interviews, all the information given was reviewed with the manager to correct any mistakes and bring it up to date, and only then did the data analysis begin. The research suggests that the institution has a proactive attitude towards the information security policy and the use of electronic mail. However, it became clear that the respondents perceive information security in a very inexperienced way, stemming from the lack of planning of a training program capable of solving the problem.
Abstract:
Most algorithms for state estimation based on the classical model are adequate only for use in transmission networks. Few algorithms have been developed specifically for distribution systems, probably because of the small amount of data available in real time: most overhead feeders have only current and voltage measurements at the medium-voltage bus-bar of the substation. Classical algorithms are therefore difficult to implement, even when off-line acquired data are considered as pseudo-measurements. However, the need to automate the operation of distribution networks, mainly with regard to the selectivity of protection systems and to the possibility of load transfer maneuvers, is changing network planning policy. Equipment incorporating telemetry and command modules has been installed to improve operational features, increasing the amount of measurement data available in real time at the System Operation Center (SOC). This encourages the development of a state estimator model that combines real-time information with pseudo-measurements of loads built from typical power factors and utilization (demand) factors of distribution transformers. This work reports the development of a new state estimation method specific to radial distribution systems. The main algorithm of the method is based on the power summation load flow. The estimation is carried out piecewise, section by section of the feeder, from the substation to the terminal nodes. For each section a measurement model is built, resulting in a nonlinear overdetermined set of equations whose solution is obtained through the Gaussian normal equations; the estimated variables of one section are then used as pseudo-measurements for the next section. In general, the measurement set of a generic section consists of pseudo-measurements of power flows and nodal voltages obtained from the previous section (or real-time measurements, if they exist), besides pseudo-measurements of injected powers for the power summations, whose functions are the load flow equations, assuming that the network can be represented by its single-phase equivalent. The great advantages of the algorithm are its simplicity, low computational effort and good accuracy of the estimated values. Besides the power summation state estimator, this work shows how other algorithms could be adapted to provide state estimation for medium-voltage substations and networks, namely Schweppe's method and an algorithm based on current proportionality that is usually adopted for network planning tasks. Both estimators were implemented not only as alternatives to the proposed method, but also to obtain results that support its validation. Since in most cases no power measurement is performed at the beginning of the feeder, and this is required by the power summation estimation method, a new algorithm for estimating the network variables at the medium-voltage bus-bar was also developed.
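A minimal sketch of solving a nonlinear overdetermined measurement model through the Gaussian normal equations (Gauss-Newton iteration), with a hypothetical two-state measurement function standing in for the feeder section model:

```python
import numpy as np

def gauss_newton(h, jac, z, x0, tol=1e-8, max_iter=50):
    """Solve an overdetermined nonlinear measurement model z ~ h(x)
    by iterating the normal equations (J^T J) dx = J^T (z - h(x))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = z - h(x)                          # residuals
        J = jac(x)                            # Jacobian of h at the current estimate
        dx = np.linalg.solve(J.T @ J, J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Hypothetical two-state example: measurements are [x1, x2, x1*x2].
h = lambda x: np.array([x[0], x[1], x[0] * x[1]])
jac = lambda x: np.array([[1.0, 0.0], [0.0, 1.0], [x[1], x[0]]])
z = np.array([1.02, 0.98, 1.01])
print(gauss_newton(h, jac, z, x0=[1.0, 1.0]))
```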
Abstract:
The present work describes the use of a mathematical tool to solve problems arising from control theory, including the identification, phase-portrait and stability analysis, and temporal evolution of the current of an induction motor plant. System identification is an area of mathematical modeling whose objective is the study of techniques capable of determining a dynamic model that represents a real system. The tool used for the identification and analysis of the nonlinear dynamical system is the Radial Basis Function (RBF) network. The process, or plant, has an unknown mathematical model but belongs to a particular class whose internal dynamics can be modeled. As a contribution, an analysis of the asymptotic stability of the RBF is presented. The identification using radial basis functions is demonstrated through computer simulations based on a real data set obtained from the plant.
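A minimal sketch of nonlinear identification with a Gaussian radial basis function network, assuming a toy first-order plant and least-squares output weights rather than the motor data set and the stability analysis developed in the work:

```python
import numpy as np

def rbf_design_matrix(x, centers, width):
    """Gaussian RBF features: phi_ij = exp(-||x_i - c_j||^2 / (2*width^2))."""
    d2 = (x[:, None, :] - centers[None, :, :]) ** 2
    return np.exp(-d2.sum(axis=2) / (2.0 * width ** 2))

# Hypothetical one-step-ahead identification of y(k+1) = f(y(k), u(k)).
rng = np.random.default_rng(1)
y = np.zeros(201); u = rng.uniform(-1, 1, 200)
for k in range(200):
    y[k + 1] = 0.8 * y[k] + 0.4 * np.tanh(u[k])      # "unknown" toy plant

X = np.column_stack([y[:-1], u])                      # regressors [y(k), u(k)]
centers = X[rng.choice(len(X), 20, replace=False)]    # 20 centers picked from the data
Phi = rbf_design_matrix(X, centers, width=0.5)
w, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)       # least-squares output weights

y_hat = Phi @ w
print("RMS identification error:", np.sqrt(np.mean((y_hat - y[1:]) ** 2)))
```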
Abstract:
An alternative nonlinear technique for decoupling and control is presented. This technique is based on an RBF (Radial Basis Function) neural network and is applied to the synchronous generator model. The synchronous generator is a coupled system; in other words, a change in one input variable of the system changes more than one output. The RBF network performs the decoupling, separating the control of the following output variables: the load angle and the flux linkage in the field winding. This technique does not require knowledge of the system parameters and, due to the nature of radial basis functions, is robust to parametric uncertainties and disturbances and simpler to apply in control. The RBF decoupler designed in this work decouples a nonlinear MIMO system with two inputs and two outputs. The weights between the hidden and output layers are modified online, using an adaptive law computed in real time and developed through Lyapunov's method. A decoupling adaptive controller uses the errors between the system outputs and the model outputs, together with filtered outputs of the system, to produce the control signals. The RBF network forces each output of the generator to behave like the reference model; when the RBF approximates the control signals adequately, the decoupling of the system is achieved. A mathematical proof and analysis are presented, and simulations show the performance and robustness of the RBF network.
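A minimal sketch of an online, gradient-type weight-update law of the kind used with adaptive RBF controllers; the toy plant, basis functions and gains are assumptions, not the generator model or the Lyapunov-derived law of the work:

```python
import numpy as np

def gaussian_basis(x, centers, width=1.0):
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * width ** 2))

rng = np.random.default_rng(2)
centers = rng.uniform(-1, 1, size=(15, 2))    # 15 RBF centers over the 2-D input space
W = np.zeros(15)                              # output weights, adapted online
gamma, dt = 5.0, 0.01                         # adaptation gain and time step

y = 0.0
for k in range(2000):
    y_ref = np.sin(0.01 * k)                  # reference-model output
    x = np.array([y, y_ref])                  # network input (plant output, reference)
    phi = gaussian_basis(x, centers)
    u = W @ phi                               # RBF control signal
    y += dt * (-y + u)                        # toy first-order plant
    e = y_ref - y                             # tracking error
    W += dt * gamma * e * phi                 # gradient-type online adaptive law

print("Final tracking error:", abs(np.sin(0.01 * 1999) - y))
```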
Abstract:
Ceramics with a porous cellular structure, called ceramic foams, have potential use in several applications, such as thermal insulation, catalyst supports and filters, among others. Among the techniques for obtaining porous ceramics, the replication method is an important process. This method consists of impregnating a sponge (usually polymeric) with ceramic slurry, followed by a heat treatment in which the organic material decomposes and the ceramic material is sintered, resulting in a ceramic structure that is a replica of the impregnated sponge. Knowledge of the mechanical properties of these ceramics is important for these materials to be used commercially. Gibson and Ashby developed a mathematical model to describe the mechanical behavior of cellular solids; however, this model was not developed for ceramics produced by the replica method, because it does not consider the defects introduced by this type of processing. In this study, the mechanical behavior of porous alumina ceramics obtained by the replica method was investigated, and modifications to the Gibson and Ashby model were proposed to accommodate this material. The polymer sponge used in the processing was characterized by thermogravimetric analysis and scanning electron microscopy. The materials obtained after sintering were characterized by 4-point bending and compression strength tests, density and porosity measurements, and scanning electron microscopy. From these results, the mechanical strength behavior was compared with the Gibson and Ashby model for cellular solids, and a correction to this model was proposed through a factor related to the degree of strut integrity, which accounts for the fissures present in the structure of these materials as well as the geometry of defects within the struts.
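A minimal sketch of the Gibson-Ashby strength scaling for open-cell brittle foams, with an extra strut-integrity factor standing in for the proposed correction; the constant, exponent and example values are textbook-style assumptions, not the fitted results of this study:

```python
def gibson_ashby_strength(sigma_solid, rel_density, C=0.2, exponent=1.5, strut_integrity=1.0):
    """Gibson-Ashby-type crushing strength of an open-cell brittle foam:
    sigma = strut_integrity * C * sigma_solid * (rho/rho_s)**exponent.
    The strut_integrity factor (0 < f <= 1) is a placeholder for the correction
    discussed in the abstract; its exact form here is an assumption."""
    return strut_integrity * C * sigma_solid * rel_density ** exponent

# Hypothetical alumina values: solid strength 300 MPa, relative density 0.12.
ideal = gibson_ashby_strength(300.0, 0.12)
flawed = gibson_ashby_strength(300.0, 0.12, strut_integrity=0.4)
print(f"Ideal struts: {ideal:.2f} MPa, fissured/hollow struts: {flawed:.2f} MPa")
```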
Abstract:
The determination of the rheology of drilling fluids is of fundamental importance for selecting the best composition and the best treatment to be applied to these fluids. This work presents a study of the rheological behavior of some additives used as viscosifiers in water-based drilling fluids. The evaluated additives were carboxymethylcellulose (CMC), xanthan gum (GX) and bentonite. The main objective was to characterize rheologically the suspensions composed of these additives by applying mathematical models of fluid flow behavior, in order to determine the flow equation that best represents each system, as well as the model parameters. The mathematical models applied in this research were the Bingham model, the Ostwald-de-Waele model and the Herschel-Bulkley model. A preliminary study of the hydration time of each additive was carried out to evaluate the effect of polymer and clay hydration on the rheological behavior of the fluid. The rheological characterization was done through typical rheology experiments, using a coaxial cylinder viscosimeter, from which the flow curves and the thixotropic magnitude of each fluid were obtained. For each additive, the rheological behavior as a function of temperature was also evaluated, as well as the fluid stability as a function of the concentration and kind of additive used. After analysis of the results, mixtures of polymer and clay were prepared in order to evaluate the rheological changes produced by incorporating the polymer into the water + bentonite system. The results showed that the Ostwald-de-Waele model provided the best fit for the fluids prepared with CMC, while for the fluids with xanthan gum and bentonite the best fit was given by the Herschel-Bulkley model.
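A minimal sketch of fitting a flow curve to the models cited above (Bingham, Ostwald-de-Waele and Herschel-Bulkley), using hypothetical shear rate/stress pairs rather than the measured viscosimeter data:

```python
import numpy as np
from scipy.optimize import curve_fit

def bingham(rate, tau0, mu_p):            # tau = tau0 + mu_p * rate
    return tau0 + mu_p * rate

def ostwald_de_waele(rate, K, n):          # tau = K * rate**n (power law)
    return K * rate ** n

def herschel_bulkley(rate, tau0, K, n):    # tau = tau0 + K * rate**n
    return tau0 + K * rate ** n

# Hypothetical flow-curve data (shear rate in 1/s, shear stress in Pa), not the thesis data.
rate = np.array([5.1, 10.2, 170, 340, 511, 1022], dtype=float)
stress = np.array([4.0, 5.5, 18.0, 26.0, 32.0, 45.0])

popt, _ = curve_fit(herschel_bulkley, rate, stress, p0=[1.0, 1.0, 0.5])
tau0, K, n = popt
print(f"Herschel-Bulkley fit: tau0 = {tau0:.2f} Pa, K = {K:.3f} Pa.s^n, n = {n:.2f}")
```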
Abstract:
Currently, research has been developed to obtain new materials and methodologies that aim to minimize the environmental problems caused by the discharge of industrial effluents contaminated with heavy metals. Adsorption has been used as an effective, economically viable and potentially important alternative technology for the removal of metals, especially when natural adsorbents such as certain types of clay are used. Chitosan, a polymer of natural origin present in the shells of crustaceans and insects, has also been used for this purpose. Among the clays, vermiculite is distinguished by its good ion-exchange capacity, and in its expanded form its properties are enhanced by a large increase in specific surface area. This study aimed to evaluate the functionality of the hybrid material obtained by modifying expanded vermiculite with chitosan in the removal of lead(II) ions from aqueous solution. The material was characterized by infrared spectroscopy (IR) in order to evaluate the efficiency of the modification of the matrix (vermiculite) by the organic material (chitosan). The thermal stability of the material and the clay/polymer ratio were evaluated by thermogravimetry. The surface of the material was examined by scanning electron microscopy (SEM) and BET analysis. The BET analysis revealed a significant increase in the surface area of the vermiculite after interaction with chitosan, reaching a value of 21.6156 m2/g. Adsorption tests were performed as a function of particle size, concentration and contact time. The results show that the removal capacity of the vermiculite was, on average, 88.4% for lead at concentrations ranging from 20 to 200 mg/L and 64.2% at a concentration of 1000 mg/L. Regarding particle size, adsorption increased with decreasing particle size. As a function of contact time, adsorption equilibrium was reached within 60 minutes. The isotherm data were fitted to the Freundlich equation. The kinetic study showed that the pseudo-second-order model best describes the adsorption, with K2 = 0.024 g.mg-1.min-1 and Qmax = 25.75 mg/g, a value very close to the calculated Qe = 26.31 mg/g. From the results we can conclude that the material can be used in wastewater treatment systems as an adsorbent of metal ions, due to its high adsorption capacity.
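A minimal sketch of fitting uptake data to the integrated pseudo-second-order kinetic model, with hypothetical time/uptake pairs in place of the measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """Integrated pseudo-second-order model: q(t) = k2*qe^2*t / (1 + k2*qe*t)."""
    return k2 * qe ** 2 * t / (1.0 + k2 * qe * t)

# Hypothetical uptake data (time in min, q in mg/g), not the thesis measurements.
t = np.array([5, 10, 20, 30, 45, 60, 90], dtype=float)
q = np.array([12.0, 17.5, 21.0, 23.0, 24.5, 25.2, 25.6])

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, q, p0=[26.0, 0.02])
print(f"qe = {qe_fit:.2f} mg/g, k2 = {k2_fit:.4f} g/(mg.min)")
```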
Abstract:
Polyurethanes are very versatile macromolecular materials that can be used in the form of powders, adhesives and elastomers. As a consequence, they constitute an important subject of research as well as outstanding materials used in several manufacturing processes. In addition to the search for new polyurethanes, the control of the kinetics during their preparation is a very important topic, mainly when the polyurethane is obtained via bulk polymerization. The work in this thesis was directed towards this subject, particularly the synthesis of polyurethanes based on castor oil and isophorone diisocyanate. As a first step, castor oil was characterized using the following analytical methods: iodine value, saponification value, refractive index, moisture content and infrared absorption spectroscopy (FTIR). As a second step, test specimens of these polyurethanes were obtained via bulk polymerization and submitted to swelling experiments with different solvents, from which the Hildebrand solubility parameter of the material was determined. Finally, bulk polymerization was carried out in a differential scanning calorimetry (DSC) instrument, using different heating rates, under two conditions: without catalyst and with dibutyltin dilaurate (DBTDL) as catalyst. The DSC curves were fitted to a kinetic model using the isoconversional method, indicating the autocatalytic effect characteristic of this class of polymerization reactions.
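A minimal sketch of an autocatalytic (Kamal-type) cure-rate model integrated over time; the rate constants and exponents are illustrative assumptions, not the isoconversional results of the thesis:

```python
import numpy as np
from scipy.integrate import solve_ivp

def kamal_autocatalytic(t, alpha, k1, k2, m, n):
    """Kamal-type autocatalytic model: d(alpha)/dt = (k1 + k2*alpha^m) * (1 - alpha)^n."""
    a = alpha[0]
    return [(k1 + k2 * a ** m) * (1.0 - a) ** n]

# Hypothetical isothermal parameters (1/min), not the thesis's fitted constants.
k1, k2, m, n = 0.01, 0.15, 1.0, 1.5
sol = solve_ivp(kamal_autocatalytic, (0.0, 120.0), [0.0], args=(k1, k2, m, n),
                t_eval=np.linspace(0.0, 120.0, 7))
for t, a in zip(sol.t, sol.y[0]):
    print(f"t = {t:5.1f} min  conversion = {a:.3f}")
```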
Abstract:
Seeking greater valorization of cheese whey, a process was developed for the hydrogenation of lactose to produce lactitol, a polyol with high added value, using the catalysts Ni/activated carbon (15% and 20% nickel), the nitride Mo2N, the bimetallic carbide Ni-Mo/activated carbon and the carbide Mo2C. After synthesis, the prepared catalysts were analyzed by SEM, XRD, laser granulometry and BET. The reactor used in the catalytic hydrogenation of lactose was of the slurry bed type, and the pressure (68 atm), temperature (120 °C) and stirring speed (500 rpm) were kept constant during the experiments. The system operated in batch mode for the solid and liquid phases and in semi-continuous mode for the gas phase. Besides the nature of the catalyst, the influence of the pH of the reaction medium was studied for the Mo2C carbide, and the inhibiting character of proteins and chloride ions on the activity of the Ni (20%)/activated carbon and bimetallic Ni-Mo/activated carbon catalysts was evaluated. The reduction of the protein content was performed by coagulation with chitosan, and the removal of chloride ions was carried out with ion-exchange resins; in these processes, the maximum percentages extracted were about 74% and 79%, respectively. The micrographs of the Mo2C and Mo2N powders showed homogeneous clusters, whereas the catalysts supported on activated carbon presented a microporous structure impregnated with small particles, indicating the presence of the metal. The results showed high conversion of lactose to lactitol: 90% for the Ni (20%)/activated carbon catalyst at pH 6 and 46% for the Mo2C carbide at pH 8 (after addition of NH4OH), using commercial lactose. The evolution of the constituents present in the reaction medium was monitored by liquid chromatography. A heterogeneous kinetic model of the Langmuir-Hinshelwood type was developed, and the estimated constants showed that the carbide- and nitride-based catalysts promoted, at a certain rate, the adsorption, desorption and production of lactitol.
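A minimal sketch of a simplified single-site Langmuir-Hinshelwood rate law integrated in a batch reactor, assuming hydrogen in excess and hypothetical constants rather than the estimated parameters of the work:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lh_batch(t, C, k, K_L, K_P):
    """Simplified single-site Langmuir-Hinshelwood rate with hydrogen in excess:
    r = k*K_L*C_lactose / (1 + K_L*C_lactose + K_P*C_lactitol)."""
    C_lac, C_lct = C
    r = k * K_L * C_lac / (1.0 + K_L * C_lac + K_P * C_lct)
    return [-r, r]

# Hypothetical lumped constants, not the thesis values.
k, K_L, K_P = 0.05, 2.0, 1.0          # mol/(L.min), L/mol, L/mol
sol = solve_ivp(lh_batch, (0.0, 300.0), [0.28, 0.0], args=(k, K_L, K_P),
                t_eval=[0, 60, 120, 180, 240, 300])
for t, x in zip(sol.t, 1.0 - sol.y[0] / 0.28):
    print(f"t = {t:5.0f} min  lactose conversion = {x:.2f}")
```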
Abstract:
Java Card technology allows the development and execution of small applications embedded in smart cards. A Java Card application is composed of an external card client and of an application in the card that implements the services available to the client by means of an Application Programming Interface (API). Usually, these applications manipulate and store important information, such as cash and confidential data of their owners; thus, it is necessary to adopt rigor when developing a smart card application in order to improve its quality and trustworthiness. The use of formal methods in the development of these applications is a way to reach these quality requirements. The B method is one of many formal methods for system specification. Development in B starts with the functional specification of the system, continues with the application of optional refinements to the specification and, from the last refinement level, allows code to be generated for some programming language. The B formalism has good tool support, and its application to Java Card is adequate since the specification and development of APIs is one of the major applications of B. The BSmart method proposed here aims to promote the rigorous development of Java Card applications up to code generation, based on the refinement of a formal specification described in the B notation. This development is supported by the BSmart tool, composed of programs that automate each stage of the method, and by a library of B modules and Java Card classes that model primitive types, essential Java Card API classes and reusable data structures.