40 results for requirements complexity metrics
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
The importance of non-functional requirements for computer systems is increasing. Satisfying these requirements demands special attention to the software architecture, since an unsuitable architecture adds complexity beyond the intrinsic complexity of the system. Studies have shown that, although requirements engineering and software architecture activities address different aspects of development, they must be performed iteratively and intertwined to produce satisfactory software systems. The STREAM process presents a systematic approach to reduce the gap between requirements and architecture development, emphasizing functional requirements but using non-functional requirements in an ad hoc way. Non-functional requirements, however, typically influence the system as a whole. STREAM uses architectural patterns to refine the software architecture, and these patterns are chosen with non-functional requirements taken into account only in an ad hoc way. This master's thesis presents a process that improves STREAM by making the choice of architectural patterns systematic, using non-functional requirements to guide the refinement of the software architecture.
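The abstract does not detail how the choice is made systematic; as a minimal illustration of the general idea, the hypothetical sketch below ranks candidate architectural patterns by how well their quality-attribute impacts match weighted non-functional requirements. All pattern names, impact values and weights are invented, not taken from the thesis.

```python
# Minimal sketch (not the thesis process): ranking architectural patterns by
# how well their known quality-attribute impacts match prioritized NFRs.
# Impact values in [-1, 1] and NFR weights are illustrative assumptions.
PATTERN_IMPACTS = {
    "Layers":        {"maintainability": 0.8, "performance": -0.4, "security": 0.3},
    "Broker":        {"interoperability": 0.8, "performance": -0.3},
    "Pipes-Filters": {"reusability": 0.7, "performance": 0.2},
}

def rank_patterns(nfr_weights):
    """Score each pattern as the weighted sum of its impacts on the NFRs."""
    scores = {}
    for pattern, impacts in PATTERN_IMPACTS.items():
        scores[pattern] = sum(w * impacts.get(attr, 0.0)
                              for attr, w in nfr_weights.items())
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# NFR priorities elicited for a hypothetical system (weights sum to 1).
print(rank_patterns({"maintainability": 0.5, "performance": 0.3, "security": 0.2}))
```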
Abstract:
Nowadays, there are many aspect-oriented middleware implementations that take advantage of the modularity provided by the aspect-oriented paradigm. Although published works always present an assessment of the middleware according to some quality attribute, there is no specific set of metrics to assess such systems comprehensively, across various quality attributes. This work proposes a suite of metrics for the assessment of aspect-oriented middleware systems at different development stages: design, refactoring, implementation and runtime. The work presents the metrics and how they are applied at each development stage. The suite is composed of metrics associated with static properties (modularity, maintainability, reusability, flexibility, complexity, stability, and size) and dynamic properties (performance and memory consumption). These metrics are based on existing approaches for assessing object-oriented and aspect-oriented systems. The proposed metrics are applied in the context of OiL (Orb in Lua), a middleware based on CORBA and implemented in Lua, and AO-OiL, a refactoring of OiL that follows a reference architecture for aspect-oriented middleware systems. The case study performed on OiL and AO-OiL is a system for monitoring oil wells. This work also presents the CoMeTA-Lua tool, which automates the collection of coupling and size metrics from Lua source code.
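As a rough illustration of what collecting such static metrics can look like (independent of CoMeTA-Lua, whose interface is not described here), the sketch below extracts two of the static properties mentioned, size and coupling, from a Lua snippet. The regular expression and the sample code are assumptions for the example.

```python
# Minimal sketch: size as non-blank lines of code, coupling as the number of
# distinct modules pulled in via require(), scanned from Lua source text.
import re

REQUIRE_RE = re.compile(r'require\s*\(?\s*["\']([\w./]+)["\']')

def lua_metrics(source: str) -> dict:
    """Return non-blank LOC and the set of modules the file depends on."""
    loc = sum(1 for line in source.splitlines() if line.strip())
    coupled = set(REQUIRE_RE.findall(source))
    return {"loc": loc, "coupling": len(coupled), "modules": sorted(coupled)}

sample = '''
local oo = require "loop.simple"
local socket = require("socket")
local function ping() return true end
'''
print(lua_metrics(sample))  # {'loc': 3, 'coupling': 2, 'modules': [...]}
```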
Abstract:
VALENTIM, R. A. M.; SOUZA NETO, Plácido Antônio de. O impacto da utilização de design patterns nas métricas e estimativas de projetos de software: a utilização de padrões tem alguma influência nas estimativas?. Revista da FARN, Natal, v. 4, p. 63-74, 2006.
Abstract:
The demographic and epidemiological transition, driven by declining birth and mortality rates together with changes in morbidity and mortality patterns, is reflected in an aging population and a rise in chronic diseases. These diseases are characterized by multiple etiologies, risk factors, long latency periods, prolonged evolution and non-infectious origin, and are associated with functional impairment and disability. Elderly people with chronic non-communicable diseases therefore deserve priority, as they constitute a group vulnerable to comorbidities in aging, with increased demand for, and spending on, health services. This study aimed to analyze the understanding of elderly people with chronic non-communicable diseases attending medium-complexity services, as a contribution to the improvement of health care in the city of Natal/RN. This is a descriptive and exploratory study with a quantitative approach, carried out at the Specialized Center for Elderly Health Care and at the Pescadores Hospital. The population comprised 4,180 persons, with a sample of 124 elderly people aged over 60 years attended in these medium-complexity services. The instrument was a structured form adapted from a Ministry of Health questionnaire for monitoring risk and protective factors for chronic diseases. Data were collected through an interview form covering demographic data, habits, health status and health care services. The results were processed using the Statistical Package for the Social Sciences, version 18.0, and analyzed with simple statistics. Most of the elderly were female, predominantly between 70 and 74 years old, married, of brown skin tone and Catholic religion; more than half had incomplete basic education, a family income of one to two minimum wages, and lived with their families. Regarding the interviewees' lifestyle, 94.4% of them ate chicken and 97.6% ate fruit; a reduction in smoking, alcohol consumption and physical activity was observed with increasing age; 58.1% had insomnia and 18.5% used sleeping pills. The elderly (51.6%) reported using services in times of sickness, seeking primary care first (30.6%); 52% did not receive a referral, and 38.7% arrived through spontaneous demand. The most reported morbidity was hypertension, followed by musculoskeletal disorders. Regarding difficulties in seeking health services, delays in treatment and waiting lines were the most cited by the elderly. Almost all of them reported an absence of health promotion activities in these services, as well as of individual counseling on chronic diseases. The health professionals who cared for them were mostly physicians, followed by nurses. Based on these results, medium-complexity health services need to maintain a more continuous dialogue with the other levels of care and to focus on health promotion and prevention actions. The need for professionals qualified to deliver health care to the elderly is also highlighted, along with the implementation of protocols by a multidisciplinary health team, in order to provide better and more continuous care for elderly people with chronic diseases.
Abstract:
Venous ulcers are lesions resulting from chronic venous insufficiency, venous valvular abnormalities and venous thrombosis. Their occurrence has been growing with the increase in life expectancy of the world population. Fundamental aspects in caring for people with venous ulcers include an interdisciplinary approach, the adoption of specific protocols, knowledge and technical skill, coordination between the levels of care complexity of the Health System, and the active participation of patients and their families, within a holistic perspective. The construction of a clinical protocol for people with venous ulcers can help professionals in high-complexity services to assess patients and to establish quality care in a systematic way, focused on the factors that interfere with wound healing. Thus, this study aimed to analyze the validation evidence of a clinical protocol for people with venous ulcers treated at high-complexity services. This is a methodological study with a quantitative approach, developed in three stages: literature review, content validity evidence and validation evidence in the clinical context. It was approved by the Federal University of Rio Grande do Norte Research Ethics Committee (Opinion: 147.452; CAAE: 07556312.0.0000.5537). The literature review was conducted in August and September 2012 and became the basis for the construction of the protocol. Next, content validity evidence was gathered with 53 judges (experts), selected through the Lattes platform, who evaluated the protocol items. The judges were contacted by e-mail and rated the protocol via Google Docs.
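The abstract does not state how the judges' ratings were aggregated; a measure commonly used in this kind of content validation is the item-level Content Validity Index (I-CVI). The sketch below computes it over invented ratings; the 0.78 acceptability cutoff is a convention from the validation literature, not a figure from this study.

```python
# Illustrative sketch: I-CVI as the fraction of judges rating an item as
# relevant (3 or 4 on a 4-point scale). All data below are invented.

def item_cvi(ratings, relevant=(3, 4)):
    """Fraction of judges who rated the item as relevant."""
    return sum(r in relevant for r in ratings) / len(ratings)

# Hypothetical ratings from five judges for two protocol items.
items = {"wound_assessment": [4, 4, 3, 2, 4], "compression_therapy": [3, 4, 4, 4, 4]}
for name, ratings in items.items():
    cvi = item_cvi(ratings)
    # 0.78 is a commonly cited acceptability threshold, assumed here.
    print(f"{name}: I-CVI = {cvi:.2f}", "(acceptable)" if cvi >= 0.78 else "(review)")
```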
Abstract:
Frequency selective surfaces (FSS) are structures consisting of periodic arrays of conductive elements, called patches, which are usually very thin and printed on dielectric layers, or of apertures perforated in very thin metallic surfaces, for applications in the microwave and millimeter-wave bands. These structures are often used in aircraft, missiles, satellites, radomes, reflector antennas, high-gain antennas and microwave ovens, for example. Their main objective is to filter frequency bands, which can be transmitted or rejected depending on the requirements of the application. In turn, modern communication systems such as GSM (Global System for Mobile Communications), RFID (Radio Frequency Identification), Bluetooth, Wi-Fi and WiMAX, whose services are in high demand by society, have required the development of antennas whose main features are low cost, low profile, and reduced dimensions and weight. In this context, the microstrip antenna is an excellent choice for today's communication systems because, in addition to intrinsically meeting these requirements, planar structures are easy to manufacture and to integrate with other components in microwave circuits. Consequently, the analysis and synthesis of these devices, mainly because of the wide variety of possible shapes, sizes and frequencies of their elements, has been carried out with full-wave models, such as the finite element method, the method of moments and the finite-difference time-domain method. However, despite their accuracy, these methods require great computational effort. In this context, computational intelligence (CI) has been used successfully in the design and optimization of planar microwave structures, as a very appropriate auxiliary tool, given the complexity of the geometry of the antennas and FSS considered. Computational intelligence is inspired by natural phenomena such as learning, perception and decision, using techniques such as artificial neural networks, fuzzy logic, fractal geometry and evolutionary computation. This work studies the application of computational intelligence, using meta-heuristics such as genetic algorithms and swarm intelligence, to the optimization of antennas and frequency selective surfaces. Genetic algorithms are computational search methods, based on genetics and on the theory of natural selection proposed by Darwin, used to solve complex problems, e.g., problems whose search space grows with the size of the problem. Particle swarm optimization draws on collective intelligence and has been applied to optimization problems in many areas of research. The main objective of this work is the use of computational intelligence in the analysis and synthesis of antennas and FSS. The structures considered were a ring-type planar monopole microstrip antenna and a cross-dipole FSS. Optimization algorithms were developed, and results were obtained for the optimized geometries of the antennas and FSS considered. To validate the results, several prototypes were designed, built and measured. The measured results showed excellent agreement with the simulated ones. Moreover, the results obtained in this study were compared with those simulated using commercial software, and an excellent agreement was also observed. Specifically, the efficiency of the CI techniques used was evidenced by simulated and measured results: the bandwidth of an antenna was optimized for wideband (UWB, Ultra Wideband) operation using a genetic algorithm, and the bandwidth of a structure was optimized, by adjusting the length of the air gap between two frequency selective surfaces, using a particle swarm optimization algorithm.
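As a minimal illustration of the genetic-algorithm side of the approach (not the authors' code), the sketch below evolves a vector of geometry parameters toward the maximum of a toy fitness function; in the actual work, fitness would come from a full-wave simulation of the antenna or FSS. Population size, mutation rate and the fitness function are assumptions.

```python
# Minimal GA sketch: selection of the fittest half, one-point crossover,
# and random mutation over real-valued "geometry" genes in [0, 1].
import random

def fitness(genes):  # stand-in for simulated bandwidth (to maximize)
    return -sum((g - 0.6) ** 2 for g in genes)

def ga(n_genes=4, pop_size=30, generations=60, mut=0.1):
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                  # selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_genes)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:                 # mutation
                child[random.randrange(n_genes)] = random.random()
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

print(ga())  # converges near [0.6, 0.6, 0.6, 0.6] for this toy fitness
```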
Abstract:
Ensuring dependability requirements is essential for industrial applications, since faults may cause failures whose consequences include economic losses, environmental damage or harm to people. Given the relevance of the topic, this thesis proposes a methodology for the dependability evaluation of industrial wireless networks (WirelessHART, ISA100.11a, WIA-PA) at the early design phase; the proposal can also be easily adapted to the maintenance and expansion stages of a network. The methodology uses graph theory and the fault tree formalism to automatically create, from a given industrial wireless network topology, an analytical model on which dependability can be evaluated. The supported evaluation metrics are reliability, availability, MTTF (mean time to failure), importance measures of devices, redundancy aspects and common cause failures. It must be emphasized that the proposal is independent of any particular tool for quantitatively evaluating the target metrics; however, for validation purposes, a tool widely accepted in academia (SHARPE) was used. In addition, an algorithm for generating minimal cut sets, originally applied in graph theory, was adapted to the fault tree formalism to guarantee the scalability of the methodology in industrial wireless network environments (< 100 devices). Finally, the proposed methodology was validated on typical scenarios found in industrial environments, such as star, line, cluster and mesh topologies. Scenarios with common cause failures were also evaluated, along with best practices to guide the design of an industrial wireless network. To verify the scalability requirements, the performance of the methodology was analyzed in different scenarios, and the results show the applicability of the proposal to networks typically found in industrial environments.
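As an illustration of the evaluation step (not the thesis tool itself), the sketch below computes system unreliability from minimal cut sets by inclusion-exclusion, assuming independent device failures; the topology and failure probabilities are invented.

```python
# Illustrative sketch: P(top event) = P(union of minimal cut sets), where a
# cut set fails when all of its devices fail (independence assumed).
from itertools import combinations

def system_unreliability(cut_sets, p_fail):
    total, n = 0.0, len(cut_sets)
    for k in range(1, n + 1):                     # inclusion-exclusion terms
        for combo in combinations(cut_sets, k):
            devices = set().union(*combo)         # union of devices in the combo
            prob = 1.0
            for d in devices:
                prob *= p_fail[d]
            total += (-1) ** (k + 1) * prob
    return total

# Hypothetical star topology: every field device depends on the gateway GW.
cuts = [{"GW"}, {"D1"}, {"D2"}]
p = {"GW": 0.01, "D1": 0.05, "D2": 0.05}
print(f"unreliability = {system_unreliability(cuts, p):.4f}")  # ~0.1065
```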
Abstract:
In academia, it is common to create didactic processors for practical courses in the computer hardware area, which can also serve as subjects in courses on software platforms, operating systems and compilers. Often, these processors are described without a standard ISA, which requires the creation of compilers and other basic software to provide the hardware/software interface and hinders their integration with other processors and devices. Using reconfigurable devices described in an HDL allows the creation or modification of any microarchitecture component, from the functional units of the processor datapath to the state machine that implements the control unit, as new needs arise. In particular, RISP processors enable the modification of machine instructions, allowing instructions to be added or changed, and may even adapt to a new architecture. This work takes as its object of study educational soft-core processors described in VHDL and, through a proposed methodology and its application to two processors of different complexity levels, shows that it is possible to tailor processors to a standard ISA without increasing hardware complexity, i.e., without a significant increase in chip area, while performance in application execution remains unchanged or is even enhanced. The implementations also show that, besides it being possible to replace the architecture of a processor without changing its organization, a RISP processor can switch between different instruction sets, which can be extended to toggling between different ISAs, allowing a single processor to become an adaptive hybrid architecture suitable for embedded systems and heterogeneous multiprocessor environments.
Abstract:
Visual attention is a very important task in autonomous robotics but, because of its complexity, the required processing time is significant. We propose an architecture for feature selection using foveated images that is guided by visual attention tasks and reduces the processing time required to perform them. Our system can be applied in bottom-up or top-down visual attention. The foveated model determines which scales are to be used by the feature extraction algorithm, and the system is able to discard features that are not strictly necessary for the tasks, thus reducing processing time. If the fovea is correctly placed, it is possible to reduce the processing time without compromising the quality of the tasks' outputs. The distance of the fovea from the object is also analyzed, and basic fovea placement strategies can be applied if the visual system loses tracking in top-down attention. Experiments have shown that this approach can reduce processing time by up to 60%. To validate the method, we tested it with the feature extraction algorithm known as Speeded Up Robust Features (SURF), one of the most efficient approaches for feature extraction. With the proposed architecture, we can meet the real-time requirements of robot vision, mainly for application in autonomous robotics.
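A minimal sketch of the scale-selection idea, under the assumption (ours, for illustration) that regions farther from the fovea are processed at fewer, coarser scales; the ring width and octave counts are invented parameters, not values from the paper.

```python
# Illustrative sketch: the farther a pixel lies from the fovea, the fewer
# SURF-like octaves the feature extractor computes there.
import math

def octaves_for_pixel(px, py, fovea, max_octaves=4, ring=80):
    """All octaves inside the fovea, one fewer per ring outward;
    0 means the region is discarded entirely."""
    dist = math.hypot(px - fovea[0], py - fovea[1])
    keep = max_octaves - int(dist // ring)   # drop one octave per ring
    return max(keep, 0)

fovea = (320, 240)                           # fovea placed on the tracked object
for pt in [(320, 240), (400, 300), (600, 100)]:
    print(pt, "->", octaves_for_pixel(*pt, fovea), "octave(s)")
```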
Abstract:
In this work we present a new clustering method that groups the points of a data set into classes. The method is based on an algorithm for linking auxiliary clusters that are obtained using traditional vector quantization techniques. Several approaches developed during the work are described, based on measures of distance or dissimilarity (divergence) between the auxiliary clusters. The new method uses only two pieces of a priori information: the number of auxiliary clusters Na and a threshold distance dt used to decide whether or not to link auxiliary clusters. The number of classes can be found automatically by the method, based on the chosen threshold distance dt, or it can be given as additional information to help in choosing the correct threshold. Analyses are carried out and the results are compared with traditional clustering methods. Different dissimilarity metrics are analyzed, and a new one is proposed based on the concept of negentropy. Besides grouping the points of a set into classes, a method is proposed for statistically modeling the classes, aiming to obtain an expression for the probability that a point belongs to one of them. Experiments with several values of Na and dt are performed on test sets, and the results are analyzed in order to study the robustness of the method and to devise heuristics for choosing the correct threshold. Aspects of information theory applied to the calculation of divergences are also explored, specifically the different measures of information and divergence based on the Rényi entropy. The results obtained with the different metrics are compared and discussed. The work also includes an appendix presenting real applications of the proposed method.
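A minimal sketch of the linkage idea, with plain Euclidean distance standing in for the divergence measures studied in the work: k-means provides the Na auxiliary clusters, and centroids closer than dt are linked into a single class via union-find. Na, dt and the data are illustrative.

```python
# Illustrative sketch: quantize into Na auxiliary clusters, then merge any
# two centroids closer than dt; connected groups become the final classes.
import numpy as np
from scipy.cluster.vq import kmeans2

def link_clusters(data, Na=8, dt=1.5, seed=0):
    centroids, labels = kmeans2(data, Na, minit="++", seed=seed)
    parent = list(range(Na))                      # union-find over centroids
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(Na):
        for j in range(i + 1, Na):
            if np.linalg.norm(centroids[i] - centroids[j]) < dt:
                parent[find(i)] = find(j)         # link close auxiliary clusters
    roots = {find(i) for i in range(Na)}
    class_of = {r: c for c, r in enumerate(sorted(roots))}
    return np.array([class_of[find(l)] for l in labels])

rng = np.random.default_rng(0)                    # two well-separated blobs
data = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(5, 0.5, (100, 2))])
print("number of classes found:", link_clusters(data).max() + 1)  # expected: 2
```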
Abstract:
Frequency selective surfaces (FSS) are structures that are periodic in one or two dimensions and act as spatial filters. They can be formed by conducting patch elements or by apertures, functioning as band-stop or band-pass filters, respectively. Interest in the study of FSS has grown over the years, because such structures meet specific requirements such as low cost and reduced dimensions and weight, besides the possibility of integration with other microwave circuits. The most varied applications of such structures have been investigated, for example radomes, antenna systems for airplanes, electromagnetic filters for reflector antennas and absorber structures. Several methods have been used for the analysis of FSS, among them the wave concept iterative procedure (WCIP). Various element shapes can be used in an FSS, for example fractal elements, which present relative geometric complexity. The main objective of this work is to propose a geometric simplification procedure for a fractal FSS, based on analyzing how details (gaps) of its geometry influence the behavior of the resonance frequency. Complementarily, a simple method to adjust the resonance frequency is shown through the analysis of an FSS that uses a square basic cell into which two notches are inserted; the dimensions of these notches are varied, making it possible to adjust the frequency. For this, the structures are analyzed numerically using the WCIP and later characterized experimentally, comparing the results obtained. For both cases, the influence of the electric and magnetic fields is evaluated, the latter through the electric current density vector. A bibliographic study on the theme is carried out, and suggestions for the continuation of this work are presented.
Abstract:
The evolution of automation in recent years has made possible the continuous monitoring of the processes of industrial plants. With this advance, the amount of information that automation systems must handle has increased significantly. The alarms generated by monitoring equipment are a major contributor to this increase, and such equipment is usually deployed in industrial plants without a formal methodology, which leads to an increase in the number of alarms generated, overloading the alarm system and therefore the operators of these plants. In this context, alarm management work arises with the objective of defining a formal methodology for the installation of new equipment and for detecting problems in existing configurations. This thesis proposes a set of metrics for the evaluation of alarm systems already deployed, so that the health of such a system can be identified by analyzing the proposed indices and comparing them with parameters defined in the technical norms of alarm management. In addition, the metrics can track alarm management work, verifying whether it is improving the quality of the alarm system. To validate the proposed metrics, data from actual process plants in the petrochemical industry were used.
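As an illustration of the kind of index involved (not the thesis metrics themselves), the sketch below computes the average alarm rate per 10-minute interval and the fraction of intervals in flood; the targets in the comments are the figures usually cited from EEMUA 191 / ISA 18.2 and should be treated as assumptions here, as should the sample data.

```python
# Illustrative sketch: average alarms per 10-minute window and fraction of
# windows in "flood". Commonly cited targets: ~1 alarm/10 min on average,
# flood when a window exceeds 10 alarms (assumed, not from the thesis).
from collections import Counter

def alarm_indices(timestamps_s, window_s=600, flood_limit=10):
    """timestamps_s: alarm annunciation times in seconds, one per alarm."""
    if not timestamps_s:
        return {"avg_per_window": 0.0, "flood_fraction": 0.0}
    buckets = Counter(int(t // window_s) for t in timestamps_s)
    n_windows = max(buckets) - min(buckets) + 1
    avg = sum(buckets.values()) / n_windows
    flood = sum(1 for c in buckets.values() if c > flood_limit) / n_windows
    return {"avg_per_window": avg, "flood_fraction": flood}

# One quiet hour, then a 10-minute burst of 42 alarms (hypothetical data).
alarms = [float(t) for t in range(0, 3600, 900)] + [3600 + i * 14.0 for i in range(42)]
print(alarm_indices(alarms))   # average above target and a nonzero flood fraction
```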
Abstract:
The increasing demand for high-performance wireless communication systems has exposed the inefficiency of the current model of fixed allocation of the radio spectrum. In this context, cognitive radio appears as a more efficient alternative, providing opportunistic spectrum access with the maximum possible bandwidth. To meet these requirements, the transmitter must identify transmission opportunities and the receiver must recognize the parameters defined for the communication signal. Techniques based on cyclostationary analysis can be applied to both spectrum sensing and modulation classification, even in low signal-to-noise ratio (SNR) environments. However, despite this robustness, one of the main disadvantages of cyclostationarity is the high computational cost of calculating its functions. This work proposes efficient architectures for obtaining cyclostationary features to be employed in both spectrum sensing and automatic modulation classification (AMC). In the context of spectrum sensing, a parallelized algorithm for extracting cyclostationary features of communication signals is presented, and the performance of this parallelization is evaluated through speedup and parallel efficiency metrics. The spectrum sensing architecture is analyzed for several configurations of false alarm probability, SNR level and observation time for BPSK and QPSK modulations. In the context of AMC, the reduced alpha-profile is proposed as a cyclostationary signature calculated over a reduced set of cyclic frequencies, and is validated by a modulation classification architecture based on pattern matching. The AMC architecture is investigated in terms of correct classification rates for AM, BPSK, QPSK, MSK and FSK modulations, considering several scenarios of observation length and SNR level. The numerical performance results obtained in this work show the efficiency of the proposed architectures.
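As a small, self-contained example of one cyclostationary feature (not the parallelized extractor itself), the sketch below estimates the cyclic autocorrelation R_x^alpha(tau) = (1/N) sum_n x[n+tau] x*[n] e^{-j2 pi alpha n} of a noisy BPSK signal; with rectangular pulses and a lag of half a symbol, it peaks at the cyclic frequency tied to the symbol rate. All parameters are illustrative.

```python
# Illustrative sketch: cyclic autocorrelation of a noisy baseband BPSK signal.
import numpy as np

def cyclic_autocorr(x, alpha, tau):
    n = np.arange(len(x) - tau)
    return np.mean(x[n + tau] * np.conj(x[n]) * np.exp(-2j * np.pi * alpha * n))

rng = np.random.default_rng(1)
sps = 8                                    # samples per symbol (assumed)
bits = rng.integers(0, 2, 512) * 2 - 1     # BPSK symbols in {-1, +1}
x = np.repeat(bits.astype(complex), sps)   # rectangular pulse shaping
x += 0.5 * (rng.normal(size=x.size) + 1j * rng.normal(size=x.size))  # noise

tau = sps // 2                             # lag of half a symbol period
for alpha in [0.0, 1 / sps, 0.37]:         # symbol-rate alpha vs arbitrary one
    print(f"alpha={alpha:.3f}: |R| = {abs(cyclic_autocorr(x, alpha, tau)):.3f}")
# Expect a clear peak at alpha = 1/sps and a near-zero value at alpha = 0.37.
```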
Abstract:
There is a growing need for new tools to help end users in tasks related to the design, monitoring, maintenance and commissioning of critical infrastructures. The complexity of the industrial environment, for example, requires these tools to have flexible features in order to provide valuable data to designers during the design phases. Furthermore, industrial processes have stringent dependability requirements, since failures can cause economic losses, environmental damage and danger to people. Tools that enable the evaluation of faults in critical infrastructures could mitigate these problems. Accordingly, this work presents the development of a framework for the dependability analysis of critical infrastructures. The proposal allows a critical infrastructure to be modeled by mapping its components to a fault tree. The resulting mathematical model is then used for the dependability analysis of the infrastructure, based on the failures of its equipment and their interconnections. Finally, typical scenarios of industrial environments are used to validate the proposal.
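A minimal sketch of the mapping step (the framework itself is not shown in the abstract): walking a dependency graph from a component toward the sink and emitting an OR-gate fault tree, since the monitored service fails if any component on the route fails. The topology and naming are invented.

```python
# Illustrative sketch: component dependency chain -> OR-gate fault tree.
import json

# component -> the component it forwards data to (None = sink)
TOPOLOGY = {"sensor1": "switch_A", "switch_A": "gateway", "gateway": None}

def path_to_fault_tree(source, topology):
    """OR gate over every component on the route from source to the sink:
    the monitored service fails if any one of them fails."""
    events, node = [], source
    while node is not None:
        events.append({"basic_event": f"{node}_fails"})
        node = topology[node]
    return {"gate": "OR", "inputs": events}

print(json.dumps(path_to_fault_tree("sensor1", TOPOLOGY), indent=2))
```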
Abstract:
Bayesian networks are powerful tools, as they represent probability distributions as graphs and can handle the uncertainties of real systems. Since the last decade there has been special interest in learning network structures from data. However, learning the best network structure is an NP-hard problem, so many heuristic algorithms for generating network structures from data have been created, and many of them use score metrics to generate the network model. This thesis compares three of the most used score metrics. The K2 algorithm and two standard benchmarks, ASIA and ALARM, were used to carry out the comparison. Results show that score metrics whose hyperparameters strengthen the tendency to select simpler network structures perform better than score metrics with a weaker tendency toward simpler structures, for both the Heckerman-Geiger and the modified MDL metrics. The Heckerman-Geiger Bayesian score metric works better than MDL with large datasets, and MDL works better than Heckerman-Geiger with small datasets. The modified MDL gives results similar to Heckerman-Geiger for large datasets and close to MDL for small datasets, with a stronger tendency to select simpler network structures.
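As an illustration of what a score metric computes (a basic MDL/BIC form, not necessarily the modified MDL studied in the thesis), the sketch below scores a variable's parent set as log-likelihood minus a complexity penalty that favors simpler structures; the data are invented binary samples.

```python
# Illustrative sketch: MDL/BIC-style score of one variable given a parent set.
import math

def mdl_score(data, parents, var, states=2):
    """Log-likelihood of `var` given its parents, minus a size penalty."""
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)          # parent configuration
        counts.setdefault(key, [0] * states)[row[var]] += 1
    loglik = 0.0
    for cfg_counts in counts.values():
        n = sum(cfg_counts)
        for c in cfg_counts:
            if c:
                loglik += c * math.log(c / n)
    n_params = (states - 1) * (states ** len(parents))
    return loglik - 0.5 * n_params * math.log(len(data))   # BIC-style penalty

# Tiny dataset over (X0, X1); compare X1 with and without X0 as a parent.
data = [(0, 0), (0, 0), (1, 1), (1, 1), (0, 0), (1, 0)]
print("X1 with parent X0:", mdl_score(data, parents=(0,), var=1))
print("X1 with no parents:", mdl_score(data, parents=(), var=1))
```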