431 results for Metrics
Abstract:
Recent years have seen an increase in the acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been favored mainly by the development of massively parallel processing (MPP) environments and of distributed computing. A common point between distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message-passing environment consists basically of a communication library that acts as an extension of the programming languages used to write parallel applications, such as C, C++ and Fortran. In the development of parallel applications, a fundamental aspect is the analysis of their performance. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and scalability of the application with respect to the increase in the number of processors or in the size of the problem instance. Establishing models or mechanisms that support this analysis can be a rather complicated task, given the parameters and degrees of freedom involved in the implementation of a parallel application. A widely adopted alternative is the use of tools for collecting and visualizing performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. For an effective visualization it is necessary to identify and collect data related to the execution of the application, a stage called instrumentation. This work first presents a study of the main techniques used for collecting performance data, and then a detailed analysis of the main available tools that can be used on Beowulf-type parallel clusters running Linux on the x86 platform with MPI (Message Passing Interface) communication libraries such as LAM and MPICH. This analysis is validated on parallel applications dealing with the training of perceptron neural networks using backpropagation. The conclusions obtained show the potential and ease of use of the analyzed tools.
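The performance metrics cited above have standard definitions; a minimal sketch, assuming a table of measured wall-clock times indexed by processor count (hypothetical input, not data from the work), could compute speedup and efficiency as follows:

```python
# Minimal sketch of the standard parallel-performance metrics named in the
# abstract; times_by_procs is assumed to map processor count -> wall-clock time.
def speedup_and_efficiency(times_by_procs):
    t_serial = times_by_procs[1]                 # baseline: single-processor time
    results = {}
    for p, t_p in sorted(times_by_procs.items()):
        s = t_serial / t_p                       # speedup S(p) = T(1) / T(p)
        e = s / p                                # efficiency E(p) = S(p) / p
        results[p] = (s, e)
    return results

# Example with hypothetical timings (seconds) for 1, 2, 4 and 8 processors.
print(speedup_and_efficiency({1: 120.0, 2: 63.0, 4: 34.0, 8: 20.0}))
```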
Abstract:
The use of wireless sensor and actuator networks in industry has been increasing over the past few years, bringing multiple benefits compared to wired systems, such as network flexibility and manageability. Such networks consist of a possibly large number of small, autonomous sensor and actuator devices with wireless communication capabilities. The data collected by the sensors are sent, directly or through intermediary nodes along the network, to a base station called the sink node. Data routing in this environment is an essential matter, since it is strictly bound to energy efficiency and hence to the network lifetime. This work investigates the application of a routing technique based on Reinforcement Learning's Q-Learning algorithm to a wireless sensor network, using an NS-2 simulated environment. Several metrics, such as energy consumption, data packet delivery rates and delays, are used to validate the proposal, comparing it with other solutions in the literature.
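As an illustration of the routing idea described above, a hedged sketch of an epsilon-greedy Q-Learning next-hop selection is given below; the node identifiers, reward signal and parameter values are illustrative assumptions, not the dissertation's actual model:

```python
import random

# Sketch: state = current node, action = chosen next hop toward the sink.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1           # learning rate, discount, exploration

def choose_next_hop(q, node, neighbors):
    """Epsilon-greedy choice of the neighbor used to forward a packet."""
    if random.random() < EPSILON:
        return random.choice(neighbors)
    return max(neighbors, key=lambda n: q.get((node, n), 0.0))

def update_q(q, node, next_hop, reward, next_neighbors):
    """Standard Q-Learning update after the packet is forwarded.
    The reward would typically penalize energy cost and delay."""
    best_next = max((q.get((next_hop, n), 0.0) for n in next_neighbors), default=0.0)
    old = q.get((node, next_hop), 0.0)
    q[(node, next_hop)] = old + ALPHA * (reward + GAMMA * best_next - old)
```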
Abstract:
This paper analyzes the performance of a parallel implementation of Coupled Simulated Annealing (CSA) for the unconstrained optimization of continuous-variable problems. Parallel processing is an efficient form of information processing with emphasis on the exploitation of simultaneous events in the execution of software. It arises primarily due to high computational performance demands and the difficulty of increasing the speed of a single processing core. Although multicore processors are easily found nowadays, several algorithms are not yet suited to running on parallel architectures. The algorithm is characterized by a group of Simulated Annealing (SA) optimizers working together on refining the solution, each SA optimizer running in a single thread executed by a different processor. In the analysis of parallel performance and scalability, the following metrics were investigated: execution time; speedup of the algorithm with respect to the increasing number of processors; and efficiency in the use of the processing elements with respect to the increasing size of the treated problem. Furthermore, the quality of the final solution was verified. For this study, the paper proposes a parallel version of CSA and its equivalent serial version. Both algorithms were analyzed on 14 benchmark functions, and for each of these functions the CSA was evaluated using 2 to 24 optimizers. The results obtained are presented and discussed in light of these metrics. The conclusions characterize CSA as a good parallel algorithm, both in the quality of the solutions and in its parallel scalability and parallel efficiency.
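To make the coupling idea concrete, the sketch below shows one simplified CSA step in which a group of SA optimizers share an acceptance rule driven by the energies of the whole ensemble; a serial loop stands in for the per-thread optimizers, and the acceptance formula is a simplification of the published CSA variants, not the paper's exact algorithm:

```python
import math, random

def csa_step(states, energies, objective, gen_temp, acc_temp, step=0.1):
    """One coupled step: each optimizer proposes a probe; uphill moves are
    accepted with a probability coupled to the whole ensemble's energies."""
    e_max = max(energies)                                    # for numerical stability
    gamma = sum(math.exp((e - e_max) / acc_temp) for e in energies)
    for i, x in enumerate(states):
        probe = [xi + random.gauss(0.0, step) * gen_temp for xi in x]
        e_probe = objective(probe)
        coupled = math.exp((energies[i] - e_max) / acc_temp) / gamma
        if e_probe < energies[i] or random.random() < coupled:
            states[i], energies[i] = probe, e_probe
    return states, energies
```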
Abstract:
The increasing demand for high-performance wireless communication systems has exposed the inefficiency of the current model of fixed allocation of the radio spectrum. In this context, cognitive radio appears as a more efficient alternative, providing opportunistic spectrum access with the maximum possible bandwidth. To meet these requirements, the transmitter must identify opportunities for transmission and the receiver must recognize the parameters defined for the communication signal. Techniques based on cyclostationary analysis can be applied both to spectrum sensing and to modulation classification, even in low signal-to-noise ratio (SNR) environments. However, despite this robustness, one of the main disadvantages of cyclostationarity is the high computational cost of calculating its functions. This work proposes efficient architectures for obtaining cyclostationary features to be employed in both spectrum sensing and automatic modulation classification (AMC). In the context of spectrum sensing, a parallelized algorithm for extracting cyclostationary features of communication signals is presented, and the performance of this parallelization is evaluated through speedup and parallel efficiency metrics. The spectrum sensing architecture is analyzed for several configurations of false alarm probability, SNR levels and observation times for BPSK and QPSK modulations. In the context of AMC, the reduced alpha-profile is proposed as a cyclostationary signature calculated over a reduced set of cyclic frequencies. This signature is validated by a modulation classification architecture based on pattern matching. The AMC architecture is investigated for correct classification rates of AM, BPSK, QPSK, MSK and FSK modulations, considering several scenarios of observation length and SNR levels. The numerical performance results obtained in this work show the efficiency of the proposed architectures.
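One basic cyclostationary feature underlying such extractors is the cyclic autocorrelation; the sketch below uses the usual textbook estimator only to illustrate the kind of computation being parallelized, not the proposed architecture itself, and the signal, cyclic frequency and lag are illustrative assumptions:

```python
import numpy as np

def cyclic_autocorrelation(x, alpha, tau):
    """Estimate R_x^alpha(tau) = mean_n x[n+tau] * conj(x[n]) * exp(-j*2*pi*alpha*n)."""
    n = np.arange(len(x) - tau)
    return np.mean(x[n + tau] * np.conj(x[n]) * np.exp(-2j * np.pi * alpha * n))

# Example: a BPSK-like signal with rectangular pulses of 8 samples per symbol
# shows a nonzero cyclic feature at the normalized symbol rate alpha = 1/8
# (evaluated here at lag tau = 4, i.e. half a symbol).
symbols = np.random.choice([-1.0, 1.0], size=256)
signal = np.repeat(symbols, 8) + 0.1 * np.random.randn(256 * 8)
print(abs(cyclic_autocorrelation(signal, alpha=1/8, tau=4)))
```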
Abstract:
The manufacture of prostheses for lower-limb amputees (transfemoral and transtibial) requires the preparation of a socket with an appropriate, custom fit to the profile of each patient. The traditional process, mainly in public hospitals in Brazil, begins with the completion of a form in which equipment types, accessories, measurements, amputation levels and so on are recorded. Currently, this work is carried out manually, using an ordinary measuring tape and a wooden caliper to take the measurements of the stump, a very rudimentary process that leaves a high degree of uncertainty in the geometry of the final product. To address this problem, it was necessary to act in two simultaneous and correlated directions. First, an integrated 3D CAD viewing tool for transfemoral and transtibial prostheses, called OrtoCAD I, was developed. At the same time, it was necessary to design and build a mechanical reading device (a sort of simplified three-dimensional scanner) able to obtain, automatically and accurately, the geometric information of either the stump or the healthy leg. The methodology includes the application of reverse engineering concepts to computationally generate the representation of the stump and/or the mirror image of the healthy limb. The materials used in the manufacture of prostheses do not always follow technical and scientific criteria: while they may meet strength requirements, they often bring serious problems, mainly due to excess weight, causing the user various discomforts through lack of conformity. That problem was addressed by creating a hybrid composite material for the manufacture of prosthetic sockets. Using the mechanical reader and OrtoCAD, the new composite material, which combines the mechanical properties of strength and stiffness with important parameters such as low weight and low cost, can be specified in a better way. Moreover, it reduces steps in the current manufacturing processes and even makes the use of new industrial processes feasible for obtaining the prostheses. In this sense, the hybridization of the composite, combining natural and synthetic fibers, can be a viable solution to the challenges described above.
Abstract:
Due to advances in the manufacturing process of orthopedic prostheses, the need for better-quality shape-reading techniques (i.e. with less uncertainty) for the residual limb of amputees has become a challenge. Overcoming these problems means being able to obtain accurate geometric information about the limb and, consequently, better manufacturing processes for both transfemoral and transtibial prosthetic sockets. The key point is to customize these readings so that they are as faithful as possible to the real profile of each patient. Within this context, two prototype versions (α and β) of a 3D mechanical scanner for reading residual-limb shape, based on reverse engineering techniques, were first designed. Prototype β is an improved version of prototype α, although it still works in analogical mode. Both prototypes are capable of producing a CAD representation of the limb via appropriate graphical sheets and were conceived to work by purely mechanical means. The first results were encouraging, as they achieved a large decrease in the degree of measurement uncertainty compared to traditional methods, which are very inaccurate and outdated; it is not unusual to see these archaic methods in action, using ordinary household measuring tapes to explore the limb's shape. Although prototype β improved the readings, it still required someone to input the plotted points (i.e. those marked on disk-shaped graphical sheets) into an academic CAD software called OrtoCAD. This task is performed by manual typing, which is time-consuming and offers very limited reliability. Furthermore, the number of coordinates obtained from the purely mechanical system is limited by the subdivisions of the graphical sheet (it records a point every 10 degrees with a resolution of one millimeter). These drawbacks were overcome in the second release of prototype β, in which an electronic variation of the reading-table components was developed, now capable of performing an automatic reading (i.e. in digital mode, with no human intervention). An interface software (i.e. a driver) was built to facilitate data transfer. Much better results were obtained, meaning a lower degree of uncertainty (it records a point every 2 degrees with a resolution of 1/10 mm). Additionally, an algorithm was proposed to convert the CAD geometry used by OrtoCAD to an appropriate format, enabling the use of rapid prototyping equipment and aiming at future automation of the manufacturing process of prosthetic sockets.
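The kind of conversion implied above (angular readings turned into CAD geometry) can be sketched as follows; the 2-degree angular step comes from the abstract, while the data layout and function name are illustrative assumptions:

```python
import math

def polar_sections_to_points(sections, angle_step_deg=2.0):
    """Convert scanner readings into Cartesian points usable by a CAD tool.

    sections: list of (height_mm, [radius_mm at 0 deg, 2 deg, 4 deg, ...]),
    one entry per cross-section of the residual limb.
    """
    points = []
    for height, radii in sections:
        for i, r in enumerate(radii):
            theta = math.radians(i * angle_step_deg)
            points.append((r * math.cos(theta), r * math.sin(theta), height))
    return points
```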
Abstract:
The pegmatite rocks in Rio Grande do Norte are responsible for much of the production of industrial minerals such as quartz and feldspar. Quartz and feldspar are pegmatite minerals that may occur in pockets of metric to centimetric dimensions or as millimetric to sub-millimetric intergrowths. The correct physical liberation of the mineral of interest, in the case of intergrowths, requires an appropriate particle size, obtained by size-reduction operations. The mineral processing method with high efficiency in recovering fine particles is flotation. The main purpose of the present study is to evaluate the recovery of quartz and potassium feldspar, using a cationic diamine and a quaternary ammonium salt as collectors, by means of dissolved air flotation (DAF). The tests were performed based on a 2⁴ central composite design, through which the influence of the process variables was statistically verified: concentration of the quaternary ammonium salt and diamine collectors, pH and conditioning time. The efficiency of flotation was calculated from the removal of turbidity of the solution. Maximum flotation efficiencies (60%) were found, in the level curves, under conditions of low collector concentrations (1.0 × 10⁻⁵ mol·L⁻¹). These high flotation efficiencies were obtained when operating at pH 4 to 8 with conditioning times ranging from 3 to 5 minutes. Thus, the results showed that the process variables play important roles in the dissolved air flotation process concerning the floatability of the minerals.
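The efficiency measure described above reduces to a simple turbidity-removal percentage; a minimal sketch, with illustrative turbidity values only:

```python
def flotation_efficiency(turbidity_initial, turbidity_final):
    """Flotation efficiency as the percentage of turbidity removed from the solution."""
    return 100.0 * (turbidity_initial - turbidity_final) / turbidity_initial

# Illustrative values (e.g. NTU): a 60% efficiency, as in the best runs reported.
print(flotation_efficiency(100.0, 40.0))  # -> 60.0
```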
Abstract:
Macrophytes are important primary producers in the aquatic ecosystem, and an imbalance in the environment can cause their accelerated growth. Surveys of data on submerged macrophytes are therefore important to support the management of water bodies. However, sampling this vegetation requires enormous physical effort, and the hydroacoustic technique is well suited to the study of submerged macrophytes. The objectives of this work were to evaluate the types of data generated by the echosounder and to analyze how these data characterize the vegetation. A BioSonics DT-X echosounder coupled to a GPS was used. The study area is a stretch of the Rio Uberaba, MG. Sampling was carried out along transects, navigating from one bank to the other. After processing the data, information was obtained on the occurrence of submerged macrophytes, depth, mean plant height, percentage of vegetation cover and position. From this data set it was possible to extract two further metrics: biovolume and effective canopy height (ECH). The data were imported into a Geographic Information System and illustrative maps of the studied variables were generated. In addition, four profiles were selected to analyze the differences between the quantities used to represent macrophytes. The echosounder proved to be an effective tool for mapping submerged macrophytes. Each of the measures (canopy height, ECH or biovolume) characterizes the submerged vegetation in a different way, so the choice of representation depends on the intended application.
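Biovolume and effective canopy height admit several formulations in the hydroacoustic literature; the sketch below uses one common formulation (the thesis may define them differently), purely as an illustration with made-up values:

```python
def biovolume_percent(plant_height_m, depth_m):
    """Biovolume as the plant-height-to-depth ratio, in percent (one common formulation)."""
    return 100.0 * plant_height_m / depth_m if depth_m > 0 else 0.0

def effective_canopy_height(mean_plant_height_m, vegetation_cover_percent):
    """ECH as mean plant height weighted by the fraction of vegetated area (assumed formulation)."""
    return mean_plant_height_m * vegetation_cover_percent / 100.0

print(biovolume_percent(0.8, 2.0), effective_canopy_height(0.8, 75.0))
```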
Abstract:
In this thesis we investigate physical problems that present a high degree of complexity, using tools and models of Statistical Mechanics. We give special attention to systems with long-range interactions, such as one-dimensional long-range bond percolation, complex networks without a metric, and vehicular traffic. In a linear chain (percolation) with bonds only between first neighbors, flow occurs only if pc = 1; when long-range interactions are considered, however, the situation is completely different, i.e., the transition between the percolating and non-percolating phases happens for pc < 1. This kind of transition occurs even when the system is diluted (dilution of sites). Some of these effects are investigated in this work, for example the extensivity of the system and the relation between critical properties and dilution; in particular, we show that dilution does not change the universality class of the system. In another part of the work, we analyze the implications of using a power-law quality (fitness) distribution for vertices in the growth dynamics of the network studied by Bianconi and Barabási, which incorporates into the preferential attachment the different ability (fitness) of the nodes to compete for links. Finally, we study vehicular traffic on road networks subjected to an increasing flux of cars, developing two models that enable the analysis of the total flux on each road, the flux leaving the system, and the behavior of the total number of congested roads.
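The growth rule of the Bianconi-Barabási fitness model mentioned above attaches each new node to an existing node i with probability proportional to fitness_i × degree_i; a hedged sketch follows, where the power-law fitness distribution and its exponent are illustrative assumptions rather than the thesis's exact choice:

```python
import random

def grow_network(n_nodes, m_links=1, fitness_exponent=2.5):
    """Grow a fitness-based preferential-attachment network (simplified sketch)."""
    # Power-law-distributed fitness values (Pareto with shape exponent - 1).
    fitness = [random.paretovariate(fitness_exponent - 1) for _ in range(n_nodes)]
    degree = [0] * n_nodes
    edges = [(0, 1)]                       # seed: two connected nodes
    degree[0] = degree[1] = 1
    for new in range(2, n_nodes):
        # Attachment probability proportional to fitness_i * degree_i.
        weights = [fitness[i] * degree[i] for i in range(new)]
        for _ in range(m_links):
            target = random.choices(range(new), weights=weights)[0]
            edges.append((new, target))
            degree[new] += 1
            degree[target] += 1
    return edges
```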
Abstract:
In this dissertation we present some generalizations of the concept of distance that use more general value spaces, such as fuzzy metrics, probabilistic metrics and generalized metrics. We show how such generalizations may be useful, since the distance between two objects can then carry more information about the objects than when the distance is represented by a single real number. We also propose another generalization of distance, which encompasses the notion of interval metric and generates a topology in a natural way. Several properties of this generalization are investigated, as well as its links with other existing generalizations.
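For reference, the classical axioms being generalized are the familiar metric axioms; the generalizations discussed above keep this shape while replacing the codomain [0, ∞) by richer value spaces (fuzzy, probabilistic, interval-valued):

```latex
\begin{align*}
  d &: X \times X \to [0, \infty) \\
  d(x, y) &= 0 \iff x = y            && \text{(identity of indiscernibles)} \\
  d(x, y) &= d(y, x)                 && \text{(symmetry)} \\
  d(x, z) &\le d(x, y) + d(y, z)     && \text{(triangle inequality)}
\end{align*}
```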
Abstract:
Nowadays, the importance of using software processes is well consolidated and is considered fundamental to the success of software development projects. Large and medium software projects demand the definition and continuous improvement of software processes in order to promote the productive development of high-quality software. Customizing and evolving existing software processes to address the variety of scenarios, technologies, cultures and scales is a recurrent challenge for the software industry. It involves adapting software process models to the reality of the projects, and it must also promote the reuse of past experience in the definition and development of software processes for new projects. The adequate management and execution of software processes can bring better quality and productivity to the produced software systems. This work explores the use and adaptation of consolidated software product line techniques to manage the variabilities of software process families. To achieve this aim: (i) a systematic literature review is conducted to identify and characterize variability management approaches for software processes; (ii) an annotative approach for the variability management of software process lines is proposed and developed; and finally (iii) empirical studies and a controlled experiment assess and compare the proposed annotative approach against a compositional one. The first study, a comparative qualitative study, analyzed the annotative and compositional approaches from different perspectives, such as modularity, traceability, error detection, granularity, uniformity, adoption, and systematic variability management. The second, a comparative quantitative study, considered internal attributes of software process line specifications, such as modularity, size and complexity. Finally, a controlled experiment evaluated the effort required to use the investigated approaches, and their understandability, when modeling and evolving specifications of software process lines. The studies provide evidence of several benefits of the annotative approach, and of its potential for integration with the compositional approach, to assist the variability management of software process lines.
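The core idea of an annotative approach can be sketched as follows: process activities carry presence conditions over features, and a concrete process is derived by keeping the activities whose condition is satisfied by the chosen configuration. The activity and feature names below are illustrative, not the dissertation's notation:

```python
# Each activity is annotated with a (possibly empty) set of required features.
process_line = [
    ("Requirements elicitation", None),                # mandatory activity
    ("Formal inspection",        {"high_ceremony"}),   # only in heavyweight processes
    ("Daily meeting",            {"agile"}),
    ("Code review",              None),
]

def derive_process(annotated_activities, selected_features):
    """Keep only the activities whose presence condition holds for the configuration."""
    return [name for name, condition in annotated_activities
            if condition is None or condition <= selected_features]

print(derive_process(process_line, {"agile"}))
```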
Abstract:
Over the years, the use of application frameworks designed for the View and Controller layers of the MVC architectural pattern, adapted to web applications, has become very popular. These frameworks are classified as Action Oriented or Component Oriented, according to the solution strategy adopted by the tool. The choice of strategy leads the system architecture to acquire non-functional characteristics caused by the way the framework influences the developer to implement the system. Component reusability is one of those characteristics and plays a very important role in development activities such as system evolution and maintenance. This dissertation analyzes how reusability can be influenced by the use of Web frameworks. To accomplish this, small academic management applications were developed using the latest versions of the Apache Struts and JavaServer Faces frameworks, the main representatives of Web frameworks for the Java platform. For this assessment, a software quality model was used that associates internal attributes, which can be measured objectively, with the characteristics in question. The attributes and metrics defined for the model were based on related work discussed in the document.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Abstract:
Ubiquitous computing systems operate in environments where the available resources change significantly during system operation, thus requiring adaptive and context-aware mechanisms to sense changes in the environment and adapt to new execution contexts. Motivated by this requirement, a framework for developing and executing adaptive context-aware applications is proposed. The PACCA framework employs aspect-oriented techniques to modularize the adaptive behavior and to keep the application logic separate from this behavior. PACCA uses the abstract aspect concept to provide flexibility through the addition of new adaptive concerns that extend the abstract aspect. Furthermore, PACCA has a default aspect model covering adaptive concerns commonly found in ubiquitous applications. It exploits the synergy between aspect orientation and dynamic composition to achieve context-aware adaptation, guided by predefined policies, and aims to allow on-demand loading of software modules, making better use of mobile devices and their limited resources. A development process for conceiving ubiquitous applications is also proposed, presenting a set of activities that guide the developer of adaptive context-aware applications. Finally, a metrics-based quantitative study evaluates the approach based on aspects and dynamic composition for the construction of ubiquitous applications.
Abstract:
The use of middleware technology in various types of systems, in order to abstract low-level details related to the distribution of application logic, is increasingly common. Among the many systems that can benefit from these components, we highlight distributed systems, in which communication between software components located on different physical machines must be supported. An important issue related to communication between distributed components is the provision of mechanisms for managing quality of service. This work presents a metamodel for modeling component-based middleware that provides an application with the abstraction of communication between the components involved in a data stream, regardless of their location. Another feature of the metamodel is the possibility of self-adaptation of the communication mechanism, either by updating the values of its configuration parameters or by replacing it with another mechanism when the specified quality-of-service restrictions are not being met. In this respect, monitoring of the communication state is foreseen (applying techniques such as a feedback control loop), with analysis of the related performance metrics. The Model-Driven Development (MDD) paradigm was used to generate the implementation of a middleware that serves as a proof of concept of the metamodel, together with the configuration and reconfiguration policies related to the dynamic adaptation processes. In this sense, the metamodel associated with the communication configuration process was defined. The MDD application also comprises the definition of the following transformations: from the architectural model of the middleware to Java code, and from the configuration model to XML.
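The monitor-and-adapt behavior alluded to above (retune configuration parameters first, replace the communication mechanism if the QoS restriction keeps being violated) can be sketched as a single iteration of a feedback loop; all names, metrics and thresholds are illustrative assumptions, not the metamodel's actual elements:

```python
def adapt(latency_ms, max_latency_ms, tuning_attempts, max_tuning_attempts=3):
    """One iteration of the control loop: decide whether to keep the current
    mechanism, retune its parameters, or ask for its replacement.
    Returns (action, updated_tuning_attempts)."""
    if latency_ms <= max_latency_ms:
        return "keep", 0                          # QoS restriction satisfied
    if tuning_attempts < max_tuning_attempts:
        return "retune_parameters", tuning_attempts + 1
    return "replace_mechanism", 0                 # fall back to another mechanism

# Example: a measured latency above the 50 ms restriction after 3 failed retunes.
print(adapt(80.0, 50.0, tuning_attempts=3))       # -> ('replace_mechanism', 0)
```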