15 results for Contracts of execution

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

90.00%

Abstract:

The maintenance and evolution of software systems has become a highly critical task over the last years due to the diversity and high demand of features, devices and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite to avoid quality deterioration during their evolution. This thesis proposes an automated approach for analyzing variation of the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources (commits and issues) of performance variation in scenarios during the evolution of software systems. The approach defines four phases: (i) preparation, choosing the scenarios and preparing the target releases; (ii) dynamic analysis, determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis, processing and comparing the dynamic analysis results of different releases; and (iv) repository mining, identifying issues and commits associated with the detected performance variation. Empirical studies were performed to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains in order to automatically identify source code elements with performance variation and the changes that affected those elements during an evolution. This study analyzed three systems: (i) SIGAA, a web system for academic management; (ii) ArgoUML, a UML modeling tool; and (iii) Netty, a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. In that study 21 releases were analyzed (seven per system), totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online form. Finally, in the last study, a regression model for performance was developed to indicate which properties of commits are most likely to cause performance degradation. Overall, 997 commits were mined: 103 were retrieved from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week turned out to be the most relevant variables of performance-degrading commits in our model. The area under the Receiver Operating Characteristic (ROC) curve of the regression model is 60%, which means that using the model to decide whether a commit will cause degradation is 10% better than a random decision.
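As a concrete illustration of phases (ii) and (iii), the sketch below compares execution-time samples of one scenario across two releases and flags a statistically significant variation with a Welch t-test. It is a minimal sketch under assumed names and a conventional threshold, not the thesis framework itself.

```python
# Minimal sketch of the variation-analysis idea (assumed names, not the
# thesis framework): compare execution-time samples of one scenario on
# two releases and classify the variation.
from statistics import mean
from scipy.stats import ttest_ind

def performance_variation(times_old, times_new, alpha=0.05):
    # Welch's t-test: no equal-variance assumption between releases.
    _, p_value = ttest_ind(times_old, times_new, equal_var=False)
    if p_value >= alpha:
        return "no significant variation"
    return "degraded" if mean(times_new) > mean(times_old) else "optimized"

# Example: response times (ms) of one scenario in two consecutive releases.
print(performance_variation([102, 99, 101, 103], [131, 128, 130, 133]))  # degraded
```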

Relevance:

80.00%

Abstract:

When a company decides to invest in a project, it must obtain the resources needed to make the investment. The alternatives are using the firm's internal resources or obtaining external resources through debt contracts and the issuance of shares. Decisions involving the composition of internal resources, debt and shares in the total resources used to finance the activities of a company relate to the choice of its capital structure. Although there are studies in the area of finance on the debt determinants of firms, the issue of capital structure is still controversial. This work sought to identify the predominant factors that determine the capital structure of Brazilian publicly traded, non-financial firms. The work used a quantitative approach, applying the statistical technique of multiple linear regression on panel data. Estimates were made by the method of ordinary least squares with a fixed-effects model. A sample of 116 companies was selected for this research, covering the period from 2003 to 2007. The variables and hypotheses tested in this study were built based on capital structure theories and on empirical research. Results indicate that variables such as risk, size, asset composition and firm growth influence indebtedness. The profitability variable was not relevant to the composition of indebtedness of the companies analyzed. However, when only long-term debt is analyzed, the conclusion is that the relevant variables are firm size and, especially, asset composition (tangibility): the smaller the firm, or the greater the share of fixed assets in total assets, the greater its propensity to long-term debt. Furthermore, this research could not identify a predominant theory to explain the capital structure of Brazilian firms.
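To make the estimation concrete, here is a hedged sketch of OLS with firm fixed effects on panel data, using entity dummies in statsmodels. The file name and every column name (leverage, risk, size, tangibility, growth, profitability, firm) are assumptions for illustration, not the study's actual dataset.

```python
# Hypothetical panel: one row per firm-year, 2003-2007.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel_2003_2007.csv")  # assumed file and columns

# OLS with fixed effects: C(firm) adds one dummy per firm, absorbing
# time-invariant firm heterogeneity.
model = smf.ols(
    "leverage ~ risk + size + tangibility + growth + profitability + C(firm)",
    data=df,
)
print(model.fit().summary())
```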

Relevance:

80.00%

Abstract:

The apparent virtuosity that could be expected from globalization and neoliberalism has shown signs of deterioration in contractual relations, especially in mass consumption contracts, generating innumerable situations that offend the basic rights and the constitutionally protected goods of the contracting parties. In today's world, even without expressing any desire to do so, the individual is practically compelled to contract, by force of needs and customs completely imposed on him, mainly given the essentiality of the services or goods agreed upon. In view of so many unexpected changes in civil and consumer relations dictated by globalization, the question arises whether private law and, more specifically, civil law are adequately prepared to deal with these new parameters of the economy. This dissertation intends to investigate whether globalization and the consequent neoliberalism, at the beginning of the third millennium, will imply a revival of the principles and basic paradigms of contracts that consolidated and sustained the liberal State for more than two centuries. The study of this phenomenon gains importance as the decline of the social State (Welfare State) worsens, with the weakening and loss of autonomy of state authority, above all in countries of late modernity, as is the case of Brazil, which shows deep deficiencies in providing or promoting, with a minimum of quality and efficiency, public services considered essential to the collectivity and enshrined in the Federal Constitution as basic rights or as constitutionally protected goods, such as health, education, housing, security, social security, insurance, and the protection of maternity, childhood, the elderly and the disabled. In the end, it is concluded that the incidence of the basic rights of man set out in the Constitution, in the interpretation of contractual conflicts whose object involves constitutionally protected rights or goods, in the universe of the globalized economy and of neoliberalism, constitutes one of the few ways, if not the only one, that still remain to deal more adequately with contractual relations in face of private holders of social and economic power, even considering the presence of general clauses in infraconstitutional civil and consumer legislation. Such power necessarily implies an imbalance between the parties, whose realignment depends on the effect and the degree that one intends to confer on the basic right at stake in the private relation. The Constitution, by allowing basic rights to bind private relations, would assume the contours of a basic statute of the whole collectivity, protecting man against power, whether public or private.

Relevance:

80.00%

Abstract:

Particle Swarm Optimization is a metaheuristic that arose as a simulation of the behavior of a flock of birds in flight, whose movement is locally random but globally determined. The technique has been widely used to address non-linear continuous problems and is still little explored for discrete problems. This paper presents how the metaheuristic works and proposes strategies for applying it to discrete optimization problems, in both parallel and sequential forms of execution. Computational experiments were performed on TSP instances selected from the TSPLIB library, with up to 3038 nodes, showing the performance improvement of the parallel methods over their sequential versions in execution time and in the quality of the results.
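A minimal sketch of a discrete PSO for the TSP is shown below, using the swap-operator formulation commonly adopted to discretize velocities. It is an assumption-level illustration; the paper's actual operators, parameters and parallelization are not reproduced here.

```python
# Illustrative discrete PSO for the TSP: a velocity is a list of swaps
# that pulls a tour toward the personal and global best tours.
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def swaps_toward(source, target):
    """Swap sequence that would transform `source` into `target`."""
    s, swaps = list(source), []
    for i in range(len(s)):
        if s[i] != target[i]:
            j = s.index(target[i])
            swaps.append((i, j))
            s[i], s[j] = s[j], s[i]
    return swaps

def discrete_pso(dist, n_particles=20, iterations=500, p_personal=0.4, p_global=0.6):
    n = len(dist)
    particles = [random.sample(range(n), n) for _ in range(n_particles)]
    personal = [list(p) for p in particles]
    best = min(particles, key=lambda t: tour_length(t, dist))
    for _ in range(iterations):
        for k in range(n_particles):
            tour = list(particles[k])
            # Keep each attracting swap with its component's probability.
            velocity = [sw for sw in swaps_toward(tour, personal[k])
                        if random.random() < p_personal]
            velocity += [sw for sw in swaps_toward(tour, best)
                         if random.random() < p_global]
            for i, j in velocity:
                tour[i], tour[j] = tour[j], tour[i]
            particles[k] = tour
            if tour_length(tour, dist) < tour_length(personal[k], dist):
                personal[k] = list(tour)
                if tour_length(tour, dist) < tour_length(best, dist):
                    best = list(tour)
    return best, tour_length(best, dist)

# Example: tiny symmetric instance given as a distance matrix.
D = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(discrete_pso(D, iterations=100))
```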

Relevance:

80.00%

Abstract:

Due to the current need of industry to integrate production data originating from several sources and to transform them into information useful for decision making, there is an ever greater demand for information visualization systems that provide that functionality. On the other hand, a common practice nowadays, due to the high competitiveness of the market, is the development of industrial systems with characteristics of modularity, distribution, flexibility, scalability, adaptability, interoperability, reusability and web access. Those characteristics provide extra agility and greater ease in adapting to the frequent changes in market demand. Based on the arguments exposed above, this work consists of specifying a component-based architecture, with the corresponding development of a system based on that architecture, for the visualization of industrial data. The system was conceived to be capable of supplying on-line information and, optionally, historical information on variables originating from the production process. This work shows that the component-based architecture developed has the necessary requirements for obtaining a robust, reliable and easily maintained system, thus meeting industrial needs. The architecture also allows components to be added, removed or updated at execution time, through a web-based component manager, further streamlining the process of adapting and updating the system.
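The run-time component management described above can be pictured with the following sketch of a component registry that supports hot add, removal and update. All names are illustrative assumptions, not the system's actual API.

```python
# Minimal sketch of a run-time component manager: components can be
# registered, replaced or removed while the system keeps running.
class ComponentManager:
    def __init__(self):
        self._components = {}

    def add(self, name, component):
        """Register (or hot-swap) a component at execution time."""
        self._components[name] = component

    def remove(self, name):
        self._components.pop(name, None)

    def dispatch(self, name, *args, **kwargs):
        comp = self._components.get(name)
        if comp is None:
            raise KeyError(f"no component registered under {name!r}")
        return comp(*args, **kwargs)

# Usage: update a visualization component without restarting the system.
manager = ComponentManager()
manager.add("trend-chart", lambda data: print("v1 render:", data))
manager.dispatch("trend-chart", [1, 2, 3])
manager.add("trend-chart", lambda data: print("v2 render:", data))  # hot update
manager.dispatch("trend-chart", [1, 2, 3])
```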

Relevance:

80.00%

Abstract:

In this work, the Markov chain is the tool used in the modeling and convergence analysis of the genetic algorithm, both in its standard version and in the other versions the genetic algorithm allows. In addition, we intend to compare the performance of the standard version with a fuzzy version, believing that the fuzzy version gives the genetic algorithm a greater ability to find a global optimum, a property of global optimization algorithms. The choice of this algorithm is due to the fact that, over the past thirty years, it has become one of the most important tools used to solve optimization problems. This choice is justified by its effectiveness in finding a good-quality solution, since knowing a good-quality solution becomes acceptable when no other algorithm may be able to obtain the optimal solution for many of these problems. However, the algorithm can be configured in several ways: it depends not only on how the problem is represented but also on how some of the operators are defined, ranging from the standard version, in which the parameters are kept fixed, to versions with variable parameters. Therefore, to achieve good performance with the aforementioned algorithm, an adequate criterion is needed for choosing its parameters, especially the mutation rate and the crossover rate, or even the population size. It is important to remember that in implementations in which the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain results in a homogeneous chain, while allowing the parameters to vary during the execution makes the modeling Markov chain non-homogeneous. Therefore, in an attempt to improve the algorithm's performance, some studies have tried to set the parameters through strategies that capture intrinsic characteristics of the problem. These characteristics are extracted from the current state of the execution, in order to identify and preserve patterns related to good-quality solutions while discarding low-quality patterns. Strategies for feature extraction can use either precise techniques or fuzzy techniques, in the latter case through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm, in its standard version as well as in the others. In order to evaluate the performance of the non-homogeneous algorithm, tests are applied comparing the standard genetic algorithm with the fuzzy genetic algorithm, in which the mutation rate is adjusted by a fuzzy controller. To do so, we pick optimization problems whose number of solutions varies exponentially with the number of variables.
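The contrast between fixed and variable parameters can be made concrete with the sketch below: a genetic algorithm whose mutation rate is recomputed each generation from the population state, which is what makes the modeling Markov chain non-homogeneous. The diversity-based rule is a crude stand-in for the fuzzy controller, and every name is an assumption.

```python
# Illustrative sketch (not the thesis code): a GA with a mutation rate
# adapted at run time from the population, i.e. a non-homogeneous chain.
import random

def adaptive_mutation_rate(fitnesses, lo=0.01, hi=0.2):
    """Stand-in for a fuzzy controller: mutate more when the population
    has converged (low fitness spread), less when it is diverse."""
    diversity = (max(fitnesses) - min(fitnesses)) / (abs(max(fitnesses)) + 1e-9)
    return hi - (hi - lo) * min(diversity, 1.0)

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=200):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        p_mut = adaptive_mutation_rate(scores)  # the parameter varies per generation
        def tournament():
            a, b = random.sample(range(pop_size), 2)
            return pop[a] if scores[a] >= scores[b] else pop[b]
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, n_bits)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Example: maximize the number of ones (OneMax).
best = genetic_algorithm(sum)
print(sum(best), best)
```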

Relevance:

80.00%

Abstract:

The last years have seen an increase in the acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been favored mainly by the development of environments with massively parallel processing (MPP - Massively Parallel Processing) and by distributed computing. A common point between distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message-passing environment consists basically of a communication library that acts as an extension of programming languages used to write parallel applications, such as C, C++ and Fortran. In the development of parallel applications, a basic aspect is their performance analysis. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and scalability of the application with respect to the increase in the number of processors or in the size of the problem instance. Establishing models or mechanisms that allow this analysis can be quite a complicated task, considering the parameters and degrees of freedom involved in the implementation of a parallel application. One alternative has been the use of tools for collecting and visualizing performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. For an efficient visualization it becomes necessary to identify and collect data related to the execution of the application, a stage called instrumentation. This work presents, initially, a study of the main techniques used in the collection of performance data, followed by a detailed analysis of the main available tools that can be used on parallel architectures of the Beowulf cluster type, running Linux on the x86 platform and using communication libraries based on MPI - Message Passing Interface, such as LAM and MPICH. This analysis is validated on parallel applications that deal with the training of perceptron-type neural networks using backpropagation. The conclusions show the potential and ease of use of the analyzed tools.
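In its most basic form, the instrumentation stage amounts to timestamping a code region on each process and aggregating the measurements, as in the hedged mpi4py sketch below. It illustrates the idea only; it is not one of the surveyed tools, and the workload is a placeholder.

```python
# Minimal instrumentation sketch: time a code region per MPI rank and
# reduce the measurements to rank 0. Run with: mpiexec -n 4 python this.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

t0 = MPI.Wtime()
# ... region under measurement, e.g. one epoch of backpropagation ...
local = sum(i * i for i in range(1_000_000))  # placeholder workload
elapsed = MPI.Wtime() - t0

worst = comm.reduce(elapsed, op=MPI.MAX, root=0)  # slowest rank bounds the step
if rank == 0:
    print(f"slowest rank took {worst:.4f} s")
```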

Relevance:

80.00%

Abstract:

Aerial activities, leaps and slaps of parts of the body on the water surface, are part of the behavioral repertoire of several species of cetaceans. Among them, the spinner dolphin, Stenella longirostris, shows the greatest diversity in such behavior. For the spinner dolphins of Fernando de Noronha, aerial activities are classified as vertical or horizontal, with eight patterns (tail slap, head slap, motor boating, partial leap, leap, spin, tail over head and tail over head with spin) distributed between these categories. Such behaviors can be used as a parameter to identify behavioral changes, as well as daily and seasonal activity patterns. Accordingly, this study aimed to characterize the frequency of performance of such activity while the dolphins were within the Dolphin Bay of Fernando de Noronha, and to verify possible daily and seasonal hourly fluctuations in these behaviors. The data analyzed in this study were acquired from January 2006 through December 2010, totaling 1431 days of observation from a land-based set point, with 113,027 aerial activities recorded, a daily average of 72.27 (SD = 96.10). During 5478 h and 54 min of observation, horizontal aerial activity was the most observed category and the spin was the most executed pattern. A greater frequency of execution of aerial activity was observed in adults, but for both adults and calves a predominance of horizontal activities was observed, with the spin being the most executed pattern. A positive correlation was observed between the amount of aerial activity performed and the number of animals inside the bay. Hourly daily fluctuation was observed in the expression of aerial activities by spinner dolphins, with a peak of activity between 8h and 8h59min for the overall relative frequency of aerial activities, as well as for the categories and patterns. Seasonal differences were observed between the rainy and dry seasons, with the greater amount of activity observed during the rainy season. Nevertheless, the same relative frequency profile of aerial activity was observed in both seasons, with the peak occurring during the same period. When the aerial activities were discriminated into categories and patterns, both seasons showed a similar pattern of hourly fluctuation; for most parameters, the highest relative frequency of execution of aerial activity remained between 8h and 8h59min.

Relevance:

80.00%

Abstract:

The increase of the elderly population in the world and in Brazil has indicated the necessity of health systems capable of evaluating, diagnosing and intervening in the health and disease conditions of that segment. During that stage of human development, physical and cognitive changes happen that can influence functional performance, and it is important to distinguish the limit between the normal and the pathological. Besides the common changes during aging, changes in biological rhythmicity happen, such as alterations in the sleep-wake cycle, which can influence performance on certain tasks. This study aimed to verify the influence of age, sex and time of day on performance in a maze test. Eighty individuals were evaluated, 40 young (20 men and 20 women) and 40 elderly (20 men and 20 women), separated into two groups tested at 9:00 and at 15:00. Initially they underwent health and cognitive evaluations and assessments of sleep quality and chronotype. They were instructed to perform the maze test, whose execution time was timed and registered. Significant differences were observed according to age for the masculine group between elderly tested in the morning and in the afternoon, and in the feminine group between young and elderly in the tests performed in the morning and in the afternoon. No significant differences were observed according to sex or time of day, nor between attempts. Comparing the 30th and the 31st attempts, performed with a 15-minute interval, a significant difference was observed only for the elderly group, in the morning and in the afternoon. We observed significant correlations of maze test performance with chronotype, age, education and cognitive performance. The maze test was capable of detecting age differences in the performance profile and in the evaluation of information retention after 15 minutes; however, it was not possible to verify differences between sex and time of day. Finally, the correlations of the maze test with the other variables may indicate its importance as an auxiliary instrument in the evaluation of those functions.

Relevance:

80.00%

Abstract:

The Quadratic Minimum Spanning Tree Problem (QMST) is a version of the Minimum Spanning Tree Problem in which, besides the traditional linear costs, there is a quadratic cost structure. This quadratic structure models interaction effects between pairs of edges. Linear and quadratic costs are added up to constitute the total cost of the spanning tree, which must be minimized. When these interactions are restricted to adjacent edges, the problem is named the Adjacent Only Quadratic Minimum Spanning Tree Problem (AQMST). AQMST and QMST are NP-hard problems that model several problems of transport and distribution network design. In general, AQMST arises as a more suitable model for real problems. Although in the literature linear and quadratic costs are added, in real applications they may be conflicting, in which case it may be interesting to consider these costs separately. In this sense, Multiobjective Optimization provides a more realistic model for QMST and AQMST. A review of the state of the art found, so far, no papers regarding these problems from a biobjective point of view. Thus, the objective of this thesis is the development of exact and heuristic algorithms for the Biobjective Adjacent Only Quadratic Spanning Tree Problem (bi-AQST). To do so, as theoretical foundation, other NP-hard problems directly related to bi-AQST are discussed: the QMST and AQMST problems. Backtracking and branch-and-bound exact algorithms are proposed for the target problem of this investigation. The heuristic algorithms developed are: Pareto Local Search, Tabu Search with ejection chains, a Transgenetic Algorithm, NSGA-II and a hybridization of the two last-mentioned proposals called NSTA. The proposed algorithms are compared to each other through a performance analysis based on computational experiments with instances adapted from the QMST literature. With regard to the exact algorithms, the analysis considers, in particular, the execution time. In the case of the heuristic algorithms, besides execution time, the quality of the generated approximation sets is evaluated; quality indicators are used to assess this information, and appropriate statistical tools are used to measure the performance of both exact and heuristic algorithms. Considering the set of instances adopted as well as the criteria of execution time and quality of the generated approximation sets, the experiments showed that the Tabu Search with ejection chains obtained the best results and the Transgenetic Algorithm ranked second. The PLS algorithm obtained good quality solutions, but at a very high computational cost compared to the other (meta)heuristics, taking third place. The NSTA and NSGA-II algorithms took the last positions.
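To fix notation, the sketch below evaluates the two objectives of a bi-AQST solution: the linear cost and the interaction cost restricted to adjacent edges. The representation (a tree as an edge list, dictionaries for costs) is an assumption for illustration, not the thesis encoding.

```python
# Sketch of the bi-AQST cost evaluation: `linear[e]` is the linear cost
# of edge e, `quad[e][f]` the interaction cost of adjacent edges e, f.
# In the biobjective setting the two sums are kept as separate objectives.
def adjacent(e, f):
    """Two distinct edges are adjacent when they share an endpoint."""
    return e != f and bool(set(e) & set(f))

def bi_aqst_costs(tree_edges, linear, quad):
    linear_cost = sum(linear[e] for e in tree_edges)
    quadratic_cost = sum(
        quad[e][f]
        for i, e in enumerate(tree_edges)
        for f in tree_edges[i + 1:]
        if adjacent(e, f)
    )
    return linear_cost, quadratic_cost  # a point in objective space

# Example on a 4-vertex star: all edges share vertex 0, so every pair interacts.
edges = [(0, 1), (0, 2), (0, 3)]
lin = {e: 1.0 for e in edges}
q = {e: {f: 0.5 for f in edges} for e in edges}
print(bi_aqst_costs(edges, lin, q))  # (3.0, 1.5)
```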

Relevance:

80.00%

Abstract:

Many challenges have been imposed on middleware supporting applications for digital TV because of the heterogeneity and resource constraints of execution platforms. In this scenario, the middleware must be highly configurable so that it can be customized to meet the requirements of applications and underlying platforms. This work presents GingaForAll, a software product line developed for Ginga, the middleware of the Brazilian Digital TV System (SBTVD). GingaForAll combines the concepts of software product lines, aspect orientation and model-driven development to allow: (i) the specification of the common and variable features of the middleware; (ii) the modularization of crosscutting concerns, both mandatory and variable, through aspects; and (iii) the expression of concepts as a set of models that raise the level of abstraction and enable the management of various software artifacts in terms of configurable models. This work presents the architecture of the software product line together with the architecture that supports automatic customization of the middleware, as well as a tool that implements the GingaForAll product generation process.
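The product-line idea can be sketched minimally as a feature-model check that validates a product configuration before generation, as below. The feature names and constraints are invented for illustration and do not come from GingaForAll.

```python
# Toy feature-model check: a product configuration is a set of selected
# features; mandatory features and dependencies must hold before a
# middleware product is generated.
MANDATORY = {"tuner", "media-player"}
REQUIRES = {"return-channel": {"network-stack"}}  # feature -> prerequisites

def valid_product(selected):
    if not MANDATORY <= selected:
        return False  # a mandatory feature is missing
    return all(deps <= selected for feat, deps in REQUIRES.items() if feat in selected)

print(valid_product({"tuner", "media-player", "return-channel", "network-stack"}))  # True
print(valid_product({"tuner", "return-channel"}))  # False: missing mandatory/deps
```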

Relevance:

80.00%

Abstract:

Reconfigurable Computing is an intermediate solution for the resolution of complex problems, making it possible to combine the speed of hardware with the flexibility of software. A reconfigurable architecture has several goals, among them the increase of performance. The use of reconfigurable architectures to increase system performance is a well-known technique, especially because of the possibility of implementing directly in hardware certain algorithms that are slow on current processors. Among the various segments that use reconfigurable architectures, reconfigurable processors deserve special mention. These processors combine the functions of a microprocessor with reconfigurable logic and can be adapted after the development process. Reconfigurable Instruction Set Processors (RISP) are a subgroup of reconfigurable processors whose goal is the reconfiguration of the processor's instruction set, involving issues such as instruction formats, operands and operations. The main objective of this work is the development of a RISP processor, combining techniques for configuring the set of instructions executed by the processor during development with reconfiguration of that set at execution time. The design and VHDL implementation of this RISP processor intend to prove the applicability and efficiency of two concepts: using more than one fixed instruction set, with only one set active at a given time, and the possibility of creating and combining new instructions, so that the processor comes to recognize and use them in real time as if they existed in the fixed instruction set. The creation and combination of instructions is done through a reconfiguration unit incorporated into the processor. This unit allows the user to send custom instructions to the processor, so that they can later be used as if they were fixed instructions of the processor. This work also includes simulations of applications involving fixed and custom instructions, and results of comparisons between these applications with respect to power consumption and execution time, which confirm the attainment of the goals for which the processor was developed.
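Although the thesis design is implemented in VHDL, the dispatch mechanism can be pictured in a few lines of Python: a decode table of fixed instructions plus a reconfiguration entry point that installs custom instructions at run time. All names here are illustrative assumptions, not the processor's actual interface.

```python
# Conceptual sketch of RISP-style dispatch: opcodes map to operations,
# and a "reconfiguration unit" registers custom instructions at run time
# so they dispatch exactly like fixed ones.
FIXED_ISA = {
    0x01: lambda a, b: a + b,  # ADD
    0x02: lambda a, b: a - b,  # SUB
}

class RispSimulator:
    def __init__(self):
        self.isa = dict(FIXED_ISA)

    def reconfigure(self, opcode, operation):
        """Reconfiguration unit: install a custom instruction at run time."""
        self.isa[opcode] = operation

    def execute(self, opcode, a, b):
        return self.isa[opcode](a, b)

cpu = RispSimulator()
cpu.reconfigure(0x10, lambda a, b: (a + b) >> 1)  # custom averaging instruction
print(cpu.execute(0x01, 6, 2))  # fixed ADD -> 8
print(cpu.execute(0x10, 6, 2))  # custom instruction -> 4
```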

Relevance:

80.00%

Abstract:

The final quality of the works delivered by the building construction industry depends directly on the quality of the materials supplied and used during all phases of execution. Federal government participation and several state programs have established conditions to stimulate and require a higher quality level throughout the building construction industry's product chain. These programs aim at product conformity to the technical standards. Within this context, the evaluation program of ceramic product conformity in Rio Grande do Norte state is assessing the degree of conformity to Brazilian technical standards of ceramic bricks and tiles made in the state's ceramic production area. This work determines the degree of conformity of sealing ceramic bricks made by companies in different areas of the state, such as Assú, São Gonçalo do Amarante, Apodi, Parelhas, São José do Mipibu and Macaíba. Using the technical standards as a reference, we reproduced in the laboratory the experimental procedures for the analyses, according to the specifications. It was possible to determine that none of the evaluated samples is in strict conformity with the current technical standards, which reflects the real situation of the products available on the market.
