33 results for Kahler metrics
at the Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
VALENTIM, R. A. M.; SOUZA NETO, Plácido Antônio de. O impacto da utilização de design patterns nas métricas e estimativas de projetos de software: a utilização de padrões tem alguma influência nas estimativas? [The impact of design pattern use on software project metrics and estimates: does the use of patterns influence the estimates?]. Revista da FARN, Natal, v. 4, p. 63-74, 2006.
Abstract:
Journal impact factors have become an important criterion to judge the quality of scientific publications over the years, influencing the evaluation of institutions and individual researchers worldwide. However, they are also subject to a number of criticisms. Here we point out that the calculation of a journal’s impact factor is mainly based on the date of publication of its articles in print form, despite the fact that most journals now make their articles available online before that date. We analyze 61 neuroscience journals and show that delays between online and print publication of articles increased steadily over the last decade. Importantly, such a practice varies widely among journals, as some of them have no delays, while for others this period is longer than a year. Using a modified impact factor based on online rather than print publication dates, we demonstrate that online-to-print delays can artificially raise a journal’s impact factor, and that this inflation is greater for longer publication lags. We also show that correcting the effect of publication delay on impact factors changes journal rankings based on this metric. We thus suggest that indexing of articles in citation databases and calculation of citation metrics should be based on the date of an article’s online appearance, rather than on that of its publication in print.
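As an illustration of the correction the authors propose, the sketch below contrasts a print-based and an online-based impact factor on hypothetical article records (the article data and field layout are invented for the example):

```python
# Sketch: print-based vs online-based impact factor for year Y, on
# hypothetical article records (online_year, print_year, citations in Y).
articles = [
    (2011, 2012, 30),  # long online-to-print delay: citations accrue early
    (2012, 2012, 4),
    (2013, 2013, 7),
]
Y = 2014

def impact_factor(arts, year, key):
    """Citations received in `year` to items dated in the two preceding
    years, divided by the number of such items; `key` selects which
    publication date (0 = online, 1 = print) defines an item's year."""
    window = {year - 1, year - 2}
    pool = [a for a in arts if a[key] in window]
    return sum(a[2] for a in pool) / len(pool)

print("print-based IF: ", impact_factor(articles, Y, key=1))  # ~13.7
print("online-based IF:", impact_factor(articles, Y, key=0))  # 5.5
```

The delayed article counts toward the print-based denominator only after it has already been online and citable for a year, which is exactly the inflation mechanism the abstract describes.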
Abstract:
The evolution of wireless communication systems leads to Dynamic Spectrum Allocation for Cognitive Radio, which requires reliable spectrum sensing techniques. Among the spectrum sensing methods proposed in the literature, those that exploit cyclostationary characteristics of radio signals are particularly suitable for communication environments with low signal-to-noise ratios or with non-stationary noise. However, such methods have high computational complexity, which directly raises the power consumption of devices that often have very stringent low-power requirements. We propose a strategy for cyclostationary spectrum sensing with reduced energy consumption, based on the principle that p processors working at slower frequencies consume less power than a single processor for the same execution time. We derive an exact relation between the energy savings and common parallel-system metrics. Simulation results show that our strategy promises very significant savings in actual devices.
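A back-of-the-envelope version of that relation, assuming dynamic power scales cubically with frequency (voltage scaled along with frequency) and ideal linear speedup, neither of which is stated in the abstract, runs as follows:

```latex
% Energy on one processor at frequency f versus p processors at f/p,
% for the same execution time T, assuming P(f) = C f^3:
E_1 = P(f)\,T = C f^{3} T, \qquad
E_p = p\,P\!\left(\frac{f}{p}\right)T = p\,C\,\frac{f^{3}}{p^{3}}\,T = \frac{E_1}{p^{2}}.
% With a non-ideal speedup S_p < p, each processor must run at f/S_p instead,
% giving E_p = E_1\,p / S_p^{3}: savings persist whenever S_p^{3} > p.
```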
Abstract:
This study investigates the influence of asset class and the breakdown of tangibility as determinant factors of the capital structure of companies listed on the BM&FBOVESPA in the period 2008-2012. Two current-asset classes were composed, grouped both by liquidity and by how financial institutions assess them when granting credit: current resources (cash, banks, and financial investments) and operations with trade notes (inventories and receivables). The breakdown of tangible assets was based on their main components offered as loan collateral, such as machinery & equipment and land & buildings. To extend the analysis, three leverage metrics (accounting, financial, and market) were applied, and the sample was divided into the economic sectors adopted by the BM&FBOVESPA. A dynamic panel data model estimated by two-step system GMM was used, owing to its robustness to endogeneity problems as well as to omitted-variable bias. The results suggest that current resources are determinants of capital structure, possibly because they act as proxies for financial solvency, their relationship with debt being positive. The sectoral analysis confirmed the results for current resources. Asset tangibility is inversely related to leverage. When disaggregated into its main components, the significant negative influence of machinery & equipment was most marked in the Industrial Goods sector. This result shows that, on average, the more specific the assets arising from a company's operating activities, the less it uses third-party funds. As complementary results, leverage was found to be persistent, which is linked to the static trade-off theory. Specifically for financial leverage, persistence remains relevant when controlling for the lagged current-asset class variables. The proxy for growth opportunities, measured by the market-to-book ratio, has a coefficient with a contradictory sign. Company size is positively related to debt, in favor of the static trade-off theory. Profitability is the most consistent variable across all estimations, showing a strong negative and significant relationship with leverage, as the pecking order theory predicts.
Abstract:
(The Mark and Recapture Network: a Heliconius case study). The current pace of habitat destruction, especially in tropical landscapes, has increased the need to understand minimum patch requirements and patch distance as tools for conserving species in forest remnants. Mark-recapture and tagging studies have been instrumental in providing parameters for functional models. Because of their popularity, ease of manipulation, and well-known biology, butterflies have become models in studies of spatial structure. Yet most studies on butterfly movement have focused on temperate species that live in open habitats, in which forest patches are barriers to movement. This study aimed to view and review mark-recapture data as a network in two butterfly species (Heliconius erato and Heliconius melpomene). A mark-recapture study of the two species was carried out in an Atlantic forest reserve located about 20 km from the city of Natal (RN), in three weekly visits during January-February and July-August of 2007 and 2008. Captures were more common in two sections of the dirt road, with minimal collection on the forest trail. The spatial spread of captures was similar in the two species, yet distances between recaptures seem to be greater for Heliconius erato than for Heliconius melpomene. In addition, the erato network is more disconnected, suggesting that this species travels shorter distances between patches. Turning to the networks, both species have similar numbers of vertices (N) and unweighted links (L); however, melpomene has a weighted network with 50% more connections than erato. These network metrics suggest that erato has a more compartmentalized network and more restricted movement than melpomene. Accordingly, erato has a larger number of disconnected components, nC, in its network and a smaller network diameter. The frequency distribution of network connectivity for both species was better explained by a power law than by a random (Poisson) distribution. Moreover, the power-law fit is much better for erato than for melpomene, which is likely linked to the short movements that erato makes in the network.
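To make the network reading of mark-recapture data concrete, here is a minimal sketch computing the metrics the abstract names, on hypothetical site-to-site movement records (networkx assumed for the graph machinery):

```python
import networkx as nx

# Hypothetical recapture records: (site_from, site_to, n_individuals moved)
movements = [("A", "B", 3), ("B", "C", 1), ("A", "C", 2), ("D", "E", 1)]

G = nx.Graph()
for u, v, w in movements:
    G.add_edge(u, v, weight=w)

N = G.number_of_nodes()                                   # vertices
L = G.number_of_edges()                                   # unweighted links
W = sum(d["weight"] for _, _, d in G.edges(data=True))    # weighted connections
components = list(nx.connected_components(G))
nC = len(components)                                      # disconnected components
# Diameter is defined per connected component
diam = max(nx.diameter(G.subgraph(c)) for c in components)
print(N, L, W, nC, diam)
```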
Abstract:
The main goal of the present study is to propose a methodological approach to the teaching of Geometry and, in particular, to the construction of the concepts of circle (circumference) and ellipse by 7th and 8th grade students. In order to aid the students in the construction of these concepts, we developed a module based on mathematical modeling and on both Urban Geometry (Taxicab Geometry) and Isoperimetric Geometry. Our analysis was based on Jean Piaget's Equilibration Theory. Emphasizing the use of intuition grounded in accumulated past experience, the students were encouraged to come up with a hypothesis, try it out, discuss it with their peers, and derive conclusions. Although the graphs of circles and ellipses assume different shapes in Urban and Isoperimetric Geometry than they do in standard Euclidean Geometry, their definitions are identical regardless of the metric used. Thus, by comparing the graphs produced under the different metrics, the students were able to consolidate their understanding of these concepts. The intervention took place in a series of small-group activities. At the end of the study, the 53 seventh-grade and 55 eighth-grade students had a better understanding of the concepts of circle and ellipse.
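The point about metric-independent definitions can be stated compactly; these are the standard definitions, not text from the study itself:

```latex
% A circle is defined the same way under any metric,
C(O, r) \;=\; \{\, P : d(P, O) = r \,\},
% but its graph depends on d. Euclidean versus taxicab (Urban) metrics:
d_E(P,Q) = \sqrt{(x_P - x_Q)^2 + (y_P - y_Q)^2}, \qquad
d_T(P,Q) = |x_P - x_Q| + |y_P - y_Q|.
% Under d_T, C(O, r) is a square with vertices at distance r along the
% axes, not the round Euclidean circle.
```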
Abstract:
This work addresses issues related to the analysis and development of multivariable predictive controllers based on bilinear multi-models. Monovariable and multivariable linear Generalized Predictive Control (GPC) is reviewed, highlighting its properties, key features, and industrial applications. Bilinear GPC, the basis for the development of this thesis, is presented via the time-step quasilinearization approach. Some results using this controller are presented to show its superior performance compared with linear GPC, since bilinear models better represent the dynamics of certain processes. Because time-step quasilinearization is an approximation, it introduces a prediction error that limits the controller's performance as the prediction horizon increases. To minimize this error, bilinear GPC with iterative compensation is presented, seeking better performance than classic bilinear GPC, and results of the iterative compensation algorithm are shown. The use of multi-models is discussed in order to correct the deficiency of single-model controllers when they are applied over large operating ranges. Methods of measuring the distance between models, also called metrics, are the main contribution of this thesis. Several applications are carried out in simulated distillation columns that closely reproduce the behaviour of real ones, and the results are satisfactory.
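The abstract does not specify which model-distance metric is used; as one simple possibility, the sketch below measures the distance between two local first-order ARX models by their normalized output error over a common probe signal (all names and parameters are illustrative, not the thesis's metric):

```python
import numpy as np

def simulate_arx(a, b, u):
    """First-order ARX model y[k] = a*y[k-1] + b*u[k-1], zero initial state."""
    y = np.zeros(len(u))
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + b * u[k - 1]
    return y

def model_distance(m1, m2, u):
    """Normalized output-error distance between two local models on probe u."""
    y1, y2 = simulate_arx(*m1, u), simulate_arx(*m2, u)
    return np.linalg.norm(y1 - y2) / (np.linalg.norm(y1) + np.linalg.norm(y2) + 1e-12)

u = np.ones(100)                                  # step probe signal
print(model_distance((0.9, 0.1), (0.8, 0.2), u))  # illustrative parameters
```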
Abstract:
Internet applications such as media streaming, collaborative computing, and massively multiplayer applications are on the rise. This leads to the need for multicast communication, but group communication support based on IP multicast has unfortunately not been widely adopted, due to a combination of technical and non-technical problems. Therefore, a number of different application-layer multicast schemes have been proposed in the recent literature to overcome these drawbacks. In addition, these applications often behave as both providers and clients of services, being called peer-to-peer applications, and their participants come and go very dynamically. Thus, server-centric architectures for membership management have well-known problems related to scalability and fault tolerance, and even traditional peer-to-peer solutions need some mechanism that takes members' volatility into account. The idea of location awareness is to distribute the participants in the overlay network according to their proximity in the underlying network, allowing better performance. In this context, this thesis proposes an application-layer multicast protocol, called LAALM, which takes the actual network topology into account when assembling the overlay network. The membership algorithm uses a new metric, IPXY, to provide location awareness through the processing of local information, and it was implemented using a distributed, shared, bidirectional tree. The algorithm also has a sub-optimal heuristic to minimize the cost of the membership process. The protocol was evaluated in two ways. First, through a simulator developed in this work, we evaluated the quality of the distribution tree via metrics such as out-degree and path length. Second, real-life scenarios were built in the ns-3 network simulator, where we evaluated the protocol's performance via metrics such as stress, stretch, time to first packet, and group reconfiguration time.
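The thesis's IPXY metric is not defined in the abstract; purely as an illustration of deriving proximity hints from local information, a generic longest-common-prefix comparison of IPv4 addresses could look like this (not necessarily the IPXY formulation):

```python
import ipaddress

def prefix_proximity(ip_a: str, ip_b: str) -> int:
    """Length of the common leading bit prefix of two IPv4 addresses.
    A longer shared prefix is a rough hint of topological proximity."""
    a = int(ipaddress.IPv4Address(ip_a))
    b = int(ipaddress.IPv4Address(ip_b))
    diff = a ^ b
    return 32 - diff.bit_length()  # diff == 0 means all 32 bits match

print(prefix_proximity("10.1.2.3", "10.1.9.7"))   # high: shared /20 prefix
print(prefix_proximity("10.1.2.3", "200.5.6.7"))  # low: different networks
```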
Abstract:
This paper presents a new multi-model identification technique based on ANFIS for nonlinear systems. In this technique, the structure used is the Takagi-Sugeno fuzzy structure, whose consequents are local linear models that represent the system at different operating points and whose antecedents are membership functions adjusted during the learning phase of the neuro-fuzzy ANFIS technique. The models that represent the system at the different operating points can be found with linearization techniques such as, for example, the least squares method, which is robust against noise and simple to apply. The fuzzy system is responsible for indicating, through the membership functions, the proportion in which each model should be used. The membership functions can be adjusted by ANFIS with neural network algorithms, such as error backpropagation, so that the models found for each region are correctly interpolated, defining the contribution of each model for the possible system inputs. In multi-models, the rule that defines the contribution of each model is known as the metric and, since this work is based on ANFIS, it is here called the ANFIS metric. The ANFIS metric is thus used to interpolate the various models composing the system to be identified. Unlike traditional ANFIS, the proposed technique necessarily represents the system in several well-defined regions by fixed models whose activation is weighted by the membership functions. The selection of the regions where the least squares method is applied is performed manually, from graphical analysis of the system behavior or from the physical characteristics of the plant. This selection serves as the basis for the linear-model identification step and for generating the initial configuration of the membership functions. The experiments are conducted in a multi-section teaching tank, designed and built to highlight the characteristics of the technique. The results illustrate the identification performance achieved with different ANFIS configurations, comparing the developed technique with several simple-metric multi-models and with the NNARX technique, also adapted for identification.
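A minimal sketch of the interpolation idea, with Gaussian membership functions weighting two hypothetical local linear models (all parameters invented for the example):

```python
import numpy as np

def gauss_mf(x, c, s):
    """Gaussian membership function centered at c with width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Hypothetical local linear models y = a*u + b (e.g., from least squares),
# one per operating region, with one membership function per region.
models = [(0.5, 0.0), (1.2, -3.0)]
centers, sigmas = [2.0, 8.0], [1.5, 1.5]

def ts_output(u):
    w = np.array([gauss_mf(u, c, s) for c, s in zip(centers, sigmas)])
    w = w / w.sum()                          # normalized firing strengths
    y_local = np.array([a * u + b for a, b in models])
    return float(w @ y_local)                # weighted interpolation of local models

print(ts_output(2.0), ts_output(5.0), ts_output(8.0))
```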
Abstract:
Most monitoring of onshore Oil and Gas environments is currently based on wireless solutions. However, these solutions have an out-of-date technological configuration, mainly because analog radios and inefficient communication topologies are used. On the other hand, solutions based on digital radios can be more efficient with respect to energy consumption, security, and fault tolerance. Thus, this paper evaluates whether Wireless Sensor Networks, a communication technology based on digital radios, are adequate for monitoring onshore Oil and Gas wells. The percentage of successfully transmitted packets, energy consumption, and communication delay, under different routing techniques applied to a mesh topology, are used as metrics to validate the proposal in the NS-2 network simulation tool.
Abstract:
Ensuring dependability requirements is essential for industrial applications, since faults may cause failures whose consequences result in economic losses, environmental damage, or harm to people. Given the relevance of the topic, this thesis proposes a methodology for the dependability evaluation of industrial wireless networks (WirelessHART, ISA100.11a, WIA-PA) at the early design phase; the proposal can, however, be easily adapted to the maintenance and expansion stages of the network. The proposal uses graph theory and the fault tree formalism to automatically create, from a given industrial wireless network topology, an analytical model on which dependability can be evaluated. The supported evaluation metrics are reliability, availability, MTTF (mean time to failure), device importance measures, redundancy aspects, and common-cause failures. The proposal is independent of any particular tool for quantitatively evaluating the target metrics; for validation purposes, however, a tool widely accepted in academia (SHARPE) was used. In addition, an algorithm for generating minimal cut sets, originally applied in graph theory, was adapted to the fault tree formalism to guarantee the scalability of the methodology in industrial wireless network environments (< 100 devices). Finally, the proposed methodology was validated on typical scenarios found in industrial environments, such as star, line, cluster, and mesh topologies. Scenarios with common-cause failures were also evaluated, along with best practices to guide the design of an industrial wireless network. To verify the scalability requirements, the performance of the methodology was analyzed in different scenarios, and the results show the applicability of the proposal to networks typically found in industrial environments.
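For a flavor of the evaluation, a minimal cut set analysis can approximate network unreliability with the rare-event approximation; the topology, cut sets, and failure probabilities below are hypothetical:

```python
from math import prod

# Minimal sketch: network unreliability from minimal cut sets using the
# rare-event approximation (it slightly overestimates unreliability).
q = {"gw": 0.001, "r1": 0.01, "r2": 0.01, "dev": 0.005}  # failure probabilities

# A minimal cut set is a smallest set of devices whose joint failure
# breaks the sink-to-device path; r1/r2 model a redundant router pair.
min_cut_sets = [{"gw"}, {"dev"}, {"r1", "r2"}]

unreliability = sum(prod(q[d] for d in cs) for cs in min_cut_sets)
print(f"unreliability ~= {unreliability:.6f}, reliability ~= {1 - unreliability:.6f}")
```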
Abstract:
In this work we present a new clustering method that groups the points of a data set into classes. The method is based on an algorithm that links auxiliary clusters obtained by traditional vector quantization techniques. Several approaches developed over the course of the work are described, based on measures of distance or dissimilarity (divergence) between the auxiliary clusters. The new method uses only two pieces of a priori information: the number of auxiliary clusters Na and a threshold distance dt used to decide whether or not to link auxiliary clusters. The number of classes can be found automatically by the method, based on the chosen threshold distance dt, or it can be given as additional information to help in the choice of the correct threshold. Several analyses are carried out and the results are compared with traditional clustering methods. Different dissimilarity metrics are analyzed, and a new one, based on the concept of negentropy, is proposed. Besides grouping the points of a set into classes, a method for statistically modeling the classes is proposed, aiming to obtain an expression for the probability that a point belongs to one of the classes. Experiments with several values of Na and dt are run on test sets, and the results are analyzed in order to study the robustness of the method and to derive heuristics for choosing the correct threshold. Aspects of information theory applied to the calculation of the divergences are explored, specifically the different measures of information and divergence based on the Rényi entropy. The results obtained with the different metrics are compared and discussed. The work also includes an appendix presenting real applications of the proposed method.
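A minimal sketch of the linkage idea, using k-means for the auxiliary clusters and plain Euclidean distance between centroids as the linkage criterion (the thesis's divergence and negentropy measures would replace the Euclidean distance; data and parameters are synthetic):

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import pdist, squareform

# Two synthetic blobs; Na auxiliary clusters, linked when closer than dt
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

Na, dt = 8, 1.0
km = KMeans(n_clusters=Na, n_init=10, random_state=0).fit(X)
D = squareform(pdist(km.cluster_centers_))

# Union-find: merge auxiliary clusters whose centroids are closer than dt
parent = list(range(Na))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i
for i in range(Na):
    for j in range(i + 1, Na):
        if D[i, j] < dt:
            parent[find(i)] = find(j)

classes = [find(c) for c in km.labels_]   # final class of each point
print(len(set(classes)), "classes found")
```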
Abstract:
This work deals with experimental studies of VoIP connections in WiFi 802.11b networks with handoff. Indoor and outdoor network experiments are carried out to measure the QoS parameters delay, throughput, jitter, and packet loss. The performance parameters are obtained through the software tools Ekiga, Iperf, and Wimanager, which provide, respectively, VoIP connection simulation, network traffic generation, and acquisition of the throughput, jitter, and packet loss metrics. The average delay is obtained from the measured throughput and the concept of packet virtual transmission time. The experimental data are validated against the QoS level for each metric accepted as adequate in the specialized literature.
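The abstract does not give the delay formula; a common estimate of the average per-packet delay from measured throughput, via the packet virtual transmission time, would be:

```latex
\bar{d} \;\approx\; \frac{L}{R},
% where L is the packet size in bits and R the measured throughput.
% Example: a 200-byte (1600-bit) VoIP packet at R = 1 Mb/s gives
% \bar{d} \approx 1.6 ms per packet.
```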
Abstract:
The evolution of automation in recent years has made possible the continuous monitoring of industrial plant processes. With this advance, the amount of information that automation systems must handle has increased significantly. The alarms generated by monitoring equipment are a major contributor to this increase, and this equipment is usually deployed in industrial plants without a formal methodology, which leads to an increase in the number of alarms generated, overloading the alarm system and therefore the operators of such plants. In this context, alarm management work arises with the objective of defining a formal methodology for installing new equipment and detecting problems in existing configurations. This thesis proposes a set of metrics for the evaluation of alarm systems already deployed, so that the health of the system can be assessed by analyzing the proposed indices and comparing them with the parameters defined in the technical standards of alarm management. In addition, the metrics track alarm management work, verifying whether it is improving the quality of the alarm system. To validate the proposed metrics, data from actual process plants in the petrochemical industry were used.
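A minimal sketch of one such index, the alarm rate per 10-minute window, computed from alarm timestamps (synthetic data; the one-alarm-per-10-minutes target follows common EEMUA 191 guidance and is stated here as an assumption, not taken from the thesis):

```python
import numpy as np

# Synthetic alarm timestamps in seconds; in practice, read from the plant
# historian. Here: 600 alarms over an 8-hour shift.
alarm_times = np.sort(np.random.default_rng(1).uniform(0, 8 * 3600, 600))

window = 600  # 10-minute windows, the granularity common in EEMUA 191 guidance
edges = np.arange(0, alarm_times[-1] + window, window)
rates, _ = np.histogram(alarm_times, bins=edges)

print("mean alarms / 10 min:", rates.mean())
print("peak alarms / 10 min:", rates.max())
# Assumed target: ~1 alarm per 10 min in steady operation (EEMUA 191)
print("windows above target: {:.0f}%".format((rates > 1).mean() * 100))
```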
Abstract:
The use of wireless sensor and actuator networks in industry has been increasing over the past few years, bringing multiple benefits compared to wired systems, such as network flexibility and manageability. Such networks consist of a possibly large number of small, autonomous sensor and actuator devices with wireless communication capabilities. The data collected by the sensors are sent, directly or through intermediary nodes, to a base station called the sink node. Data routing in this environment is an essential matter, since it is tightly bound to energy efficiency and thus to network lifetime. This work investigates the application of a routing technique based on Reinforcement Learning's Q-Learning algorithm to a wireless sensor network, using an NS-2 simulated environment. Several metrics, such as energy consumption, data packet delivery rate, and delay, are used to validate the proposal, comparing it with other solutions in the literature.
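A sketch of a Q-routing style update, in the spirit of applying Q-Learning to next-hop selection (topology, costs, and parameters are invented; this is not necessarily the thesis's exact formulation):

```python
import random

# Q[node][neighbor]: estimated cost (e.g., energy + delay) of delivering a
# packet to the sink via that neighbor. Hypothetical topology.
neighbors = {"A": ["B", "C"], "B": ["sink"], "C": ["B", "sink"]}
Q = {n: {m: 0.0 for m in ms} for n, ms in neighbors.items()}
alpha, eps = 0.5, 0.1   # learning rate, exploration rate

def next_hop(node):
    # epsilon-greedy: mostly pick the lowest-cost neighbor, sometimes explore
    if random.random() < eps:
        return random.choice(neighbors[node])
    return min(Q[node], key=Q[node].get)

def update(node, hop, cost):
    # Q-routing style update: this hop's cost plus the best estimate onward
    remaining = 0.0 if hop == "sink" else min(Q[hop].values())
    Q[node][hop] += alpha * (cost + remaining - Q[node][hop])

# One simulated packet from A to the sink (unit cost per hop, hypothetical)
node = "A"
while node != "sink":
    hop = next_hop(node)
    update(node, hop, cost=1.0)
    node = hop
print(Q)
```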