881 results for network performance
Abstract:
This dissertation consists of three independent studies that examine the nomological network of cultural intelligence (CI), a relatively new construct within the fields of cross-cultural psychology and organizational psychology. Since the introduction of this construct, CI has gained a generally accepted model composed of four codependent subfactors, and preliminary research in the field has focused on understanding the new construct's correlates and outcomes. Thus, the goals of this dissertation were (a) to provide an additional evaluation of the factor structure of CI and (b) to examine further the correlates and outcomes that should theoretically be included in its nomological network. Specifically, the model tests compared one-factor, three-factor, and four-factor structures. The examined correlates of CI included the Big Five personality traits, core self-evaluation, social self-efficacy, self-monitoring, emotional intelligence, and cross-cultural experience. The examined outcomes included overall performance, contextual performance, and cultural adaptation in relation to CI. In total, the dissertation proposes and statistically evaluates 20 hypotheses. The first study summarizes the extant CI literature via meta-analytic techniques; the focal outcomes showed significant relationships with CI, while results for the correlates were more inconclusive. The second and third studies draw on original data collected from samples of students and adult workers, respectively. In general, the results of these two studies were parallel: the four-factor structure of CI emerged as the best fit to the data, several correlates and outcomes showed significant relationships with CI, and tests of the incremental validity of CI were significant in both studies.
Lastly, several exploratory analyses examined CI as a mediator between relevant antecedents and the outcome of cultural adaptation, and the data supported this mediating role. The final chapter includes a thorough discussion of practical implications as well as limitations of the research design.
Abstract:
The maintenance and evolution of software systems has become a highly critical task over recent years due to the diversity and high demand of functionalities, devices, and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite for avoiding quality deterioration during their evolution. This thesis proposes an automated approach for analyzing variation of the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources – commits and issues – of performance variation in scenarios during software system evolution. The approach defines four phases: (i) preparation – choosing the scenarios and preparing the target releases; (ii) dynamic analysis – determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis – processing and comparing the dynamic analysis results for different releases; and (iv) repository mining – identifying issues and commits associated with the detected performance variation. Empirical studies were performed to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains in order to automatically identify source code elements with performance variation and the changes that affected those elements during an evolution. This study analyzed three systems: (i) SIGAA – a web system for academic management; (ii) ArgoUML – a UML modeling tool; and (iii) Netty – a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty.
In this study, 21 releases (seven from each system) were analyzed, totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket, and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online form. Finally, in the last study, a performance regression model was developed to indicate the properties of commits that are more likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were retrieved from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week emerged as the most relevant variables of performance-degrading commits in our model. The area under the Receiver Operating Characteristic (ROC) curve of the regression model is 60%, which means that using the model to decide whether or not a commit will cause degradation is 10% better than a random decision.
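The ROC figure in the abstract above is the area under the Receiver Operating Characteristic curve, a rank statistic. As a minimal, hedged sketch (the commit labels and risk scores below are invented for illustration; the thesis's actual model and features are not reproduced), the AUC can be computed directly:

```python
def roc_auc(labels, scores):
    """Rank-based ROC AUC: the probability that a randomly chosen
    positive example (a performance-degrading commit) receives a higher
    score than a randomly chosen negative one. 0.5 is a random decision."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented example: 1 = commit degraded performance, 0 = no impact.
labels = [1, 0, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.7, 0.2, 0.5, 0.3, 0.55]
print(roc_auc(labels, scores))  # 0.8 for this toy data
```

An AUC of 0.60, as reported above, thus means the model ranks a degrading commit above a harmless one 60% of the time – 10 percentage points better than chance.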
Abstract:
The main focus of this research is to design and develop a high-performance linear actuator based on a four-bar mechanism. The present work includes the detailed analysis (kinematics and dynamics), design, implementation, and experimental validation of the newly designed actuator. High performance is characterized by the acceleration of the actuator end effector. The principle of the newly designed actuator is to exploit the four-bar rhombus configuration (where some bars are extended to form an X shape) to attain high acceleration. Firstly, a detailed kinematic analysis of the actuator is presented and its kinematic performance is evaluated through MATLAB simulations. The dynamic equation of the actuator is derived using the Lagrangian formulation, and a SIMULINK control model of the actuator is developed from it. In addition, the Bond Graph methodology is presented for the dynamic simulation. The Bond Graph model comprises individual component models of the actuator along with the control. The required torque was simulated using the Bond Graph model. Results indicate that high acceleration (around 20g) can be achieved with modest (3 N·m or less) torque input. A practical prototype of the actuator was designed using SOLIDWORKS and then produced to verify the proof of concept. The design goal was to achieve a peak acceleration of more than 10g at the middle point of the travel when the end effector covers the stroke length (around 1 m). The actuator is primarily designed to operate in standalone condition, and later for use in a 3RPR parallel robot. A DC motor drives the actuator, with a quadrature encoder attached to the motor to control the end effector. The associated control scheme of the actuator is analyzed and integrated with the physical prototype. In standalone experimentation, the end effector achieved around 17g acceleration (over a stroke from 0.2 m to 0.78 m).
Results indicate that the developed dynamic model is in good agreement with the experimental data. Finally, a Design of Experiments (DOE) based statistical approach is introduced to identify the parametric combination that yields the greatest performance. Data are collected using the Bond Graph model. This approach helps in designing the actuator without much complexity.
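The kinematic relationship behind the high end-effector acceleration can be sketched numerically. This is a hedged illustration, not the dissertation's MATLAB model: it assumes an idealized rhombus cell of bar length L whose end-effector travel follows s(θ) = 2L·cos(θ) for crank angle θ, with L, the crank speed, and the time step all chosen arbitrarily.

```python
import math

L = 0.5       # bar length in metres (assumed)
OMEGA = 10.0  # constant crank angular speed in rad/s (assumed)
DT = 1e-4     # time step for numerical differentiation

def s(t):
    """End-effector travel of the idealized rhombus cell at time t."""
    return 2 * L * math.cos(OMEGA * t)

def accel(t):
    """Second derivative of s via a central finite difference."""
    return (s(t + DT) - 2 * s(t) + s(t - DT)) / DT ** 2

# Cross-check against the analytic acceleration s'' = -2*L*OMEGA^2*cos(OMEGA*t).
t = 0.07
print(accel(t), -2 * L * OMEGA ** 2 * math.cos(OMEGA * t))
```

Even with these modest assumed values the numerical and analytic accelerations agree; the actual magnitudes reported above (17–20g) depend on the real link lengths and motor profile.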
Abstract:
Purpose: This paper aims to explore the role of internal and external knowledge-based linkages across the supply chain in achieving better operational performance. It investigates how knowledge is accumulated, shared, and applied to create organization-specific knowledge resources that increase and sustain the organization's competitive advantage. Design/methodology/approach: This paper uses a single case study with multiple, embedded units of analysis, together with social network analysis (SNA), to demonstrate the impact of internal and external knowledge-based linkages across multiple tiers of the supply chain on operational performance. The focal company of the case study is an Italian manufacturer supplying rubber components to European automotive enterprises. Findings: With the aid of SNA, the internal knowledge-based linkages can be mapped and visualized. We found that the most central nodes, those with the most connections to other nodes, are the most crucial members in terms of knowledge exploration and exploitation within the organization. We also found that effective management of external knowledge-based linkages, such as those with the buyer company, competitors, universities, suppliers, and subcontractors, can help improve operational performance. Research limitations/implications: First, our hypothesis was tested on a single case; analysis of multiple case studies using SNA would provide a deeper understanding of the relationship between knowledge-based linkages at all levels of the supply chain and the integration of knowledge. Second, this research studied the static nature of knowledge flows; future research could consider ongoing monitoring of dynamic linkages and the dynamic characteristics of knowledge flows.
Originality/value: To the best of our knowledge, the phrase 'knowledge-based linkages' has not been used in the literature, and there is a lack of investigation into the relationship between the management of internal and external knowledge-based linkages and operational performance. To bridge this gap, this paper shows the importance of understanding the composition and characteristics of knowledge-based linkages and their knowledge nodes. In addition, it shows that effective management of knowledge-based linkages leads to the creation of new knowledge and improves organizations' operational performance.
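The centrality idea in the findings above can be sketched in a few lines. The toy graph below is hypothetical (the case company's real knowledge network is not published); it only illustrates how degree centrality flags the most connected knowledge nodes:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Fraction of the other nodes that each node is directly linked to."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# Hypothetical knowledge-based linkages between organizational units.
edges = [("R&D", "Production"), ("R&D", "Quality"), ("R&D", "Sales"),
         ("Production", "Quality"), ("Sales", "Customer")]
central = degree_centrality(edges)
print(max(central, key=central.get))  # R&D: the most connected node here
```

In a full SNA study one would also examine betweenness and closeness centrality, which capture brokerage positions that plain degree counts miss.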
Abstract:
Erasure control coding has been exploited in communication networks with the aim of improving the end-to-end performance of data delivery across the network. To address concerns over the strengths and constraints of erasure coding schemes in this application, we examine the performance limits of two erasure control coding strategies: forward erasure recovery and adaptive erasure recovery. Our investigation shows that the throughput of a network using an (n, k) forward erasure control code is capped by r = k/n when the packet loss rate p ≤ te/n, and by k(1-p)/(n-te) when p > te/n, where te is the erasure control capability of the code. It also shows that the lower bound of the residual loss rate of such a network is (np-te)/(n-te) for te/n < p ≤ 1. In particular, if the code used is maximum distance separable, the Shannon capacity of the erasure channel, i.e. 1-p, can be achieved, and the residual loss rate is lower bounded by (p+r-1)/r for 1-r < p ≤ 1. To address the requirements of real-time applications, we also investigate the service completion time of the different schemes. It is revealed that the latency of the forward erasure recovery scheme is fractionally higher than that of a scheme without erasure control coding or retransmission mechanisms (using UDP), but much lower than that of the adaptive erasure scheme when the packet loss rate is high. Comparisons between the two erasure control schemes exhibit their advantages as well as their disadvantages in delivering end-to-end services. To show the impact of the derived bounds on the end-to-end performance of a TCP/IP network, a case study demonstrates how erasure control coding could be used to maximize the performance of practical systems. © 2010 IEEE.
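The stated bounds are straightforward to evaluate numerically. A hedged sketch (the (255, 223) code and the loss rates below are example values, not taken from the paper):

```python
def throughput_cap(n, k, t_e, p):
    """Throughput bound for an (n, k) forward erasure control code with
    erasure control capability t_e under packet loss rate p."""
    if p <= t_e / n:
        return k / n                    # capped by the code rate r = k/n
    return k * (1 - p) / (n - t_e)

def residual_loss_lower_bound(n, t_e, p):
    """Lower bound of the residual loss rate, valid for t_e/n < p <= 1."""
    return (n * p - t_e) / (n - t_e)

# Example: a (255, 223) code; for a maximum distance separable code,
# t_e = n - k = 32.
n, k = 255, 223
t_e = n - k
print(throughput_cap(n, k, t_e, 0.05))          # p below t_e/n: full rate k/n
print(residual_loss_lower_bound(n, t_e, 0.20))
```

Note that with the MDS choice t_e = n - k, the second branch of the throughput bound reduces to k(1-p)/k = 1-p, the Shannon capacity of the erasure channel, consistent with the abstract.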
Abstract:
The Oscillating Water Column (OWC) is one of the most promising types of wave energy device due to its obvious advantage over many other wave energy converters: no moving components in sea water. Two types of OWC (bottom-fixed and floating) have been widely investigated, and bottom-fixed OWCs have been very successful in several practical applications. Recently, the prospect of massive wave energy production and the availability of wave energy have pushed OWC applications from near-shore to deeper water regions, where floating OWCs are a better choice. For an OWC under sea waves, the air flow driving the air turbine to generate electricity is a random process, and in such a working condition a single design/operating point does not exist. To improve energy extraction and optimise the performance of the device, a system capable of controlling the air turbine rotation speed is desirable. To achieve that, this paper presents short-term prediction of the random process by an artificial neural network (ANN), which can provide near-future information to the control system. In this research, the ANN is explored and tuned for better prediction of the airflow (as well as of the device motions, for wider application). It is found that, by carefully constructing the ANN platform and optimizing the relevant parameters, the ANN is capable of predicting the random process a few steps ahead of real time with good accuracy. More importantly, the tuned ANN works for a large range of different types of random process.
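A minimal sketch of the idea (not the authors' tuned network): a single-hidden-layer ANN trained by stochastic gradient descent to predict the next sample of a signal from a window of past samples. The two-sinusoid "airflow" stand-in, the window size, and all hyperparameters are assumptions made purely for illustration.

```python
import math
import random

random.seed(0)

# Illustrative stand-in for the random airflow record: two sinusoids.
signal = [0.6 * math.sin(0.30 * t) + 0.4 * math.sin(0.11 * t)
          for t in range(400)]

WINDOW, HIDDEN, LR = 8, 6, 0.05
w1 = [[random.uniform(-0.5, 0.5) for _ in range(WINDOW)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
w2 = [random.uniform(-0.5, 0.5) for _ in range(HIDDEN)]
b2 = 0.0

def forward(x):
    """One hidden tanh layer, linear output."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return h, sum(w * hj for w, hj in zip(w2, h)) + b2

samples = [(signal[t:t + WINDOW], signal[t + WINDOW])
           for t in range(len(signal) - WINDOW)]

for _ in range(30):                      # epochs of per-sample SGD
    for x, target in samples:
        h, y = forward(x)
        err = y - target
        for j in range(HIDDEN):          # backpropagate the squared error
            grad_h = err * w2[j] * (1 - h[j] ** 2)
            w2[j] -= LR * err * h[j]
            for i in range(WINDOW):
                w1[j][i] -= LR * grad_h * x[i]
            b1[j] -= LR * grad_h
        b2 -= LR * err

mse = sum((forward(x)[1] - t) ** 2 for x, t in samples) / len(samples)
print("one-step-ahead MSE: %.5f" % mse)
```

On this smooth stand-in the one-step-ahead error drops far below the signal variance (about 0.26 here); predicting several steps ahead, as the paper targets, can be done by feeding predictions back in as inputs.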
Abstract:
Backscatter communication is an emerging wireless technology that has recently gained increasing attention from both academia and industry. The key innovation of the technology is the ability of ultra-low-power devices to utilize nearby existing radio signals to communicate. Because they need not generate a radio signal of their own, such devices can be simple in design, very inexpensive, and extremely energy efficient compared with traditional wireless communication. These benefits have made backscatter communication a desirable candidate for distributed wireless sensor network applications with energy constraints.
The backscatter channel presents a unique set of challenges. Unlike conventional one-way communication (in which the information source is also the energy source), the backscatter channel experiences strong self-interference and spread-Doppler clutter that mask the information-bearing (modulated) signal scattered from the device. Both sources of interference arise from scattering of the transmitted signal off objects, both stationary and moving, in the environment. Additionally, measurement of the backscatter device's location is degraded by both the clutter and the modulation of the signal return.
This work proposes a channel coding framework for the backscatter channel consisting of a bi-static transmitter/receiver pair and a quasi-cooperative transponder. It proposes run-length limited (RLL) coding to mitigate the background self-interference and spread-Doppler clutter with only a small decrease in communication rate. The proposed method applies to both binary phase-shift keying (BPSK) and quadrature amplitude modulation (QAM) schemes and provides an increase in rate by up to a factor of two compared with previous methods.
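As a hedged illustration of the run-length limited idea (the dissertation's actual code construction is not reproduced here), classic Manchester coding maps 0 → "01" and 1 → "10": no run of identical chips can exceed two, which keeps the modulated energy away from the near-DC region where self-interference and slow clutter concentrate, at the cost of halving the rate:

```python
def manchester_encode(bits):
    """Manchester coding: a rate-1/2 run-length-limited line code."""
    return "".join("01" if b == 0 else "10" for b in bits)

def max_run(chips):
    """Length of the longest run of identical chips."""
    longest = run = 1
    for a, b in zip(chips, chips[1:]):
        run = run + 1 if a == b else 1
        longest = max(longest, run)
    return longest

data = [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]
chips = manchester_encode(data)
print(chips, max_run(chips))  # runs never exceed 2 chips
```

The raw data contain runs of three identical bits, but the encoded chip stream never repeats a symbol more than twice, illustrating the run-length constraint that the proposed RLL codes enforce with a smaller rate penalty.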
Additionally, this work analyzes the use of frequency modulation and bi-phase waveform coding of the transmitted (interrogating) waveform for high-precision range estimation of the transponder location. Compared to previous methods, optimally low range sidelobes are achieved. Moreover, since both the transmitted (interrogating) waveform coding and the transponder communication coding result in instantaneous phase modulation of the signal, cross-interference exists between the localization and communication tasks. A phase-discrimination algorithm is proposed that makes it possible to separate the waveform coding from the communication coding upon reception, achieving localization with an increase in signal energy of up to 3 dB compared with previously reported results.
The joint communication-localization framework also enables a low-complexity receiver design because the same radio is used both for localization and communication.
Simulations comparing the performance of different codes corroborate the theoretical results and illustrate possible trade-offs between information rate and clutter mitigation, as well as between choices of waveform-channel coding pairs. Experimental results from a brassboard microwave system in an indoor environment are also presented and discussed.
Abstract:
Wireless Sensor Networks (WSNs) are currently having a revolutionary impact in rapidly emerging wearable applications such as health and fitness monitoring, amongst many others. These Body Sensor Network (BSN) applications require highly integrated wireless sensor devices for use in a wearable configuration to monitor various physiological parameters of the user. These new requirements pose significant design challenges from an antenna perspective, several of which this work addresses. In this thesis, a review of current antenna solutions for WSN applications is first presented, covering both commercial and academic solutions, and key design challenges relating to antenna size and performance are identified. A detailed investigation of the effects of the human body on antenna impedance characteristics is then presented, and a first-generation antenna tuning system is developed that enables the antenna impedance to be tuned adaptively in the presence of the human body. Three new antenna designs are also presented. A compact, low-cost 433 MHz antenna design is first reported and the effects of the human body on its impedance are investigated. A tunable version of this antenna is then developed, using a higher-performance, second-generation tuner integrated within the antenna element itself, enabling autonomous tuning in the presence of the human body. Finally, a compact dual-band antenna is reported that covers both the 433 MHz and 2.45 GHz bands to provide improved quality of service (QoS) in WSN applications. To date, state-of-the-art WSN devices are relatively simple in design with limited antenna options available, especially for the lower UHF bands, and current devices have no capability to deal with changing antenna environments such as those of wearable BSN applications.
This thesis presents several contributions that advance the state-of-the-art in this area, relating to the design of miniaturized WSN antennas and the development of antenna tuning solutions for BSN applications.
Abstract:
Computer networks produce tremendous amounts of event-based data that can be collected and managed to support an increasing number of new classes of pervasive applications, such as network monitoring and crisis management. Although the problem of distributed event-based management has been addressed in non-pervasive settings such as the Internet, the domain of pervasive networks has its own characteristics that make those results inapplicable. Many of these applications are based on time-series data that take the form of time-ordered series of events. Such applications must also handle large volumes of unexpected events, often modified on the fly and containing conflicting information, and must deal with rapidly changing contexts while producing results with low latency. Correlating events across contextual dimensions holds the key to expanding the capabilities and improving the performance of these applications. This dissertation addresses this critical challenge. It establishes an effective scheme for complex-event semantic correlation that examines epistemic uncertainty in computer networks by fusing event synchronization concepts with belief theory. Because of the distributed nature of event detection, time delays are considered: events are no longer instantaneous, but have a duration associated with them. Existing algorithms for synchronizing time are split into two classes, one of which is asserted to provide a faster means of converging time and is hence better suited to pervasive network management. Besides the temporal dimension, the scheme considers imprecision and uncertainty when an event is detected. A belief value is therefore associated with the semantics and detection of composite events; this belief value is generated by a consensus among participating entities in the computer network.
The scheme taps into in-network processing capabilities of pervasive computer networks and can withstand missing or conflicting information gathered from multiple participating entities. Thus, this dissertation advances knowledge in the field of network management by facilitating the full utilization of characteristics offered by pervasive, distributed and wireless technologies in contemporary and future computer networks.
Abstract:
As an alternative to transverse spiral or hoop steel reinforcement, fiber-reinforced polymers (FRPs) were introduced to the construction industry in the 1980s. The concept of the concrete-filled FRP tube (CFFT) has raised great interest amongst researchers in the last decade. An FRP tube can act as a pour form, protective jacket, and shear and flexural reinforcement for concrete. However, the seismic performance of CFFT bridge substructures has not yet been fully investigated. Experimental work in this study included four two-column bent tests, several component tests, and coupon tests. Four 1/6-scale bridge pier frames, consisting of a control reinforced concrete frame (RCF), a glass FRP-concrete frame (GFF), a carbon FRP-concrete frame (CFF), and a hybrid glass/carbon FRP-concrete frame (HFF), were tested under reverse cyclic lateral loading with constant axial loads. Specimen GFF did not show any sign of cracking at a drift ratio as high as 15% with considerable loading capacity, whereas Specimen CFF showed the lowest ductility with a load capacity similar to that of Specimen GFF. FRP-concrete columns and pier cap beams were then cut from the pier frame specimens and tested in three-point flexure under monotonic loading with no axial load. The tests indicated that bonding between FRP and concrete and yielding of steel both affect the flexural strength and ductility of the components. Coupon tests were carried out to establish the tensile strength and elastic modulus of each FRP tube and of the FRP mold for the pier cap beam in the two principal directions of loading. A nonlinear analytical model was developed to predict the load-deflection responses of the pier frames and was validated against the test results. Subsequently, a parametric study was conducted with variables such as frame height-to-span ratio, steel reinforcement ratio, FRP tube thickness, axial force, and compressive strength of concrete.
A typical bridge was also simulated under three different ground acceleration records and damping ratios. Based on the analytical damage index, the RCF bridge was most severely damaged, whereas the GFF bridge suffered only minor, repairable damage. Damping ratio was shown to have a pronounced effect on FRP-concrete bridges, just as in conventional bridges. This research was part of a multi-university project funded by the National Science Foundation (NSF) Network for Earthquake Engineering Simulation Research (NEESR) program.
Abstract:
In the frame of the transnational ALPS-GPSQUAKENET project, a component of the Alpine Space Programme of the European Community Initiative Programme (CIP) INTERREG III B, the Deutsches Geodätisches Forschungsinstitut (DGFI) in Munich, Germany, installed in 2005 five continuously operating permanent GPS stations along the northern boundary of the Alps in Bavaria. The main objective of the ALPS-GPSQUAKENET project was to build up a high-performance transnational space geodetic network of Global Positioning System (GPS) receivers in the Alpine region (the so-called Geodetic Alpine Integrated Network, GAIN). Data from this network allow crustal deformations to be studied in near real time to monitor earthquake hazards and improve natural disaster prevention. The five GPS stations operated by DGFI are mounted on concrete pillars attached to solid rock. The names of the stations are (from west to east) Hochgrat (HGRA), Breitenberg (BREI), Fahrenberg (FAHR), Hochries (HRIE) and Wartsteinkopf (WART). The provided data series start from October 7, 2005. Data are stored with a temporal spacing of 15 seconds in daily RINEX files.
Abstract:
A scenario-based two-stage stochastic programming model for gas production network planning under uncertainty is usually a large-scale nonconvex mixed-integer nonlinear programme (MINLP), which can be efficiently solved to global optimality with nonconvex generalized Benders decomposition (NGBD). This paper is concerned with the parallelization of NGBD to exploit multiple available computing resources. Three parallelization strategies are proposed: naive scenario parallelization, adaptive scenario parallelization, and adaptive scenario and bounding parallelization. A case study of two industrial natural gas production network planning problems shows that, while NGBD without parallelization is already faster than a state-of-the-art global optimization solver by an order of magnitude, parallelization can improve efficiency by several times on computers with multicore processors. The adaptive scenario and bounding parallelization achieves the best overall performance among the three proposed strategies.
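The naive scenario parallelization strategy exploits the fact that, within each NGBD iteration, the scenario subproblems are independent and can be solved concurrently. A hedged sketch with a trivial stand-in subproblem (a real implementation would solve a (MI)NLP per scenario; the numbers below are invented):

```python
from concurrent.futures import ThreadPoolExecutor

def solve_scenario(scenario):
    """Stand-in for a scenario subproblem solve; returns its cost."""
    demand, price = scenario
    return demand * price

# Invented scenarios of (demand, price), assumed equally probable.
scenarios = [(100, 2.5), (80, 3.0), (120, 2.25)]

with ThreadPoolExecutor(max_workers=4) as pool:
    costs = list(pool.map(solve_scenario, scenarios))  # solved concurrently

expected_cost = sum(costs) / len(scenarios)
print(costs, expected_cost)
```

With CPU-bound solver calls one would use processes rather than threads; the aggregation step then combines the per-scenario results exactly as in the serial algorithm, which is why this strategy parallelizes without changing the bounds NGBD computes.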
Abstract:
With the emerging prevalence of smart phones and 4G LTE networks, the demand for faster, better, cheaper mobile services anytime and anywhere is ever growing. The Dynamic Network Optimization (DNO) concept emerged as a solution that optimally and continuously tunes network settings in response to varying network conditions and subscriber needs. Yet the realization of DNO is still in its infancy, largely hindered by the bottleneck of lengthy optimization runtimes. This paper presents the design and prototype of a novel cloud-based parallel solution that further enhances the scalability of our prior work on parallel solutions for accelerating network optimization algorithms. The solution aims to satisfy the high performance required by DNO, preliminarily on a sub-hourly basis. The paper subsequently envisions a design and a full cycle of a DNO system, and proposes a set of potential solutions for large-network and real-time DNO. Overall, this work is a breakthrough towards the realization of DNO.
Abstract:
Based on an original and comprehensive database of all feature fiction films produced in Mercosur between 2004 and 2012, the paper analyses whether the Mercosur film industry has evolved towards an integrated and culturally more diverse market. It provides a summary of policy opportunities in terms of integration and diversity, emphasizing the limiter role played by regional policies. It then shows that although the Mercosur film industry remains rather disintegrated, it tends to become more integrated and culturally more diverse. From a methodological point of view, the combination of Social Network Analysis and the Stirling Model opens up interesting research tracks to analyse creative industries in terms of their market integration and their cultural diversity.