899 results for Computer Networks and Communications
Abstract:
The tightening competition and increasing dynamism have created an emerging need for flexible asset management. This means that changes in market demand should be met with adjustments to the amount of assets tied to companies' balance sheets. At the same time, industrial maintenance has recently experienced drastic changes, which have led to an increase in the number of maintenance networks (consisting of customer companies that buy maintenance services as well as various supplier companies) and inter-organizational partnerships. However, research on maintenance networks has not kept pace with these changes in the industry, and there is a growing need for new ways of collaboration between partnering companies to enhance the competitiveness of the whole maintenance network. In addition, it is more and more common for companies to pursue lean operations in their businesses. This thesis shows how flexible asset management can increase the profitability of maintenance companies and networks under dynamic operating conditions, and how the additional value can then be shared between the network partners. First, I have conducted a systematic literature review to identify what kinds of requirements the increasing dynamism sets for asset management models. I have then responded to these requirements by constructing an analytical model for flexible asset management, linking asset management to the profitability and financial state of a company. The thesis uses the model to show how flexible asset management can increase profitability in maintenance companies and networks, and how the created value can be shared in the networks to reach a win-win situation. The research indicates that the existing models for asset management are heterogeneous by nature due to the various definitions of ‘asset management’. I conclude that there is a need for practical asset management models which address assets comprehensively from an inter-organizational, strategic perspective. The comprehensive perspective, taking all kinds of asset types into account, is needed to integrate research on asset management with the strategic management of companies and networks. I show that maintenance companies can improve their profitability by increasing the flexibility of their assets. In maintenance networks, reorganizing the ownership of the assets among the different network partners can create additional value. Finally, I introduce flexible asset management contracts for maintenance networks. These contracts address the value sharing related to reorganizing the ownership of assets according to the principles of win-win situations.
Abstract:
In this master’s thesis, wind speeds and directions were modelled with the aim of developing suitable models for hourly, daily, weekly and monthly forecasting. Artificial neural networks implemented in MATLAB were used to perform the forecasts. Three main types of artificial neural network were built: feed-forward neural networks, Jordan-Elman neural networks and cascade-forward neural networks. Four sub-models of each of these network types were also built, corresponding to the four forecast horizons, for both wind speeds and directions. A single neural network topology was used for each forecast horizon, regardless of the model type. All the models were then trained with real wind speed and direction data collected over a period of two years in the municipal region of Puumala in Finland. 70% of the data was used for training, validation and testing of the models, the second-to-last 15% was presented to the trained models for verification, and the model outputs were compared to the last 15% of the original data by measuring the mean square errors and sum square errors between them. Based on the results, the feed-forward networks returned the lowest generalization errors for hourly, weekly and monthly forecasts of wind speeds, while Jordan-Elman networks returned the lowest errors for daily wind speeds. Cascade-forward networks gave the lowest errors for daily, weekly and monthly wind directions, and Jordan-Elman networks returned the lowest errors for hourly wind directions. The errors were relatively low during training of the models, but rose sharply upon simulation with new inputs. In addition, a combination of hyperbolic tangent transfer functions for both the hidden and output layers returned better results than other combinations of transfer functions. In general, wind speeds were more predictable than wind directions, opening up opportunities for further research into building better models for wind direction forecasting.
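A minimal sketch of the general recipe described above, written in Python/scikit-learn rather than the MATLAB used in the thesis: lag an hourly series, train a feed-forward regressor with tanh activations on 70% of the data, and score on a 15% hold-out. The synthetic series, lag length and layer size are assumptions standing in for the Puumala measurements and the thesis's actual topology.

```python
# Illustrative only: not the thesis's MATLAB models.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
hours = np.arange(2 * 365 * 24)
# Placeholder series with a daily cycle; real wind-speed measurements would go here.
wind_speed = 6 + 2 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.5, hours.size)

def make_lagged(series, n_lags=24):
    """Predict each value from the previous n_lags hourly values."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

X, y = make_lagged(wind_speed)
n_train, n_val = int(0.70 * len(X)), int(0.85 * len(X))

model = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                     max_iter=500, random_state=0)
model.fit(X[:n_train], y[:n_train])

print("hold-out MSE:", mean_squared_error(y[n_val:], model.predict(X[n_val:])))
```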
Abstract:
Cross-sector collaboration and partnerships have become an emerging and desired strategy for addressing major social and environmental challenges. Despite its popularity, managing cross-sector collaboration has proven very challenging. Even though cross-sector collaboration and partnership management have been widely studied and discussed in recent years, the effectiveness of such collaborations and their ability to create value with respect to the problems they address remain questionable, and there is little or no evidence of their ability to create value. In light of these challenges, this study aims to explore how to manage cross-sector collaborations and partnerships in order to improve their effectiveness and to create more value for all partners involved in the collaboration as well as for customers. The thesis is divided into two parts. The first part comprises an overview of relevant literature (including strategic management, value networks and value creation theories), followed by the results of the whole thesis and the contribution made by the study. The second part consists of six research publications, including both quantitative and qualitative studies. The chosen research strategy is triangulation: the study includes four types of triangulation, (1) theoretical triangulation, (2) methodological triangulation, (3) data triangulation and (4) researcher triangulation. Two publications represent conceptual development based on secondary data research. One publication is a quantitative study, carried out through a survey. The other three publications are qualitative case studies, in which data was collected through interviews and workshops with the participation of managers from all three sectors: public, private and the third (nonprofit) sector. The study consolidates the field of “strategic management of value networks,” which is proposed to be applied in the context of cross-sector collaboration and partnerships, with the aim of increasing their effectiveness and improving the process of value creation. Furthermore, the study proposes a first definition for the strategic management of value networks. The study also proposes and develops two strategy tools that are recommended for the strategic management of value networks in cross-sector collaboration and partnerships. Taking a step further, the study applies the strategy tools in practice, aiming to demonstrate how new value can be created by using the developed strategy tools for the strategic management of value networks. This study makes four main contributions. (1) First, it makes a theoretical contribution by providing new insights and consolidating the field of strategic management of value networks, also proposing a first definition for the strategic management of value networks. (2) Second, the study makes a methodological contribution by proposing and developing two strategy tools for value networks in cross-sector collaboration: (a) value network mapping, a method that allows us to assess the current and the potential value network, and (b) the Value Network Scorecard, a method for performance measurement and performance prediction in cross-sector collaboration. (3) Third, the study has managerial implications, offering new solutions and empirical evidence on how to increase the effectiveness of cross-sector collaboration, and allowing managers to understand how new value can be created in cross-sector partnerships and how to realize the full potential of collaboration.
(4) Fourth, the study also has practical implications, showing managers how to use the strategy tools developed in this study in practice, and discussing the limitations of the proposed tools as well as the general limitations of the study.
Abstract:
The effects of two types of small-group communication, synchronous computer-mediated and face-to-face, on the quantity and quality of verbal output were compared. Quantity was defined as the number of turns taken per minute, the number of Analysis-of-Speech units (AS-units) produced per minute, and the number of words produced per minute. Quality was defined as the number of words produced per AS-unit. In addition, the interaction of gender and type of communication was explored for any differences that existed in the output produced. Questionnaires were also given to participants to determine attitudes toward computer-mediated and face-to-face communication. Thirty intermediate-level students from the Intensive English Language Program (IELP) at Brock University participated in the study, including 15 females and 15 males. Nonparametric tests, including the Wilcoxon matched-pairs test, Mann-Whitney U test, and Friedman test, were used to test for significance at the p < .05 level. No significant differences were found in the effects of computer-mediated and face-to-face communication on the output produced during follow-up speaking sessions. However, the quantity and quality of interaction were significantly higher during face-to-face sessions than computer-mediated sessions. No significant differences were found in the output produced by males and females in these two conditions. While participants felt that the use of computer-mediated communication may aid in the development of certain language skills, they generally preferred face-to-face communication. These results differed from previous studies that found a greater quantity and quality of output, in addition to a greater equality of interaction, produced during computer-mediated sessions in comparison to face-to-face sessions (Kern, 1995; Warschauer, 1996).
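For readers unfamiliar with the tests named above, the following short sketch shows how the three nonparametric tests could be run in Python with SciPy. The per-participant words-per-minute values are invented for illustration and are not the study's data.

```python
# Illustrative only: made-up scores, not the Brock University data.
from scipy import stats

wpm_cmc = [12.1, 9.8, 14.0, 11.5, 10.2]   # computer-mediated sessions
wpm_f2f = [13.4, 10.1, 15.2, 12.0, 11.8]  # face-to-face sessions
wpm_followup = [11.0, 9.5, 13.8, 11.2, 10.9]

# Paired comparison of the same participants across the two conditions
print(stats.wilcoxon(wpm_cmc, wpm_f2f))

# Independent comparison, e.g. one subgroup vs. another within a condition
print(stats.mannwhitneyu(wpm_cmc[:3], wpm_cmc[3:]))

# Friedman test across three or more related measurements
print(stats.friedmanchisquare(wpm_cmc, wpm_f2f, wpm_followup))
```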
Abstract:
The introduction of computer and communications technology, and particularly the internet, into education has opened up some new possibilities for teaching and learning. Courses designed and delivered in an online environment offer the possibility of highly interactive and individually focussed teaching and learning experiences. However, online courses also present new challenges for both teachers and students. A qualitative study was conducted to explore teachers' perceptions about the similarities and differences in teaching in the online and face-to-face (F2F) environments. Focus group discussions were held with 5 teachers; 2 teachers were interviewed in depth. The participants, 3 female and 2 male, were full-time teachers from a large College of Applied Arts & Technology in southern Ontario. Each of them had over 10 years of F2F teaching experience and each had been involved in the development and teaching of at least one online course. The study focussed on how teaching in the online environment compares with teaching in the F2F environment, what roles teachers and students adopt in each setting, what learning communities mean online and F2F and how they are developed, and how institutional policies, procedures, and infrastructure affect teaching and learning F2F and online. This study was emic in nature; that is, the teachers' words determined the themes identified throughout the study. The factors identified as affecting teaching in an online environment included teacher issues such as course design, motivation to teach online, teaching style, role, characteristics or skills, and strategies. Student issues as perceived by the teachers included learning styles, role, and characteristics or skills. As well, technology issues such as a reliable infrastructure, clear roles and responsibilities for maintaining the infrastructure, support, and multimedia capability affected teaching online. Finally, administrative policies and procedures, including teacher selection and training, registration and scheduling procedures, intellectual property and workload policies, and the development and communication of a comprehensive strategic plan were found to impact on teaching online. The teachers shared some of the benefits they perceived about teaching online as well as some of the challenges they had faced and challenges they perceived students had faced online. Overall, the teachers felt that there were more similarities than differences in teaching between the two environments, with the main differences being the change from F2F verbal interactions involving body language to online written interactions without body language cues, and the fundamental reliance on technology in the online environment. These findings support previous research in online teaching and learning, and add teachers' perspectives on the factors that stay the same and the factors that change when moving from a F2F environment to an online environment.
Abstract:
The Two-Connected Network with Bounded Ring (2CNBR) problem is a network design problem addressing the connection of servers to create a survivable network with limited redirections in the event of failures. Particle Swarm Optimization (PSO) is a stochastic population-based optimization technique modeled on the social behaviour of flocking birds or schooling fish. This thesis applies PSO to the 2CNBR problem. As PSO is originally designed to handle a continuous solution space, modification of the algorithm was necessary in order to adapt it for such a highly constrained discrete combinatorial optimization problem. Presented are an indirect transcription scheme for applying PSO to such discrete optimization problems and an oscillating mechanism for averting stagnation.
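As a point of reference for the abstract above, the sketch below shows the textbook continuous PSO update that the thesis starts from. The indirect transcription scheme for the discrete 2CNBR problem and the anti-stagnation oscillating mechanism are not reproduced here; all parameter values are generic defaults.

```python
# A minimal continuous PSO sketch (numpy only), not the thesis's adapted algorithm.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Velocity pulls each particle toward its personal best and the global best.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda x: np.sum(x ** 2), dim=10)
print("best value found:", val)
```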
Abstract:
The (n, k)-star interconnection network was proposed in 1995 as an attractive alternative to the n-star topology in parallel computation. The (n, k )-star has significant advantages over the n-star which itself was proposed as an attractive alternative to the popular hypercube. The major advantage of the (n, k )-star network is its scalability, which makes it more flexible than the n-star as an interconnection network. In this thesis, we will focus on finding graph theoretical properties of the (n, k )-star as well as developing parallel algorithms that run on this network. The basic topological properties of the (n, k )-star are first studied. These are useful since they can be used to develop efficient algorithms on this network. We then study the (n, k )-star network from algorithmic point of view. Specifically, we will investigate both fundamental and application algorithms for basic communication, prefix computation, and sorting, etc. A literature review of the state-of-the-art in relation to the (n, k )-star network as well as some open problems in this area are also provided.
Abstract:
The hyper-star interconnection network was proposed in 2002 to overcome the drawbacks of the hypercube and its variations concerning network cost, which is defined as the product of the degree and the diameter. Some properties of the graph, such as connectivity, symmetry and embedding properties, have been studied by other researchers, and routing and broadcasting algorithms have also been designed. This thesis studies the hyper-star graph from both a topological and an algorithmic point of view. For the topological properties, we try to establish relationships between hyper-star graphs and other known graphs. We also give a formal equation for the surface area of the graph. Another topological property we are interested in is the Hamiltonicity of this graph. For the algorithms, we design an all-port broadcasting algorithm and a single-port neighbourhood broadcasting algorithm for the regular form of the hyper-star graphs. Both algorithms are time-optimal. Furthermore, we prove that the folded hyper-star, a variation of the hyper-star, is maximally fault-tolerant.
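A small sketch under an assumed, commonly cited definition of the hyper-star graph HS(m, k) (not taken from the thesis): vertices are binary strings of length m with exactly k ones, and two vertices are adjacent when one is obtained from the other by swapping the first bit with a later bit of opposite value; in the regular form HS(2k, k) every vertex then has degree k.

```python
# Illustrative neighbour generation for an assumed HS(m, k) definition.
from itertools import combinations

def hs_vertices(m, k):
    for ones in combinations(range(m), k):
        yield "".join("1" if i in ones else "0" for i in range(m))

def hs_neighbours(v):
    out = []
    for i in range(1, len(v)):
        if v[i] != v[0]:                       # swap the first bit with an opposite bit
            out.append(v[i] + v[1:i] + v[0] + v[i + 1:])
    return out

m, k = 6, 3                                    # regular form HS(2k, k)
v = next(hs_vertices(m, k))
print(v, "->", hs_neighbours(v))               # degree k = 3
```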
Abstract:
Over the last ten years, the cost of maintaining object-oriented systems has risen to more than 70% of the total cost of these systems. This situation is due to several factors, the most important of which are: imprecise user specifications, a rapidly changing execution environment, and poor internal quality of the systems. Of all these factors, the only one over which we have real control is the internal quality of the systems. Many quality models have been proposed in the literature to help control quality. However, most of these models use class metrics (for example, the number of methods of a class) or metrics of relationships between classes (for example, the coupling between two classes) to measure the internal attributes of systems. Yet the quality of object-oriented systems does not depend only on the structure of their classes, which is what these metrics measure, but also on the way the classes are organized, that is, on their design, which generally manifests itself through design patterns and anti-patterns. In this thesis we propose the DEQUALITE method, which makes it possible to systematically build quality models that take into account not only the internal attributes of systems (through metrics) but also their design (through design patterns and anti-patterns). The method uses a learning approach based on Bayesian networks and relies on the results of a series of experiments evaluating the impact of design patterns and anti-patterns on system quality. These experiments, carried out on 9 large open-source object-oriented systems, allow us to draw the following conclusions:
• Contrary to intuition, design patterns do not always improve system quality; highly coupled implementations of design patterns, for example, affect the structure of classes and have a negative impact on their change- and fault-proneness.
• Classes participating in anti-patterns are much more likely to change and to be involved in fault fixes than the other classes of a system.
• A non-negligible percentage of classes are simultaneously involved in design patterns and anti-patterns. Design patterns have a positive effect in the sense that they mitigate the anti-patterns.
We apply and validate our method on three open-source object-oriented systems in order to demonstrate the contribution of system design to quality evaluation.
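As a very loose stand-in for the idea of learning a quality model from both metrics and design information (this is not the DEQUALITE method; a naive Bayes classifier is used here simply as the smallest possible Bayesian network), the sketch below predicts change-proneness from class metrics plus pattern/anti-pattern participation flags. All feature names and values are invented.

```python
# Illustrative only: naive Bayes stand-in, invented data.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# columns: number_of_methods, coupling, plays_pattern_role, plays_antipattern_role
X = np.array([
    [12,  3, 1, 0],
    [45,  9, 0, 1],
    [ 8,  2, 1, 0],
    [60, 14, 0, 1],
    [20,  5, 1, 1],
])
y = np.array([0, 1, 0, 1, 1])   # 1 = class changed between two releases

model = GaussianNB().fit(X, y)
print(model.predict_proba([[30, 7, 0, 1]]))   # estimated probability the class will change
```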
Abstract:
This thesis analyzes the intercultural negotiations of the People of the Centre (a multi-ethnic Amazonian group) with the universal discourses of human rights and development mobilized by the Colombian state. The analysis focuses on the Witoto Ethnic Safeguard Plan, Leticia chapter (ESP), one of the 73 plans formulated and implemented by the Colombian state to recognize the rights of indigenous peoples endangered by the forced displacement caused by internal armed conflicts. I analyze the ESP through the notion of friction (Tsing, 2005), which refers to the complex, unequal and changing character of contemporary encounters between the differences of local and global knowledges. My analysis also draws on Foucauldian and/or subaltern approaches to power, such as anticolonial and decolonization research, critical and counter-hegemonic perspectives on human rights, post-development, and feminist critiques of development. The objective of the thesis is to analyze the knowledges (concepts of law, justice and development); the logics of thought (practices, epistemologies, roles and spaces for sharing and producing knowledge); and the power relations (forms of leadership, associations, networks, and forms of empowerment and disempowerment) produced and recreated by the People of the Centre within their frictions with human rights and development discourses. The thesis introduces how the region inhabited by the People of the Centre (the transboundary Middle Amazon) has historically been connected to unequal power relations that influence this indigenous group's current struggles for the recognition of their rights through the ESP. The analysis is based both on documentary research and on two periods of ethnographic fieldwork, carried out from a critical and self-reflexive perspective. My methodological reflection explores how researchers' position in the field influences ethnographic knowledge and can contribute to creating intercultural relations that are inclusive, flexible and connected to the needs of local groups. The analytical section focuses on how power circulates simultaneously across national, regional and local scales in the ESP. I analyze how these forms of power produce individual and collective subjects and articulate with global or local knowledges to give rise to new forms of exclusion or emancipation of displaced indigenous people. The research findings suggest that the People of the Centre approach the human rights discourse through their indigenous knowledge of the “law of origin”. This law establishes their cultural difference as the basis of the process of recognition of their rights as a displaced people. Moreover, the People of the Centre appropriate development discourses and projects through the notion of abundance which, understood as a collective ability connecting spirituality, cultural values and gender roles, contributes to ensuring the physical and cultural existence of indigenous groups. My thesis argues that, even if these indigenous knowledges and logics of thought are tied to inequalities and to forms of local power, they can contribute to plural, egalitarian and inclusive human rights and development practices.
Abstract:
Speech signals are one of the most important means of communication among human beings. In this paper, a comparative study of two feature extraction techniques is carried out for recognizing speaker-independent spoken isolated words. The first is a hybrid approach combining Linear Predictive Coding (LPC) and Artificial Neural Networks (ANN); the second method uses a combination of Wavelet Packet Decomposition (WPD) and Artificial Neural Networks. Voice signals are sampled directly from the microphone and then processed using these two techniques to extract the features. Words from Malayalam, one of the four major Dravidian languages of southern India, are chosen for recognition. Training, testing and pattern recognition are performed using Artificial Neural Networks, trained with the back-propagation method. The proposed method is implemented for 50 speakers uttering 20 isolated words each. Both methods produce good recognition accuracy, but Wavelet Packet Decomposition is found to be more suitable for recognizing speech because of its multi-resolution characteristics and efficient time-frequency localization.
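A hedged sketch of the WPD branch only: wavelet-packet node energies used as features for a neural-network classifier, here with PyWavelets and scikit-learn. The word list, sampling rate, wavelet, decomposition level and network size are placeholders, not the paper's configuration.

```python
# Illustrative only: random placeholder signals, not the Malayalam corpus.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wpd_features(signal, wavelet="db4", level=4):
    """Energy of each terminal wavelet-packet node, ordered by frequency."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(node.data ** 2) for node in wp.get_level(level, order="freq")])

rng = np.random.default_rng(0)
signals = rng.standard_normal((100, 8000))      # 100 one-second utterances at 8 kHz (placeholder)
labels = rng.integers(0, 20, size=100)          # 20 isolated-word classes (placeholder)

X = np.array([wpd_features(s) for s in signals])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```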
Abstract:
In Wireless Sensor Networks (WSNs), neglecting the effects of varying channel quality can lead to unnecessary wastage of precious battery resources, which in turn can result in the rapid depletion of sensor energy and the partitioning of the network. Fairness is a critical issue when accessing a shared wireless channel, and fair scheduling must be employed to provide the proper flow of information in a WSN. In this paper, we develop a channel-adaptive MAC protocol with a traffic-aware dynamic power management algorithm for efficient packet scheduling and queuing in a sensor network, taking the time-varying characteristics of the wireless channel into consideration. The proposed protocol calculates a combined weight value based on the channel state and link quality. Transmission is then allowed only for those nodes whose weights are greater than a minimum quality threshold; nodes attempting to access the wireless medium with a low weight are allowed to transmit only when their weight becomes high. This results in many poor-quality nodes being deprived of transmission for a considerable amount of time. To avoid buffer overflow and to achieve fairness for the poor-quality nodes, we design a load prediction algorithm. We also design a traffic-aware dynamic power management scheme to minimize energy consumption by continuously turning off the radio interface of all unnecessary nodes that are not included in the routing path. Simulation results show that the proposed protocol achieves higher throughput and fairness besides reducing the delay.
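The abstract does not give the weight formula, so the sketch below assumes a simple convex combination of normalized channel state and link quality and gates transmission on a minimum threshold; the load-prediction and power-management parts are not shown. All names and values are illustrative.

```python
# Illustrative only: assumed weight formula, not the paper's exact scheme.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    channel_state: float   # 0..1, e.g. normalized SNR
    link_quality: float    # 0..1, e.g. packet delivery ratio

def combined_weight(node, alpha=0.5):
    # Assumed convex combination of channel state and link quality.
    return alpha * node.channel_state + (1 - alpha) * node.link_quality

def schedule(nodes, threshold=0.6):
    """Allow transmission only for nodes whose weight exceeds the threshold."""
    return [(n.name, "transmit" if combined_weight(n) >= threshold else "defer") for n in nodes]

nodes = [Node("s1", 0.9, 0.8), Node("s2", 0.4, 0.5), Node("s3", 0.7, 0.6)]
print(schedule(nodes))   # s2 defers until its channel improves or load prediction intervenes
```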
Abstract:
Lecture slides: "Webhosting and Networking", G. Santhosh Kumar, Dept. of Computer Science, Cochin University of Science and Technology. The agenda covers what a network is and its purpose (resource sharing, communication), the elements of a network (hardware and software), Ethernet technology, the World Wide Web, setting up a network, a conclusion, and the principle of locality of reference (temporal locality) ...
Abstract:
The increasing interconnection of information and communication systems leads to a further increase in complexity and thus also to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to provide adequate protection against intrusions into IT infrastructures. Intrusion Detection Systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyze information from network components and hosts in order to automatically detect unusual behaviour and security violations. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to detect new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volume of network data and in the development of an adaptive detection model that works in real time. To meet these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, continuously builds up network connections, and exports structured input data for the IDS. The second part is an adaptive classifier, comprising a classifier model based on an Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of normal network behaviour (NNB) and an update model. In OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analyzed and transformed into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is intensively investigated and substantially extended. In this dissertation, different approaches are proposed and discussed: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growth topology is increased through novel approaches to the initialization of the weight vectors and through the strengthening of the winner neurons, and a self-adaptive procedure is introduced to keep the model continuously updated. In addition, the main task of the NNB model is the further examination of the unknown connections detected by the EGHSOM and the verification of whether they are normal. However, network traffic data change constantly because of the concept-drift phenomenon, which in real time leads to the generation of non-stationary network data. This phenomenon is better controlled by the update model. The EGHSOM model can effectively detect the new anomalies, and the NNB model adapts optimally to the changes in the network data. In the experimental investigations, the framework showed promising results. In the first experiment, the framework was evaluated in offline operating mode: OptiFilter was evaluated with offline, synthetic and realistic data, and the adaptive classifier was evaluated with 10-fold cross-validation to estimate its accuracy.
In the second experiment, the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully transformed the enormous amount of network data into structured connection vectors, and the adaptive classifier classified them precisely. The comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all other approaches. This can be attributed to the following key points: the processing of the collected network data, the achievement of the best performance (such as overall accuracy), the detection of unknown connections, and the development of a real-time intrusion detection model.
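A much simplified stand-in for the confidence check described above (the real classifier is a growing hierarchical SOM; this flat version only illustrates the idea): map a connection vector to its nearest unit and hand it over as "unknown" when it lies too far from every known unit. Units, labels and the threshold are invented.

```python
# Illustrative only: flat nearest-unit check, not the EGHSOM model.
import numpy as np

def classify(connection_vector, units, labels, confidence_threshold=0.3):
    d = np.linalg.norm(units - connection_vector, axis=1)
    if d.min() > confidence_threshold:   # too far from every known unit -> pass to the NNB model
        return "unknown"
    return labels[int(d.argmin())]

units = np.array([[0.1, 0.2], [0.8, 0.9], [0.5, 0.5]])   # invented prototype vectors
labels = ["normal", "attack", "normal"]
print(classify(np.array([0.48, 0.52]), units, labels))
```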
Abstract:
This paper presents a new charging scheme for cost distribution along a point-to-multipoint connection when the destination nodes are responsible for the cost. The scheme focuses on QoS considerations, and a complete range of choices is presented, from a scheme that is safe for the network operator to a scheme that is fair to the customer; the in-between cases are also covered. Specific and general problems, such as the incidence of users disconnecting dynamically, are also discussed. The aim of this scheme is to encourage users to disperse the resource demand instead of opening a large number of direct connections to the source of the data, which would result in higher-than-necessary bandwidth use at the source; dispersing the demand benefits the overall performance of the network. The implementation of this task must balance the need to offer a competitive service against the risk that the network operator does not recover the cost of that service. Throughout this paper, multicast charging is discussed without reference to any specific category of service. The proposed scheme is also evaluated against the criteria set proposed in the European ATM charging project CANCAN.
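One simple cost-splitting rule, sketched only to make the receiver-pays idea concrete (the paper covers a whole range between operator-safe and customer-fair schemes, and this is not its specific formula): each link's cost in the multicast tree is shared equally by the receivers downstream of that link. The tree layout and costs are invented.

```python
# Illustrative only: equal sharing of each link's cost among downstream receivers.
from collections import defaultdict

parent = {"r1": "b", "r2": "b", "r3": "source", "b": "source"}   # child -> parent
link_cost = {"r1": 1.0, "r2": 1.0, "r3": 4.0, "b": 2.0}          # cost of link child -> parent
receivers = ["r1", "r2", "r3"]

def path_to_source(node):
    while node in parent:
        yield node
        node = parent[node]

# Count how many receivers use each link, then split that link's cost equally among them.
usage = defaultdict(int)
for r in receivers:
    for link in path_to_source(r):
        usage[link] += 1

charge = defaultdict(float)
for r in receivers:
    for link in path_to_source(r):
        charge[r] += link_cost[link] / usage[link]

print(dict(charge))   # r1 and r2 share the cost of the common branch "b"; totals add up to the tree cost
```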