979 results for Software Defined Networking SDN OpenFlow Rete Switch Router
Abstract:
At present, one of the main concerns of green networking is to minimize the power consumption of network infrastructure. Surveys show that the largest share of power is consumed by network devices during runtime. To control this power consumption, however, it is important to know which factors have the highest impact. This paper focuses on measuring and modeling the power consumption of an Ethernet switch during runtime, considering various types of input parameters in all possible combinations. For the experiment, three input parameters are chosen: bandwidth, link load, and number of connections. The output to be measured is the power consumption of the Ethernet switch. Because of the switch's uncertain power-consumption pattern, a fully comprehensive experimental evaluation would require an infeasible and cumbersome experimental phase. For that reason, the design of experiments (DoE) method has been applied to obtain adequate information on the effect of each input parameter on power consumption. The work consists of three parts. In the first part, a test bed is planned around the input parameters and the power consumption of the switch is measured. The second part generates a mathematical model with the help of design-of-experiments tools; this model can be used to estimate power consumption precisely in different scenarios and to pinpoint the parameters with the greatest influence on power consumption. In the last part, the mathematical model is evaluated by comparison with the experimental values.
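The abstract does not give the fitted model itself; the following is only a minimal Python sketch of the kind of two-level factorial (DoE) fit it describes, using the paper's three factors (bandwidth, link load, number of connections) but placeholder power measurements. In a real run, the relative coefficient magnitudes would indicate which factors and interactions drive the switch's power draw.

```python
# Illustrative sketch: fit a 2^3 factorial model of switch power consumption.
# Factors follow the abstract (bandwidth, link load, number of connections);
# the power values below are placeholders, not data from the paper.
import numpy as np
from itertools import product

# Coded factor levels (-1 = low, +1 = high) for all 8 combinations.
levels = np.array(list(product([-1, 1], repeat=3)), dtype=float)
bw, load, conns = levels[:, 0], levels[:, 1], levels[:, 2]

# Placeholder power measurements (watts), one per factor combination.
power = np.array([33.1, 33.4, 33.9, 34.5, 33.2, 33.6, 34.2, 35.0])

# Design matrix with intercept, main effects, and two-way interactions.
X = np.column_stack([
    np.ones(len(power)), bw, load, conns,
    bw * load, bw * conns, load * conns,
])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)

names = ["intercept", "bandwidth", "load", "connections",
         "bw*load", "bw*conns", "load*conns"]
for name, c in zip(names, coef):
    print(f"{name:12s} {c:+.3f} W")   # larger |coef| => stronger influence
```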
Abstract:
Software quality has become an important research subject, not only in the Information and Communication Technology spheres but also in other industries at large where software is applied. Software quality is not a happenstance; it is defined, planned, and built into the software product throughout the Software Development Life Cycle. The research objective of this study is to investigate the roles of human and organizational factors that influence software quality construction. The study employs the Straussian grounded theory. The empirical data were collected from 13 software companies and include 40 interviews. The results of the study suggest that tools, infrastructure, and other resources have a positive impact on software quality, but the human factors involved in the software development processes determine the quality of the products developed. Development methods, on the other hand, were found to have little effect on software quality. The research suggests that software quality construction is an information-intensive process whereby organizational structures, mode of operation, and information flow within the company variably affect software quality. The results also suggest that software development managers influence the productivity of developers and the quality of the software products. Several challenges of software testing that affect software quality are also brought to light. The findings of this research are expected to benefit the academic community and software practitioners by providing insight into the issues pertaining to software quality construction undertakings.
Abstract:
Human beings have always strived to preserve their memories and spread their ideas. In the beginning this was always done through human interpretation, such as telling stories and creating sculptures. Later, technological progress made it possible to create a recording of a phenomenon: first as an analogue recording onto a physical object, and later digitally, as a sequence of bits to be interpreted by a computer. By the end of the 20th century, technological advances had made it feasible to distribute media content over a computer network instead of on physical objects, thus enabling the concept of digital media distribution. Many digital media distribution systems already exist, and their continued, and in many cases increasing, usage is an indicator of the high interest in their future enhancement and enrichment. By looking at these digital media distribution systems, we have identified three main areas of possible improvement: network structure and coordination, transport of content over the network, and the encoding used for the content. In this thesis, our aim is to show that improvements in performance, efficiency, and availability can be made in conjunction with improvements in software quality and reliability through the use of formal methods: mathematical approaches to reasoning about software that allow us to prove its correctness together with other desirable properties. We envision a complete media distribution system based on a distributed architecture, such as peer-to-peer networking, in which different parts of the system have been formally modelled and verified. Starting with the network itself, we show how it can be formally constructed and modularised in the Event-B formalism, such that we can separate the modelling of one node from the modelling of the network itself. We also show how the piece selection algorithm in the BitTorrent peer-to-peer transfer protocol can be adapted for on-demand media streaming, and how this can be modelled in Event-B. Furthermore, we show how modelling one peer in Event-B can give results similar to simulating an entire network of peers. Going further, we introduce a formal specification language for content transfer algorithms and show that having such a language can make these algorithms easier to understand. We also show how generating Event-B code from this language can result in less complexity compared to creating the models from written specifications. We also consider the decoding part of a media distribution system by showing how video decoding can be done in parallel. This is based on formally defined dependencies between frames and blocks in a video sequence; we have shown that this step, too, can be performed in a way that is mathematically proven correct. Our modelling and proving in this thesis is, in its majority, tool-based. This provides a demonstration of the advance of formal methods as well as their increased reliability, and thus advocates for their more widespread use in the future.
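The thesis models piece selection in Event-B; purely as an informal illustration, the sketch below shows one common way such an adaptation for streaming works, fetching pieces inside a playback window in order and falling back to rarest-first outside it. The function and parameter names are invented for this example and do not come from the thesis.

```python
# Illustrative sketch (not the thesis' Event-B model): window-based piece
# selection for on-demand streaming, blending in-order and rarest-first.
def next_piece(have, availability, playback_pos, window=16):
    """Pick the next piece index to request, or None if nothing is missing.

    have          -- set of piece indices already downloaded
    availability  -- list: availability[i] = number of peers holding piece i
    playback_pos  -- index of the piece the player needs next
    window        -- pieces ahead of playback fetched strictly in order
    """
    n = len(availability)
    # 1. Urgent region: fetch in playback order to avoid stalling.
    for i in range(playback_pos, min(playback_pos + window, n)):
        if i not in have:
            return i
    # 2. Remainder: fall back to rarest-first among the missing pieces.
    missing = [i for i in range(n)
               if i not in have and i >= playback_pos + window]
    return min(missing, key=lambda i: availability[i]) if missing else None

# Example: piece 3 is missing inside the window, so it is chosen first.
print(next_piece(have={0, 1, 2}, availability=[5, 4, 3, 2, 6, 1, 2, 8],
                 playback_pos=2, window=3))
```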
Abstract:
Software is increasingly complex, and its development is often carried out by dispersed and changing teams. Moreover, nowadays most software is reused rather than developed from scratch. The comprehension task, inherent in maintenance tasks, consists of analyzing several dimensions of the software in parallel. The time dimension enters the software at two levels: the software changes during its evolution and during its execution. These changes take on a particular meaning when they are analyzed together with other dimensions of the software. Analyzing multidimensional data is a difficult problem, but certain methods make it possible to work around this difficulty. Semi-automatic approaches, such as software visualization, allow the user to intervene during the analysis to explore and guide the search for information. In the first stage of the thesis, we apply visualization techniques to better understand the dynamics of software during evolution and execution. Changes over time are represented by heat maps, so that the same graphical representation is used to visualize changes during evolution and changes during execution. Another category of approaches for understanding certain dynamic aspects of software relies on heuristics. In the second stage of the thesis, we address the identification of phases during evolution or during execution using the same approach. In this context, the premise is that there is an inherent coherence in the events that makes it possible to isolate subsets as phases. This coherence hypothesis is then defined specifically for code-change events (evolution) and state-change events (execution). The objective of the thesis is to study the unification of these two dimensions of time, evolution and execution. This reflects our aim of bringing together two research areas that address the same category of problems, but from two different perspectives.
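The thesis tooling itself is not described here; below is only a minimal matplotlib sketch of the shared heat-map representation it mentions, with invented change counts per software entity and time period (versions during evolution, or time slices during execution).

```python
# Minimal sketch of the shared heat-map representation: rows are software
# entities (e.g. files or classes), columns are periods (versions during
# evolution, or time slices during execution); cell colour = amount of change.
# The data here is made up purely for illustration.
import numpy as np
import matplotlib.pyplot as plt

entities = ["Parser", "Renderer", "Network", "Storage"]
periods = [f"t{i}" for i in range(8)]
changes = np.random.default_rng(0).poisson(lam=3, size=(len(entities), len(periods)))

fig, ax = plt.subplots()
im = ax.imshow(changes, cmap="hot", aspect="auto")
ax.set_xticks(range(len(periods)))
ax.set_xticklabels(periods)
ax.set_yticks(range(len(entities)))
ax.set_yticklabels(entities)
ax.set_xlabel("version / execution time slice")
fig.colorbar(im, label="number of changes")
plt.show()
```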
Abstract:
Presentation slides: Webhosting and Networking, G. Santhosh Kumar, Dept. of Computer Science, Cochin University of Science and Technology. Agenda: What is a Network?; Elements of a Network (Hardware, Software); Ethernet Technology; World Wide Web; Setting up a Network; Conclusion. A network is an interconnected system of things or people; its purposes include resource sharing and communication, and LANs have become the most popular form of computer network. Principle of Locality of Reference: Temporal Locality of Reference ...
Abstract:
Students taking the 20-credit version of the course (COMP6052) will work in groups of 6 to develop and design a new social networking tool/application/website. The teams will work on their design throughout the semester and keep a design and development blog that will act as a digital portfolio of their work. At the end of the semester they will also be asked to submit an individual reflective summary that outlines their team's objectives and progress, their own part in that progress, and a critical analysis of whether or not they were successful. At the end of the course, teams will be asked to pitch their ideas to an interdisciplinary Dragon's Den-style panel, which will expect them not only to have created something technically viable but also to have taken economic, social, legal, and ethical factors into consideration. In this presentation we explain the structure of the group project and what is expected in the blog, and explore some potential ideas to help students understand the scope of the work required. The outcome of the group project does not have to be a fully working piece of software; instead, we are looking for a well-developed idea that contains enough detail to be convincing to the panel.
Abstract:
This export plan, projected over a 3-year horizon, will guide ITAC IT APPLICATIONS CONSULTING S.A. in directing its activities in the international market for the years 2009, 2010, and 2011. The priority for the first two years will be the company's internal improvement, through the application of strategies in different areas: human capital, intellectual capital, cultural capital, economic growth, commercial strategy in the international arena, and the building of financial capital for income generation. To participate in international markets, demonstrate its export potential, and achieve sales growth independent of that obtained in the local market, the company intends to start in 2009 in the Peruvian market with exports of USD 36,000, corresponding to 30 units, increasing to USD 72,000 and 60 units in 2010 and to USD 108,000 and 90 units in 2011. The service to be exported is "SecureFile", for which success factors were defined based on the competitive advantages of the product itself: 1) a very competitive market price, 2) automation of the information-exchange process, 3) standards-based software, and 4) the ability to run on any operating system. Consultancies were also carried out to assess every area of the company, yielding several findings. The organizational structure is well defined, but because of the company's growth and its need to bring in new staff, roles within the organization chart are unclear and depend entirely on general management; management should therefore restructure the commercial departments, creating new positions in line with the internationalization process. Personnel policies are handled informally, with valid criteria for promoting employees (merit, seniority, etc.); there are monthly technology updates, recognition of and participation by staff in the company, and excellent personal relationships that allow performance evaluations aligned with goals, a wide range of incentives, and social responsibility aimed at underprivileged children. Nevertheless, a human resources area should be created and the frequency of training defined. Revenue comes from the provision of IT services, with an increase of 256% over the previous three years, reaching COP 2,032,784,683 in 2007. The level of debt has also been rising, driven by the need for installed capacity, staff hiring, compliance with market requirements, and the need to build a good credit standing with financial institutions. The company has the financial strength to back its immediate obligations, with $4.42 available for every $1 committed in 2007, even though that was the year with the highest level of debt, with current liabilities of COP 127,715,281.37. The four partners obtained returns on investment before taxes of 164.67% (2006) and 132.97% (2007). As of this year, more than 95% of the company's financial and accounting information is handled in computerized form. The financial area is not the company's weakest, but there is no finance department with a single person in charge; an area separate from administration should therefore be set up, with a financial advisor available full time.
In the particular case of the export project, production costs are centered on SecureFile version 3.0, which involves no marginal costs, since the software can be replicated as many times as required without affecting costs in any way. The company does not use a formal method to calculate its operating and software development costs, but it has developed a cost-evaluation system in Excel spreadsheets that, in an organized way, achieves a costing suited to its specific needs. For the selection of the target, alternate, and contingency countries, a selection matrix of 6 countries was built, based on government requirements regarding internet information security and on the perceptions of business people, the competition, and other economic factors; the result was Peru, Costa Rica, and Mexico.
Abstract:
The General Packet Radio Service (GPRS) has been developed for the mobile radio environment to allow migration from the traditional circuit-switched connection to a more efficient packet-based communication link, particularly for data transfer. GPRS requires the addition of not only the GPRS software protocol stack, but also more baseband functionality for the mobile, as new coding schemes have been defined, along with uplink status flag detection, multislot operation, and dynamic coding scheme detection. This paper concentrates on evaluating the performance of the GPRS coding scheme detection methods in the presence of a multipath fading channel with a single co-channel interferer, as a function of various soft-bit data widths. It has been found that compressing the soft-bit data widths at the output of the equalizer to save memory can influence the likelihood decision of the coding scheme detection function and hence contribute to the overall performance loss of the system. Coding scheme detection errors can therefore force the channel decoder either to select the incorrect decoding scheme or to have no clear decision on which coding scheme to use, resulting in the decoded radio block failing the block check sequence and contributing to the block error rate. For accurate performance simulation, the behaviour of the full coding scheme detection must therefore be taken into account.
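The paper's GPRS receiver chain is not reproduced here; the toy sketch below only illustrates the underlying trade-off, namely that quantizing equalizer soft outputs to narrower widths saves memory at the cost of fidelity in the values that downstream likelihood decisions depend on. All names and figures are illustrative.

```python
# Toy sketch of the trade-off studied in the paper: quantizing equalizer
# soft outputs to a smaller bit width saves memory but discards reliability
# information that downstream decisions (e.g. coding-scheme detection) use.
# This is a generic illustration, not the GPRS receiver itself.
import numpy as np

def quantize_soft_bits(llr, width):
    """Uniformly quantize soft values to signed `width`-bit integers and back."""
    qmax = 2 ** (width - 1) - 1
    scale = qmax / np.max(np.abs(llr))
    return np.clip(np.round(llr * scale), -qmax, qmax) / scale

rng = np.random.default_rng(1)
tx = rng.choice([-1.0, 1.0], size=10_000)               # ideal hard decisions
llr = 4.0 * (tx + 0.5 * rng.standard_normal(tx.size))   # noisy equalizer output

for width in (8, 4, 3, 2):
    q = quantize_soft_bits(llr, width)
    err = np.mean((llr - q) ** 2)
    print(f"{width}-bit soft values: quantization MSE = {err:.3f}")
```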
Abstract:
The SPE taxonomy of evolving software systems, first proposed by Lehman in 1980, is re-examined in this work. The primary concepts of software evolution are related to generic theories of evolution, particularly Dawkins' concept of a replicator, to the hermeneutic tradition in philosophy and to Kuhn's concept of paradigm. These concepts provide the foundations that are needed for understanding the phenomenon of software evolution and for refining the definitions of the SPE categories. In particular, this work argues that a software system should be defined as of type P if its controlling stakeholders have made a strategic decision that the system must comply with a single paradigm in its representation of domain knowledge. The proposed refinement of SPE is expected to provide a more productive basis for developing testable hypotheses and models about possible differences in the evolution of E- and P-type systems than is provided by the original scheme. Copyright (C) 2005 John Wiley & Sons, Ltd.
Abstract:
A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. To avoid high-level, top-down modeling and to increase result accuracy, the focus was placed on device details and data routes. In order to compare ESD with a relevant physical distribution alternative, the physical model boundaries and variables were described. The methodology was compiled from the analysis and operational data of a major online store which provides both ESD and physical distribution options. The ESD method included the calculation of the power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included, to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and the networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO2e of server and networking devices was apportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model, which revealed potential CO2e savings of 83% when ESD was used instead of physical distribution. Results also highlighted the importance of server efficiency and utilization methods.
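The study's actual model and data are not given in the abstract; the snippet below is only a sketch of the bottom-up style of calculation it describes (an apportioned server share, per-hop network transfer, and end-user device energy), with invented parameter names and values.

```python
# Minimal sketch of a bottom-up ESD energy estimate in the spirit of the
# abstract: server share + per-hop network transfer + end-user download.
# Every numeric value below is a placeholder, not data from the study.

def esd_energy_kwh(file_gb, server_w, server_utilisation, serve_seconds,
                   hops, hop_kwh_per_gb, user_device_w, download_seconds):
    # Server energy apportioned to this download via its utilisation share.
    server = server_w * server_utilisation * serve_seconds / 3600 / 1000
    # Internet transfer energy grows with the number of network hops.
    network = hops * hop_kwh_per_gb * file_gb
    # End-user device energy while browsing and downloading.
    user = user_device_w * download_seconds / 3600 / 1000
    return server + network + user

kwh = esd_energy_kwh(file_gb=4.0, server_w=300, server_utilisation=0.02,
                     serve_seconds=120, hops=12, hop_kwh_per_gb=0.005,
                     user_device_w=60, download_seconds=1800)
print(f"~{kwh:.3f} kWh per download (illustrative figures only)")
```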
Abstract:
In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the conveyed emotionality of FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants' recognition rates for nine emotions were high, and human and FACSGen expressions were rated as highly similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.
The Impact of Office Productivity Cloud Computing on Energy Consumption and Greenhouse Gas Emissions
Abstract:
Cloud computing is usually regarded as being energy efficient and thus emitting less greenhouse gases (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud-based Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The model developed in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end-user device. Comparable products from each suite were selected, and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the networking and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized, allowing the power consumption of cloud services to be measured directly for the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts: the power consumption of the cloud-based Outlook and Excel was 8% and 17% lower, respectively, than that of their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent. A third, mixed access method was also measured for Word; it emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG. Direct conversion from the standalone package to the cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stage, using the methods described in this research.
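Microsoft's confidential figures and the paper's full apportionment method are not available here; the sketch below only illustrates the general idea of apportioning measured user-device power to the software under test by subtracting an idle baseline. All numbers are invented and do not reproduce the reported 8%, 17%, or 5% results.

```python
# Illustrative sketch of a software apportionment step at the user-device
# stage: attribute to the application only the power it adds above the
# device's idle baseline during a defined activity. Figures are invented.
def apportioned_energy_wh(idle_w, active_w, activity_seconds):
    """Energy (Wh) attributed to the application for one activity run."""
    return max(active_w - idle_w, 0.0) * activity_seconds / 3600

cloud_word = apportioned_energy_wh(idle_w=28.0, active_w=39.5, activity_seconds=600)
local_word = apportioned_energy_wh(idle_w=28.0, active_w=37.0, activity_seconds=600)
print(f"cloud editing: {cloud_word:.2f} Wh")
print(f"local editing: {local_word:.2f} Wh")
print(f"relative difference: {100 * (cloud_word / local_word - 1):.0f}%")
```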
Abstract:
Key Performance Indicators (KPIs) are the main instruments of Business Performance Management. KPIs are the measures that are translated into both the strategy and the business process. These measures are often designed for an industry sector under assumptions about the business processes in organizations. However, these assumptions can be too incomplete to guarantee the required properties of the KPIs. This raises the need to validate the properties of KPIs prior to their application in performance measurement. This paper applies the method called EXecutable Requirements Engineering Management and Evolution (EXTREME) to the validation of KPI definitions. EXTREME semantically relates the goal modeling, conceptual modeling, and protocol modeling techniques into one methodology. The synchronous composition built into protocol modeling enables traceability of goals in protocol models and constructive definitions of a KPI. The application of the method clarifies the meaning of KPI properties and the procedures for their assessment and validation.
Abstract:
This paper is about the use of natural language to communicate with computers. Most research that has pursued this goal considers only requests expressed in English. One way to facilitate the use of several languages in natural language systems is to use an interlingua: an intermediary representation for natural language information that can be processed by machines. We propose to convert natural language requests into an interlingua, the Universal Networking Language (UNL), and to execute these requests using software components. In order to achieve this goal, we propose OntoMap, an ontology-based architecture that performs the semantic mapping between UNL sentences and software components. OntoMap also performs component search and retrieval based on semantic information formalized in ontologies and rules.
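OntoMap's ontologies, UNL parsing, and rules are not detailed in the abstract; the following toy sketch only illustrates the general idea of retrieving registered software components by matching interlingua-derived concepts against declared capabilities. All component names and concepts are invented.

```python
# Toy illustration of the general idea of mapping an interlingua-style
# request onto registered software components via declared capabilities.
# It does not reproduce OntoMap's ontologies, UNL parsing, or rule engine.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    capabilities: set  # semantic concepts the component can handle

REGISTRY = [
    Component("MailSender", {"send", "email", "message"}),
    Component("FileArchiver", {"compress", "archive", "file"}),
    Component("ReportBuilder", {"generate", "report", "summary"}),
]

def retrieve(request_concepts):
    """Return component names ranked by overlap with the request's concepts."""
    scored = [(len(c.capabilities & request_concepts), c) for c in REGISTRY]
    return [c.name for score, c in sorted(scored, key=lambda t: -t[0]) if score > 0]

# e.g. concepts extracted from a UNL graph of "send the report by email"
print(retrieve({"send", "report", "email"}))
```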
Abstract:
Component-based software engineering has recently emerged as a promising solution to the development of system-level software. Unfortunately, current approaches are limited to specific platforms and domains. This lack of generality is particularly problematic, as it prevents knowledge sharing and generally drives development costs up. In the past, we developed a generic approach to component-based software engineering for system-level software called OpenCom. In this paper, we present OpenComL, an instantiation of OpenCom for Linux environments, and show how it can be profiled to meet the needs of a range of system-level software in Linux environments. To demonstrate this, we show its application to constructing a programmable router platform and a middleware for parallel environments.
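The OpenCom/OpenComL API itself is not shown in the abstract; as a generic illustration of the component-model ideas involved (components exposing provided and required interfaces that are bound together at run time), here is a small Python sketch with invented interface names, not the real Linux implementation.

```python
# Generic illustration of a component model with explicit provided/required
# interfaces and runtime binding; it is not the OpenCom/OpenComL API.
class Component:
    def __init__(self):
        self.provides = {}   # interface name -> implementation object
        self.requires = {}   # interface name -> bound implementation (or None)

class Forwarder(Component):
    def __init__(self):
        super().__init__()
        self.requires["IPacketFilter"] = None
        self.provides["IForward"] = self

    def forward(self, packet):
        # Delegate the filtering decision to whatever was bound at run time.
        if self.requires["IPacketFilter"].accept(packet):
            print("forwarding", packet)

class Filter(Component):
    def __init__(self):
        super().__init__()
        self.provides["IPacketFilter"] = self

    def accept(self, packet):
        return not packet.get("blocked", False)

def bind(user, provider, interface):
    """Runtime binding step: connect a required to a provided interface."""
    user.requires[interface] = provider.provides[interface]

fwd, flt = Forwarder(), Filter()
bind(fwd, flt, "IPacketFilter")
fwd.forward({"dst": "10.0.0.1"})
```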