952 results for OpenFlow, SDN, Software-Defined Networking, Cloud
Abstract:
As an important component in collaborative natural resource management and nonprofit governance, social capital is expected to be related to variations in the performance of land trusts. Land trusts are charitable organizations that work to conserve private land locally, regionally, or nationally. The purpose of this paper is to identify the level of structural and cognitive social capital among local land trusts, and how these two types of social capital relate to the perceived success of land trusts. The analysis integrates data for land trusts operating in the U.S. south-central Appalachian region, which includes western North Carolina, southwest Virginia, and east Tennessee. We use factor analysis to elicit different dimensions of cognitive social capital, including cooperation among board members, shared values, common norms, and communication effectiveness. Measures of structural social capital include the size and diversity of organizational networks of both land trusts and their board members. Finally, a hierarchical linear regression model is employed to estimate how cognitive and structural social capital measures, along with other organizational and individual-level attributes, relate to perceptions of land trust success, defined here as achievement of the land trusts’ mission, conservation, and financial goals. Results show that the diversity of organizational partnerships, cooperation, and shared values among land trust board members are associated with higher levels of perceived success. Organizational capacity, land trust accreditation, volunteerism, and financial support are also important factors influencing perceptions of success among local, nonprofit land trusts.
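As an illustration of the hierarchical (blockwise) regression approach named above, the sketch below enters organizational controls first and social-capital measures second, then compares the explained variance; the variable names and synthetic data are hypothetical placeholders, not the study's survey data or exact model specification.

```python
# Illustrative sketch of a hierarchical (blockwise) OLS regression: organizational
# controls enter first, social-capital measures second, and the change in R-squared
# shows what the social-capital block adds. Variable names and data are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "capacity": rng.normal(size=n),
    "accredited": rng.integers(0, 2, size=n),
    "network_diversity": rng.normal(size=n),
    "cooperation": rng.normal(size=n),
    "shared_values": rng.normal(size=n),
})
df["success"] = (0.4 * df.capacity + 0.3 * df.cooperation
                 + 0.3 * df.shared_values + rng.normal(scale=0.5, size=n))

block1 = smf.ols("success ~ capacity + accredited", data=df).fit()
block2 = smf.ols("success ~ capacity + accredited + network_diversity"
                 " + cooperation + shared_values", data=df).fit()

print("R2 block 1:", round(block1.rsquared, 3))
print("R2 block 2:", round(block2.rsquared, 3))
print("R2 gained by social-capital block:", round(block2.rsquared - block1.rsquared, 3))
```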
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still surround many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computing systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network may become congested and the cores may work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, those few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed across the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is highly prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor readings drift from their nominal values, which necessitates efficient calibration techniques before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling, and thermal sensors located on the cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. This thesis therefore also proposes a general-purpose, software-based auto-calibration approach that calibrates thermal sensors across a range of voltage levels.
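The load-balancing idea the thesis describes (idle cores pulling the next chunk of faults from a shared queue, so uneven core speeds are absorbed at runtime) can be illustrated with a generic sketch; the 48-worker pool mirrors the SCC core count, but the chunk size and the placeholder simulate_chunk() are assumptions, not the thesis's SCC implementation.

```python
# Minimal sketch of dynamic load balancing via a shared work queue:
# each worker pulls the next chunk of faults when it becomes idle, so
# faster cores naturally process more chunks than slower or congested ones.
# simulate_chunk() is a placeholder for the actual fault simulator.
from multiprocessing import Pool

def simulate_chunk(fault_chunk):
    # Placeholder: evaluate each fault in the chunk and return detection flags.
    return [hash(f) % 2 for f in fault_chunk]

def chunks(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

if __name__ == "__main__":
    faults = [f"fault_{i}" for i in range(100_000)]
    with Pool(processes=48) as pool:  # 48 mirrors the SCC core count
        # imap_unordered hands out chunks on demand (dynamic balancing),
        # instead of a static up-front split across cores.
        results = list(pool.imap_unordered(simulate_chunk, chunks(faults, 512)))
```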
Abstract:
The evolution and maturation of Cloud Computing has created an opportunity for the emergence of new Cloud applications. High-Performance Computing (HPC), a class of complex problem solving, arises as a new consumer of the Cloud, taking advantage of its premises and leaving behind expensive datacenter management and difficult grid development. Now in an advanced maturing phase, today's Cloud has discarded many of its drawbacks, becoming ever more efficient and widespread. Performance enhancements, price drops due to massification, and customizable on-demand services have attracted attention from other markets. HPC, despite being a very well established field, traditionally has a narrow deployment frontier and runs on dedicated datacenters or large grid computing facilities, whose main problems are the initial cost and the inability to fully use the resources, which not all research labs can afford. The main objective of this work was to investigate new technical solutions to allow the deployment of HPC applications on the Cloud, with particular emphasis on private on-premise resources, the lower end of the chain, which reduces costs. The work includes many experiments and analyses to identify obstacles and technology limitations. The feasibility of the objective was tested with new modeling, a new architecture, and the migration of several applications. The final application integrates a simplified incorporation of both public and private Cloud resources, as well as HPC application scheduling, deployment, and management. It uses a well-defined user role strategy based on federated authentication and a seamless procedure for daily usage with a balanced trade-off between low cost and performance.
Abstract:
Following a productivity decrease in established Finnish export industries such as the mobile and paper industries, innovative smaller companies intending to internationalize right from the start have been proliferating. For software companies, early internationalization is an especially good opportunity, as Internet usage becomes increasingly homogeneous across borders and software products often do not need a physical distribution channel. Globalization also makes Finnish companies turn to unfamiliar export markets like Latin America, a very untraditional market for Finns. Relationships between Finnish and Latin American business partners have therefore not been widely studied, especially from a new-age software company's perspective. To study these partnerships, relationship marketing theory was placed at the core of the study, as its practice focuses on establishing and maintaining relationships with stakeholders at a profit, so that the objectives of all parties are met through a mutual exchange and fulfillment of promises. The most important dimensions of relationship marketing were identified as trust, commitment and attraction, which the study focuses on, as it aims to understand the implications Latin American business culture has for the understanding, and hence the effective application, of relationship marketing in the Latin American market. The question to be answered, consequently, was how the dimensions of trust, commitment and attraction should be understood in business relationships in Latin America. The study was conducted by first combining insights from the literature on Latin American business culture with overall theories on the three dimensions. Through pattern matching, these insights were compared to empirical evidence collected from business professionals of the Latin American market and from the experiences of Finnish software businesses that had recently expanded into the market. The study found that previous literature on Latin American business culture had already named many implications for the relationship marketing dimensions that were relevant also for small Finnish software firms in the market. However, key findings also revealed important new drivers of the three constructs. Local presence in the area where the Latin American partner is located was found to drive or enhance trust, commitment and attraction, while high-frequency follow-up procedures were found to drive commitment and attraction. Both local presence and follow-up were defined according to the respective evidence in the study. Also, in the context of Finnish software firms in relationships with Latin American partners, the national origin or foreignness of the Finnish party was seen to enhance trust and attraction in the relationship.
Abstract:
In this thesis, tool support is addressed for the combined disciplines of model-based testing and performance testing. Model-based testing (MBT) utilizes abstract behavioral models to automate test generation, thus decreasing the time and cost of test creation. MBT is a functional testing technique, focusing on output, behavior, and functionality. Performance testing, however, is non-functional and is concerned with responsiveness and stability under various load conditions. MBPeT (Model-Based Performance evaluation Tool) is one such tool, which utilizes probabilistic models, representing dynamic real-world user behavior patterns, to generate synthetic workload against a System Under Test (SUT) and in turn carry out performance analysis based on key performance indicators (KPIs). Developed at Åbo Akademi University, the MBPeT tool currently comprises a downloadable command-line based tool as well as a graphical user interface. The goal of this thesis project is two-fold: 1) to extend the existing MBPeT tool by deploying it as a web-based application, thereby removing the requirement of local installation, and 2) to design a user interface for this web application which adds new user interaction paradigms to the existing feature set of the tool. All phases of the MBPeT process are realized via this single web deployment location, including probabilistic model creation, test configuration, test session execution against a SUT with real-time monitoring of user-configurable metrics, and final test report generation and display. This web application (MBPeT Dashboard) is implemented in the Java programming language on top of the Vaadin framework for rich internet application development. The Vaadin framework handles the complicated web communication processes and front-end technologies, freeing developers to implement the business logic as well as the user interface in pure Java. A number of experiments are run in a case study environment to validate the functionality of the newly developed Dashboard application as well as the scalability of the solution in handling multiple concurrent users. The results support a successful solution with regard to the functional and performance criteria defined, while improvements and optimizations are suggested to further increase both.
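The kind of probabilistic user model MBPeT consumes can be pictured as a Markov chain over user actions; the sketch below generates synthetic load from such a chain against a hypothetical SUT URL. The state names, transition probabilities and endpoint are illustrative assumptions, not MBPeT's actual model format or API.

```python
# Illustrative sketch: drive synthetic load from a probabilistic user model.
# State names, transition probabilities and the target URL are hypothetical.
import random
import time
import urllib.request

MODEL = {  # Markov chain over user actions
    "browse": [("browse", 0.6), ("search", 0.3), ("exit", 0.1)],
    "search": [("browse", 0.5), ("buy", 0.2), ("exit", 0.3)],
    "buy":    [("browse", 0.4), ("exit", 0.6)],
}

def next_action(state):
    r, acc = random.random(), 0.0
    for action, p in MODEL[state]:
        acc += p
        if r < acc:
            return action
    return "exit"

def virtual_user(base_url):
    state = "browse"
    while state != "exit":
        start = time.time()
        urllib.request.urlopen(f"{base_url}/{state}")       # hit the SUT (assumed reachable)
        print(state, "response time:", time.time() - start)  # crude per-request KPI
        state = next_action(state)
        time.sleep(random.expovariate(1.0))                  # think time between actions

virtual_user("http://localhost:8080")  # hypothetical SUT address
```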
Abstract:
Software engineering best practices allow significantly improving software development. However, the implementation of best practices requires skilled professionals, financial investment and technical support to facilitate implementation and achieve the respective improvement. In this paper we propose a protocol for designing techniques to implement software engineering best practices. The protocol includes the identification and selection of the process to improve, the study of standards and models, and the identification of the best practices associated with the process and of possible implementation techniques. In addition, technique design activities are defined in order to create or adapt techniques for implementing best practices in software development.
Abstract:
Photovoltaic systems are emerging renewable energy sources that generate electricity from solar radiation. Monitoring stand-alone photovoltaic systems provides the information their owners need to maintain, operate and control these systems, reducing operating costs and avoiding unwanted interruptions in the electricity supply of isolated areas. This article proposes the development of a platform for monitoring stand-alone photovoltaic systems in Ecuador, with the fundamental objective of developing a scalable solution based on the use of free software, the use of low-power sensors, and the development of web services in the 'Software as a Service' (SaaS) model for the processing, management and publication of the recorded information, as well as the creation of an innovative solar photovoltaic control center in Ecuador.
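A minimal sketch of the sensing side of such a platform is given below: a node samples the panel and posts readings to a SaaS endpoint. The URL, field names and one-minute sampling period are assumptions for illustration, not the platform's actual interface.

```python
# Illustrative sketch of a sensor node periodically posting PV readings
# to a hypothetical SaaS endpoint; field names and URL are placeholders.
import json
import time
import urllib.request

ENDPOINT = "https://example.org/api/pv/readings"  # hypothetical web service

def read_sensors():
    # Placeholder for the real voltage/current/irradiance acquisition.
    return {"voltage_v": 24.1, "current_a": 3.2, "irradiance_wm2": 815}

while True:
    sample = read_sensors()
    sample["timestamp"] = time.time()
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(sample).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # send the reading to the monitoring service
    time.sleep(60)               # one sample per minute keeps the node's power draw low
```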
Abstract:
Nowadays, the use of Cloud Computing is increasing and many providers offer services that make use of this technology. One of them is Amazon Web Services, which, through its Amazon EC2 service, offers different types of instances that can be used according to our needs. The AWS business model is based on pay-per-use, that is, we only pay for the time the instances are used. In this work, an application is implemented on Amazon EC2 whose objective is to extract, from different information sources, the sales data of publishers and bookstores in Spain. These data are processed, loaded into a database and used to generate statistical reports that will help customers make better decisions. Because the application processes a large amount of data, the development and validation of a model is proposed that allows an optimal execution on Amazon EC2 to be obtained. This model takes into account the execution time, the cost per use and a cost/performance metric. Additionally, the Docker container technology is used to carry out a specific case of deployment of the application.
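The selection model can be illustrated with a small cost/performance calculation; one common formulation multiplies monetary cost by execution time, so a lower value means a better trade-off. Instance names are real EC2 families, but the runtimes and hourly prices below are made-up numbers, not measurements from this work.

```python
# Illustrative cost/performance comparison across EC2 instance types.
# Runtimes and hourly prices are made-up example numbers, not measured data.
runs = {
    # instance type: (execution time in hours, on-demand price in USD per hour)
    "m5.large":   (4.0, 0.096),
    "c5.xlarge":  (2.2, 0.170),
    "c5.2xlarge": (1.3, 0.340),
}

for instance, (hours, price_per_hour) in runs.items():
    cost = hours * price_per_hour   # pay-per-use: billed only for the hours used
    cost_x_time = cost * hours      # one simple cost/performance figure (lower is better)
    print(f"{instance}: time={hours}h cost=${cost:.2f} cost*time={cost_x_time:.2f}")
```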
Abstract:
With the proliferation of new mobile devices and applications, the demand for ubiquitous wireless services has increased dramatically in recent years. The explosive growth in wireless traffic requires wireless networks to be scalable so that they can be efficiently extended to meet wireless communication demands. In a wireless network, the interference power typically grows with the number of devices in the absence of coordination among them. On the other hand, large-scale coordination is always difficult due to the low-bandwidth and high-latency interfaces between access points (APs) in traditional wireless networks. To address this challenge, the cloud radio access network (C-RAN) has been proposed, where a pool of baseband units (BBUs) is connected to distributed remote radio heads (RRHs) via high-bandwidth and low-latency links (i.e., the front-haul) and is responsible for all the baseband processing. But insufficient front-haul link capacity may limit the scale of C-RAN and prevent it from fully utilizing the benefits made possible by centralized baseband processing. As a result, front-haul link capacity becomes a bottleneck in the scalability of C-RAN. In this dissertation, we explore scalable C-RAN in an effort to tackle this challenge. In the first aspect of this dissertation, we investigate the scalability issues in existing wireless networks and propose a novel time-reversal (TR) based scalable wireless network in which the interference power is naturally mitigated by the focusing effects of TR communications, without coordination among APs or terminal devices (TDs). Due to this property, it is shown that the system can easily be extended to serve more TDs. Motivated by the nice properties of TR communications in providing scalable wireless networking solutions, in the second aspect of this dissertation we apply TR-based communications to the C-RAN and discover the TR tunneling effects, which alleviate the traffic load in the front-haul links caused by the increasing number of TDs. We further design waveforming schemes to optimize the downlink and uplink transmissions in the TR-based C-RAN, which are shown to improve the downlink and uplink transmission accuracies. Consequently, the traffic load in the front-haul links is further alleviated by reducing the re-transmissions caused by transmission errors. Moreover, inspired by the TR-based C-RAN, we propose a compressive quantization scheme that applies to the uplink of multi-antenna C-RAN, so that more antennas can be utilized with the limited front-haul capacity, providing rich spatial diversity such that massive numbers of TDs can be served more efficiently.
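The focusing effect behind TR communications comes from pre-filtering the transmit signal with the time-reversed, conjugated channel impulse response, so that channel and filter combine into an equivalent response that peaks sharply at the intended receiver. The sketch below illustrates that prefiltering on a random toy channel; it is not the dissertation's waveforming design.

```python
# Illustrative sketch of time-reversal (TR) prefiltering: the transmit filter
# is the time-reversed complex conjugate of the channel impulse response, so
# channel plus filter concentrate energy at the intended receiver.
# The channel taps below are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
h = (rng.normal(size=8) + 1j * rng.normal(size=8)) / np.sqrt(2)  # channel impulse response

g = np.conj(h[::-1])                 # TR prefilter
g /= np.linalg.norm(g)               # normalize transmit power

symbols = rng.choice([-1, 1], size=64).astype(complex)  # toy data stream
tx = np.convolve(symbols, g)         # prefilter on the BBU/RRH side
rx = np.convolve(tx, h)              # propagation through the same channel

# The equivalent channel g*h peaks sharply at a single tap (the focusing effect),
# which is what mitigates interference without coordination among APs or TDs.
print(np.round(np.abs(np.convolve(g, h)), 3))
```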
Abstract:
Current infrastructure as a service (IaaS) cloud systems allow users to load their own virtual machines. However, most of these systems do not provide users with an automatic mechanism to load a network topology of virtual machines. In order to specify and implement the network topology, we use software switches and routers as network elements. Before running a group of virtual machines, the user sets up the system once to specify a network topology of virtual machines. Then, given the user's request to run a specific topology, our system loads the appropriate virtual machines (VMs) and also runs separate VMs as software switches and routers. Furthermore, we have developed a manager that handles physical hardware failure situations. The system has been designed to allow users to use it without knowing all of the internal technical details.
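Conceptually, the user's one-time setup can be captured in a small declarative topology that the system walks to boot ordinary VMs plus dedicated VMs acting as software switches and routers. The structure, image names and launch_vm() call below are hypothetical illustrations, not the system's actual interface.

```python
# Illustrative sketch: a declarative VM/switch topology that a loader walks,
# booting ordinary VMs and dedicated VMs acting as software switches/routers.
# launch_vm() stands in for whatever the IaaS back end actually exposes.
topology = {
    "nodes": {
        "web1":    {"role": "vm",     "image": "ubuntu"},
        "web2":    {"role": "vm",     "image": "ubuntu"},
        "sw1":     {"role": "switch", "image": "openvswitch"},
        "router1": {"role": "router", "image": "quagga"},
    },
    "links": [("web1", "sw1"), ("web2", "sw1"), ("sw1", "router1")],
}

def launch_vm(name, image):
    # Placeholder for the cloud back end's instance-creation call.
    print(f"booting {name} from image {image}")

for name, spec in topology["nodes"].items():
    launch_vm(name, spec["image"])

for a, b in topology["links"]:
    print(f"wiring virtual link {a} <-> {b}")  # e.g. attach ports on the switch VM
```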
Abstract:
Surgical interventions are usually performed in an operating room; however, access to information by the medical team members during the intervention is limited. In conversations with the medical staff, we observed that they attach significant importance to improving direct access to information and communication through queries in real time during the procedure, because the current process is rather slow and there is a lack of interaction with the systems in the operating room. These systems can be integrated on the Cloud, adding new functionalities to the existing systems in which the medical records are processed. Therefore, such a communication system needs to be built upon information and interaction access specifically designed and developed to aid the medical specialists. Copyright 2014 ACM.
Abstract:
Technologies are currently being developed to make operating-system-level virtualization more efficient, among them the Docker suite, which makes it possible to manage processes as if they were virtual machines. In addition, clustering mechanisms such as Kubernetes make it possible to connect multiple machines, have them communicate with each other, and present them to an external user as if they were a single monolithic server. The combination of operating-system-level virtualization and clustering makes it possible to build servers as powerful as monolithic ones but cheaper and better able to adapt to external demand. Given the enormous amount of data and computing power needed to handle the communications and interactions between users and web services, many companies cannot afford to invest in a proprietary server and its maintenance, so they rent the necessary resources that constitute the so-called "cloud", that is, the set of servers that providers make available to their customers. The transfer of services from physical machines to the cloud has changed the way services themselves are viewed: they are no longer seen as monolithic software but as microservices that interact with one another. The communication infrastructure that allows microservices to communicate is called a service mesh, and its layered structure recalls SDN technology. This thesis studies the behavior of the Istio service mesh software installed in a Kubernetes cluster. Metrics were collected on memory usage, CPU usage, transmitted packets, errors, and latency, and were compared with those obtained from a cluster on which Istio was not installed. The study shows that, in a production-oriented cluster, the service mesh offered by Istio provides many tools for controlling the network at the cost of a slightly higher demand for hardware resources.
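The with/without-Istio comparison boils down to differencing the same metrics collected from the two clusters; the sketch below shows that comparison on made-up numbers, not the values measured in the thesis or an actual Prometheus query.

```python
# Illustrative comparison of resource metrics collected from a Kubernetes
# cluster with and without the Istio service mesh. All numbers are made up.
baseline   = {"cpu_millicores": 210, "memory_mib": 540, "p95_latency_ms": 18.0}
with_istio = {"cpu_millicores": 340, "memory_mib": 910, "p95_latency_ms": 21.5}

for metric in baseline:
    delta = with_istio[metric] - baseline[metric]
    pct = 100.0 * delta / baseline[metric]
    print(f"{metric}: +{delta} ({pct:.1f}% overhead attributable to the mesh)")
```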
Abstract:
In the last few years, mobile wireless technology has gone through a revolutionary change. Web-enabled devices have evolved into essential tools for communication, information, and entertainment. The fifth generation (5G) of mobile communication networks is envisioned to be a key enabler of the upcoming wireless revolution. Millimeter wave (mmWave) spectrum and the evolution of Cloud Radio Access Networks (C-RANs) are two of the main technological innovations of 5G wireless systems and beyond. Because of the current spectrum-shortage condition, mmWaves have been proposed for next generation systems, providing larger bandwidths and higher data rates; consequently, new radio channel models are being developed. Recently, deterministic ray-based models such as Ray Tracing (RT) have become more attractive thanks to their frequency agility and reliable predictions. A modern RT tool has been calibrated and used to analyze the mmWave channel, for which knowledge of the electromagnetic properties of materials is essential. Hence, an item-level electromagnetic characterization of common construction materials has been successfully carried out to obtain information about their complex relative permittivity. A complete tuning of the RT tool has been performed against indoor and outdoor measurement campaigns at 27 and 38 GHz, setting the basis for the future development of advanced beamforming techniques that rely on deterministic propagation models such as RT. C-RAN is a novel mobile network architecture which can address a number of challenges that network operators face in meeting continuously growing customer demands. C-RANs have already been adopted in advanced 4G deployments; however, there are still issues to deal with, especially considering the bandwidth requirements set by the forthcoming 5G systems. Open RAN specifications have been proposed to overcome the new 5G challenges placed on C-RAN architectures, including synchronization aspects. In this work, an FPGA implementation of the Synchronization Plane for an O-RAN-compliant radio system is described.
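How a measured complex relative permittivity feeds an RT prediction can be illustrated with the Fresnel reflection coefficient applied at a wall bounce; the permittivity value and incidence angle below are made-up placeholders, not one of the characterized materials.

```python
# Illustrative link between a material's complex relative permittivity and the
# reflection coefficient a ray tracer would apply at a wall bounce (perpendicular
# polarization, air-to-material). eps_r is a made-up placeholder value.
import cmath
import math

eps_r = 5.3 - 0.4j            # hypothetical complex relative permittivity
theta_i = math.radians(40.0)  # angle of incidence of the ray

cos_i = math.cos(theta_i)
root = cmath.sqrt(eps_r - math.sin(theta_i) ** 2)
gamma_te = (cos_i - root) / (cos_i + root)  # Fresnel reflection coefficient

print("reflection magnitude:", abs(gamma_te))
```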
Abstract:
Fear of Missing Out (FoMO) is a pervasive apprehension that others might be having rewarding experiences from which one is absent. Consequently, individuals experiencing FoMO wish to stay constantly in contact with what others are doing and engage with social networking sites for this purpose. In recent times, FoMO has received increased attention from psychological research, as a minority of users experiencing high levels of FoMO, particularly young people, might develop problematic social networking site use, defined as the maladaptive and excessive use of social networking sites resulting in symptoms associated with other addictions. According to the theoretical framework of the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, FoMO and certain motives for use may foster problematic use in individuals who display unmet psychosocial needs. However, to date, the I-PACE model has only conceptualized the general higher-order mechanisms related to the development of problematic use. Consistently, the overall purpose of this dissertation was to deepen the understanding of the mediating role of FoMO between specific predisposing variables and problematic social networking site use. Adopting a psychological approach, two empirical and exploratory cross-sectional studies, conceived as independent research, were conducted through path analysis.