886 results for open service system
Abstract:
As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity, and they do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the same tight constraints on size, power consumption, and price as embedded systems, but they also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility, and a relatively short time to market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture and offers a high degree of parallelism and modularity as well as greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an older version written in SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hardware/software codesign and simulation, together with an extendable library of automatically configured, reusable hardware blocks. Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. As a test case for the environment, a simulation model of a processor for TCP/IP packet validation was designed and tested.
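As a rough illustration of the execution model mentioned above (this is not the TACO implementation; all names below are hypothetical), a transport triggered architecture exposes only one instruction, the move: writing data to a function unit's trigger port is what fires the operation, which is why instruction decoding is so simple.

```python
# Toy sketch of transport triggered execution: the only instruction is a
# move, and writing to a function unit's trigger port fires its operation.
class FunctionUnit:
    def __init__(self, op):
        self.op = op        # operation fired when the trigger port is written
        self.operand = 0    # plain operand port
        self.result = 0     # result port, readable by subsequent moves

    def move(self, port, value):
        if port == "operand":
            self.operand = value
        elif port == "trigger":   # a move to the trigger port executes the op
            self.result = self.op(self.operand, value)

# Two hypothetical units connected by a shared transport bus.
adder = FunctionUnit(lambda a, b: a + b)
shifter = FunctionUnit(lambda a, b: a << b)

# A "program" is nothing but a sequence of moves between ports.
adder.move("operand", 5)
adder.move("trigger", 7)                # fires 5 + 7
shifter.move("operand", adder.result)   # transport the result across the bus
shifter.move("trigger", 1)              # fires 12 << 1
print(shifter.result)                   # -> 24
```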
Abstract:
Today’s commercial web sites are under heavy user load and are expected to be operational and available at all times. Distributed system architectures have been developed to provide a scalable, failure-tolerant, high-availability platform for these web-based services. The focus of this thesis was to specify and implement a resilient and scalable, locally distributed high-availability system architecture for a web-based service. The theory part concentrates on the fundamental characteristics of distributed systems and presents common scalable high-availability server architectures used in web-based services. The practical part of the thesis explains the newly implemented system architecture and includes two test cases that were carried out to measure the system's performance capacity.
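A recurring building block in such architectures is a front-end balancer that health-checks a pool of replicated back ends, so a failed server is silently taken out of rotation. A minimal sketch (the back-end pool and the /health endpoint are purely hypothetical):

```python
import itertools
import urllib.request

# Hypothetical pool of replicated web servers behind the balancer.
BACKENDS = ["http://10.0.0.1:8080", "http://10.0.0.2:8080"]
_rotation = itertools.cycle(BACKENDS)

def healthy(url, timeout=1.0):
    """Probe a back end; any HTTP response counts as alive."""
    try:
        urllib.request.urlopen(url + "/health", timeout=timeout)
        return True
    except OSError:
        return False

def pick_backend():
    """Round-robin over the pool, skipping back ends that fail the probe."""
    for _ in range(len(BACKENDS)):
        candidate = next(_rotation)
        if healthy(candidate):
            return candidate
    raise RuntimeError("no healthy back end available")
```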
Abstract:
Specific demand for service concept creation has arisen from industrial organizations’ desire to find new and innovative ways to differentiate their offering by increasing the level of customer services. Providers of professional services have also demanded new concepts and approaches for their businesses as these industries have become increasingly competitive. Firms are now seeking better ways to understand and segment their customers, to ensure the delivery of quality services, and to strengthen their position in aggressively competitive markets. This thesis is intended to provide management consulting companies with a new working method that enables service concept creation in a business-to-business environment. The model defines the service concept as a combination of the delivered value and the target customers; the operating model is brought into the new system as a third dimension when the service concept creation guidelines are tested in the target organization. For the test, service concepts for a management consulting company are created; they are designed to serve as a solid foundation for further service improvements. Recommendations and proposals for further action related to service development in the target organization are presented, as are recommendations for further improving the model created.
Abstract:
This thesis examined the service offering development of an air freight carrier from the customers' point of view. The study was limited to the carrier's biggest business clients. Service offering development can be divided into the service concept, process, and system; the thesis was based on these three themes together with quality. Compared to product development, systematic and well-structured service development has received little study, especially in business markets. Service development is nonetheless a current issue: owing to growing competition, companies should listen carefully to their clients' needs and respond to them by offering the right services. The methodology of the thesis is qualitative, and representatives of three forwarding companies were interviewed. It was found that the forwarding companies consider themselves partners of the airline. In addition to general reliability, the customers most value fluent terminal processes and electronic communication.
Abstract:
Recognizing, reporting, and analyzing incidents in internal medicine units is a daily challenge that is taught to all hospital staff. It makes it possible to suggest useful improvements for patients, as well as for the medical department and the institution. Presented here is the assessment made in the CHUV internal medicine department one year after the start of the institutional procedure, which promotes an open process of communication and risk management. The department of internal medicine underlines the importance of feedback to reporters, assures the staff of regular follow-up on the measures being taken, and offers external reporters, such as general practitioners, the possibility of using this reporting system as well.
Abstract:
INTRODUCTION: Dispatch-assisted cardiopulmonary resuscitation (DA-CPR) plays a key role in out-of-hospital cardiac arrests. We sought to measure dispatchers' performance in a criteria-based system at recognizing cardiac arrest and delivering DA-CPR. Our secondary purpose was to identify the factors that hampered dispatchers' identification of cardiac arrests, the factors that prevented them from proposing DA-CPR, and the factors that prevented bystanders from performing CPR. METHODS AND RESULTS: We reviewed dispatch recordings for 1254 out-of-hospital cardiac arrests occurring between January 1, 2011 and December 31, 2013. Dispatchers correctly identified cardiac arrests in 71% of the reviewed cases, and in 84% of the cases in which they were able to assess patient consciousness and breathing. The median time to recognition of the arrest was 60 seconds; the median time to the start of chest compressions was 220 seconds. CONCLUSIONS: This study demonstrates that the performance of a criteria-based dispatch system can be similar to that of a medical-priority dispatch system regarding out-of-hospital cardiac arrest (OHCA) recognition time and DA-CPR delivery. Agonal breathing recognition remains the weakest link in this sensitive task in both systems. It is of prime importance that all dispatch centers strive not only to implement DA-CPR but also to acquire tools that help them reach this objective, as today it should be mandatory to offer this service to the community. To improve benchmarking opportunities, we complement previously proposed performance standards with additional propositions.
Abstract:
Open educational resources (OER) promise increased access, participation, quality, and relevance, in addition to cost reduction. These seemingly fantastic promises rest on the supposition that educators and learners will discover existing resources, improve them, and share the results, producing a virtuous cycle of improvement and re-use. By anecdotal metrics, existing web-scale search is not working for OER. This situation impairs the cycle underlying the promise of OER, endangering long-term growth and sustainability. While the scope of the problem is vast, targeted improvements in curation, indexing, and data exchange can improve the situation and create opportunities for further scale. I explore the ways in which the current system is inadequate, discuss areas for targeted improvement, and describe a prototype system built to test these ideas. I conclude with suggestions for further exploration and development.
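To make the indexing idea concrete (a generic sketch, not the prototype described in the abstract; the sample records are invented), an inverted index over OER metadata lets a search return exactly the resources containing every query term:

```python
from collections import defaultdict

# Invented OER metadata records, keyed by a hypothetical resource id.
resources = {
    "oer-101": "open textbook on linear algebra",
    "oer-102": "interactive physics simulations",
    "oer-103": "openly licensed course notes on algebra",
}

# Build an inverted index: term -> set of resource ids containing it.
index = defaultdict(set)
for rid, description in resources.items():
    for term in description.lower().split():
        index[term].add(rid)

def search(query):
    """Return the ids of resources containing every query term."""
    hits = [index[t] for t in query.lower().split()]
    return set.intersection(*hits) if hits else set()

print(search("algebra"))   # -> {'oer-101', 'oer-103'}
```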
Abstract:
This paper presents the results of a pilot test of an e-portfolio in a virtual classroom. The pilot was carried out in the 2005-2006 academic year with students of the Doctorate in the Information Society at the Open University of Catalonia. The electronic portfolio is a strategy for competence-based assessment. The experience contrasts e-portfolios in which students merely show their work, without interaction, with an interactive portfolio system that applies competence-based learning theories. In a competence-based system the real process of learning unfolds: the portfolio is no longer merely a basic biographical document but becomes a genuine space for learning built around a competence model. The paper brings out new ideas and possibilities: competence-based learning promotes closer relationships between universities and companies and redesigns the pedagogic act.
Abstract:
Wireless community networks have become popular for uniting people with common interests. This thesis presents an authentication and authorization service for a wireless community network that uses a captive portal approach, including the ability to authenticate clients from associated networks, thereby combining multiple communities into a syndicate. The system is designed and implemented to be reliable, scalable, and flexible. Moreover, the result includes a software management system that automatically performs software updates at the network’s access points. Future development of the system can concentrate on improving the software management system.
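The core of the captive portal approach is simple: unauthenticated clients have their HTTP requests intercepted and redirected to a login page, while authenticated clients pass through. A minimal sketch (the addresses and portal URL are hypothetical, and a real deployment would also enforce this at the firewall level):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical set of authenticated client addresses; a real portal would
# fill this from its own back end and from the associated networks'
# authentication services mentioned in the abstract.
AUTHENTICATED = {"192.168.1.50"}
LOGIN_URL = "http://portal.example.net/login"   # hypothetical portal address

class CaptivePortalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.client_address[0] in AUTHENTICATED:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"welcome to the community network\n")
        else:
            # Unauthenticated clients are redirected to the login page.
            self.send_response(302)
            self.send_header("Location", LOGIN_URL)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), CaptivePortalHandler).serve_forever()
```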
Abstract:
The purpose of this work was to realize a high-speed digital data transfer system for the RPC muon chambers in the CMS experiment at CERN’s new LHC accelerator. This large-scale system took many years and many stages of prototyping to develop and required the participation of tens of people. The system interfaces to the Frontend Boards (FEB) at the 200,000-channel detector and to the trigger and readout electronics in the control room of the experiment. The distance between the two is about 80 metres, and the speed required of the optical links was pushing the limits of available technology when the project started. Here, as in many other aspects of the design, it was assumed that the features of readily available commercial components would develop in the course of the design work, just as they did. Choosing a high link speed made it possible to multiplex the data from some of the chambers onto the same fibres and thereby reduce the number of links needed. Further reduction was achieved by employing zero suppression and data compression, so that a total of only 660 optical links were needed. Another requirement, which conflicted somewhat with choosing the components as late as possible, was that the design had to be radiation tolerant to an ionizing dose of 100 Gy and moderately tolerant of Single Event Effects (SEEs). This required several radiation test campaigns and eventually led to ASICs being chosen for some of the critical parts. The system was made as reconfigurable as possible. Reconfiguration must be done remotely, as the electronics is inaccessible except during short and rare service breaks once the accelerator starts running. Reconfigurable logic is therefore used extensively, and firmware development for the FPGAs constituted a sizable part of the work; some special techniques were needed there, too, to achieve the required radiation tolerance. The system has been demonstrated to work in several laboratory and beam tests, and we are now waiting to see it in action when the LHC starts running in autumn 2008.
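Zero suppression, one of the two data-reduction steps mentioned above, simply drops channels that did not fire; for sparse detector data this shrinks each frame enough to let several chambers share one fibre. A generic sketch (not the CMS firmware; the frame contents are invented):

```python
def zero_suppress(channels):
    """Keep only (channel index, value) pairs for channels that fired."""
    return [(i, v) for i, v in enumerate(channels) if v != 0]

frame = [0, 0, 3, 0, 0, 0, 7, 0]    # invented 8-channel readout frame
print(zero_suppress(frame))          # -> [(2, 3), (6, 7)]
```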
Abstract:
Network virtualisation is considerably gaining attention as a solution to the ossification of the Internet. However, the success of network virtualisation will depend in part on how efficiently the virtual networks utilise substrate network resources. In this paper, we propose a machine learning-based approach to virtual network resource management. We propose to model the substrate network as a decentralised system and introduce a learning algorithm in each substrate node and substrate link, providing self-organization capabilities. We propose a multiagent learning algorithm that carries out the substrate network resource management in a coordinated and decentralised way. The task of these agents is to use evaluative feedback to learn an optimal policy so as to dynamically allocate network resources to virtual nodes and links. The agents ensure that while the virtual networks have the resources they need at any given time, only the required resources are reserved for this purpose. Simulations show that our dynamic approach significantly improves the virtual network acceptance ratio and the maximum number of accepted virtual network requests at any time, while ensuring that virtual network quality of service requirements such as packet drop rate and virtual link delay are not affected.
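As a rough illustration of the kind of per-node agent the abstract describes (the state, action set, and reward shaping below are stand-ins, not the paper's formulation), each substrate node can run a small Q-learning loop that learns from evaluative feedback how much capacity to reserve:

```python
import random
from collections import defaultdict

ACTIONS = [0.25, 0.5, 0.75, 1.0]   # fraction of node capacity to reserve

class NodeAgent:
    """Q-learning agent for one substrate node (illustrative only)."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:                       # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])    # exploit

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Stand-in reward: penalize both unmet demand and over-reservation,
# mirroring the goal of reserving only the required resources.
agent = NodeAgent()
for _ in range(1000):
    demand = random.choice(ACTIONS)      # observed virtual-node demand
    action = agent.act(demand)           # toy state: the demand level itself
    agent.learn(demand, action, -abs(demand - action), demand)
```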
Abstract:
Academics and policy makers are increasingly shifting the debate concerning the best form of public service provision beyond the traditional dilemma between pure public and pure private delivery modes because, among other reasons, a growing body of evidence casts doubt on the existence of systematic cost savings from privatization, while any competition seems to erode over time. In this paper we compare the relative merits of public and private delivery within a mixed delivery system, studying the role played by ownership, transaction costs, and competition in local public service delivery within the same jurisdiction. Using a stochastic cost frontier, we analyze the public-private urban bus system in the Barcelona Metropolitan Area. Our results suggest that the private firms tendering for the service have higher delivery costs than those incurred by the public firm, especially when transaction costs are taken into account; tenders, therefore, do not help to reduce delivery costs. Our results also suggest that under a mixed delivery scheme, which permits the co-existence of public and private production, the metropolitan government and the regulator can use private delivery to contain costs in the public firm and, at the same time, benefit from the greater flexibility of private firms in dealing with events not provided for under contract.
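For readers unfamiliar with the method, the standard stochastic cost frontier decomposes each operator's observed cost into a deterministic frontier, symmetric noise, and a non-negative inefficiency term (this is the canonical specification, not necessarily the paper's exact model):

```latex
% Canonical stochastic cost frontier: the observed (log) cost of operator i
% equals the frontier plus noise v_i plus non-negative inefficiency u_i.
\ln C_i = f(y_i, w_i; \beta) + v_i + u_i,
\qquad v_i \sim N(0, \sigma_v^2), \quad u_i \ge 0
```

Here C_i is operator i's cost, y_i its output, and w_i its input prices; a larger estimated u_i indicates greater cost inefficiency relative to the frontier.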
Abstract:
The possibilities and expansion of Web 2.0 have opened up a world of opportunity in online learning. In spite of the integration of these tools in education, major changes are still required in the educational design of instructional processes. This paper presents an educational experience conducted by the Open University of Catalonia using the social network Facebook, for the purpose of testing a learning model based on participation and collaboration among users and on the use of open educational resources.
- The aim of the experience is to test an Open Social Learning (OSL) model, understood as a virtual learning environment open to the Internet community, based on the use of open resources and on a methodology focused on the participation and collaboration of users in the construction of knowledge.
- The topic chosen for this experience on Facebook was "2.0 Journeys: online tools and resources". The objective of this 5-week course was to provide students with resources for managing the various textual, photographic, audiovisual and multimedia materials resulting from a journey.
- The most important changes in the design and development of a course based on OSL concern the role of the teacher, the role of the student, the type of content and the methodology:
- The teacher mixes with the participants, guiding them and offering the benefit of his or her experience and knowledge.
- Students learn through their participation and collaboration with a mixed group of users.
- The content is open and editable under different types of license that specify the level of accessibility.
- The methodology of the course was based on the creation of a learning community able to self-manage its learning process. For this a facilitator was needed, and a central activity was established for people to participate and contribute in the community.
- We used an ethnographic methodology as well as student questionnaires in order to obtain results regarding the quality of this type of learning experience.
- Some of the data obtained raised questions to consider in future designs of educational situations based on OSL:
- Difficulties in breaking the facilitator-centred structure
- The change in the time required to adapt to the system and to achieve the objectives
- Lack of commitment to free courses
- The tendency to return to traditional ways of learning
- Accreditation
- This experience has taught all of us that education can happen at any time and in any place, but not in any way.