21 results for Mobile operating system
in Aston University Research Archive
Abstract:
Groupe Spécial Mobile (GSM) has been developed as the pan-European second generation of digital mobile systems. GSM operates in the 900 MHz frequency band and employs digital technology instead of the analogue technology of its predecessors. Digital technology enables the GSM system to operate in much smaller zones than the analogue systems. The GSM system will offer greater roaming facilities to its subscribers, extended throughout the countries that have installed the system, and could be seen as a further enhancement to European integration. GSM has adopted a contention-based protocol for multipoint-to-point transmission. In particular, the slotted-ALOHA medium access protocol is used to coordinate the transmission of the channel request messages between the scattered mobile stations. Collisions still occur when more than one mobile station holding the same random reference number attempts to transmit in the same time-slot. In this research, a modified version of this protocol has been developed in order to reduce the number of collisions and hence increase the random access channel throughput compared to the existing protocol. The performance evaluation of the protocol has been carried out using simulation methods. Due to the growing demand for mobile radio telephony as well as for data services, optimal usage of the scarcely available radio spectrum is becoming increasingly important. In this research, a protocol has been developed whereby the number of transmitted information packets over the GSM system is increased without any additional increase of the allocated radio spectrum. Simulation results are presented to show the improvements achieved by the proposed protocol. Cellular mobile radio networks commonly respond to an increase in service demand by using smaller coverage areas. As a result, the volume of signalling exchanges increases.
In this research, a proposal for interconnecting the various entities of the mobile radio network over future broadband networks based on the IEEE 802.6 Metropolitan Area Network (MAN) is outlined. Simulation results are presented to show the benefits achieved by interconnecting these entities over the broadband networks.
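The slotted-ALOHA collision behaviour at the heart of the random access channel can be illustrated with a small Monte-Carlo sketch. This is not the thesis's simulator: the function and its parameters are illustrative, and each mobile station is assumed to transmit independently in any slot with a fixed probability.

```python
import random

def slotted_aloha_throughput(n_stations, p_tx, n_slots, seed=0):
    """Estimate slotted-ALOHA throughput: a slot succeeds only when
    exactly one station transmits in it; two or more collide."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        # Count how many stations chose to transmit in this slot.
        transmitters = sum(1 for _ in range(n_stations) if rng.random() < p_tx)
        if transmitters == 1:
            successes += 1
    return successes / n_slots
```

With an offered load of G = n_stations * p_tx near 1, the classical analysis predicts a peak throughput of about 1/e ≈ 0.37 successful slots per slot, which is why reducing collisions directly raises random access channel throughput.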
Abstract:
Communication and portability are the two main problems facing the user. An operating system, called PORTOS, was developed to solve these problems for users on dedicated microcomputer systems. Firstly, an interface language was defined, according to the anticipated requirements and behaviour of its potential users. Secondly, the PORTOS operating system was developed as a processor for this language. The system is currently running on two minicomputers of highly different architectures. PORTOS achieves its portability through its high-level design, and implementation in CORAL66. The interface language consists of a set of user commands and system responses. Although only a subset has been implemented, owing to time and manpower constraints, promising results were achieved regarding the usability of the language, and its portability.
Abstract:
Emerging markets have recently experienced a dramatic increase in the number of mobile phones per capita. M-government has hence been heralded as an opportunity to leap-frog the technology cycle and provide cheaper and more inclusive services to all. This chapter explores, within an emerging-market context, the legitimacy and resistance facing civil servants at the engagement stage of m-government activities and the direct implications for resource management. Thirty in-depth interviews with key ICT civil servants in local organizations in Turkey are drawn upon. The findings show that three types of resources are perceived as central, namely: (i) diffusion of information management, (ii) operating system resource management and (iii) human resource management. The main evidence suggests that legitimacy for each resource management area, at the local level, is an ongoing struggle in which all groups deploy multiple forms of resistance. Overall, greater attention in the resource management strategy for m-government applications needs to be devoted to enablers such as civil servants rather than to the final consumers or citizens.
Abstract:
With the advent of distributed computer systems with a largely transparent user interface, new questions have arisen regarding the management of such an environment by an operating system. One fertile area of research is that of load balancing, which attempts to improve system performance by redistributing the workload submitted to the system by the users. Early work in this field concentrated on static placement of computational objects to improve performance, given prior knowledge of process behaviour. More recently this has evolved into studying dynamic load balancing with process migration, thus allowing the system to adapt to varying loads. In this thesis, we describe a simulated system which facilitates experimentation with various load balancing algorithms. The system runs under UNIX and provides functions for user processes to communicate through software ports; processes reside on simulated homogeneous processors, connected by a user-specified topology, and a mechanism is included to allow migration of a process from one processor to another. We present the results of a study of adaptive load balancing algorithms, conducted using the aforementioned simulated system, under varying conditions; these results show the relative merits of different approaches to the load balancing problem, and we analyse the trade-offs between them. Following from this study, we present further novel modifications to suggested algorithms, and show their effects on system performance.
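A minimal sketch of the kind of adaptive policy such a simulator can exercise is given below. It is a hypothetical sender-initiated threshold policy, not one of the thesis's algorithms: a processor whose queue exceeds a threshold migrates one process to the least-loaded processor, provided the move actually reduces imbalance.

```python
def balance_round(loads, threshold=2):
    """One balancing round: each over-loaded processor (queue length
    above `threshold`) migrates a single process to the processor that
    is currently least loaded, if that genuinely improves the balance."""
    loads = list(loads)
    for i in range(len(loads)):
        if loads[i] > threshold:
            target = min(range(len(loads)), key=lambda j: loads[j])
            if loads[target] < loads[i] - 1:  # migration must help
                loads[i] -= 1
                loads[target] += 1
    return loads
```

Repeating such rounds as load arrives is what lets a dynamic policy adapt, at the cost of the migration traffic the simulator is designed to measure.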
Abstract:
The computer systems of today are characterised by data and program control that are distributed functionally and geographically across a network. A major issue of concern in this environment is the operating system activity of resource management for the different processors in the network. To ensure equity in load distribution and improved system performance, load balancing is often undertaken. The research conducted in this field so far has been primarily concerned with a small set of algorithms operating on tightly-coupled distributed systems. More recent studies have investigated the performance of such algorithms in loosely-coupled architectures, but using a small set of processors. This thesis describes a simulation model developed to study the behaviour and general performance characteristics of a range of dynamic load balancing algorithms. Further, the scalability of these algorithms is discussed and a range of regionalised load balancing algorithms developed. In particular, we examine the impact of network diameter and delay on the performance of such algorithms across a range of system workloads. The results produced suggest that the performance of simple dynamic policies is scalable, but lacks the load stability of more complex global average algorithms.
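To contrast simple threshold-style policies with the global average family mentioned above, the sketch below (hypothetical, with illustrative names) pairs processors whose load deviates from the system-wide mean; in a real network each such pairing costs signalling traffic that grows with network diameter and delay.

```python
def global_average_moves(loads, tolerance=0.5):
    """Pair over-loaded with under-loaded processors relative to the
    global average load, returning (source, destination) migrations."""
    avg = sum(loads) / len(loads)
    senders = [i for i, load in enumerate(loads) if load > avg + tolerance]
    receivers = [i for i, load in enumerate(loads) if load < avg - tolerance]
    # Each sender ships one process to one receiver this round.
    return list(zip(senders, receivers))
```

Because every processor's decision depends on the global mean, the policy is more stable but harder to scale, which matches the trade-off the thesis reports.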
Abstract:
A small lathe has been modified to work under microprocessor control to enhance the facilities which the lathe offers and provide a wider operating range with relevant economic gains. The result of these modifications is better operating system characteristics. A system of electronic circuits has been developed, utilising the latest technology, to replace the pegboard with its associated obsolete electrical components. Software for the system includes control programmes for the implementation of the original pegboard operation, and several sample machine code programmes are included, covering a wide spectrum of applications, including diagnostic testing of the control system. It is concluded that it is possible to carry out a low-cost retrofit on existing machine tools to enhance their range of capabilities.
Abstract:
This study is concerned with several proposals concerning multiprocessor systems and with the various possible methods of evaluating such proposals. After a discussion of the advantages and disadvantages of several performance evaluation tools, the author concludes that simulation is the only tool powerful enough to develop a model which would be of practical use in the design, comparison and extension of systems. The main aims of the simulation package developed as part of this study are cost effectiveness, ease of use and generality. The methodology on which the simulation package is based is described in detail. The fundamental principles are that model design should reflect actual systems design, that measuring procedures should be carried out alongside design, that models should be well documented and easily adaptable, and that models should be dynamic. The simulation package itself is modular, and in this way reflects current design trends. This approach also aids documentation and ensures that the model is easily adaptable. It contains a skeleton structure and a library of segments which can be added to or directly swapped with segments of the skeleton structure, to form a model which fits a user's requirements. The study also contains the results of some experimental work carried out using the model, the first part of which tests the model's capabilities by simulating a large operating system, the ICL George 3 system; the second part deals with general questions and some of the many proposals concerning multiprocessor systems.
Abstract:
Many planning and control tools, especially network analysis, have been developed in the last four decades. The majority of them were created in military organizations to solve the problem of planning and controlling research and development projects. The original version of the network model (i.e. C.P.M./PERT) was transplanted to the construction industry without consideration of the special nature and environment of construction projects. It suited the purpose of setting up targets and defining objectives, but it failed to satisfy the requirement of detailed planning and control at the site level. Several analytical and heuristic rule-based methods were designed and combined with the structure of C.P.M. to eliminate its deficiencies. None of them provides a complete solution to the problem of resource, time and cost control. VERT was designed to deal with new ventures. It is suitable for project evaluation at the development stage. CYCLONE, on the other hand, is concerned with the design and micro-analysis of the production process. This work introduces an extensive critical review of the available planning techniques and addresses the problem of planning for site operation and control. Based on an outline of the nature of site control, this research developed a simulation-based network model which combines part of the logic of both VERT and CYCLONE. Several new nodes were designed to model the availability and flow of resources and the overhead and operating cost, together with special nodes for evaluating time and cost. A large software package was written to handle the input, the simulation process and the output of the model. This package is designed to be used on any microcomputer using the MS-DOS operating system. Data from real-life projects were used to demonstrate the capability of the technique.
Finally, a set of conclusions is drawn regarding the features and limitations of the proposed model, and recommendations for future work are outlined at the end of this thesis.
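As background to the network techniques reviewed, the C.P.M. forward pass that yields each activity's earliest finish time (and, at the final activity, the project duration) can be sketched as follows. The example network is invented for illustration; the thesis's simulation nodes also carry resource and cost logic that is not reproduced here.

```python
def forward_pass(durations, predecessors):
    """C.P.M. forward pass: earliest finish of each activity, given its
    duration and the activities that must complete before it starts."""
    earliest_finish = {}

    def finish(activity):
        if activity not in earliest_finish:
            # An activity starts when its latest predecessor finishes.
            start = max((finish(p) for p in predecessors.get(activity, [])),
                        default=0)
            earliest_finish[activity] = start + durations[activity]
        return earliest_finish[activity]

    for activity in durations:
        finish(activity)
    return earliest_finish

# A(3) precedes B(2) and C(5); both precede D(4).
ef = forward_pass({'A': 3, 'B': 2, 'C': 5, 'D': 4},
                  {'B': ['A'], 'C': ['A'], 'D': ['B', 'C']})
```

Here the critical path runs A-C-D, so the project duration ef['D'] is 3 + 5 + 4 = 12; it is exactly this deterministic logic that VERT and CYCLONE extend with stochastic and resource-flow behaviour.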
Abstract:
To exploit the popularity of TCP, still the dominant protocol of choice for transporting data reliably across the heterogeneous Internet, this thesis explores end-to-end performance issues and behaviours of TCP senders when transferring data to wireless end-users. The theme throughout is on end-users located specifically within 802.11 WLANs at the edges of the Internet, a largely untapped area of work. To serve the interests of researchers wanting to study the performance of TCP accurately over heterogeneous conditions, this thesis proposes a flexible wired-to-wireless experimental testbed that better reflects conditions in the real world. To exploit the transparent functionalities between TCP in the wired domain and the IEEE 802.11 WLAN protocols, this thesis proposes a more accurate methodology for gauging the transmission and error characteristics of real-world 802.11 WLANs, and aims to correlate any findings with the functionality of fixed TCP senders. To exploit the prevalence of Linux as the operating system of many of the Internet's data servers, this thesis studies and evaluates various sender-side TCP congestion control implementations within the recent Linux v2.6. A selection of the implementations are put under systematic testing using real-world wired-to-wireless conditions in order to screen and present viable candidates for further development and usage in the modern-day heterogeneous Internet. Overall, this thesis comprises a set of systematic evaluations of TCP senders over 802.11 WLANs, incorporating measurements in the form of simulations, emulations, and the use of a real-world-like experimental testbed. The goal of the work is to ensure that all aspects concerned are comprehensively investigated in order to establish rules that can help to decide under which circumstances the deployment of TCP is optimal, i.e. a set of paradigms for advancing the state-of-the-art in data transport across the Internet.
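All of the sender-side implementations studied are variations on TCP's additive-increase/multiplicative-decrease (AIMD) behaviour. The sketch below is a deliberately bare-bones abstraction (no slow start, fast recovery or Linux module specifics), useful only to show why wireless losses that are not congestion losses hurt a TCP sender:

```python
def aimd_trace(events, cwnd=1.0):
    """Reno-style AIMD sketch: each 'ack' event stands for one RTT of
    congestion avoidance (cwnd grows by one segment); each 'loss'
    halves cwnd (multiplicative decrease), never below one segment."""
    trace = []
    for event in events:
        if event == 'ack':
            cwnd += 1.0
        elif event == 'loss':
            cwnd = max(1.0, cwnd / 2.0)
        trace.append(cwnd)
    return trace
```

A random 802.11 frame loss that reaches the sender as a 'loss' event halves the window just as a genuine congestion drop would, which is the core problem the evaluated congestion control implementations try to mitigate.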
Abstract:
A view has emerged within manufacturing and service organizations that the operations management function can hold the key to achieving competitive edge. This has recently been emphasized by the demands for greater variety and higher quality, which must be set against a background of increasing cost of resources. As nations' trade barriers are progressively lowered and removed, so producers of goods and service products are becoming more exposed to competition that may come from virtually anywhere around the world. To simply survive in this climate many organizations have found it necessary to improve their manufacturing or service delivery systems. To become real "winners" some have adopted a strategic approach to operations and completely reviewed and restructured their approach to production system design and operations planning and control. The articles in this issue of the International Journal of Operations & Production Management have been selected to illustrate current thinking and practice in relation to this situation. They are all based on papers presented to the Sixth International Conference of the Operations Management Association-UK, which was held at Aston University in June 1991. The theme of the conference was "Achieving Competitive Edge" and authors from 15 countries around the world contributed more than 80 presented papers. Within this special issue five topic areas are addressed, with two articles relating to each. The topics are: strategic management of operations; managing change; production system design; production control; and service operations. Under strategic management of operations, De Toni, Filippini and Forza propose a conceptual model which considers the performance of an operating system as a source of competitive advantage through the "operation value chain" of design, purchasing, production and distribution. Their model is set within the context of the tendency towards globalization.
New's article is somewhat in contrast to the more fashionable literature on operations strategy. It challenges the validity of the current idea of "world-class manufacturing" and, instead, urges a reconsideration of the view that strategic "trade-offs" are necessary to achieve a competitive edge. The importance of managing change has for some time been recognized within the field of organization studies, but its relevance in operations management is now being realized. Berger considers the use of "organization design", "sociotechnical systems" and change strategies and contrasts these with the more recent idea of the "dialogue perspective". A tentative model is suggested to improve the analysis of different strategies in a situation-specific context. Neely and Wilson look at an essential prerequisite if change is to be effected in an efficient way, namely product goal congruence. Using a case study as its basis, their article suggests a method of measuring goal congruence as a means of identifying the extent to which key performance criteria relating to quality, time, cost and flexibility are understood within an organization. The two articles on production system design represent important contributions to the debate on flexible production organization and autonomous group working. Rosander uses the results from cases to test the applicability of "flow groups" as the optimal way of organizing batch production. Schuring also examines cases to determine the reasons behind the adoption of "autonomous work groups" in The Netherlands and Sweden. Both these contributions help to provide a greater understanding of the production philosophies which have emerged as alternatives to more conventional systems for intermittent and continuous production. The production control articles are both concerned with the concepts of "push" and "pull", which are the two broad approaches to material planning and control.
Hirakawa, Hoshino and Katayama have developed a hybrid model, suitable for multistage manufacturing processes, which combines the benefits of both systems. They discuss the theoretical arguments in support of the system and illustrate its performance with numerical studies. Slack and Correa's concern is with the flexibility characteristics of push and pull material planning and control systems. They use the case of two plants using the different systems to compare their performance within a number of predefined flexibility types. The two final contributions on service operations are complementary. The article by Voss really relates to manufacturing, but examines the application of service industry concepts within the UK manufacturing sector. His studies in a number of companies support the idea of the "service factory" and offer a new perspective for manufacturing. Harvey's contribution, by contrast, is concerned with the application of operations management principles in the delivery of professional services. Using the case of social-service provision in Canada, it demonstrates how concepts such as "just-in-time" can be used to improve service performance. The ten articles in this special issue of the journal address a wide range of issues and situations. Their common aspect is that, together, they demonstrate the extent to which competitiveness can be improved via the application of operations management concepts and techniques.
Abstract:
Background: Remote, non-invasive and objective tests that can be used to support expert diagnosis of Parkinson's disease (PD) are lacking. Methods: Participants underwent baseline in-clinic assessments, including the Unified Parkinson's Disease Rating Scale (UPDRS), and were provided with smartphones running the Android operating system that contained an application assessing voice, posture, gait, finger tapping, and response time. Participants then took the smartphones home to perform the five tasks four times a day for a month. Once a week participants had a remote (telemedicine) visit with a Parkinson's disease specialist in which a modified UPDRS (excluding assessments of rigidity and balance) was performed. Using statistical analyses of the five tasks recorded using the smartphone from 10 individuals with PD and 10 controls, we sought to: (1) discriminate whether the participant had PD and (2) predict the modified motor portion of the UPDRS. Results: Twenty participants performed an average of 2.7 tests per day (68.9% adherence) for the study duration (average of 34.4 days) in a home and community setting. The results of the five tasks differed between those with Parkinson's disease and those without. In discriminating participants with PD from controls, the mean sensitivity was 96.2% (SD 2%) and the mean specificity was 96.9% (SD 1.9%). The mean error in predicting the modified motor component of the UPDRS (range 11-34) was 1.26 UPDRS points (SD 0.16). Conclusion: Measuring PD symptoms via a smartphone is feasible and has potential value as a diagnostic support tool.
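The sensitivity and specificity reported above are standard confusion-matrix quantities; a minimal sketch of how they are computed (illustrative code, not the study's analysis pipeline, with 1 marking a participant with PD):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN): fraction of PD participants
    correctly identified. Specificity = TN / (TN + FP): fraction of
    controls correctly identified."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

Reporting both quantities matters here because with only 10 cases and 10 controls a classifier could score well on one at the expense of the other.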
Abstract:
This research is focused on the optimisation of resource utilisation in wireless mobile networks, taking into consideration the users' experienced quality of video streaming services. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated. These include video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems. A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work for the evaluation of the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing on the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment have been used in the validation tests. It has been shown that Pause Intensity is closely correlated with subjective quality measurement in terms of the Mean Opinion Score, and that this correlation property is content independent. Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for the given user requirements, communication system specifications and network performance. This approach concerns both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with the consideration of the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness.
The 3GPP Long Term Evolution (LTE) system is used as the main application environment where the proposed research framework is examined and the results are compared with existing scheduling methods on the achievable fairness, efficiency and correlation. Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritization of users by considering their perceived quality for the services received. Meanwhile, a trade-off between fairness and efficiency is maintained through an online adjustment of the scheduler's parameters. Furthermore, Pause Intensity is applied to act as a regulator to realise the rate adaptation function during the end user's playback of the adaptive streaming service. The adaptive rates under various channel conditions and the shape of the QoE distribution amongst the users for different scheduling policies have been demonstrated in the context of LTE. Finally, the work on interworking between the mobile communication system at the macro-cell level and the different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading mechanism of the user's data (e.g. video traffic) while the new rate distribution algorithm reshapes the network capacity across the macro-cell. The scheduling policy derived is used to regulate the performance of the resource allocation across the fair-efficient spectrum. The associated offloading mechanism can properly control the number of users within the coverage of the macro-cell base station and each of the WiFi access points involved. The performance of non-seamless and user-controlled mobile traffic offloading (through mobile WiFi devices) has been evaluated and compared with that of the standard operator-controlled WiFi hotspots.
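For illustration only, one plausible formulation of a pause-based continuity metric multiplies the fraction of session time spent paused by the pause frequency, so that both longer and more frequent pauses raise the score. This is an assumption made here for exposition; it is not necessarily the exact Pause Intensity definition derived in the thesis.

```python
def pause_intensity(pause_durations, playback_time):
    """Hypothetical pause metric: (time spent paused / total session
    time) x (number of pauses / total session time). Returns 0.0 for
    uninterrupted playback."""
    total_pause = sum(pause_durations)
    session = playback_time + total_pause
    duration_ratio = total_pause / session   # pause-duration effect
    frequency = len(pause_durations) / session  # pause-frequency effect
    return duration_ratio * frequency
```

Any metric of this shape can act as a scheduler's QoE signal: the allocator raises the rate of users whose score drifts above a target level.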
Abstract:
Mobile WiFi devices are becoming increasingly popular in non-seamless and user-controlled mobile traffic offloading alongside the standard WiFi hotspots. Unlike the operator-controlled hotspots, a mobile WiFi device relies on the capacity of the macro-cell for the data rate allocated to it. This type of device can help offload data traffic from the macro-cell base station and serve end users within a closer range, but will change the pattern of resource distributions operated by the base station. We propose a resource allocation scheme that aims to optimize user quality of experience (QoE) when accessing video services in an environment where traffic offloading takes place through interworking between a mobile communication system and low-range wireless LANs. In this scheme, a rate redistribution algorithm is derived to perform scheduling which is controlled by a no-reference quality assessment metric in order to achieve the desired trade-offs between efficiency and fairness. We show the performance of this algorithm in terms of the distribution of the allocated data rates throughout the macro-cell investigated and the service coverage offered by the WiFi access point.
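Trade-offs between efficiency and fairness in rate allocation are commonly quantified with Jain's fairness index; the source does not state which fairness measure it adopts, so the sketch below is a generic illustration rather than the proposed scheme's metric:

```python
def jain_fairness(rates):
    """Jain's fairness index: equals 1.0 when every user receives the
    same rate and tends towards 1/n as one user takes everything."""
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(r * r for r in rates))
```

An allocator can then trade total throughput (efficiency) against this index (fairness) when redistributing rates between users of the macro-cell and of the mobile WiFi device.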
Abstract:
This study investigates the degree of global standardisation of a corporate visual identity system (CVIS) in multinational operations. Special emphasis is accorded to UK companies operating in Malaysia. In particular, the study seeks to reveal the reasons for developing a standardised CVIS; the behavioural issues associated with CVIS; and the determinants of selecting a graphic design agency. The findings of the research revealed that multinational corporations in an increasingly corporate environment adopted a standardised CVIS for several reasons, including aiding the sale of products and services, creating an attractive environment for hiring employees, and increasing the company's stature and presence. Further findings show that the interest in global identity was stimulated by global restructuring, merger or acquisition. The above trends help explain why increased focus has been accorded to CVIS over the past five years by many UK companies operating in Malaysia. Additional findings reveal that both UK design agencies and in-house design departments are used in the development of the firms' CVIS.
Abstract:
The objective of this work has been to investigate the principle of combined bioreaction and separation in a simulated counter-current chromatographic bioreactor-separator (SCCR-S) system. The SCCR-S system consisted of twelve 5.4cm i.d. x 75cm long columns packed with calcium-charged cross-linked polystyrene resin. Three bioreactions were studied, namely the saccharification of modified starch to maltose and dextrin using the enzyme maltogenase, the hydrolysis of lactose to galactose and glucose in the presence of the enzyme lactase, and the biosynthesis of dextran from sucrose using the enzyme dextransucrase. Combined bioreaction and separation has been successfully carried out in the SCCR-S system for the saccharification of modified starch to maltose and dextrin. The effects of the operating parameters (switch time, eluent flowrate, feed concentration and enzyme activity) on the performance of the SCCR-S system were investigated. By using an eluent of dilute enzyme solution, starch conversions of up to 60% were achieved using lower amounts of enzyme than the theoretical amount required by a conventional bioreactor to produce the same amount of maltose over the same time period. Comparing the SCCR-S system to a continuous annular chromatograph (CRAC) for the saccharification of modified starch showed that the SCCR-S system required only 34.6-47.3% of the amount of enzyme required by the CRAC. The SCCR-S system was operated in the batch and continuous modes as a bioreactor-separator for the hydrolysis of lactose to galactose and glucose. By operating the system in the continuous mode, the operating parameters were further investigated. During these experiments the eluent was deionised water and the enzyme was introduced into the system through the same port as the feed.
The galactose produced was retarded and moved with the stationary phase to be purged as the galactose rich product (GalRP), while the glucose moved with the mobile phase and was collected as the glucose rich product (GRP). By operating at up to 30% w/v lactose feed concentrations, complete conversions were achieved using only 48% of the theoretical amount of enzyme required by a conventional bioreactor to hydrolyse the same amount of glucose over the same time period. The main operating parameters affecting the performance of the SCCR-S system operating in the batch mode were investigated and the results compared to those of the continuous operation of the SCCR-S system. During the biosynthesis of dextran in the SCCR-S system, a method of on-line regeneration of the resin was required to operate the system continuously. Complete conversion was achieved at sucrose feed concentrations of 5% w/v, with fructose rich products (FRP) of up to 100% obtained. The dextran rich products were contaminated by small amounts of glucose and levan formed during the bioreaction. Mathematical modelling and computer simulation of the SCCR-S system operating in the continuous mode for the hydrolysis of lactose has been carried out.