610 results for scalability
Abstract:
An implementation of a real-time 3D videoconferencing system using currently available technology is presented. The approach is based on side-by-side spatial compression of the stereoscopic images. The encoder and the decoder have been implemented on a standard personal computer, and a conventional 3D-compatible TV has been used to present the frames. Moreover, users without 3D technology can still use the system, because a 2D compatibility mode has been implemented in the decoder. The performance results show that a conventional computer can be used for encoding/decoding the audio and video streams, and that the transmission delay is lower than 200 ms.
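As a rough sketch of the side-by-side packing described above: each view is halved horizontally and the pair is concatenated into a single frame that an ordinary 2D codec can carry unchanged. The function names and the naive column decimation below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pack_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Pack a stereo pair into one frame at the original resolution.

    Each view is halved horizontally, so the packed frame can be fed
    to an ordinary 2D encoder (e.g., H.264) unchanged.
    """
    half_left = left[:, ::2]    # naive column decimation; a production
    half_right = right[:, ::2]  # encoder would low-pass filter first
    return np.concatenate([half_left, half_right], axis=1)

def unpack_2d_compatible(packed: np.ndarray) -> np.ndarray:
    """Decoder fallback for 2D-only displays: keep only the left view."""
    w = packed.shape[1]
    left_half = packed[:, : w // 2]
    return np.repeat(left_half, 2, axis=1)  # stretch back to full width

frame_l = np.zeros((720, 1280, 3), dtype=np.uint8)
frame_r = np.ones((720, 1280, 3), dtype=np.uint8)
packed = pack_side_by_side(frame_l, frame_r)
print(packed.shape, unpack_2d_compatible(packed).shape)  # (720, 1280, 3) twice
```

The 2D fallback simply discards the right half of the packed frame, which is why side-by-side packing gives backward compatibility almost for free.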
Abstract:
The paper provides evidence that spatial indexing structures offer faster resolution of Formal Concept Analysis queries than B-Tree/Hash methods. We show that many Formal Concept Analysis operations (computing contingent and extent sizes, as well as listing the matching objects) enjoy improved performance with the use of spatial indexing structures such as the RD-Tree. Speed improvements of up to eighty times were observed, depending on the data and the query. The motivation for our study is the application of Formal Concept Analysis to Semantic File Systems, in which millions of formal objects must be handled. We have found that spatial indexing also provides an effective technique for more general-purpose applications requiring scalability in Formal Concept Analysis systems. The coverage and benchmarking are presented with general applications in mind.
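To make the queried operations concrete, the toy sketch below expresses an extent query over a formal context as plain set operations; the file-system attributes are invented. The linear scan in `extent` is exactly the work an RD-Tree is designed to prune, since it indexes the attribute sets so that superset queries can skip whole groups of non-matching objects.

```python
from typing import Dict, FrozenSet, Set

# Toy formal context for a semantic file system: object -> attribute set.
context: Dict[str, FrozenSet[str]] = {
    "report.pdf":  frozenset({"pdf", "2023", "finance"}),
    "notes.txt":   frozenset({"txt", "2023"}),
    "budget.xlsx": frozenset({"xlsx", "2023", "finance"}),
}

def extent(intent: Set[str]) -> Set[str]:
    """Objects whose attribute sets contain every attribute in the intent.

    This linear scan over all objects is what an RD-Tree avoids: it
    indexes the attribute sets so a superset query can discard whole
    subtrees of non-matching objects at once.
    """
    return {obj for obj, attrs in context.items() if intent <= attrs}

print(extent({"finance", "2023"}))   # {'report.pdf', 'budget.xlsx'}
print(len(extent({"2023"})))         # extent size: 3
```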
Abstract:
The paper describes two new transport layer (TCP) options and an expanded transport layer queuing strategy that facilitate three functions fundamental to a dispatching-based clustered service. The first transport layer option facilitates the use of client wait time data within the service request processing of the cluster. The second facilitates the redirection of service requests by the cluster dispatcher to the cluster processing member. The expanded transport layer service request queuing strategy facilitates the trust-based filtering of incoming service requests so that graceful degradation of service delivery can be achieved during periods of overload, most dramatically evidenced by distributed denial of service attacks against the clustered service. We describe how these new options and queues have been implemented and successfully tested within the transport layer of the Linux kernel.
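The abstract does not specify the wire format of the two options, but standard TCP options share a type-length-value layout (1-byte kind, 1-byte total length, payload). The sketch below encodes a hypothetical client-wait-time payload under an experimental option kind (253, reserved for experimentation by RFC 4727); the payload layout is an assumption for illustration only.

```python
import struct

# TCP options use a TLV layout: 1-byte kind, 1-byte total length, payload.
# Kind 253 is reserved for experimental use (RFC 4727); the payload here
# (a 32-bit client wait time in milliseconds) is purely illustrative.
EXPERIMENTAL_KIND = 253

def encode_wait_time_option(wait_ms: int) -> bytes:
    payload = struct.pack("!I", wait_ms)  # network byte order
    return struct.pack("!BB", EXPERIMENTAL_KIND, 2 + len(payload)) + payload

def decode_option(blob: bytes) -> tuple[int, bytes]:
    kind, length = struct.unpack("!BB", blob[:2])
    return kind, blob[2:length]

opt = encode_wait_time_option(150)
print(decode_option(opt))   # (253, b'\x00\x00\x00\x96')
```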
Abstract:
Most previous studies of university spinouts (USOs) have focused on what determines their formation from the perspectives of the entrepreneurs or of their parent universities. However, few studies have investigated how these entrepreneurial businesses actually grow and how their business models evolve in the process. This paper examines the evolution of USOs' business models over their different development phases. Using empirical evidence gathered from three comprehensive case studies, we explore how USOs' business models evolve over time, and the implications for the financial sustainability and operational scalability of these ventures. This paper extends existing research on the development of USOs, and highlights three themes for future research.
Abstract:
Background aims: The selection of medium and associated reagents for human mesenchymal stromal cell (hMSC) culture forms an integral part of manufacturing process development and must be suitable for multiple process scales and expansion technologies. Methods: In this work, we have expanded BM-hMSCs in fetal bovine serum (FBS)- and human platelet lysate (HPL)-containing media in both a monolayer and a suspension-based microcarrier process. Results: The introduction of HPL into the monolayer process increased the BM-hMSC growth rate at the first experimental passage by 0.049/day and 0.127/day for the two BM-hMSC donors compared with the FBS-based monolayer process. This increase in growth rate in HPL-containing medium was associated with an increase in inter-donor consistency, with an inter-donor range of 0.406 cumulative population doublings after 18 days compared with 2.013 in FBS-containing medium. Identity and quality characteristics of the BM-hMSCs are also comparable between conditions in terms of colony-forming potential, osteogenic potential and expression of key genes during monolayer culture and post-harvest from microcarrier expansion. BM-hMSCs cultured on microcarriers in HPL-containing medium demonstrated a reduction in the initial lag phase for both BM-hMSC donors and an increased BM-hMSC yield after 6 days of culture, reaching 1.20 ± 0.17 × 10⁵ and 1.02 ± 0.005 × 10⁵ cells/mL compared with 0.79 ± 0.05 × 10⁵ and 0.36 ± 0.04 × 10⁵ cells/mL in FBS-containing medium. Conclusions: This study has demonstrated that HPL-containing medium, compared with FBS-containing medium, delivers increased growth and comparability across two BM-hMSC donors between monolayer and microcarrier culture, which will have key implications for process transfer during scale-up.
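For readers unfamiliar with the growth metrics quoted above, the specific growth rate (per day) and cumulative population doublings both follow directly from start and end cell counts. The sketch below shows the standard formulas with invented numbers, not the study's data.

```python
import math

def specific_growth_rate(n0: float, nt: float, days: float) -> float:
    """Specific growth rate (per day) from start/end cell counts."""
    return math.log(nt / n0) / days

def population_doublings(n0: float, nt: float) -> float:
    """Cumulative population doublings: how many times the culture doubled."""
    return math.log2(nt / n0)

# Illustrative numbers only (not taken from the study): a culture growing
# from 2.0e5 to 1.02e7 cells over 18 days.
print(round(specific_growth_rate(2.0e5, 1.02e7, 18), 3))  # ~0.218 per day
print(round(population_doublings(2.0e5, 1.02e7), 3))      # ~5.672 doublings
```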
Abstract:
Ordinary desktop computers continue to obtain ever more resources – increased processing power, memory, network speed and bandwidth – yet these resources spend much of their time underutilised. Cycle stealing frameworks harness these resources so they can be used for high-performance computing. Traditionally, cycle stealing systems have used client-server architectures, which place significant limits on their ability to scale and on the range of applications they can support. By applying a fully decentralised network model to cycle stealing, the limits of centralised models can be overcome. Using decentralised networks in this manner presents some difficulties which have not been encountered in their previous uses. Generally, decentralised applications do not require any significant fault tolerance guarantees; high-performance computing, on the other hand, requires very stringent guarantees to ensure correct results are obtained. Unfortunately, mechanisms developed for traditional high-performance computing cannot simply be translated, because they rely on reliable storage, which is not available in the highly dynamic world of P2P computing. As part of this research, a fault tolerance system has been created which provides considerable reliability without the need for persistent storage. As well as increased scalability, fully decentralised networks offer the ability for volunteers to communicate directly. This provides the possibility of supporting applications whose tasks require direct, message-passing style communication. Previous cycle stealing systems have supported only embarrassingly parallel applications and applications with limited forms of communication, so a new programming model has been developed which can support this style of communication within a cycle stealing context. In this thesis I present a fully decentralised cycle stealing framework. The framework addresses the problems of providing a reliable fault tolerance system and supporting direct communication between parallel tasks. The thesis includes a programming model for developing cycle stealing applications with direct inter-process communication and methods for optimising object locality on decentralised networks.
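The abstract does not detail the fault tolerance mechanism itself. One common way to obtain reliability without persistent storage in volunteer computing is replicated execution with majority voting, sketched below purely as an illustration of the general idea, not as the thesis's design.

```python
from collections import Counter
from typing import Callable, TypeVar

T = TypeVar("T")

def run_with_redundancy(task: Callable[[], T], replicas: int = 3) -> T:
    """Run the same task on several volunteers and take the majority result.

    Replicated execution tolerates volunteers that vanish or return bad
    results without relying on any persistent shared storage.
    """
    results = []
    for _ in range(replicas):
        try:
            results.append(task())      # in a real system: dispatch to a peer
        except ConnectionError:
            continue                    # a volunteer disappeared mid-task
    if not results:
        raise RuntimeError("all replicas failed; reschedule the task")
    value, _votes = Counter(results).most_common(1)[0]
    return value

print(run_with_redundancy(lambda: 2 + 2))   # 4
```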
Abstract:
Climate change and human activity are subjecting the environment to unprecedented rates of change. Monitoring these changes is an immense task that demands new levels of automated monitoring and analysis. We propose the use of acoustics as a proxy for the time-consuming auditing of fauna, especially for determining the presence or absence of species. Acoustic monitoring is deceptively simple; seemingly all that is required is a sound recorder. However, there are major challenges if acoustics is to be used for large-scale monitoring of ecosystems. The key issues are scalability and automation. This paper discusses our approach to this important research problem. Our work is being undertaken in collaboration with ecologists interested both in identifying particular species and in general ecosystem health.
Abstract:
The requirement to monitor the rapid pace of environmental change due to global warming and human development is producing large volumes of data and placing much stress on the capacity of ecologists to store, analyse and visualise that data. To date, much of the data has been provided by low-level sensors monitoring soil moisture, dissolved nutrients, light intensity, gas composition and the like. However, a significant part of an ecologist's work is to obtain information about species diversity, distributions and relationships. This task typically requires the physical presence of an ecologist in the field, listening and watching for species of interest. It is an extremely difficult task to automate because of the higher-order difficulties in bandwidth, data management and intelligent analysis if one wishes to emulate the highly trained eyes and ears of an ecologist. This paper is concerned with just one part of the bigger challenge of environmental monitoring – the acquisition and analysis of acoustic recordings of the environment. Our intention is to provide helpful tools to ecologists – tools that apply information and computational technologies to all aspects of the acoustic environment. The on-line system which we are building in conjunction with ecologists offers an integrated approach to recording, data management and analysis. The ecologists we work with have different requirements, and therefore we have adopted a toolbox approach; that is, we offer a number of different web services that can be concatenated according to need. In particular, one group of ecologists is concerned with identifying the presence or absence of species and their distributions in time and space. Another group, motivated by legislative requirements for measuring habitat condition, is interested in summary indices of environmental health. In both cases, the key issues are scalability and automation.
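As one concrete example of the summary indices mentioned above, spectral entropy is a widely used acoustic index: a recording dominated by a single tonal caller scores low, while broadband soundscapes score high. The sketch below is illustrative only and is not one of the system's web services.

```python
import numpy as np

def spectral_entropy_index(signal: np.ndarray) -> float:
    """Normalised spectral entropy in [0, 1]: near 0 for a single tonal
    caller, near 1 for energy spread across the whole spectrum."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    p = spectrum / spectrum.sum()
    p = p[p > 0]                                     # avoid log(0)
    return float(-(p * np.log2(p)).sum() / np.log2(spectrum.size))

# Illustrative use on one second of synthetic audio at 22.05 kHz.
rate = 22050
t = np.linspace(0, 1, rate, endpoint=False)
tonal = np.sin(2 * np.pi * 3000 * t)                 # bird-like pure tone
noise = np.random.default_rng(0).normal(size=rate)   # broadband noise
print(spectral_entropy_index(tonal))                 # close to 0
print(spectral_entropy_index(noise))                 # close to 1
```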
Abstract:
In the field of the semantic grid, QoS-based Web service scheduling for workflow optimization is an important problem. However, in a semantic- and service-rich environment like the semantic grid, context constraints on Web services commonly emerge, so scheduling must consider not only the quality properties of Web services but also the inter-service dependencies formed by those context constraints. In this paper, we present a repair genetic algorithm, namely the minimal-conflict hill-climbing repair genetic algorithm, to address scheduling optimization problems in workflow applications in the presence of domain constraints and inter-service dependencies. Experimental results demonstrate the scalability and effectiveness of the genetic algorithm.
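A minimal sketch of the repair idea, under an invented representation and constraints: each chromosome assigns one candidate service per workflow task, and after recombination a minimal-conflict hill-climbing pass greedily reassigns genes until no dependency constraint is violated. This illustrates the general technique, not the paper's algorithm.

```python
import random

# Toy workflow of 4 tasks; each gene selects one candidate service per task.
# QoS scores and the dependency constraint are invented for illustration.
CANDIDATES = {t: [f"s{t}{i}" for i in range(3)] for t in range(4)}
QOS = {s: random.uniform(0.5, 1.0) for c in CANDIDATES.values() for s in c}

def conflicts(chrom):
    # Hypothetical inter-service dependency: s00 for task 0 forces s10 for task 1.
    return 1 if chrom[0] == "s00" and chrom[1] != "s10" else 0

def repair(chrom):
    """Minimal-conflict hill climbing: apply the single-gene reassignment
    that most reduces constraint violations, until the chromosome is feasible."""
    chrom = list(chrom)
    while conflicts(chrom) > 0:
        gene, svc = min(
            ((g, s) for g in CANDIDATES for s in CANDIDATES[g]),
            key=lambda gs: conflicts(chrom[:gs[0]] + [gs[1]] + chrom[gs[0] + 1:]),
        )
        chrom[gene] = svc
    return chrom

def fitness(chrom):
    return sum(QOS[s] for s in chrom)  # maximise aggregate QoS

# Simple generational loop: select, recombine, then repair the offspring.
pop = [repair([random.choice(CANDIDATES[t]) for t in range(4)]) for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 4)            # one-point crossover
        children.append(repair(a[:cut] + b[cut:]))
    pop = parents + children
best = max(pop, key=fitness)
print(best, round(fitness(best), 3))
```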
Abstract:
In Web service based systems, new value-added Web services can be constructed by integrating existing Web services. A Web service may have many implementations that are functionally identical but have different Quality of Service (QoS) attributes, such as response time, price, reputation, reliability and availability. Thus, a significant research problem in Web service composition is how to select an implementation for each of the component Web services so that the overall QoS of the composite Web service is optimal. This is the so-called QoS-aware Web service composition problem. In some composite Web services there are dependencies and conflicts between the Web service implementations, which existing approaches cannot handle. This paper tackles the QoS-aware Web service composition problem with inter-service dependencies and conflicts using a penalty-based genetic algorithm (GA). Experimental results demonstrate the effectiveness and scalability of the penalty-based GA.
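In contrast to repair, a penalty-based GA keeps infeasible individuals in the population but scores them down in proportion to the constraints they violate. The sketch below shows such a fitness function with invented services and constraints.

```python
# A minimal sketch of a penalty-based fitness function: infeasible
# compositions are not repaired but penalised in proportion to how many
# dependency/conflict constraints they violate. Constraints are invented.
def violations(selection: dict[str, str]) -> int:
    count = 0
    if selection["payment"] == "payA" and selection["shipping"] != "shipA":
        count += 1                     # dependency: payA requires shipA
    if selection["payment"] == "payB" and selection["shipping"] == "shipC":
        count += 1                     # conflict: payB excludes shipC
    return count

def fitness(selection: dict[str, str], qos: dict[str, float],
            penalty_weight: float = 0.5) -> float:
    aggregate = sum(qos[s] for s in selection.values())
    return aggregate - penalty_weight * violations(selection)

qos = {"payA": 0.9, "payB": 0.8, "shipA": 0.7, "shipB": 0.85, "shipC": 0.95}
print(fitness({"payment": "payA", "shipping": "shipC"}, qos))  # 1.35, penalised
print(fitness({"payment": "payA", "shipping": "shipA"}, qos))  # 1.60, feasible
```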
Abstract:
Purpose – The paper aims to describe a workforce-planning model developed in-house in an Australian university library that is based on rigorous environmental scanning of the institution, the profession and the sector.
Design/methodology/approach – The paper uses a case study that describes the stages of the planning process undertaken to develop the Library's Workforce Plan and the documentation produced.
Findings – While the process has had successful and productive outcomes, workforce planning is ongoing. To remain effective, the workforce plan needs to be reviewed annually in the context of the library's overall planning program. This is imperative if the plan is to remain current and to be regarded as a living document that will continue to guide library practice.
Research limitations/implications – Although a single case study, the work has been contextualized within the wider research into workforce planning.
Practical implications – The paper provides a model that can easily be deployed within a library without external or specialist consultant skills and, due to its scalability, can be applied at department or wider level.
Originality/value – The paper identifies the trends impacting on, and the emerging opportunities for, university libraries and provides a model for workforce planning that recognizes the context and culture of the organization as key drivers.
Keywords – Australia, University libraries, Academic libraries, Change management, Manpower planning
Paper type – Case study
Abstract:
Scalable video coding in the H.264/AVC standard (H.264/SVC) enables adaptive and flexible delivery for multiple devices and various network conditions. Only a few works have addressed the influence of the different scalability parameters (frame rate, spatial resolution and SNR) on user-perceived quality (UPQ), and only within a limited scope. In this paper, we conduct a subjective quality assessment experiment on video sequences encoded with H.264/SVC to gain a better understanding of the correlation between video content and UPQ at all scalable layers, and of the impact of the rate-distortion method and the different scalabilities on bitrate and UPQ. Findings from this experiment will contribute to a user-centered design for the adaptive delivery of scalable video streams.
Abstract:
In the field of the semantic grid, QoS-based Web service composition is an important problem. In a semantic- and service-rich environment like the semantic grid, context constraints on Web services commonly emerge, so composition must consider not only the QoS properties of Web services but also the inter-service dependencies and conflicts formed by those context constraints. In this paper, we present a repair genetic algorithm, namely the minimal-conflict hill-climbing repair genetic algorithm, to address the Web service composition optimization problem in the presence of domain constraints and inter-service dependencies and conflicts. Experimental results demonstrate the scalability and effectiveness of the genetic algorithm.