18 results for Distributed operating systems (Computers) - Design
at University of Queensland eSpace - Australia
Abstract:
The paper describes two new transport layer (TCP) options and an expanded transport layer queuing strategy that facilitate three functions fundamental to the dispatching-based clustered service. One transport layer option has been developed to facilitate the use of client wait time data within the service request processing of the cluster. A second transport layer option has been developed to facilitate the redirection of service requests by the cluster dispatcher to the cluster processing member. An expanded transport layer service request queuing strategy facilitates the trust-based filtering of incoming service requests so that a graceful degradation of service delivery may be achieved during periods of overload, most dramatically evidenced by distributed denial of service attacks against the clustered service. We describe how these new options and queues have been implemented and successfully tested within the transport layer of the Linux kernel.
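The abstract above describes a queue that filters incoming requests by trust so that trusted clients keep being served during overload. The paper implements this inside the Linux kernel's TCP stack; as a language-neutral illustration of the queuing idea only, here is a minimal user-space sketch (class and method names are invented for this example, not taken from the paper):

```python
import heapq

class TrustFilteringQueue:
    """Sketch of a trust-based request queue: when the queue is full,
    the least-trusted pending request is shed first, so service to
    trusted clients degrades gracefully under overload."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []  # min-heap keyed by trust: root = least-trusted entry
        self._seq = 0    # tie-breaker preserving FIFO order within a trust level

    def enqueue(self, request, trust):
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, (trust, self._seq, request))
            self._seq += 1
            return True
        # Overloaded: admit only if more trusted than the least-trusted entry,
        # evicting that entry; otherwise shed the new request.
        if self._heap and trust > self._heap[0][0]:
            heapq.heapreplace(self._heap, (trust, self._seq, request))
            self._seq += 1
            return True
        return False

    def dequeue(self):
        # Serve the most-trusted pending request first (O(n) scan; a real
        # implementation would keep a second index).
        if not self._heap:
            return None
        best = max(self._heap)
        self._heap.remove(best)
        heapq.heapify(self._heap)
        return best[2]
```

Under load, low-trust traffic (e.g. sources implicated in a denial-of-service flood) is rejected at admission time rather than crowding out legitimate requests.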
Abstract:
Much research has been devoted over the years to investigating and advancing the techniques and tools that analysts use when they model. In contrast to what academics, software providers and their resellers promote as what should be happening, the aim of this research was to determine whether practitioners still take conceptual modeling seriously. In addition: what are the most popular techniques and tools used for conceptual modeling, and what are the major purposes for which conceptual modeling is used? The study found that the six most frequently used modeling techniques and methods were ER diagramming, data flow diagramming, systems flowcharting, workflow modeling, UML, and structured charts. Modeling technique use was found to decrease significantly from smaller to medium-sized organizations, but then to increase significantly in larger organizations (proxying for large, complex projects). Technique use was also found to follow an inverted U-shaped curve, contrary to some prior explanations. Additionally, an important contribution of this study was the identification of the factors that uniquely influence analysts' decisions to continue to use modeling, viz., communication (using diagrams) to/from stakeholders, (lack of) internal knowledge of techniques, user expectations management, understanding of how models integrate into the business, and tool/software deficiencies. The highest ranked purposes for which modeling was undertaken were database design and management, business process documentation, business process improvement, and software development. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Effectively using heterogeneous, distributed information has attracted much research in recent years. Current web services technologies have been used successfully in some non-data-intensive distributed prototype systems; however, most of them do not work well in data-intensive environments. This paper provides an infrastructure layer for effectively providing spatial information services in data-intensive environments using web services over the Internet. We extensively investigate and analyze the overhead of web services in data-intensive environments, and propose new optimization techniques that can greatly increase the system's efficiency. Our experiments show that these techniques are well suited to data-intensive environments. Finally, we present what these techniques require of web services deployed over the Internet.
Abstract:
The Environmental Sciences Division within the Queensland Environmental Protection Agency works to monitor, assess and model the condition of the environment. The Division has a legislative responsibility to produce a whole-of-government report every four years dealing with environmental conditions and trends, the "State of the Environment" (SoE) report [1][2][3]. The State of Environment Web Service Reporting System is a supplementary web-service-based SoE reporting tool that aims to deliver accurate, timely and accessible information on the condition of the environment through web services over the Internet [4][5]. This prototype provides a scientific assessment of environmental conditions for a set of environmental indicators. It contains text descriptions and tables, charts and maps with spatiotemporal dimensions to show the impact of certain environmental indicators on our environment. The prototype is a template-based indicator system, to which the administrator may add new SQL queries for new indicator services without changing the architecture or code of the template. These benefits come from a service-oriented architecture that provides an online query service with seamless integration. In addition, because it uses a web service architecture, each individual component within the application can be implemented in a different programming language and on a different operating system. Although the services shown in this demo are built upon two datasets, the regional ecosystems and protected areas of Queensland, it will also be possible to report on the condition of water, air, land, coastal zones, energy resources, biodiversity, human settlements and natural and cultural heritage on the fly. Figure 1 shows the architecture of the prototype. In the next section, I discuss the research tasks in the prototype.
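The template idea in this abstract is that each indicator is just a registered SQL query, so new indicators need no code changes. A minimal sketch of that pattern, using an in-memory SQLite database (table names, column names, and sample rows are illustrative, not the actual EPA datasets):

```python
import sqlite3

# Each environmental indicator is a named SQL query registered by the
# administrator; the reporting code below never changes when a new
# indicator is added -- only this mapping grows.
INDICATOR_QUERIES = {
    "protected_area_total": "SELECT SUM(area_ha) FROM protected_areas",
    "ecosystem_count": "SELECT COUNT(*) FROM regional_ecosystems",
}

def run_indicator(conn, name):
    # Generic, indicator-agnostic reporting entry point.
    (value,) = conn.execute(INDICATOR_QUERIES[name]).fetchone()
    return value

# Illustrative data standing in for the two Queensland datasets.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE protected_areas (name TEXT, area_ha REAL)")
conn.executemany("INSERT INTO protected_areas VALUES (?, ?)",
                 [("Daintree", 1200.0), ("Lamington", 800.0)])
conn.execute("CREATE TABLE regional_ecosystems (code TEXT)")
conn.executemany("INSERT INTO regional_ecosystems VALUES (?)",
                 [("11.3.25",), ("12.9-10.2",)])
```

Adding a water-quality or air-quality indicator would mean adding one more entry to `INDICATOR_QUERIES`, matching the abstract's claim that new services require no changes to the template's architecture or code.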
Abstract:
A parallel computing environment to support optimization of large-scale engineering systems is designed and implemented on Windows-based personal computer networks, using the master-worker model and the Parallel Virtual Machine (PVM). It involves decomposing a large engineering system into a number of smaller subsystems that are optimized in parallel on worker nodes, and coordinating the subsystem optimization results on the master node. The environment consists of six functional modules: the master control, the optimization model generator, the optimizer, the data manager, the monitor, and the post-processor. An object-oriented design of these modules is presented. The environment supports all steps from the generation of optimization models to their solution and visualization on networks of computers. User-friendly graphical interfaces make it easy to define the problem and to monitor and steer the optimization process. The environment has been verified on a large space truss optimization example. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
One of the obstacles to improved security of the Internet is the ad hoc development of technologies with different design goals and different security goals. This paper proposes reconceptualizing the Internet as a secure distributed system, focusing specifically on the application layer. The notion is to redesign specific functionality based on principles discovered in research on distributed systems in the decades since the initial development of the Internet. Because of the problems in retrofitting new technology across millions of clients and servers, any option with prospects of success must support backward compatibility. This paper outlines a possible new architecture for Internet-based mail which would replace existing protocols with a more secure framework. To maintain backward compatibility, an initial implementation could offer a web browser-based front end, but the longer-term approach would be to implement the system using appropriate models of replication. (C) 2005 Elsevier Ltd. All rights reserved.
Abstract:
In this paper, a new v-metric based approach is proposed to design decentralized controllers for multi-unit nonlinear plants that admit a set of plant decompositions in an operating space. Similar to the gap metric approach in the literature, it is shown that the operating space can be divided into several subregions based on a v-metric indicator, each of which admits the same controller structure. A comparative case study is presented to display the advantages of the proposed approach over the gap metric approach. (C) 2000 Elsevier Science Ltd. All rights reserved.
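For SISO plants, the ν-gap (v-metric) between two frequency responses reduces to the supremum over frequency of a pointwise chordal distance, provided a winding-number condition holds. As a rough numerical sketch of that indicator (the winding-number check is omitted and assumed satisfied, and the example plants are invented):

```python
def chordal_distance(p1, p2):
    # Pointwise chordal distance between two complex frequency-response
    # values: |P1 - P2| / (sqrt(1+|P1|^2) * sqrt(1+|P2|^2)).
    return abs(p1 - p2) / ((1 + abs(p1) ** 2) ** 0.5 * (1 + abs(p2) ** 2) ** 0.5)

def nu_gap_siso(P1, P2, omegas):
    # Supremum of the chordal distance over a frequency grid. This equals
    # the nu-gap metric only when the winding-number condition between the
    # two plants is satisfied (not checked in this sketch).
    return max(chordal_distance(P1(1j * w), P2(1j * w)) for w in omegas)

# Illustrative first-order plants 1/(s+1) and 1/(s+2).
P1 = lambda s: 1 / (s + 1)
P2 = lambda s: 1 / (s + 2)
omegas = [10 ** (k / 50) for k in range(-200, 201)]  # log grid, 1e-4..1e4 rad/s
gap = nu_gap_siso(P1, P2, omegas)
```

Plant decompositions whose pairwise indicator stays small would fall in the same subregion of the operating space and share a controller structure; a large value signals that a new controller design is needed.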
Abstract:
The simultaneous design of the steady-state and dynamic performance of a process can satisfy much more demanding dynamic performance criteria than designing the dynamics only through the connection of a control system. A method for designing process dynamics based on the eigenvalues of a linearised system has been developed. The eigenvalues are associated with system states using the unit perturbation spectral resolution (UPSR), characterising the dynamics of each state. The design method uses a homotopy approach to determine a final design which satisfies both steady-state and dynamic performance criteria. A highly interacting single-stage forced circulation evaporator system, including control loops, was designed by this method with the goal of reducing the time taken for the liquid composition to reach steady state. Initially the system was successfully redesigned to speed up the eigenvalue associated with the liquid composition state, but this did not improve startup performance. Further analysis showed that the integral action of the composition controller was the source of the limiting eigenvalue. Design changes made to speed up this eigenvalue did improve startup performance. The proposed approach provides a structured way to address the design-control interface, giving significant insight into the dynamic behaviour of the system, such that a systematic design or redesign of an existing system can be undertaken with confidence.
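The core quantity in the method above is the set of eigenvalues of the linearised process model: the eigenvalue closest to zero limits how fast the associated state (here, liquid composition) settles. As a toy illustration of that diagnosis step for a two-state linearisation (the Jacobian entries below are made up, not from the evaporator study):

```python
import cmath

def eig2(a11, a12, a21, a22):
    """Eigenvalues of a 2x2 Jacobian from its characteristic polynomial
    lambda^2 - tr*lambda + det = 0, via the quadratic formula."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

# Illustrative linearised model: state 1 ~ level (fast), state 2 ~
# composition (slow). The eigenvalue nearest zero dominates the
# approach to steady state, so it is the one a redesign should speed up.
lam = eig2(-1.0, 0.2, 0.1, -0.05)
slowest = max(lam, key=lambda l: l.real)
```

Here one eigenvalue sits near -1 and the other near -0.03; the settling time is governed by the slow one, which is exactly the situation the abstract describes when the composition controller's integral action produced the limiting eigenvalue.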
Abstract:
In this paper we propose a new framework for evaluating designs based on work domain analysis, the first phase of cognitive work analysis. We develop a rationale for a new approach to evaluation by describing the unique characteristics of complex systems and by showing that systems engineering techniques only partially accommodate those characteristics. We then present work domain analysis as a complementary framework for evaluation. We illustrate this technique by showing how the Australian Defence Force used work domain analysis to evaluate design proposals for a new system called Airborne Early Warning and Control. This case study also demonstrates that work domain analysis is a useful and feasible approach that complements standard evaluation techniques and promotes a central role for human factors professionals early in the system design and development process. Actual or potential applications of this research include the evaluation of designs for complex systems.