191 results for "incremental computation"


Relevance:

10.00%

Publisher:

Abstract:

Policy decisions are frequently influenced by more than research results alone. This review examines one road safety countermeasure, graduated driver licensing, in three jurisdictions and identifies how the conflict between mobility and safety goals can influence policy decisions relating to this countermeasure. Evaluations of graduated driver licensing from around the world have demonstrated clear reductions in crashes for young drivers. However, the introduction of this countermeasure may be affected, both positively and negatively, by the conflict some policy makers experience between ensuring individuals remain both mobile and safe as drivers. This review highlights how this conflict in policy decision making can serve to either facilitate or hinder the introduction of graduated driver licensing systems. Policy makers whose focus on mobility is too strong relative to safety may be mistaken, however, with evidence suggesting that after a graduated driver licensing system is introduced, young drivers adapt their behaviour to the new system and remain mobile. As a result, policy makers should consciously acknowledge the conflict between mobility and safety and strike an appropriate balance in order to introduce these systems. Improvements to the licensing system can then be made incrementally as the balance between these two priorities changes. Policy makers can achieve an appropriate balance by using empirical evidence as the basis for their decisions.

Purpose – The purpose of this study is to investigate how collaborative relationships enhance continuous innovation in the supply chain, using case studies. Design/methodology/approach – The data were collected from semi-structured interviews with 23 managers in ten case studies. The main intention was to comprehend how these firms engaged in collaborative relationships and the importance of such relationships for successful innovation. The study adopted a qualitative approach to investigating these factors. Findings – The findings demonstrate how differing relationships can impact on the operation of firms and their capacity to innovate. The ability to work together with partners has enabled firms to integrate and link operations for increased effectiveness, as well as to embark on both radical and incremental innovation. Research limitations/implications – The research into the initiatives and strategies for collaboration was essentially exploratory. A qualitative approach using case studies was adopted in acknowledgement that the responses from managers were difficult to quantify and the extent of these factors hard to gauge. Practical implications – The findings show the various ways in which firms integrated with customers and suppliers in the supply chain. This was evident in the views of managers across all the firms examined, supporting the importance of collaboration and the efficient allocation of resources throughout the supply chain. The firms were able to set procedures in their dealings with partners, sharing knowledge and processes, and subsequently joint-planning and investing with them for better operations, systems and processes in the supply chain. Originality/value – The case studies serve as examples for managers in logistics organisations who are contemplating strategies and issues concerning collaborative relationships. The study provides important lessons on how such relationships can impact on the operation of firms and their capability to innovate.
Keywords Supply chain management, Innovation, Relationship marketing

The Street Computing workshop, held in conjunction with OZCHI 2009, solicits papers discussing new research directions, early research results, works-in-progress and critical surveys of prior research in the areas of ubiquitous computing and interaction design for urban environments. Urban spaces have unique characteristics. Typically, they are densely populated, buzzing with life twenty-four hours a day, seven days a week. These traits afford many opportunities, but they also present many challenges: traffic jams, smog and pollution, stress placed on public services, and more. Computing technology, particularly the kind that can be placed in the hands of citizens, holds much promise in combating some of these challenges. Yet computation is not merely a tool for overcoming challenges; rather, when embedded appropriately in our everyday lives, it becomes a tool of opportunity: for shaping how our cities evolve, for enabling us to interact with our city and its people in new ways, and for uncovering useful but hidden relationships and correlations between elements of the city. The increasing availability of an urban computing infrastructure has led to new and exciting ways in which inhabitants can interact with their city. This includes interaction with a wide range of services (e.g. public transport, public services), conceptual representations of the city (e.g. local weather and traffic conditions), the availability of a variety of shared and personal displays (e.g. public, ambient, mobile) and the use of different interaction modes (e.g. tangible, gesture-based, token-based). This workshop solicits papers that address the above themes in some way. We encourage researchers to submit work that deals with the challenges and possibilities posed by the availability of urban computing infrastructure such as sensors and middleware for sensor networks.
This includes new and innovative ways of interacting with and within urban environments; user experience design and participatory design approaches for urban environments; social aspects of urban computing; and other related areas.

Network-induced delay in networked control systems (NCS) is inherently non-uniformly distributed and exhibits a multifractal nature. However, such network characteristics have not been well considered in NCS analysis and synthesis. Making use of the statistical distribution of the NCS network-induced delay, a delay-distribution-based stochastic model is adopted to link Quality-of-Control and network Quality-of-Service for NCS with uncertainties. From this model, together with a tighter bounding technique for cross terms, H∞ NCS analysis is carried out with significantly improved stability results. Furthermore, a memoryless H∞ controller is designed to stabilise the NCS and to achieve the prescribed disturbance attenuation level. Numerical examples are given to demonstrate the effectiveness of the proposed method.
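The delay-distribution-based stochastic model rests on the statistical distribution of the network-induced delay rather than on a single worst-case bound. As a minimal sketch of the first step such a model requires, the snippet below estimates the probability that the delay falls into each of a set of intervals; the interval boundaries and the lognormal delay samples are hypothetical stand-ins, not the paper's data:

```python
import random

def delay_distribution(delays, boundaries):
    """Estimate the probability that the network-induced delay falls
    into each interval [boundaries[i], boundaries[i+1])."""
    counts = [0] * (len(boundaries) - 1)
    for d in delays:
        for i in range(len(boundaries) - 1):
            if boundaries[i] <= d < boundaries[i + 1]:
                counts[i] += 1
                break
    return [c / len(delays) for c in counts]

# Hypothetical delay samples (in sampling periods) and three intervals;
# real NCS analysis would use measured network traces instead.
random.seed(0)
samples = [random.lognormvariate(0.0, 0.5) for _ in range(10000)]
probs = delay_distribution(samples, [0.0, 1.0, 2.0, float("inf")])
print(probs)  # the three interval probabilities sum to 1
```

These occupancy probabilities are what a delay-distribution-based model uses to weight the delayed state terms, instead of treating every delay as the worst case.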

Unmanned Aerial Vehicles (UAVs) are emerging as an ideal platform for a wide range of civil applications such as disaster monitoring, atmospheric observation and outback delivery. However, the operation of UAVs is currently restricted to specially segregated regions of airspace outside of the National Airspace System (NAS). Mission Flight Planning (MFP) is an integral part of UAV operation that addresses some of the requirements (such as safety and the rules of the air) of integrating UAVs in the NAS. Automated MFP is a key enabler for a number of UAV operating scenarios as it aids in increasing the level of onboard autonomy. For example, onboard MFP is required to ensure continued conformance with the NAS integration requirements when there is an outage in the communications link. MFP is a motion planning task concerned with finding a path between a designated start waypoint and goal waypoint. This path is described with a sequence of 4 Dimensional (4D) waypoints (three spatial and one time dimension) or equivalently with a sequence of trajectory segments (or tracks). It is necessary to consider the time dimension as the UAV operates in a dynamic environment. Existing methods for generic motion planning, UAV motion planning and general vehicle motion planning cannot adequately address the requirements of MFP. The flight plan needs to optimise for multiple decision objectives including mission safety objectives, the rules of the air and mission efficiency objectives. Online (in-flight) replanning capability is needed as the UAV operates in a large, dynamic and uncertain outdoor environment. This thesis derives a multi-objective 4D search algorithm entitled Multi-Step A* (MSA*) based on the seminal A* search algorithm. MSA* is proven to find the optimal (least cost) path given a variable successor operator (which enables arbitrary track angle and track velocity resolution).
Furthermore, it is shown to be of comparable complexity to multi-objective, vector neighbourhood based A* (Vector A*, an extension of A*). A variable successor operator enables the imposition of a multi-resolution lattice structure on the search space (which results in fewer search nodes). Unlike cell decomposition based methods, soundness is guaranteed with multi-resolution MSA*. MSA* is demonstrated through Monte Carlo simulations to be computationally efficient. It is shown that multi-resolution, lattice based MSA* finds paths of equivalent cost (less than 0.5% difference) to Vector A* (the benchmark) in a third of the computation time (on average). This is the first contribution of the research. The second contribution is the discovery of the additive consistency property for planning with multiple decision objectives. Additive consistency ensures that the planner is not biased (which results in a suboptimal path) by ensuring that the cost of traversing a track using one step equals that of traversing the same track using multiple steps. MSA* mitigates uncertainty through online replanning, Multi-Criteria Decision Making (MCDM) and tolerance. Each trajectory segment is modelled with a cell sequence that completely encloses the trajectory segment. The tolerance, measured as the minimum distance between the track and cell boundaries, is the third major contribution. Even though MSA* is demonstrated for UAV MFP, it is extensible to other 4D vehicle motion planning applications. Finally, the research proposes a self-scheduling replanning architecture for MFP. This architecture replicates the decision strategies of human experts to meet the time constraints of online replanning. Based on a feedback loop, the proposed architecture switches between fast, near-optimal planning and optimal planning to minimise the need for hold manoeuvres.
The derived MFP framework is original and shown, through extensive verification and validation, to satisfy the requirements of UAV MFP. As MFP is an enabling factor for operation of UAVs in the NAS, the presented work is both original and significant.
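MSA* itself, with its variable successor operator and 4D waypoints, is not reproduced here. As a rough illustration of the underlying idea, the sketch below runs a generic weighted-sum multi-objective A* on a hypothetical 2D grid, showing how two objectives (distance and a "risk" field) can be scalarised into one additive per-step cost; the grid, weights and risk values are invented:

```python
import heapq

def astar(start, goal, neighbours, step_cost, heuristic):
    """Minimal A* with a scalarised (weighted-sum) multi-objective
    step cost; the heuristic must be admissible for optimality."""
    open_set = [(heuristic(start, goal), start)]
    g_best = {start: 0.0}
    came_from = {}
    closed = set()
    while open_set:
        _, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:  # reconstruct the path back to the start
            path = [node]
            while path[-1] in came_from:
                path.append(came_from[path[-1]])
            return list(reversed(path)), g_best[goal]
        for nxt in neighbours(node):
            ng = g_best[node] + step_cost(node, nxt)
            if ng < g_best.get(nxt, float("inf")):
                g_best[nxt] = ng
                came_from[nxt] = node
                heapq.heappush(open_set, (ng + heuristic(nxt, goal), nxt))
    return None, float("inf")

# Hypothetical 4x4 grid with one risky cell to route around.
risk = {(1, 1): 5.0}
def neighbours(n):
    x, y = n
    return [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 4 and 0 <= y + dy < 4]
def step_cost(a, b):
    return 1.0 + 0.5 * risk.get(b, 0.0)   # weighted sum of two objectives
def heuristic(n, g):
    return abs(n[0] - g[0]) + abs(n[1] - g[1])  # admissible Manhattan bound

path, cost = astar((0, 0), (2, 2), neighbours, step_cost, heuristic)
print(path, cost)  # a least-cost path that avoids the risky cell
```

Because the scalarised cost accumulates additively per step, traversing a track in one step or in several shorter steps yields the same total, which is the kind of bias the thesis's additive consistency property is designed to rule out.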

There are increasing indications that holding costs make a very significant contribution to housing affordability. Their importance and perceived impact can be gauged from the unprecedented level of attention policy makers have recently given them. This is evidenced by the embedding of specific strategies to address burgeoning holding costs (and particularly the cost savings associated with streamlining regulatory assessment) within statutory instruments such as the Queensland Housing Affordability Strategy and the South East Queensland Regional Plan. However, several key issues require further investigation. Firstly, the computation and methodology behind the calculation of holding costs varies widely; in some instances they are completely ignored. Secondly, some ambiguity exists in terms of which elements of holding costs are included and how their relative contribution is assessed. This may in part be explained by their nature: such costs are not always immediately apparent. They are not as visible as the more tangible cost items associated with greenfield development, such as regulatory fees, government taxes, acquisition costs, selling fees and commissions. Holding costs are also more difficult to evaluate since, for the most part, they must be assessed over time in an ever-changing environment, given their strong relationship with opportunity cost, which is in turn dependent, inter alia, upon prevailing inflation and/or interest rates. This paper seeks to provide a more detailed investigation of the elements related to holding costs and, in so doing, determine the size of their impact on the end user specifically. It extends research in this area by clarifying the extent to which holding costs impact housing affordability.
Geographical diversity, indicated by the considerable variation between planning instruments and the length of regulatory assessment periods, suggests that further research should adopt a case study approach in order to test the relevance of the theoretical modelling conducted.
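To make the dependence on time and opportunity cost concrete, here is a hedged sketch of just the capital component of holding cost, compounded monthly over the assessment period; all figures are hypothetical, and rates, land tax and other carrying charges are ignored:

```python
def holding_cost(land_value, annual_rate, months):
    """Opportunity cost of the capital tied up in a site while it is
    held, compounded monthly; other carrying charges are ignored."""
    monthly = (1 + annual_rate) ** (1 / 12) - 1
    return land_value * ((1 + monthly) ** months - 1)

# Hypothetical: a $500,000 site held through an 18-month assessment
# period at a 7% p.a. opportunity cost of capital.
cost = holding_cost(500_000, 0.07, 18)
print(round(cost))  # the capital component alone exceeds $50,000
```

Because of compounding, doubling the assessment period more than doubles this component, which illustrates why streamlining regulatory assessment features so prominently in the strategies cited above.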

The load–frequency control (LFC) problem has been one of the major subjects in power systems. In practice, LFC systems use proportional–integral (PI) controllers. However, since these controllers are designed using a linear model, the non-linearities of the system are not accounted for and they are incapable of delivering good dynamic performance over a wide range of operating conditions in a multi-area power system. Because of the distributed nature of a multi-area power system, a strategy for solving this problem using a multi-agent reinforcement learning (MARL) approach is presented. It consists of two agents in each power area: the estimator agent provides the area control error (ACE) signal based on the frequency bias estimation, and the controller agent uses reinforcement learning to control the power system, with genetic algorithm optimisation used to tune its parameters. This method does not depend on any knowledge of the system and it admits considerable flexibility in defining the control objective. Moreover, finding the ACE signal from the frequency bias estimation improves LFC performance, and using MARL realises parallel computation, leading to a high degree of scalability. To illustrate the accuracy of the proposed approach, a three-area power system example is given with two scenarios.
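As an illustration of the estimator agent's role, the ACE signal in area i is conventionally ACE_i = ΔP_tie,i + B_i·Δf_i, where B_i is the (here, estimated) frequency bias. The sketch below computes ACE for a three-area snapshot and shows one tabular Q-learning update a controller agent might perform; all numbers and the state labels are invented, and the paper's actual MARL design is not reproduced:

```python
def area_control_error(delta_ptie, delta_f, bias):
    """ACE_i = dP_tie,i + B_i * df_i: the signal each controller
    agent drives towards zero."""
    return delta_ptie + bias * delta_f

def q_update(q, s, a, r, s2, actions=(-1, 0, 1), alpha=0.1, gamma=0.9):
    """One tabular Q-learning step; parameters like alpha and gamma
    are the kind a genetic algorithm could tune."""
    best_next = max(q.get((s2, b), 0.0) for b in actions)
    q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best_next - q.get((s, a), 0.0))

# Hypothetical three-area snapshot: tie-line power deviations (pu),
# a common frequency deviation (Hz) and estimated bias factors (pu/Hz).
ties = [0.010, -0.004, -0.006]
freqs = [-0.02, -0.02, -0.02]
biases = [0.425, 0.380, 0.400]
aces = [area_control_error(p, f, b) for p, f, b in zip(ties, freqs, biases)]

q = {}
q_update(q, "high_ace", 0, -abs(aces[0]), "low_ace")  # reward penalises |ACE|
print(aces, q)
```

The estimator agent supplies the first computation; the controller agent's learning loop repeats the second as the system moves between states.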

With the emergence of multi-cores into the mainstream, there is a growing need for systems that allow programmers and automated tools to reason about data dependencies and inherent parallelism in imperative object-oriented languages. In this paper we exploit the structure of object-oriented programs to abstract computational side-effects. We capture and validate these effects using a static type system. We use these as the basis of sufficient conditions for several different data and task parallelism patterns. We complement our static type system with a lightweight runtime system to allow for parallelisation in the presence of complex data flows. We have a functioning compiler and worked examples to demonstrate the practicality of our solution.
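The paper's effect system itself is not shown here, but the non-interference condition such a system certifies can be sketched as a disjointness check over abstract read/write sets (essentially Bernstein's conditions); the field names below are invented for illustration:

```python
def may_run_in_parallel(effects_a, effects_b):
    """Two computations are non-interfering if neither writes a
    location the other reads or writes."""
    reads_a, writes_a = effects_a
    reads_b, writes_b = effects_b
    return not (writes_a & (reads_b | writes_b) or writes_b & reads_a)

# Hypothetical abstract effects on object fields, as (reads, writes):
task1 = ({"order.total"}, {"invoice.amount"})
task2 = ({"order.total"}, {"audit.log"})
task3 = ({"invoice.amount"}, set())

print(may_run_in_parallel(task1, task2))  # True: effect sets do not interfere
print(may_run_in_parallel(task1, task3))  # False: task3 reads task1's write
```

A static type system performs this kind of check on effect annotations at compile time; the lightweight runtime system described above handles the data flows too complex to classify statically.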

In rapidly changing environments, organisations require dynamic capabilities to integrate, build and reconfigure resources and competencies to achieve continuous innovation. Although tangible resources are important to promoting a firm's ability to act, capabilities fundamentally rest in the knowledge created and accumulated by the firm through human capital, organisational routines, processes, practices and norms. The exploration of new ideas, technologies and knowledge, on the one hand, and the exploitation of existing and new knowledge, on the other, are essential for continuous innovation. Firms need to decide how best to allocate their scarce resources between both activities and at the same time build dynamic capabilities to keep up with changing market conditions. This, in turn, is influenced by the firm's absorptive capacity to assimilate knowledge. This paper presents a case study that investigates the sources of knowledge in an engineering firm in Australia, and how that knowledge is organised and processed. As information pervades the firm from both internal and external sources, individuals integrate knowledge using both exploration and exploitation approaches. The findings illustrate that absorptive capacity can encourage greater leverage of exploration potential, leading to radical innovation, and the reconfiguration of exploitable knowledge for incremental improvements. This study provides insights for managers seeking to improve knowledge strategies and continuous innovation. It also makes significant theoretical contributions to the literature through extending the concepts of

In the quest for shorter time-to-market, higher quality and reduced cost, model-driven software development has emerged as a promising approach to software engineering. The central idea is to promote models to first-class citizens in the development process. Starting from a set of very abstract models in the early stages of development, these are refined into more concrete models and, finally, into code. As early phases of development focus on different concepts from later stages, various modelling languages are employed to most accurately capture the concepts and relations under discussion. In light of this refinement process, translating between modelling languages becomes a time-consuming and error-prone necessity. This is remedied by model transformations, which provide support for reusing and automating recurring translation efforts. These transformations can typically only be used to translate a source model into a target model, but not vice versa. This poses a problem if the target model is subject to change. In that case the models get out of sync and no longer constitute a coherent description of the software system, leading to erroneous results in later stages. This is a serious threat to the promised benefits of quality, cost saving and time-to-market. Therefore, providing a means to restore synchronisation after changes to models is crucial if the model-driven vision is to be realised. This process of reflecting changes made to a target model back to the source model is commonly known as Round-Trip Engineering (RTE). While there are a number of approaches to this problem, they impose restrictions on the nature of the model transformation. Typically, in order for a transformation to be reversed, for every change to the target model there must be exactly one change to the source model.
While this makes synchronisation relatively “easy”, it is ill-suited for many practically relevant transformations as they do not have this one-to-one character. To overcome these issues and to provide a more general approach to RTE, this thesis puts forward an approach in two stages. First, a formal understanding of model synchronisation on the basis of non-injective transformations (where a number of different source models can correspond to the same target model) is established. Second, detailed techniques are devised that allow the implementation of this understanding of synchronisation. A formal underpinning for these techniques is drawn from abductive logic reasoning, which allows the inference of explanations from an observation in the context of a background theory. As non-injective transformations are the subject of this research, there may be a number of changes to the source model that all equally reflect a certain target model change. To help guide the procedure in finding “good” source changes, model metrics and heuristics are investigated. Combining abductive reasoning with best-first search and a “suitable” heuristic enables efficient computation of a number of “good” source changes. With this procedure, Round-Trip Engineering of non-injective transformations can be supported.
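As a toy illustration of why non-injective transformations complicate RTE, and of the abductive, heuristic-guided search described above: the transformation below maps many (net price, tax rate) source models to one gross-price target model, so a single target edit has several candidate source explanations, which a distance heuristic then ranks. The models, values and heuristic are all invented:

```python
def transform(source):
    """Non-injective model transformation: many (net, rate) source
    models map to the same gross-price target model."""
    net, rate = source
    return round(net * (1 + rate), 2)

def abduce_source_changes(old_source, new_target, candidates):
    """Abductive step: keep the candidate source models that explain
    the observed target value, ranked by closeness to the old source."""
    explanations = [s for s in candidates if transform(s) == new_target]
    def distance(s):
        # Crude heuristic that mixes units; purely illustrative.
        return sum(abs(a - b) for a, b in zip(s, old_source))
    return sorted(explanations, key=distance)

old = (100.0, 0.10)                     # transform(old) == 110.0
# The target model is edited by hand to 121.0; enumerate some
# hypothetical source edits and rank the ones that explain it:
candidates = [(110.0, 0.10), (100.0, 0.21), (121.0, 0.0), (110.0, 0.21)]
ranked = abduce_source_changes(old, 121.0, candidates)
print(ranked)  # (110.0, 0.21) is rejected: it does not explain 121.0
```

In the thesis this enumeration is not a fixed candidate list but a best-first search guided by model metrics, with the background theory supplied through abductive logic reasoning.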