893 results for "Multiple methods framework"
Abstract:
The increasing resolution of numerical weather prediction models has allowed ever more realistic forecasts of atmospheric parameters. Owing to the growing variability of the predicted fields, traditional verification methods, which are based on grid-point-by-grid-point matching between observation and prediction, are not always able to describe model skill. Recently, new spatial verification methods have been developed with the aim of showing the benefit associated with high-resolution forecasts. Within the MesoVICT international project, the initial aim of this work is to compare the new techniques, highlighting their advantages and disadvantages. First, the MesoVICT basic examples, represented by synthetic precipitation fields, were examined. Because it evaluates error in terms of structure, amplitude and location of the precipitation fields, the SAL method was studied more thoroughly than the other approaches and implemented for the core cases of the project. The verification procedure concerned precipitation fields over central Europe: the forecasts of the 00z COSMO-2 model were compared against VERA (Vienna Enhanced Resolution Analysis). The study of these cases revealed some weaknesses of the methodology; in particular, it highlighted a correlation between the optimal domain size and the extent of the precipitation systems. To increase the discriminating ability of SAL, the original domain was subdivided into three subdomains and the method was applied again. Some limits were found in cases in which at least one of the two fields shows no precipitation. The overall results for the subdomains were summarized in scatter plots. To identify systematic model errors, the variability of the three parameters was studied for each subdomain.
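For reference, a minimal sketch of two of the three SAL components, following the standard published definitions rather than this thesis's specific implementation; the object-based structure component S is omitted for brevity:

```python
import numpy as np

def sal_amplitude_and_location(forecast, observed):
    """Amplitude (A) and first location component (L1) of SAL.

    A minimal sketch assuming two precipitation fields on the same
    regular grid; the object-based structure component S is omitted.
    """
    d_fc, d_ob = forecast.mean(), observed.mean()
    # Amplitude: normalised difference of domain-mean precipitation, in [-2, 2].
    A = (d_fc - d_ob) / (0.5 * (d_fc + d_ob))

    # L1: distance between the fields' centres of mass, scaled by the
    # largest distance between two grid points of the domain.
    ny, nx = observed.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    com = lambda f: np.array([(yy * f).sum(), (xx * f).sum()]) / f.sum()
    d_max = np.hypot(ny - 1, nx - 1)
    L1 = np.linalg.norm(com(forecast) - com(observed)) / d_max
    return A, L1
```

Note that both components become undefined when a field contains no precipitation, which is exactly the limit case the abstract reports for the subdomain analysis.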
Abstract:
Process systems design, operation and synthesis problems under uncertainty can readily be formulated as two-stage stochastic mixed-integer linear and nonlinear (nonconvex) programming (MILP and MINLP) problems. These problems, with a scenario-based formulation, lead to large-scale MILPs/MINLPs that are well structured. The first part of the thesis proposes a new finitely convergent cross decomposition method (CD), in which Benders decomposition (BD) and Dantzig-Wolfe decomposition (DWD) are combined in a unified framework to improve the solution of scenario-based two-stage stochastic MILPs. This method alternates between DWD iterations and BD iterations, where DWD restricted master problems and BD primal problems yield a sequence of upper bounds, and BD relaxed master problems yield a sequence of lower bounds. A variant of CD that adds multiple columns per iteration of the DWD restricted master problem and multiple cuts per iteration of the BD relaxed master problem, called multicolumn-multicut CD, is then developed to improve solution time. Finally, an extended cross decomposition method (ECD) for solving two-stage stochastic programs with risk constraints is proposed. In this approach, CD at the first level and DWD at the second level are used to solve the original problem to optimality. ECD has a computational advantage over a bilevel decomposition strategy and over solving the monolithic problem with an MILP solver. The second part of the thesis develops a joint decomposition approach combining Lagrangian decomposition (LD) and generalized Benders decomposition (GBD) to efficiently solve stochastic mixed-integer nonlinear nonconvex programming problems to global optimality, without the need for explicit branch-and-bound search. In this approach, LD subproblems and GBD subproblems are solved systematically in a single framework. The relaxed master problem, obtained from the reformulation of the original problem, is solved only when necessary. A convexification of the relaxed master problem and a domain reduction procedure are integrated into the decomposition framework to improve solution efficiency. Case studies taken from renewable resource and fossil-fuel based applications in process systems engineering show that these novel decomposition approaches have significant benefits over classical decomposition methods and state-of-the-art MILP/MINLP global optimization solvers.
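As an illustration of the bounding pattern that cross decomposition builds on, here is a minimal sketch of a Benders-style loop for a minimisation problem; the solver callbacks are hypothetical placeholders, and the thesis's CD method would interleave DWD iterations into this scheme to supply additional bounds and columns:

```python
def benders_loop(solve_relaxed_master, solve_primal, tol=1e-6, max_iter=100):
    """Skeleton of the bounding pattern in Benders decomposition (BD).

    `solve_relaxed_master` and `solve_primal` are hypothetical callbacks:
    the relaxed master returns (first_stage_x, lower_bound), and the
    second-stage primal problems return (upper_bound, optimality_cut)
    for a fixed first-stage decision x.
    """
    cuts, LB, UB = [], float("-inf"), float("inf")
    for _ in range(max_iter):
        x, LB = solve_relaxed_master(cuts)      # lower-bound sequence
        ub_x, cut = solve_primal(x)             # upper-bound sequence
        UB = min(UB, ub_x)
        cuts.append(cut)
        if UB - LB <= tol * max(1.0, abs(UB)):  # bounds have converged
            break
    return x, LB, UB
```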
Abstract:
This paper presents a computational framework for enhancing design through an evolutionary approach with a dynamic hierarchical structure. The framework can be used as an evolutionary kernel for building computer-supported design systems. It provides computational components for generating, adapting and exploring alternative design solutions at multiple levels of abstraction, using hierarchically structured design representations. Preliminary experimental results from using this framework in several design applications are presented.
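A generic sketch of what such an evolutionary kernel does at its core, with illustrative selection and replacement choices that are not taken from the paper:

```python
import random

# Illustrative only: a generate-adapt-explore loop over a population of
# candidate designs. In the paper's framework the individuals would be
# hierarchically structured design representations; here they are opaque.
def evolve(population, fitness, mutate, crossover, generations=50):
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: len(scored) // 2]            # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(len(population) - len(parents))]
        population = parents + children                 # elitist replacement
    return max(population, key=fitness)
```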
Abstract:
Community development is increasingly using participatory processes that aim to be inclusive and empowering. However, researchers have found that such processes can have contradictory effects. Australian research has highlighted the significant leadership of rural women in sustainable community and economic development and in the adoption of new communication technologies such as the Internet. A focus on gender in participatory development may therefore lead to more effective programs and policies. This chapter outlines an interdisciplinary feminist framework for critically evaluating the participation and empowerment of rural women. This framework was found effective in evaluating an Australian project that aimed to enhance rural women’s access to communication technologies and to empower its participants. Its multiple theoretical and methodological approaches are outlined. The framework advocates an analysis of diversity and difference and the macro and micro contexts. Some principles and strategies for rural women’s inclusion, participation, empowerment, and for participatory feminist evaluation are outlined.
Abstract:
Ordinary desktop computers continue to obtain ever more resources – increased processing power, memory, network speed and bandwidth – yet these resources spend much of their time underutilised. Cycle stealing frameworks harness these resources so they can be used for high-performance computing. Traditionally, cycle stealing systems have used client-server architectures, which place significant limits on their ability to scale and on the range of applications they can support. By applying a fully decentralised network model to cycle stealing, the limits of centralised models can be overcome. Using decentralised networks in this manner presents some difficulties not encountered in their previous uses. Generally, decentralised applications do not require any significant fault tolerance guarantees; high-performance computing, on the other hand, requires very stringent guarantees to ensure correct results are obtained. Unfortunately, mechanisms developed for traditional high-performance computing cannot simply be translated, because of their reliance on a reliable storage mechanism. In the highly dynamic world of P2P computing this reliable storage is not available. As part of this research a fault tolerance system has been created which provides considerable reliability without the need for persistent storage. As well as increased scalability, fully decentralised networks offer volunteers the ability to communicate directly. This opens the possibility of supporting applications whose tasks require direct, message-passing style communication. Previous cycle stealing systems have supported only embarrassingly parallel applications and applications with limited forms of communication, so a new programming model has been developed which can support this style of communication within a cycle stealing context. In this thesis I present a fully decentralised cycle stealing framework. The framework addresses the problems of providing a reliable fault tolerance system and supporting direct communication between parallel tasks. The thesis includes a programming model for developing cycle stealing applications with direct inter-process communication, and methods for optimising object locality on decentralised networks.
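One common ingredient of fault tolerance without persistent storage, sketched here for illustration only (the thesis's actual mechanism is not reproduced), is heartbeat-based failure detection with task reissue:

```python
import time

# Illustration only: a common pattern for fault tolerance without
# persistent storage is to replicate task state on several peers and to
# reissue tasks whose executing volunteer stops sending heartbeats.
# This is a generic sketch, not the mechanism developed in the thesis.
class TaskTracker:
    def __init__(self, timeout_s=30.0):
        self.timeout_s = timeout_s
        self.last_seen = {}                    # task_id -> last heartbeat

    def heartbeat(self, task_id):
        self.last_seen[task_id] = time.monotonic()

    def tasks_to_reissue(self):
        """Tasks whose volunteer has been silent longer than the timeout."""
        now = time.monotonic()
        return [t for t, seen in self.last_seen.items()
                if now - seen > self.timeout_s]
```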
Abstract:
Ethnography has gained wide acceptance in the industrial design profession and curriculum as a means of understanding the user. However, there is considerable confusion about the particularities of its practice accompanied by the absence of an interoperable vocabulary. The consequent interdisciplinary effort is a power play between disciplines whereby the methodological view of ethnography marginalises its theoretical and analytical components. In doing so, it restricts the potential of ethnography suggesting the need for alternative methods of informing the design process. This article suggests that activity theory, with an emphasis on human activity as the fundamental unit of study, is an appropriate methodology for the generation of user requirements. The process is illustrated through the adaptation of an ethnographic case study, for the design of classroom furniture in India.
Abstract:
This paper uses dynamic computer simulation techniques to apply a vibration-based procedure for damage assessment in multiple-girder composite bridges. In addition to changes in natural frequencies, this multi-criteria procedure incorporates two methods, namely the modal flexibility method and the modal strain energy method. Using numerically simulated modal data obtained through finite element analysis software, algorithms based on the change in modal flexibility and modal strain energy before and after damage are obtained and used as indices for the assessment of structural health state. The feasibility and capability of the approach are demonstrated through numerical studies of the proposed structure with six damage scenarios. It is concluded that the modal strain energy method is competent for application to multiple-girder composite bridges, as evidenced through the example treated in this paper.
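For concreteness, a sketch of the modal flexibility index in its standard textbook form, assuming mass-normalised mode shapes; the paper's specific algorithms may differ in detail:

```python
import numpy as np

def modal_flexibility(frequencies_hz, mode_shapes):
    """Modal flexibility matrix F = sum_i (1/w_i^2) * phi_i * phi_i^T.

    Standard textbook formulation, sketched for illustration;
    mode_shapes is (n_dof, n_modes), mass-normalised.
    """
    omega = 2 * np.pi * np.asarray(frequencies_hz)
    Phi = np.asarray(mode_shapes)
    return (Phi / omega**2) @ Phi.T            # each column scaled by 1/w_i^2

def flexibility_damage_index(F_intact, F_damaged):
    # Per-DOF index: largest absolute change in any flexibility coefficient.
    return np.abs(F_damaged - F_intact).max(axis=0)
```

Because each mode is weighted by 1/w_i^2, the lowest modes dominate, which is why a few measured modes suffice to localise damage.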
Abstract:
Background: The problem of silent multiple comparisons is one of the most difficult statistical problems faced by scientists. It is a particular problem for investigating a one-off cancer cluster reported to a health department, because any one of hundreds, or possibly thousands, of neighbourhoods, schools, or workplaces could have reported a cluster, which could have been for any one of several types of cancer or any one of several time periods. Methods: This paper contrasts the frequentist approach with a Bayesian approach for dealing with silent multiple comparisons in the context of a one-off cluster reported to a health department. Two published cluster investigations were re-analysed using the Dunn-Sidak method to adjust frequentist p-values and confidence intervals for silent multiple comparisons. Bayesian methods were based on the Gamma distribution. Results: Bayesian analysis with non-informative priors produced results similar to the frequentist analysis, and suggested that both clusters represented a statistical excess. In the frequentist framework, the statistical significance of both clusters was extremely sensitive to the number of silent multiple comparisons, which can only ever be a subjective "guesstimate". The Bayesian approach is also subjective: whether there is an apparent statistical excess depends on the specified prior. Conclusion: In cluster investigations, the frequentist approach is just as subjective as the Bayesian approach, but the Bayesian approach is less ambitious in that it treats the analysis as a synthesis of data and personal judgements (possibly poor ones), rather than objective reality. Bayesian analysis is (arguably) a useful tool to support complicated decision-making, because it makes the uncertainty associated with silent multiple comparisons explicit.
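For illustration, the Dunn-Sidak adjustment for k silent comparisons takes the form p_adj = 1 - (1 - p)^k, the probability of at least one p-value this small among k independent tests. The numbers below are invented, not taken from the re-analysed clusters:

```python
# Dunn-Sidak adjustment: how a "significant" cluster p-value dissolves
# once silent multiple comparisons are accounted for.
def dunn_sidak(p, k):
    return 1.0 - (1.0 - p) ** k

print(dunn_sidak(0.001, 1))     # 0.001  -- taken at face value
print(dunn_sidak(0.001, 500))   # ~0.39  -- no longer a clear excess
```

The abstract's point follows directly: the adjusted result hinges entirely on the guesstimated k.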
Abstract:
This document provides a review of international and national practices in investment decision support tools in Road Asset Management. Efforts were concentrated on identifying the analytic frameworks, evaluation methodologies and criteria adopted by current tools. Emphasis was also given to how current approaches support Triple Bottom Line decision-making. Benefit Cost Analysis and Multiple Criteria Analysis are the principal methodologies for supporting decision-making in Road Asset Management. The complexity of the applications shows significant differences in international practices. There is continuing discussion amongst practitioners and researchers regarding which is more appropriate in supporting decision-making. It is suggested that the two approaches should be regarded as complementary rather than competing means. Multiple Criteria Analysis may be particularly helpful in the early stages of project development, such as strategic planning. Benefit Cost Analysis is used most widely for project prioritisation and for selecting the final project from amongst a set of alternatives. The Benefit Cost Analysis approach is a useful tool for investment decision-making from an economic perspective. An extension of the approach, which includes social and environmental externalities, is currently used to support Triple Bottom Line decision-making in the road sector. However, several issues in its application deserve attention. First of all, there is a need to reach a degree of commonality in considering social and environmental externalities, which may be achieved by aggregating best practices. At different decision-making levels, the detail of consideration of the externalities should differ. It is intended to develop a generic framework to coordinate the range of existing practices. A standard framework would also help reduce double counting, which appears in some current practices. Caution is also needed regarding the methods of determining the value of social and environmental externalities. A number of methods, such as market price, resource costs and Willingness to Pay, are found in the review. The use of unreasonable monetisation methods in some cases has discredited Benefit Cost Analysis in the eyes of decision-makers and the public. Some social externalities, such as employment and regional economic impacts, are generally omitted in current practices, owing to the lack of information and credible models. It may be appropriate to consider these externalities in qualitative form within a Multiple Criteria Analysis. Consensus has been reached on considering noise and air pollution in international practices; however, Australian practices have generally omitted these externalities. Equity is an important consideration in Road Asset Management, whether between regions or between social groups defined by income, age, gender, disability, etc. In current practice there is no well-developed quantitative measure for equity issues, and more research is needed on this issue. Although Multiple Criteria Analysis has been used for decades, there is no generally accepted framework for the choice of modelling methods and the treatment of the various externalities. The result is that different analysts are unlikely to reach consistent conclusions about a policy measure. In current practices, some favour methods that are able to prioritise alternatives, such as Goal Programming, Goal Achievement Matrix and the Analytic Hierarchy Process. Others simply present the various impacts to decision-makers to characterise the projects. Weighting and scoring systems are critical in most Multiple Criteria Analysis, yet the processes of assigning weights and scores have been criticised as highly arbitrary and subjective. It is essential that the process be as transparent as possible. Obtaining weights and scores by consulting local communities is common practice, but is likely to result in bias towards local interests. The interactive approach has the advantage of helping decision-makers elaborate their preferences; however, the computational burden may cause decision-makers to lose interest during the solution process of a large-scale problem, such as a large state road network. Current practices tend to use cardinal or ordinal scales to measure non-monetised externalities. Distorted valuations can occur where variables measured in physical units are converted to scales: for example, if decibels of noise are converted to a scale of -4 to +4 by a linear transformation, the difference between 3 and 4 may represent a far greater increase in discomfort than the increase from 0 to 1. It is therefore suggested that different weights be assigned to individual scores. Owing to overlapping goals, the problem of double counting also appears in some Multiple Criteria Analysis applications. The situation can be improved by carefully selecting and defining investment goals and criteria. Other issues, such as the treatment of time effects and the incorporation of risk and uncertainty, have received scant attention in current practices. This report suggests establishing a common analytic framework to deal with these issues.
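As a toy illustration of the weighting and scoring step, with invented criteria and weights, and a deliberately nonlinear scale conversion of the kind the review argues for in place of a linear decibel-to-score mapping:

```python
# Illustrative weighted-scoring step of a Multiple Criteria Analysis.
# Criteria, weights and parameters are invented for the example; the point
# is that converting physical units to a bounded score need not be linear.
def noise_score(decibels, base=50.0, half_score_db=10.0):
    """Map noise in dB to [0, 4], growing faster at high levels."""
    excess = max(0.0, decibels - base)
    return 4.0 * (1.0 - 2.0 ** (-excess / half_score_db))

weights = {"travel_time": 0.4, "safety": 0.35, "noise": 0.25}
scores = {"travel_time": 3.1, "safety": 2.4, "noise": noise_score(72.0)}
total = sum(weights[c] * scores[c] for c in weights)
print(round(total, 2))   # composite score for one project alternative
```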
Abstract:
Manual calibration of large and dynamic networks of cameras is labour intensive and time consuming. This is a strong motivator for the development of automatic calibration methods. Automatic calibration relies on the ability to find correspondences between multiple views of the same scene. If the cameras are sparsely placed, this can be a very difficult task. This PhD project focuses on the further development of uncalibrated wide baseline matching techniques.
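For orientation, a generic wide baseline matching pipeline sketched with OpenCV (SIFT features, ratio test, RANSAC epipolar geometry); the thesis's own techniques for sparsely placed cameras are not reproduced here:

```python
import cv2
import numpy as np

# Generic two-view correspondence pipeline: detect features, match
# descriptors, then robustly estimate the epipolar geometry.
def match_pair(img1, img2):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]        # Lowe ratio test
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # Fundamental matrix between the two views, robust to outliers.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    keep = inliers.ravel() == 1
    return F, pts1[keep], pts2[keep]
```

With sparsely placed cameras the inlier ratio of such a pipeline collapses, which is the difficulty the project targets.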
Abstract:
The early stages of the building design process are when the most far-reaching decisions are made regarding the configuration of the proposed project. This paper examines methods of providing decision support to building designers across multiple disciplines during the early stage of design. The level of detail supported is the massing study stage, where the basic envelope of the project is being defined. The block outlines of the building envelope are sliced into floors. Within a floor, the only spatial divisions supported are the “user” space and the building core. The building core includes vertical transportation systems, emergency egress and vertical duct runs. The current focus of the project described in the paper is multi-storey mixed-use office/residential buildings with car parking. This is a common type of building in redevelopment projects within and adjacent to the central business districts of major Australian cities. The key design parameters for system selection across the major systems in multi-storey building projects – architectural, structural, HVAC, vertical transportation, electrical distribution, fire protection, hydraulics and cost – are examined. These have been identified through literature research and discussions with building designers from various disciplines. This information is being encoded in decision support tools. The decision support tools communicate through a shared database to ensure that the relevant information is shared across all of the disciplines. An internal data model has been developed to support the very early design phase and the high-level system descriptions required. A mapping to IFC 2x2 has also been defined to ensure that this early information is available at later stages of the design process.
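A hypothetical, much-simplified rendering of such an early-design data model (names and core-area allowances are invented, and the IFC 2x2 mapping is not shown):

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative massing-study model: block outlines sliced into floors,
# each floor split into "user" space and a building core. This is not
# the project's actual schema.
@dataclass
class Core:
    lifts: int
    egress_stairs: int
    duct_area_m2: float

@dataclass
class Floor:
    level: int
    gross_area_m2: float
    core: Core

    @property
    def user_area_m2(self) -> float:
        # Invented per-element area allowances, for illustration only.
        core_area = (20.0 * self.core.lifts
                     + 15.0 * self.core.egress_stairs
                     + self.core.duct_area_m2)
        return self.gross_area_m2 - core_area

@dataclass
class MassingStudy:
    name: str
    floors: List[Floor] = field(default_factory=list)
```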
Abstract:
This chapter presents the contextual framework for the second phase of a multi-method, multiple study of the information systems (IS) academic discipline in Australia. The chapter outlines the genesis of a two-phase Australian study, and positions the study as the precursor to a larger Pacific-Asia study. Analysis of existing literature on the state of IS and on relevant theory underpins a series of individual Australian state case studies summarised in this chapter and represented as separate chapters in the book. This chapter outlines the methodological approach employed, with emphasis on the case-study method of the multiple state studies. The process of multiple peer review of the studies is described. Importantly, this chapter summarises and analyses each of the subsequent chapters of this book, emphasising the role of a framework developed to guide much of the data gathering and analysis. This chapter also highlights the process involved in conducting the meta-analysis reported in the final chapter of this book, and summarises some of the main results of the meta-analysis.
Abstract:
Building Information Modelling (BIM) is an information technology (IT) enabled approach to managing design data in the AEC/FM (Architecture, Engineering and Construction / Facilities Management) industry. BIM enables improved interdisciplinary collaboration across distributed teams, intelligent documentation and information retrieval, greater consistency in building data, better conflict detection and enhanced facilities management. Despite the apparent benefits, the adoption of BIM in practice has been slow. Workshops with industry focus groups were conducted to identify the needs, concerns and expectations of participants who had implemented BIM or were BIM “ready”. Factors inhibiting BIM adoption include lack of training, low business incentives, perception of lack of rewards, technological concerns, industry fragmentation related to uneven ICT adoption practices, contractual matters and resistance to changing current work practices. Successful BIM usage depends on collective adoption of BIM across the different disciplines and support by the client. The relationship of current work practices to future BIM scenarios was identified as an important strategy, as the participants believed that BIM cannot be used efficiently with traditional practices and methods. The key to successful implementation is to explore the extent to which current work practices must change. Currently there is a perception that all work practices and processes must adapt and change for effective usage of BIM. It is acknowledged that new roles and responsibilities are emerging and that different parties will lead BIM on different projects. A contingency-based approach to the problem of implementation was taken, which relies upon the integration of a BIM project champion, procurement strategy, team capability analysis, commercial software availability/applicability, and phase decision-making and event analysis. Organizations need to understand: (a) their own work processes and requirements; (b) the range of BIM applications available in the market and their capabilities; (c) the potential benefits of different BIM applications and their roles in different phases of the project lifecycle; and (d) collective supply chain adoption capabilities. A framework is proposed to support organizations in selecting BIM usage strategies that meet their project requirements. Case studies are being conducted to develop the framework. The results of a preliminary design management case study are presented for contractor-led BIM specific to the design and construct procurement strategy.
Abstract:
Real-Time Kinematic (RTK) positioning is a technique used to provide precise positioning services at the centimetre accuracy level in the context of Global Navigation Satellite Systems (GNSS). While a Network-based RTK (NRTK) system involves multiple continuously operating reference stations (CORS), the simplest form of an NRTK system is single-base RTK. In Australia there are several NRTK services operating in different states and over 1000 single-base RTK systems supporting precise positioning applications for surveying, mining, agriculture, and civil construction in regional areas. Additionally, future-generation GNSS constellations with multiple frequencies, including modernised GPS, Galileo, GLONASS, and Compass, have either been developed or will become fully operational in the next decade. A trend in the future development of RTK systems is to make use of the various isolated operating networks, single-base RTK systems, and multiple GNSS constellations for extended service coverage and improved performance. Several computational challenges have been identified for future NRTK services, including:
• multiple GNSS constellations and multiple frequencies;
• large-scale, wide-area NRTK services with a network of networks;
• complex computation algorithms and processes;
• a greater part of the positioning process shifting from the user end to the network centre, with the ability to cope with hundreds of simultaneous user requests (reverse RTK).
These four challenges lead to two major requirements for NRTK data processing: expandable computing power and scalable data sharing/transfer capability. This research explores new approaches to addressing these future NRTK challenges and requirements using Grid Computing, in particular for large data processing burdens and complex computation algorithms. A Grid Computing based NRTK framework is proposed, consisting of three layers: 1) a client layer in the form of a Grid portal; 2) a service layer; and 3) an execution layer. The user’s request is passed through these layers and scheduled to different Grid nodes in the network infrastructure. A proof-of-concept demonstration of the proposed framework was performed in a five-node Grid environment at QUT and on Grid Australia. The Networked Transport of RTCM via Internet Protocol (Ntrip) open source software was adopted to download real-time RTCM data from multiple reference stations over the Internet, followed by job scheduling and simplified RTK computing. The system performance has been analysed, and the results preliminarily demonstrate the concepts and functionality of the new Grid Computing based NRTK framework, whilst some aspects of the system’s performance remain to be improved in future work.
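For orientation, a minimal Ntrip 1.0 client of the kind used to pull real-time RTCM streams from a caster; host, mountpoint and credentials are placeholders, and this is a generic sketch rather than the project's actual code:

```python
import base64
import socket

# Minimal Ntrip 1.0 client: request a mountpoint from a caster over HTTP
# and stream the raw RTCM bytes it returns. All connection details are
# placeholders for illustration.
def ntrip_stream(host, port, mountpoint, user, password):
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    request = (f"GET /{mountpoint} HTTP/1.0\r\n"
               f"User-Agent: NTRIP example/1.0\r\n"
               f"Authorization: Basic {creds}\r\n\r\n")
    sock = socket.create_connection((host, port))
    sock.sendall(request.encode())
    if b"200 OK" not in sock.recv(4096):       # Ntrip 1.0 answers "ICY 200 OK"
        raise ConnectionError("caster rejected the request")
    while True:
        chunk = sock.recv(4096)                # raw RTCM messages
        if not chunk:
            break
        yield chunk
```

In the proposed framework, one such stream per reference station would feed the Grid job scheduler for the RTK computation.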