469 results for A-optimality
Abstract:
In Australia, railway systems play a vital role in transporting the sugarcane crop from farms to mills. The sugarcane transport system is very complex and uses daily schedules, consisting of a set of locomotive runs, to satisfy the requirements of the mill and harvesters. The total cost of sugarcane transport operations is very high; over 35% of the total cost of sugarcane production in Australia is incurred in cane transport. Efficient schedules for sugarcane transport can reduce this cost and limit the negative effects that the transport system can have on the raw sugar production system. There are several benefits to formulating the train scheduling problem as a blocking parallel-machine job shop scheduling (BPMJSS) problem: it prevents two trains passing through one section at the same time; it keeps the train activities (operations) in sequence during each run (trip) through precedence constraints; it passes the trains through a section in the correct order (priorities of passing trains) through disjunctive constraints; and it resolves rail conflicts between passing trains through blocking constraints and parallel machine scheduling. The sugarcane rail operations are therefore formulated as a BPMJSS problem. Mixed integer programming and constraint programming approaches are used to describe the BPMJSS problem, and the model is solved by integrating constraint programming, mixed integer programming and search techniques. Optimality performance is tested with the Optimization Programming Language (OPL) and CPLEX on small and large instances based on specific criteria, and a real-life problem is used to verify and validate the approach. Constructive heuristics and new metaheuristics, including simulated annealing and tabu search, are proposed to solve this complex and NP-hard scheduling problem and produce a more efficient scheduling system. Innovative hybrid and hyper metaheuristic techniques are developed and coded in C# to improve solution quality and CPU time. Hybrid techniques integrate heuristic and metaheuristic techniques consecutively, while hyper techniques are the complete integration of different metaheuristic techniques, heuristic techniques, or both.
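The abstract describes the metaheuristic layer only at a high level. As a purely illustrative aid, a minimal simulated-annealing skeleton over a permutation encoding of locomotive runs might look as follows; the toy conflict set, swap neighbourhood and cost function are hypothetical stand-ins, not the thesis's actual C# implementation or its BPMJSS objective.

```python
import math
import random

# Hypothetical toy instance: a schedule is a permutation of run IDs, and the
# cost counts conflicting runs placed consecutively. This is a stand-in for
# the thesis's BPMJSS objective, not the real model.
CONFLICTS = {(0, 1), (2, 3), (1, 4)}  # run pairs sharing a rail section

def cost(schedule):
    """Toy objective: number of conflicting runs scheduled back-to-back."""
    return sum(
        1
        for a, b in zip(schedule, schedule[1:])
        if (a, b) in CONFLICTS or (b, a) in CONFLICTS
    )

def simulated_annealing(n_runs, t0=10.0, cooling=0.95, iters=2000, seed=0):
    rng = random.Random(seed)
    current = list(range(n_runs))
    rng.shuffle(current)
    best, best_cost, t = current[:], cost(current), t0
    for _ in range(iters):
        i, j = rng.sample(range(n_runs), 2)       # swap-neighbourhood move
        cand = current[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = cost(cand) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = cand                        # accept improving or uphill move
        if cost(current) < best_cost:
            best, best_cost = current[:], cost(current)
        t *= cooling                              # geometric cooling schedule
    return best, best_cost

print(simulated_annealing(5))
```

A tabu search variant would replace the probabilistic acceptance rule with a short-term memory of forbidden moves; the hybrid and hyper techniques described above would chain or interleave such components.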
Abstract:
Purpose: This thesis is about liveability, place and ageing in the high density urban landscape of Brisbane, Australia. Like other major developed cities around the globe, Brisbane has adopted policies to increase urban residential densities in pursuit of the liveability and sustainability goals of decreasing car dependence, and therefore pollution, and of minimising the loss of greenfield areas and habitats to developers. This objective hinges on urban neighbourhoods/communities being liveable places that residents do not have to leave for everyday living. Community/neighbourhood liveability is an essential ingredient in healthy ageing in place and has a substantial impact upon the safety, independence and well-being of older adults. It is generally accepted that ageing in place is optimal for both older people and the state. The optimality of ageing in place generally assumes that there is a particular quality to environments, or standard of liveability, in which people successfully age in place. The aim of this thesis was to examine whether there are particular environmental qualities or aspects of liveability that test optimality, and to better understand the key liveability factors that contribute to successful ageing in place.
Method: A strength of this thesis is that it draws on two separate studies to address the research question of what makes high density liveable for older people. In Chapter 3, the two methods are identified and differentiated as Method 1 (used in Paper 1) and Method 2 (used in Papers 2, 3, 4 and 5). Method 1 involved qualitative interviews with 24 inner city high density Brisbane residents. The major strength of this thesis is the innovative methodology outlined as Method 2, a case study approach employing qualitative and quantitative methods. Qualitative data were collected using semi-structured, in-depth interviews and time-use diaries completed by participants during the week of tracking. Quantitative data were gathered using Global Positioning Systems for tracking and Geographical Information Systems for mapping and analysis of participants' activities. The combination of quantitative and qualitative analysis captured both participants' subjective perceptions of their neighbourhoods and their patterns of movement. This enhanced understanding of how neighbourhoods and communities function and of the various liveability dimensions that contribute to active ageing and ageing in place for older people living in high density environments. Both studies' participants were inner-city high density residents of Brisbane; the study based on Method 1 drew on a wider age demographic than the study based on Method 2.
Findings: The five papers presented in this thesis by publication indicate a complex inter-relationship among the factors that make a place liveable. The first three papers identify what is comparable and what differs between the physical and social factors of high density communities/neighbourhoods. The last two papers explore relationships between social engagement and broader community variables, such as infrastructure and the physical built environment, that are risk or protective factors for community liveability, active ageing and ageing in place in high density. The research highlights the importance of creating and/or maintaining a barrier-free environment and liveable community for ageing adults. Together, the papers promote liveability, social engagement and active ageing in high density neighbourhoods by identifying factors that constitute liveability and strategies that foster active ageing and ageing in place, social connections and well-being.
Recommendations: There is a strong need to offer more support for active ageing and ageing in place. While the data analyses of this research provide insight into the lived experience of high density residents, further research is warranted. Further qualitative and quantitative research is needed to explore in more depth the urban experience and opinions of older people living in urban environments. In particular, more empirical research and theory-building are needed to expand understanding of the particular environmental qualities that enable successful ageing in place in our cities and to guide efforts aimed at meeting this objective. The results suggest that encouraging the presence of more inner city retail outlets, particularly services that are used frequently in people's daily lives such as supermarkets, medical services and pharmacies, would help ensure residents fully engage in their local community. The connectivity of streets and footpaths, and their role in facilitating the reaching of destinations, are well understood as an important dimension of liveability. To encourage the uptake of sustainable transport, the built environment must provide easy, accessible connections between buildings, walkways, cycle paths and public transport nodes. Wider streets, because they take more time to cross than narrow streets, tend to compromise safety, especially for older people. Similarly, the width of footpaths, the level of buffering, the presence of trees, lighting and seating, and the design of and distance between pedestrian crossings significantly affect the pedestrian experience of older people and their choice of transportation. High density neighbourhoods also require greater levels of street fixtures and furniture to make places more useable and comfortable for regular use. The importance of making the public realm useful and habitable for older people cannot be over-emphasised.
Originality/value: While older people are attracted to high density settings, there has been little empirical evidence linking liveability satisfaction with older people's use of urban neighbourhoods. The current study examined the relationships between community/neighbourhood liveability, place and ageing to better understand the implications for adults who age in place. The five papers presented in this thesis add to the understanding of what high density, liveable, age-friendly communities/neighbourhoods are and what makes them so for older Australians. Neighbourhood liveability for older people is about being able to age in place and remain active. Issues of ageing in Australia and other parts of the developed world will become more critical in the coming decades. Creating liveable communities for all ages calls for partnerships across all levels of government and among different sectors within communities. The increasing percentage of older people in the community will have increasing political influence, and it will be a foolish government that ignores the needs of an older society.
Abstract:
A priority when designing control strategies for autonomous underwater vehicles is to account for the cost of implementing them on a real vehicle while minimizing a prescribed criterion such as time, energy or payload, or a combination of these. Indeed, the major issue is that, due to the vehicles' design and the actuation modes usually under consideration for underwater platforms, the number of actuator switchings must be kept small to ensure feasibility and precision. This constraint is typically not satisfied by optimal trajectories, which might not even be piecewise constant. Our goal is to provide a feasible trajectory that minimizes the number of switchings while maintaining some qualities of the desired trajectory, such as optimality with respect to a given criterion. The one-sided Lipschitz constant is used to derive theoretical estimates. The theory is illustrated on two examples: a fully actuated underwater vehicle capable of motion in six degrees of freedom, and a minimally actuated vehicle with control motions constrained to the vertical plane.
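The abstract does not restate the definition, but the one-sided Lipschitz constant it relies on is standardly the smallest L for which the following holds:

```latex
% One-sided Lipschitz condition on a vector field f (standard definition):
\langle f(x) - f(y),\, x - y \rangle \le L \,\lVert x - y \rVert^{2}
\qquad \text{for all } x, y .
```

Because L can be negative, or much smaller than the ordinary Lipschitz constant, it typically yields sharper bounds on how quickly neighbouring trajectories can diverge, which is presumably what makes it suitable for estimating the error incurred when an optimal control is replaced by one with few switchings.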
Abstract:
We consider a joint relay selection and subcarrier allocation problem that minimizes the total system power for a multi-user, multi-relay, single-source cooperative OFDM-based two-hop system. The system is constrained so that every user has a specific subcarrier requirement (user fairness); however, no specific fairness constraints for relays are considered. To ensure optimum power allocation, the subcarriers in the two hops are paired with each other. We obtain an optimal subcarrier allocation for the single-user case using a method similar to that described in [1] and modify the algorithm for the multiuser scenario. Although optimality is not achieved in the multiuser case, the probability of all users being served fairly is improved significantly at a relatively low cost trade-off.
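The paper's single-user algorithm follows [1] and is not reproduced in the abstract. As a hedged illustration of the general idea of two-hop subcarrier pairing, a common heuristic is ordered pairing by channel gain; the sketch below, with invented Rayleigh-faded gains, is not necessarily the authors' method.

```python
import numpy as np

def ordered_pairing(gains_hop1, gains_hop2):
    """Pair the k-th strongest first-hop subcarrier with the k-th strongest
    second-hop subcarrier (a common heuristic for two-hop OFDM relaying;
    the paper's own method follows [1] and may differ)."""
    order1 = np.argsort(gains_hop1)[::-1]   # first-hop subcarriers, best first
    order2 = np.argsort(gains_hop2)[::-1]   # second-hop subcarriers, best first
    return list(zip(order1.tolist(), order2.tolist()))

rng = np.random.default_rng(0)
pairs = ordered_pairing(rng.rayleigh(size=8), rng.rayleigh(size=8))
print(pairs)  # [(first-hop subcarrier, second-hop subcarrier), ...]
```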
Abstract:
This study compared the performance of a local and three robust optimality criteria, in terms of the standard error, for a one-parameter and a two-parameter nonlinear model with uncertainty in the parameter values. The designs were also compared under misspecification of the prior parameter distribution. The impact of different correlations between the parameters on the optimal design was examined in the two-parameter model. The designs and standard errors were solved analytically whenever possible and numerically otherwise.
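The abstract does not specify the models. As a hypothetical illustration of a locally optimal design for a one-parameter nonlinear model, consider the decay model eta(x; theta) = exp(-theta*x), which is not taken from the study: a single-point design maximises the Fisher information, i.e. the squared sensitivity of the mean response to the parameter, which for this model peaks at x = 1/theta.

```python
import numpy as np

# Hypothetical one-parameter model eta(x; theta) = exp(-theta * x); not one
# of the study's models. The locally optimal single-point design maximises
# the squared sensitivity (d eta / d theta)^2 = x^2 * exp(-2 * theta * x).
def sensitivity_sq(x, theta):
    return (x * np.exp(-theta * x)) ** 2

theta0 = 2.0                          # prior point guess of the parameter
xs = np.linspace(0.01, 5.0, 2000)
x_local = xs[np.argmax(sensitivity_sq(xs, theta0))]

# A simple robust (pseudo-Bayesian) variant averages the information over
# draws from a prior on theta instead of using the single guess theta0.
thetas = np.random.default_rng(0).normal(theta0, 0.5, size=1000)
x_robust = xs[np.argmax(sensitivity_sq(xs[:, None], thetas).mean(axis=1))]

print(x_local, 1 / theta0, x_robust)  # numeric optimum ~ analytic 1/theta
```

The contrast between `x_local` and `x_robust` mirrors the local-versus-robust comparison the study makes: the robust point hedges against the prior guess being wrong.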
Abstract:
This paper considers the problem of reconstructing the motion of a 3D articulated tree from 2D point correspondences subject to some temporal prior. Hitherto, smooth motion has been encouraged using a trajectory basis, yielding a hard combinatorial problem with time complexity growing exponentially in the number of frames. Branch and bound strategies have previously attempted to curb this complexity whilst maintaining global optimality. However, they provide no guarantee of being more efficient than exhaustive search. Inspired by recent work which reconstructs general trajectories using compact high-pass filters, we develop a dynamic programming approach which scales linearly in the number of frames, leveraging the intrinsically local nature of filter interactions. Extension to affine projection enables reconstruction without estimating cameras.
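The abstract leaves the filter-based dynamic program unspecified. The generic first-order DP below illustrates why such per-frame recursions scale linearly in the number of frames T (at O(T*K^2) for K discrete states per frame); the state space and the unary and pairwise costs are placeholders, not the paper's formulation.

```python
import numpy as np

def viterbi_dp(unary, pairwise):
    """First-order DP over T frames with K states per frame: O(T * K^2),
    i.e. linear in the number of frames.
    unary[t, k]   : data cost of state k at frame t (e.g. reprojection error)
    pairwise[j, k]: temporal cost of moving from state j to state k"""
    T, K = unary.shape
    cost = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        trans = cost[:, None] + pairwise      # K x K candidate transition costs
        back[t] = trans.argmin(axis=0)        # best predecessor for each state
        cost = trans.min(axis=0) + unary[t]
    path = [int(cost.argmin())]               # backtrack the optimal sequence
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

rng = np.random.default_rng(1)
print(viterbi_dp(rng.random((6, 4)), rng.random((4, 4))))
```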
Abstract:
Most existing research on maintenance optimisation for multi-component systems considers only the lifetime distribution of the components. When the condition-based maintenance (CBM) strategy is adopted for multi-component systems, the strategy structure becomes complex due to the large number of component states and their combinations. Consequently, some predetermined maintenance strategy structures are often assumed before the maintenance optimisation of a multi-component system in a CBM context. Developing these predetermined strategy structures requires expert experience, and the optimality of the resulting strategies is often not proven. This paper proposes a maintenance optimisation method that does not require any predetermined strategy structure for a two-component series system. The proposed method is developed based on the semi-Markov decision process (SMDP). A simulation study shows that the proposed method can identify the optimal maintenance strategy adaptively for different maintenance costs and parameters of the degradation processes. The optimal maintenance strategy structure is also investigated in the simulation study, which provides a reference for further research on maintenance optimisation of multi-component systems.
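The paper's SMDP is not detailed in the abstract. As a rough sketch of the underlying idea, the toy value iteration below optimises over all joint component states without presupposing a strategy structure; the states, costs and transition probabilities are invented, and a real SMDP would additionally model sojourn times and per-component actions.

```python
import itertools

# Toy CBM model: each of two series components is in state 0 (good),
# 1 (degraded) or 2 (failed); actions are 0 = do nothing, 1 = replace both.
# All numbers are invented for illustration, not taken from the paper.
STATES = list(itertools.product(range(3), repeat=2))

def step_cost(s, a):
    downtime = 100.0 if 2 in s else 0.0           # series system: any failure stops it
    return downtime + (60.0 if a == 1 else 0.0)   # replacement cost

def next_states(s, a):
    if a == 1:
        return {(0, 0): 1.0}                      # replacement renews both components
    out = {}
    for d1 in (0, 1):                             # each component may degrade one step
        for d2 in (0, 1):
            ns = (min(s[0] + d1, 2), min(s[1] + d2, 2))
            out[ns] = out.get(ns, 0.0) + 0.25
    return out

def value_iteration(gamma=0.95, sweeps=200):
    v = {s: 0.0 for s in STATES}
    for _ in range(sweeps):
        v = {
            s: min(
                step_cost(s, a)
                + gamma * sum(p * v[ns] for ns, p in next_states(s, a).items())
                for a in (0, 1)
            )
            for s in STATES
        }
    return {  # greedy policy: the optimal strategy structure emerges by itself
        s: min(
            (0, 1),
            key=lambda a: step_cost(s, a)
            + gamma * sum(p * v[ns] for ns, p in next_states(s, a).items()),
        )
        for s in STATES
    }

print(value_iteration())  # replaces only in sufficiently degraded joint states
```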
Abstract:
Phylogenetic inference from sequences can be misled by both sampling (stochastic) error and systematic error (nonhistorical signals where reality differs from our simplified models). A recent study of eight yeast species using 106 concatenated genes from complete genomes showed that even small internal edges of a tree received 100% bootstrap support. This effective negation of stochastic error from large data sets is important, but longer sequences exacerbate the potential for biases (systematic error) to be positively misleading. Indeed, when we analyzed the same data set using minimum evolution optimality criteria, an alternative tree received 100% bootstrap support. We identified a compositional bias as responsible for this inconsistency and showed that it is reduced effectively by coding the nucleotides as purines and pyrimidines (RY-coding), reinforcing the original tree. Thus, a comprehensive exploration of potential systematic biases is still required, even though genome-scale data sets greatly reduce sampling error.
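RY-coding, mentioned above, is simple to state concretely: purines (A, G) are recoded as R and pyrimidines (C, T) as Y, discarding the within-class compositional signal responsible for the bias. A minimal sketch:

```python
# Purines (A, G) -> R; pyrimidines (C, T) -> Y. RY-coding discards the
# A/G and C/T composition differences that can mislead tree inference.
RY = {"A": "R", "G": "R", "C": "Y", "T": "Y"}

def ry_code(seq):
    """Recode a nucleotide sequence; gaps and ambiguity codes pass through."""
    return "".join(RY.get(base, base) for base in seq.upper())

print(ry_code("ATGCCA"))  # -> RYRYYR
```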
Abstract:
We study a political economy model that aims to understand the diversity in the growth and technology-adoption experiences of different economies. In this model, the cost of technology adoption is endogenous and varies across heterogeneous agents. Agents in the model vote on the proportion of revenues allocated towards such expenditures. In the early stages of development, the political-economy outcome of the model ensures that a sub-optimal proportion of government revenue is used to finance adoption-cost-reducing expenditures. This sub-optimality is due to the presence of inequality: agents at the lower end of the distribution favor a larger amount of revenue allocated towards redistribution in the form of lump-sum transfers. Eventually all individuals switch to the better technology and their incomes converge. The outcomes of the model therefore explain why public choice is more likely to be conservative in nature: it represents the majority choice given conflicting preferences among agents. Consequently, the transition path towards growth and technology adoption varies across countries depending on initial levels of inequality.
Abstract:
Originally developed in bioinformatics, sequence analysis is increasingly used in the social sciences for the study of life-course processes. The methodology generally employed consists of computing dissimilarities between the trajectories and, if typologies are sought, clustering the trajectories according to their similarities or dissimilarities. The choice of an appropriate dissimilarity measure is a major issue in sequence analysis for life sequences. Several dissimilarities are available in the literature, but none of them has become indisputable. In this paper, instead of settling on one dissimilarity measure, we propose to use an optimal convex combination of different dissimilarities. The optimal combination is determined automatically by the clustering procedure and is defined with respect to the within-class variance.
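The paper determines the combination weights within the clustering procedure itself. The simple grid-search sketch below conveys the idea on two toy dissimilarity matrices; the normalisation, the clustering method and the variance surrogate shown are illustrative choices, not necessarily the authors'.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def within_class_variance(d, labels):
    """Mean within-cluster pairwise dissimilarity, used here as a stand-in
    for the paper's within-class variance criterion."""
    total, pairs = 0.0, 0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        if len(idx) > 1:
            total += d[np.ix_(idx, idx)].sum() / 2.0
            pairs += len(idx) * (len(idx) - 1) // 2
    return total / max(pairs, 1)

def best_convex_combination(d1, d2, k=3):
    d1, d2 = d1 / d1.max(), d2 / d2.max()    # crude normalisation of scales
    best = None
    for alpha in np.linspace(0.0, 1.0, 21):  # grid search over the weight
        d = alpha * d1 + (1.0 - alpha) * d2  # convex combination
        z = linkage(squareform(d, checks=False), method="average")
        labels = fcluster(z, t=k, criterion="maxclust")
        score = within_class_variance(d, labels)
        if best is None or score < best[0]:
            best = (score, alpha)
    return best

rng = np.random.default_rng(0)
x = rng.random((20, 5))
d1 = np.abs(x[:, 0:1] - x[:, 0:1].T)                        # toy dissimilarity 1
d2 = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1) # toy dissimilarity 2
print(best_convex_combination(d1, d2))
```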
Abstract:
Flow-induced shear stress plays an important role in regulating cell growth and distribution in scaffolds. This study sought to correlate wall shear stress and chondrocyte activity for the engineering design of micro-porous osteochondral grafts, based on the hypothesis that it is possible to capture and discriminate between the transmitted force and the cell response at the inner irregularities. Unlike common tissue engineering therapies with perfusion bioreactors, in which flow-mediated stress is the controlling parameter, this work treated the associated stress as a function of porosity to influence in vitro proliferation of chondrocytes. The D-optimality criterion was used to accommodate three pore characteristics in a mixed-level fractional design of experiment (DOE): pore size (4 levels), distribution pattern (2 levels) and density (3 levels). Micro-porous scaffolds (n=12) were fabricated according to the DOE using rapid prototyping of an acrylic-based bio-photopolymer. Computational fluid dynamics (CFD) models were created correspondingly and used with an idealized boundary condition and a Newtonian fluid domain to simulate the dynamic microenvironment inside the pores. In vitro conditions were reproduced for the 3D-printed constructs, which were seeded at high pellet densities of human chondrocytes and cultured for 72 hours. The results showed that cell proliferation differed significantly between the constructs (p<0.05). An inlet fluid velocity of 3×10⁻² mm s⁻¹ and an average shear stress of 5.65×10⁻² Pa corresponded with increased cell proliferation for scaffolds with smaller pores in a hexagonal pattern and lower densities. Although the analytical solution of a Poiseuille flow inside the pores proved insufficient to describe the flow profile, probably due to turbulence induced by the outside flow, it showed that the shear stress would increase with cell growth and decrease with pore size. This correlation provides a basis for determining the relation between the induced stress and chondrocyte activity in order to optimize the microfabrication of engineered cartilaginous constructs.
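The Poiseuille estimate referred to above is the textbook relation for fully developed laminar flow in a circular channel: with dynamic viscosity mu, mean flow velocity and pore radius R, the wall shear stress is

```latex
% Wall shear stress of fully developed Poiseuille flow in a circular pore
% of radius R, mean velocity \bar{v} and dynamic viscosity \mu:
\tau_w = \frac{4\,\mu\,\bar{v}}{R}
```

which makes the reported inverse dependence on pore size explicit: at a fixed mean velocity, halving R doubles the wall shear stress, and cell growth that narrows the effective lumen raises the stress in the same way. As the abstract notes, this is only a first-order estimate once outside-flow turbulence matters.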
Abstract:
In this thesis we investigate the use of quantum probability theory for ranking documents. Quantum probability theory is used to estimate the probability of relevance of a document given a user's query. We posit that quantum probability theory can lead to a better estimation of the probability of a document being relevant to a user's query than the common approach, i.e. the Probability Ranking Principle (PRP), which is based upon Kolmogorovian probability theory. Following our hypothesis, we formulate an analogy between the document retrieval scenario and a physical scenario, that of the double slit experiment. Through this analogy, we propose a novel ranking approach, the quantum probability ranking principle (qPRP). Key to our proposal is the presence of quantum interference. Mathematically, this is the statistical deviation between empirical observations and the expected values predicted by the Kolmogorovian rule of additivity of probabilities of disjoint events, in configurations such as that of the double slit experiment. We propose an interpretation of quantum interference in the document ranking scenario and examine how quantum interference can be effectively estimated for document retrieval. To validate our proposal and to gain more insight into approaches for document ranking, we (1) analyse PRP, qPRP and other ranking approaches, exposing the assumptions underlying their ranking criteria and formulating the conditions for the optimality of the two ranking principles; (2) empirically compare three ranking principles (PRP, interactive PRP and qPRP) and two state-of-the-art ranking strategies in two retrieval scenarios, ad-hoc retrieval and diversity retrieval; (3) analytically contrast the ranking criteria of the examined approaches, exposing similarities and differences; and (4) study the ranking behaviours of approaches alternative to PRP in terms of the kinematics they impose on relevant documents, i.e. by considering the extent and direction of the movements of relevant documents across the rankings recorded when comparing PRP against its alternatives. Our findings show that the effectiveness of the examined ranking approaches depends strongly upon the evaluation context. In the traditional evaluation context of ad-hoc retrieval, PRP is empirically shown to be better than or comparable to alternative ranking approaches. However, in evaluation contexts that account for interdependent document relevance (i.e. when the relevance of a document is assessed also with respect to other retrieved documents, as in the diversity retrieval scenario), the use of quantum probability theory, and thus of qPRP, is shown to improve retrieval and ranking effectiveness over the traditional PRP and alternative ranking strategies such as Maximal Marginal Relevance, Portfolio theory and Interactive PRP. This work represents a significant step forward regarding the use of quantum theory in information retrieval. It demonstrates that applying quantum theory to problems within information retrieval can lead to improvements both in modelling power and in retrieval effectiveness, allowing the construction of models that capture the complexity of information retrieval situations. Furthermore, the thesis opens up a number of lines for future research. These include: (1) investigating estimations and approximations of quantum interference in qPRP; (2) exploiting complex numbers for the representation of documents and queries; and (3) applying the concepts underlying qPRP to tasks other than document ranking.
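As a schematic illustration of ranking under interference in the spirit of qPRP: at each rank position the next document is chosen by its relevance estimate plus an interference term with the documents already ranked. The estimator below, interference proportional to a (negative) multiple of pairwise similarity, is only one possible choice and is not necessarily the thesis's estimator.

```python
import numpy as np

def qprp_rank(relevance, similarity, beta=-0.5):
    """Greedy interference-aware ranking in the spirit of qPRP. With
    beta = 0 the interference vanishes and the PRP order (sort by
    relevance) is recovered; negative beta penalises redundancy."""
    ranked, remaining = [], set(range(len(relevance)))
    while remaining:
        best = max(
            remaining,
            key=lambda d: relevance[d] + sum(beta * similarity[d, r] for r in ranked),
        )
        ranked.append(best)
        remaining.remove(best)
    return ranked

rng = np.random.default_rng(0)
rel = rng.random(6)                   # toy relevance estimates
sim = rng.random((6, 6))
sim = (sim + sim.T) / 2.0             # symmetric toy similarities
print(qprp_rank(rel, sim))            # diversity-aware ranking order
```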
Abstract:
In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions to the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using the estimates as if they were the true state in the control law that would result if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question about the optimality of the CE principle. We show that CE is, indeed, not optimal in general. We also analyse the possibility of obtaining truly optimal solutions for single-input linear systems with input constraints and uncertainty related to output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near-optimal performance. We thus advocate this approach in real applications.
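The chapter's development is analytical; the toy loop below merely illustrates the CE strategy it discusses: estimate the state from noisy output measurements, then apply the deterministic (here, saturated linear) law to the estimate as if it were the true state. All gains and noise levels are invented for illustration and are not taken from the book.

```python
import numpy as np

# Illustrative certainty-equivalence (CE) loop for a scalar plant
# x+ = A x + B u + w with noisy measurements y = x + v.
A, B = 0.9, 1.0          # plant parameters (assumed)
K = 0.5                  # state-feedback gain of the deterministic law (assumed)
L = 0.6                  # assumed steady-state observer (Kalman) gain
U_MAX = 1.0              # input constraint |u| <= U_MAX

def ce_step(x_hat, y):
    x_hat = x_hat + L * (y - x_hat)                # measurement update
    u = float(np.clip(-K * x_hat, -U_MAX, U_MAX))  # CE: treat estimate as true state
    return A * x_hat + B * u, u                    # time update, applied input

rng = np.random.default_rng(0)
x, x_hat = 3.0, 0.0
for t in range(10):
    y = x + 0.1 * rng.standard_normal()            # noisy output measurement
    x_hat, u = ce_step(x_hat, y)
    x = A * x + B * u + 0.1 * rng.standard_normal()
    print(f"t={t}  x={x:+.3f}  u={u:+.3f}")
```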
Abstract:
This paper studies mechanisms to compensate local government for the public provision of environmental services, using the theory of optimal fiscal transfers, in India. In particular, we analyze the role of intergovernmental fiscal transfers in achieving environmental goals. Simply assigning functions at the appropriate levels does not ensure optimal provision of environmental services; optimality in resource allocation can be achieved by combining the assignment system with an appropriate compensation mechanism. Intergovernmental fiscal transfers would be a suitable mechanism for compensating local governments and would help internalize the spillover effects of providing environmental public goods. Illustrations are provided for India.
Abstract:
This paper presents a performance-based optimisation approach for conducting trade-off analysis between safety (roads) and condition (bridges and roads). Safety was based on potential for improvement (PFI); road condition was based on surface distresses, and bridge condition was based on apparent age per subcomponent. The analysis uses a non-monetised optimisation that expands upon classical Pareto optimality by observing performance across time. It was found that the achievement of good results was conditioned by the availability of early-age treatments and affected by a frontier effect that prevents the optimisation algorithm from realising the long-term benefits of deploying actions near the end of the analysis period. A disaggregated bridge condition index proved capable of improving levels of service in bridge subcomponents.
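As a point of reference for the classical notion the paper extends, a non-dominated (Pareto) filter over candidate intervention programmes can be written in a few lines; the two objectives and the scores below are invented placeholders for the safety and condition measures, and the paper's own method additionally tracks performance across time.

```python
def pareto_front(candidates):
    """Return the non-dominated candidates under 'higher is better' on every
    objective. A purely illustrative classical Pareto filter; the paper's
    optimisation also observes performance across time."""
    front = []
    for c in candidates:
        dominated = any(
            all(o >= v for o, v in zip(other, c)) and other != c
            for other in candidates
        )
        if not dominated:
            front.append(c)
    return front

# Hypothetical (safety improvement, condition improvement) programme scores:
print(pareto_front([(3, 1), (2, 2), (1, 3), (2, 1), (1, 1)]))
# -> [(3, 1), (2, 2), (1, 3)]
```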