Abstract:
Most existing research on maintenance optimisation for multi-component systems only considers the lifetime distribution of the components. When the condition-based maintenance (CBM) strategy is adopted for multi-component systems, the strategy structure becomes complex due to the large number of component states and their combinations. Consequently, some predetermined maintenance strategy structures are often assumed before the maintenance optimisation of a multi-component system in a CBM context. Developing these predetermined strategy structures requires expert experience, and the optimality of these strategies is often not proved. This paper proposes a maintenance optimisation method that does not require any predetermined strategy structure for a two-component series system. The proposed method is developed based on the semi-Markov decision process (SMDP). A simulation study shows that the proposed method can identify the optimal maintenance strategy adaptively for different maintenance costs and parameters of degradation processes. The optimal maintenance strategy structure is also investigated in the simulation study, which provides a reference for further research in maintenance optimisation of multi-component systems.
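To make the SMDP formulation concrete, the following is a minimal sketch, not the paper's actual model: value iteration for a discounted SMDP over a discretised two-component state space. The cost figures, degradation probability, and sojourn times are illustrative assumptions; the point is that the strategy structure is read off the resulting policy rather than imposed in advance.

```python
# Minimal SMDP value-iteration sketch for a two-component series system.
# All numbers are illustrative assumptions, not values from the paper.
import numpy as np
from itertools import product

K = 5            # degradation levels 0..K, K = failed (assumed)
p = 0.3          # per-period chance a component degrades one level
alpha = 0.05     # continuous-time discount rate
C_REP1, C_REP2, C_BOTH, C_DOWN = 10.0, 12.0, 16.0, 50.0
TAU = {0: 1.0, 1: 2.0, 2: 2.0, 3: 2.5}   # expected sojourn time per action

def step_dist(d):
    """Next-level distribution of one component if left alone."""
    if d == K:
        return {K: 1.0}                  # a failed component stays failed
    return {d: 1 - p, min(d + 1, K): p}

def transitions(state, a):
    d1, d2 = state
    if a in (1, 3): d1 = 0               # replacement renews the component
    if a in (2, 3): d2 = 0
    return [((n1, n2), p1 * p2)
            for n1, p1 in step_dist(d1).items()
            for n2, p2 in step_dist(d2).items()]

def cost(state, a):
    c = {0: 0.0, 1: C_REP1, 2: C_REP2, 3: C_BOTH}[a]
    if K in state and a == 0:            # series system down, no repair
        c += C_DOWN
    return c

states = list(product(range(K + 1), repeat=2))
V = {s: 0.0 for s in states}
for _ in range(500):                     # value iteration to convergence
    V = {s: min(cost(s, a)
                + np.exp(-alpha * TAU[a])
                * sum(pr * V[s2] for s2, pr in transitions(s, a))
                for a in range(4))
         for s in states}

policy = {s: min(range(4), key=lambda a: cost(s, a)
                 + np.exp(-alpha * TAU[a])
                 * sum(pr * V[s2] for s2, pr in transitions(s, a)))
          for s in states}
print(policy)    # the strategy structure emerges from the optimisation
```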
Abstract:
Phylogenetic inference from sequences can be misled by both sampling (stochastic) error and systematic error (nonhistorical signals where reality differs from our simplified models). A recent study of eight yeast species using 106 concatenated genes from complete genomes showed that even small internal edges of a tree received 100% bootstrap support. This effective negation of stochastic error from large data sets is important, but longer sequences exacerbate the potential for biases (systematic error) to be positively misleading. Indeed, when we analyzed the same data set using minimum evolution optimality criteria, an alternative tree received 100% bootstrap support. We identified a compositional bias as responsible for this inconsistency and showed that it is reduced effectively by coding the nucleotides as purines and pyrimidines (RY-coding), reinforcing the original tree. Thus, a comprehensive exploration of potential systematic biases is still required, even though genome-scale data sets greatly reduce sampling error.
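For reference, RY-coding itself is a one-line recoding; the sketch below (with an assumed pass-through for ambiguity codes) collapses each nucleotide into its purine or pyrimidine class, discarding the within-class compositional differences that drove the bias.

```python
# A minimal sketch of RY-coding: recode nucleotides as purines
# (A, G -> R) and pyrimidines (C, T -> Y).
RY = {"A": "R", "G": "R", "C": "Y", "T": "Y"}

def ry_code(seq):
    # unknown or ambiguous characters are left untouched (an assumption)
    return "".join(RY.get(base, base) for base in seq.upper())

print(ry_code("ATGCCGTA"))   # -> RYRYYRYR
```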
Abstract:
We study a political economy model that aims to understand the diversity in the growth and technology-adoption experiences of different economies. In this model, the cost of technology adoption is endogenous and varies across heterogeneous agents. Agents in the model vote on the proportion of revenues allocated towards such expenditures. In the early stages of development, the political-economy outcome of the model ensures that a sub-optimal proportion of government revenue is used to finance adoption-cost-reducing expenditures. This sub-optimality is due to the presence of inequality; agents at the lower end of the distribution favor a larger amount of revenue allocated towards redistribution in the form of lump-sum transfers. Eventually all individuals make the switch to the better technology and their incomes converge. The outcomes of the model therefore explain why public choice is more likely to be conservative in nature; it represents the majority choice given conflicting preferences among agents. Consequently, the transition path towards growth and technology adoption varies across countries depending on initial levels of inequality.
Abstract:
Originally developed in bioinformatics, sequence analysis is being increasingly used in social sciences for the study of life-course processes. The methodology generally employed consists of computing dissimilarities between the trajectories and, if typologies are sought, clustering the trajectories according to their similarities or dissimilarities. The choice of an appropriate dissimilarity measure is a major issue when dealing with sequence analysis for life sequences. Several dissimilarities are available in the literature, but none of them has emerged as indisputable. In this paper, instead of deciding upon one dissimilarity measure, we propose to use an optimal convex combination of different dissimilarities. The optimality is automatically determined by the clustering procedure and is defined with respect to the within-class variance.
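A hedged sketch of the general idea, not the authors' estimator: combine two normalised dissimilarity matrices with a convex weight, cluster, and keep the weight that minimises the within-class pseudo-variance implied by the dissimilarities. The matrix choices and the grid search over weights are illustrative assumptions.

```python
# Convex combination w*D1 + (1-w)*D2 of two dissimilarity matrices,
# scored by within-class pseudo-variance after clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def within_variance(D, labels):
    total = 0.0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        sub = D[np.ix_(idx, idx)]
        # for squared-Euclidean-like dissimilarities, a cluster's inertia
        # equals the sum of pairwise squared dissimilarities / (2 * size)
        total += (sub ** 2).sum() / (2 * len(idx))
    return total

def best_combination(D1, D2, n_clusters=3, grid=np.linspace(0, 1, 21)):
    best = None
    for w in grid:
        D = w * D1 + (1 - w) * D2
        labels = fcluster(linkage(squareform(D, checks=False), "ward"),
                          n_clusters, criterion="maxclust")
        score = within_variance(D, labels)
        if best is None or score < best[0]:
            best = (score, w, labels)
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))
D1 = np.linalg.norm(X[:, None] - X[None, :], axis=-1)      # Euclidean
D2 = np.abs(X[:, None] - X[None, :]).sum(-1)               # Manhattan
D1, D2 = D1 / D1.max(), D2 / D2.max()                      # same scale
score, w, labels = best_combination(D1, D2)
print(f"best weight on D1: {w:.2f}, within-class variance: {score:.3f}")
```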
Abstract:
Flow induced shear stress plays an important role in regulating cell growth and distribution in scaffolds. This study sought to correlate wall shear stress and chondrocyte activity for engineering design of micro-porous osteochondral grafts, based on the hypothesis that it is possible to capture and discriminate between the transmitted force and the cell response at the inner irregularities. Unlike common tissue engineering therapies with perfusion bioreactors, in which flow-mediated stress is the controlling parameter, this work assigned the associated stress as a function of porosity to influence in vitro proliferation of chondrocytes. The D-optimality criterion was used to accommodate three pore characteristics for appraisal in a mixed-level fractional design of experiments (DOE): pore size (4 levels), distribution pattern (2 levels) and density (3 levels). Micro-porous scaffolds (n=12) were fabricated according to the DOE using rapid prototyping of an acrylic-based bio-photopolymer. Computational fluid dynamics (CFD) models were created correspondingly and used with an idealized boundary condition and a Newtonian fluid domain to simulate the dynamic microenvironment inside the pores. In vitro conditions were reproduced for the 3D-printed constructs, seeded at high pellet densities of human chondrocytes and cultured for 72 hours. The results showed that cell proliferation was significantly different across the constructs (p<0.05). An inlet fluid velocity of 3×10⁻² mm s⁻¹ and an average shear stress of 5.65×10⁻² Pa corresponded with increased cell proliferation for scaffolds with smaller pores in a hexagonal pattern and lower densities. Although the analytical solution of a Poiseuille flow inside the pores was found insufficient for the description of the flow profile, probably due to turbulence induced by the outside flow, it showed that the shear stress would increase with cell growth and decrease with pore size. This correlation demonstrated the basis for determining the relation between the induced stress and chondrocyte activity to optimize microfabrication of engineered cartilaginous constructs.
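As a small aside, the Poiseuille model the authors tested implies a wall shear stress of τ_w = 4μū/R in a cylindrical pore, so stress falls as pore radius grows. The sketch below evaluates this with an assumed medium viscosity and assumed pore radii; only the inlet velocity comes from the abstract.

```python
# Back-of-the-envelope wall shear stress for fully developed Poiseuille
# flow in a cylindrical pore: tau_w = 4 * mu * u_mean / R.
MU = 1.0e-3          # dynamic viscosity of culture medium, Pa*s (assumed)
U_MEAN = 3.0e-5      # mean velocity, m/s  (the abstract's 3e-2 mm/s)

for radius_um in (100, 200, 400):        # assumed pore radii
    R = radius_um * 1e-6                 # micrometres -> metres
    tau_w = 4 * MU * U_MEAN / R          # Pa
    print(f"R = {radius_um:4d} um -> tau_w = {tau_w:.2e} Pa")
```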
Abstract:
In this thesis we investigate the use of quantum probability theory for ranking documents. Quantum probability theory is used to estimate the probability of relevance of a document given a user's query. We posit that quantum probability theory can lead to a better estimation of the probability of a document being relevant to a user's query than the common approach, i.e. the Probability Ranking Principle (PRP), which is based upon Kolmogorovian probability theory. Following our hypothesis, we formulate an analogy between the document retrieval scenario and a physical scenario, that of the double slit experiment. Through the analogy, we propose a novel ranking approach, the quantum probability ranking principle (qPRP). Key to our proposal is the presence of quantum interference. Mathematically, this is the statistical deviation between empirical observations and the expected values predicted by the Kolmogorovian rule of additivity of probabilities of disjoint events in configurations such as that of the double slit experiment. We propose an interpretation of quantum interference in the document ranking scenario, and examine how quantum interference can be effectively estimated for document retrieval. To validate our proposal and to gain more insights about approaches for document ranking, we (1) analyse PRP, qPRP and other ranking approaches, exposing the assumptions underlying their ranking criteria and formulating the conditions for the optimality of the two ranking principles, (2) empirically compare three ranking principles (i.e. PRP, interactive PRP, and qPRP) and two state-of-the-art ranking strategies in two retrieval scenarios, those of ad-hoc retrieval and diversity retrieval, (3) analytically contrast the ranking criteria of the examined approaches, exposing similarities and differences, and (4) study the ranking behaviours of approaches alternative to PRP in terms of the kinematics they impose on relevant documents, i.e. by considering the extent and direction of the movements of relevant documents across the ranking recorded when comparing PRP against its alternatives. Our findings show that the effectiveness of the examined ranking approaches strongly depends upon the evaluation context. In the traditional evaluation context of ad-hoc retrieval, PRP is empirically shown to be better than or comparable to alternative ranking approaches. However, when we turn to evaluation contexts that account for interdependent document relevance (i.e. when the relevance of a document is assessed also with respect to other retrieved documents, as is the case in the diversity retrieval scenario), the use of quantum probability theory, and thus of qPRP, is shown to improve retrieval and ranking effectiveness over the traditional PRP and alternative ranking strategies, such as Maximal Marginal Relevance, Portfolio theory, and Interactive PRP. This work represents a significant step forward regarding the use of quantum theory in information retrieval. It demonstrates, in fact, that the application of quantum theory to problems within information retrieval can lead to improvements both in modelling power and retrieval effectiveness, allowing the construction of models that capture the complexity of information retrieval situations. Furthermore, the thesis opens up a number of lines for future research.
These include: (1) investigating estimations and approximations of quantum interference in qPRP; (2) exploiting complex numbers for the representation of documents and queries; and (3) applying the concepts underlying qPRP to tasks other than document ranking.
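To make the ranking mechanics concrete, here is a hedged sketch of greedy ranking under a qPRP-style objective, not the thesis's exact estimator: each candidate's score is its relevance probability plus the accumulated interference with already-ranked documents. Using (negative) inter-document similarity as a stand-in for the cosine of the quantum phase is one approximation discussed in the literature; the scaling here is an illustrative assumption.

```python
# Greedy qPRP-style ranking with a similarity-based interference proxy.
import numpy as np

def qprp_rank(p_rel, sim):
    """p_rel: relevance probabilities (n,); sim: similarity matrix (n, n)."""
    n = len(p_rel)
    ranked, remaining = [], set(range(n))
    while remaining:
        def score(d):
            interference = sum(-2.0 * np.sqrt(p_rel[d] * p_rel[i]) * sim[d, i]
                               for i in ranked)   # penalises redundancy
            return p_rel[d] + interference
        best = max(remaining, key=score)
        ranked.append(best)
        remaining.remove(best)
    return ranked

p = np.array([0.9, 0.85, 0.4])
S = np.array([[1.0, 0.9, 0.1],     # docs 0 and 1 are near-duplicates
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
print(qprp_rank(p, S))   # [0, 2, 1]: doc 2 outranks the near-duplicate
```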
Abstract:
In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions with the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that results if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question about the optimality of the CE principle. We show that CE is, indeed, not optimal in general. We also analyse the possibility of obtaining truly optimal solutions for single-input linear systems with input constraints and uncertainty related to output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near optimal performance. We thus advocate this approach in real applications.
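To fix ideas, the following is a minimal sketch of CE for a scalar constrained system under assumed dynamics, noise variances, and feedback gain: a Kalman filter produces the state estimate, and the deterministic law is applied to that estimate as if it were the true state, then clipped to the input constraint.

```python
# Certainty equivalence for a scalar system x+ = a*x + b*u + w, y = c*x + v.
import numpy as np

a, b, c = 0.9, 1.0, 1.0          # assumed scalar dynamics
q_w, r_v = 0.1, 0.05             # process / measurement noise variances
K_LQR = 0.5                      # assumed deterministic feedback gain
U_MAX = 1.0                      # input constraint |u| <= U_MAX

x_hat, P = 0.0, 1.0              # filter state and covariance
rng = np.random.default_rng(1)
x = 2.0
for t in range(10):
    u = np.clip(-K_LQR * x_hat, -U_MAX, U_MAX)   # CE: use x_hat as if true
    x = a * x + b * u + rng.normal(0, np.sqrt(q_w))
    y = c * x + rng.normal(0, np.sqrt(r_v))
    # Kalman filter: predict, then correct with the measurement
    x_hat, P = a * x_hat + b * u, a * P * a + q_w
    L = P * c / (c * P * c + r_v)
    x_hat, P = x_hat + L * (y - c * x_hat), (1 - L * c) * P
    print(f"t={t}: u={u:+.2f}, x={x:+.3f}, x_hat={x_hat:+.3f}")
```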
Abstract:
This paper studies mechanisms to compensate local governments for the public provision of environmental services, using the theory of optimal fiscal transfers, in the context of India. In particular, we analyze the role of intergovernmental fiscal transfers in achieving environmental goals. Simply assigning the functions at appropriate levels does not ensure optimal provision of environmental services. Optimality in resource allocation could be achieved by combining the assignment system with an appropriate compensation mechanism. Intergovernmental fiscal transfers would be a suitable mechanism for compensating local governments and would help internalize the spillover effects of providing environmental public goods. Illustrations are also provided for India.
Abstract:
This paper presents a performance-based optimisation approach for conducting trade-off analysis between safety (roads) and condition (bridges and roads). Safety was based on potential for improvement (PFI). Road condition was based on surface distresses, and bridge condition was based on apparent age per subcomponent. The analysis uses a non-monetised optimisation that expanded upon classical Pareto optimality by observing performance across time. It was found that the achievement of good results was conditioned by the availability of early-age treatments and impacted by a frontier effect that prevented the optimisation algorithm from realising the long-term benefits of deploying actions near the end of the analysis period. A disaggregated bridge condition index proved capable of improving levels of service in bridge subcomponents.
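For reference, the non-dominated filter at the core of classical Pareto optimality can be stated compactly; the sketch below uses made-up maintenance programmes scored on safety and condition (the paper's time dimension is omitted).

```python
# Keep only candidates not dominated on every objective (maximisation).
def pareto_front(points):
    """Return indices of non-dominated points."""
    front = []
    for i, p in enumerate(points):
        dominated = any(all(q[k] >= p[k] for k in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(i)
    return front

programmes = [(0.8, 0.4), (0.6, 0.7), (0.5, 0.5), (0.9, 0.2)]  # (safety, condition)
print(pareto_front(programmes))   # -> [0, 1, 3]; (0.5, 0.5) is dominated
```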
Abstract:
We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between √T and log T. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
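A simplified sketch of the adaptive idea, not the paper's exact step rule: shrink the step like one over the cumulative observed curvature when the losses are strongly convex, falling back to roughly 1/√t when they are not, which is what lets the regret interpolate between log T and √T.

```python
# Adaptive-step online gradient descent on a toy stream of quadratic
# losses f_t(x) = (h/2)*||x - target_t||^2, which are h-strongly convex.
import numpy as np

rng = np.random.default_rng(0)
targets = rng.normal(0.0, 0.3, size=(100, 2))
G, D, h = 1.0, 1.0, 1.0          # gradient bound, diameter, curvature

x = np.zeros(2)
h_sum = 0.0
for t, tgt in enumerate(targets, start=1):
    g = h * (x - tgt)                         # gradient of f_t at x
    h_sum += h                                # curvature observed so far
    eta = 1.0 / h_sum if h_sum > 0 else D / (G * np.sqrt(t))
    x = x - eta * g
    nrm = np.linalg.norm(x)
    if nrm > D:                               # project back onto ||x|| <= D
        x = x * (D / nrm)
print("final iterate:", x)                    # ~ mean of the targets
```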
Abstract:
The appealing concept of optimal harvesting is often used in fisheries to obtain new management strategies. However, optimality depends on the objective function, which often varies, reflecting the interests of different groups of people. The aim of maximum sustainable yield is to extract the greatest amount of food from replenishable resources in a sustainable way. Maximum sustainable yield may not be desirable from an economic point of view. Maximum economic yield maximizes the profit of fishing fleets (the harvesting sector) but ignores socio-economic benefits such as employment and other positive externalities. It may be more appropriate to use a maximum economic yield that is based on the value chain of the overall fishing sector, to better reflect society's interests. How to make more efficient use of a fishery for society rather than fishing operators depends critically on the gain function parameters, including multiplier effects and the inclusion or exclusion of certain costs. In particular, the optimal effort level based on the overall value chain moves closer to the optimal effort for the maximum sustainable yield because of the multiplier effect. These issues are illustrated using the Australian Northern Prawn Fishery.
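These trade-offs can be made numeric with the textbook Gordon-Schaefer surplus production model (the abstract does not specify its bio-economic model, and all parameter values below are made up): E_MSY = r/(2q), E_MEY = (r/(2q))(1 - c/(pqK)), and scaling revenue by a multiplier m moves E_MEY toward E_MSY, as the abstract notes.

```python
# Gordon-Schaefer model: sustainable yield Y(E) = qEK(1 - qE/r),
# profit = p*Y - c*E.  A revenue multiplier m crudely stands in for
# value-chain benefits by scaling the price p.
r, K, q = 0.8, 100.0, 0.01     # growth rate, carrying capacity, catchability
p, c = 5.0, 2.0                # price per unit yield, cost per unit effort

E_msy = r / (2 * q)
for m in (1.0, 2.0, 5.0):      # value-chain multiplier on revenue (assumed)
    E_mey = (r / (2 * q)) * (1 - c / (m * p * q * K))
    print(f"multiplier {m}: E_MEY = {E_mey:.1f}  (E_MSY = {E_msy:.1f})")
```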
Abstract:
Sampling strategies are developed based on the idea of ranked set sampling (RSS) to increase efficiency and therefore to reduce the cost of sampling in fishery research. RSS incorporates information on concomitant variables that are correlated with the variable of interest into the selection of samples. For example, estimating a monitoring survey abundance index would be more efficient if the sampling sites were selected based on information from previous surveys or catch rates of the fishery. We use two practical fishery examples to demonstrate the approach: site selection for a fishery-independent monitoring survey in the Australian Northern Prawn Fishery (NPF), and fish age prediction by simple linear regression modelling for a short-lived tropical clupeoid. The relative efficiencies of the new designs were derived analytically and compared with traditional simple random sampling (SRS). Optimal sampling schemes were assessed by different optimality criteria. For the NPF monitoring survey, the efficiency in terms of the variance or mean squared error of the estimated mean abundance index ranged from 114 to 199% compared with SRS. In the case of a fish ageing study for Tenualosa ilisha in Bangladesh, the efficiency of age prediction from fish body weight reached 140%.
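For intuition, a minimal simulation of balanced RSS with a concomitant variable (illustrative numbers only): rank each set of m units on the cheap concomitant, measure the i-th ranked unit of the i-th set, and compare the variance of the resulting mean with SRS.

```python
# Balanced ranked set sampling vs simple random sampling.
import numpy as np

def rss_sample(population_y, concomitant, m, rng):
    """One RSS cycle of size m; ranking uses only the concomitant."""
    measured = []
    for i in range(m):
        idx = rng.choice(len(population_y), size=m, replace=False)
        ranked = idx[np.argsort(concomitant[idx])]
        measured.append(population_y[ranked[i]])  # i-th order statistic
    return np.array(measured)

rng = np.random.default_rng(0)
N, m, cycles = 10_000, 4, 2_000
x = rng.normal(0, 1, N)                    # concomitant (e.g. past catch)
y = 2.0 * x + rng.normal(0, 1, N)          # variable of interest, corr ~0.9

rss_means = [rss_sample(y, x, m, rng).mean() for _ in range(cycles)]
srs_means = [y[rng.choice(N, m, replace=False)].mean() for _ in range(cycles)]
print("efficiency vs SRS:", np.var(srs_means) / np.var(rss_means))
```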
Abstract:
Yao, Begg, and Livingston (1996, Biometrics 52, 992-1001) considered the optimal group size for testing a series of potentially therapeutic agents to identify a promising one as soon as possible for given error rates. The number of patients to be tested with each agent was fixed as the group size. We consider a sequential design that allows early acceptance and rejection, and we provide an optimal strategy to minimize the number of patients required, using Markov decision processes. The minimization is carried out under constraints on the two types of error probabilities (false positive and false negative), with Lagrangian multipliers corresponding to the cost parameters for the two error types. Numerical studies indicate that there can be a substantial reduction in the number of patients required.
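A hedged sketch of the dynamic-programming idea in a Bayesian guise, not the paper's exact formulation: the state is (patients tested, responses seen), and the choice among accept, reject, and continue minimises expected patients plus Lagrangian penalties on the two error types. All rates, costs, and the cap below are assumptions.

```python
# Backward-induction value function for sequential accept/reject/continue.
from functools import lru_cache

P0, P1 = 0.2, 0.4          # response rates: uninteresting vs promising (assumed)
PRIOR = 0.5                # prior probability the agent is promising
LAM0, LAM1 = 60.0, 60.0    # Lagrangian costs of the two error types
N_MAX = 30                 # hard cap on patients per agent

def posterior(n, s):
    """P(promising | s responses in n patients)."""
    odds = (PRIOR / (1 - PRIOR)) * (P1 / P0) ** s \
           * ((1 - P1) / (1 - P0)) ** (n - s)
    return odds / (1 + odds)

@lru_cache(maxsize=None)
def value(n, s):
    g = posterior(n, s)
    accept = LAM0 * (1 - g)          # false positive if truly uninteresting
    reject = LAM1 * g                # false negative if truly promising
    if n == N_MAX:
        return min(accept, reject)
    p_resp = g * P1 + (1 - g) * P0   # predictive response probability
    cont = 1 + p_resp * value(n + 1, s + 1) + (1 - p_resp) * value(n + 1, s)
    return min(accept, reject, cont)

print("expected cost at the start:", round(value(0, 0), 2))
```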
Abstract:
A decision-theoretic framework is proposed for designing sequential dose-finding trials with multiple outcomes. The optimal strategy is solvable theoretically via backward induction. However, for dose-finding studies involving k doses, the computational complexity is the same as that of the bandit problem with k dependent arms, which is computationally prohibitive. We therefore provide two computationally compromised strategies, which are of practical interest as the computational complexity is greatly reduced: one is closely related to the continual reassessment method (CRM), and the other improves on CRM and better approximates the optimal strategy. In particular, we present the framework for phase I/II trials with multiple outcomes. Applications to a pediatric HIV trial and a cancer chemotherapy trial are given to illustrate the proposed approach. Simulation results for the two trials show that the computationally compromised strategies can perform well and appear to be ethical for allocating patients. The proposed framework can provide a better approximation to the optimal strategy if more extensive computing is available.
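For orientation, a minimal sketch of the CRM that the first compromised strategy relates to, with an assumed one-parameter power model p_i(a) = skeleton_i^exp(a), a normal prior on a, and a grid-based posterior; the skeleton, target, and data are illustrative.

```python
# Continual reassessment method: update the posterior after each patient
# and recommend the dose whose posterior-mean toxicity is nearest the target.
import numpy as np

SKELETON = np.array([0.05, 0.12, 0.25, 0.40])   # prior toxicity guesses
TARGET = 0.25
A_GRID = np.linspace(-4, 4, 801)
PRIOR = np.exp(-0.5 * (A_GRID / 1.34) ** 2)      # N(0, 1.34^2), unnormalised

def next_dose(doses_given, toxicities):
    """Posterior-mean toxicity per dose; pick the dose closest to TARGET."""
    like = np.ones_like(A_GRID)
    for d, y in zip(doses_given, toxicities):
        p = SKELETON[d] ** np.exp(A_GRID)
        like *= p if y else (1 - p)
    post = like * PRIOR
    post /= post.sum()                            # grid is uniform
    p_hat = np.array([(SKELETON[d] ** np.exp(A_GRID) * post).sum()
                      for d in range(len(SKELETON))])
    return int(np.argmin(np.abs(p_hat - TARGET)))

# after three patients at dose index 1 with one toxicity, where next?
print("next dose:", next_dose([1, 1, 1], [0, 0, 1]))
```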
Abstract:
This article is motivated by a lung cancer study where a regression model is involved and the response variable is too expensive to measure but the predictor variable can be measured easily with relatively negligible cost. This situation occurs quite often in medical studies, quantitative genetics, and ecological and environmental studies. In this article, by using the idea of ranked-set sampling (RSS), we develop sampling strategies that can reduce cost and increase efficiency of the regression analysis for the above-mentioned situation. The developed method is applied retrospectively to a lung cancer study. In the lung cancer study, the interest is to investigate the association between smoking status and three biomarkers: polyphenol DNA adducts, micronuclei, and sister chromatid exchanges. Optimal sampling schemes with different optimality criteria such as A-, D-, and integrated mean square error (IMSE)-optimality are considered in the application. With set size 10 in RSS, the improvement of the optimal schemes over simple random sampling (SRS) is substantial. For instance, by using the optimal scheme with IMSE-optimality, the IMSEs of the estimated regression functions for the three biomarkers are reduced to about half of those incurred by using SRS.
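As a pointer to what the alphabetic criteria measure, the sketch below scores two made-up simple-linear-regression designs: D-optimality favours a large det(X'X) and A-optimality a small trace((X'X)^-1); IMSE-optimality, which integrates the predictive variance over the design region, is omitted for brevity.

```python
# A- and D-optimality criteria for a design matrix X = [1, x].
import numpy as np

def criteria(x):
    X = np.column_stack([np.ones_like(x), x])
    M = X.T @ X                      # information matrix
    return np.linalg.det(M), np.trace(np.linalg.inv(M))

designs = {
    "clustered":  np.array([4.5, 5.0, 5.5, 6.0]),
    "spread out": np.array([0.0, 2.0, 8.0, 10.0]),
}
for name, x in designs.items():
    d_val, a_val = criteria(x)
    print(f"{name:>10}: det(X'X) = {d_val:8.1f}, trace((X'X)^-1) = {a_val:.3f}")
```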