922 results for Chance-constrained programming
Abstract:
The provision of autonomy-supportive environments that promote physical activity engagement has become popular in contemporary youth settings. However, questions remain about whether adolescents' perceptions of their autonomy have implications for physical activity. The purpose of this investigation was to examine the association between adolescents' self-reported physical activity and their perceived autonomy. Participants (n = 384 adolescents) aged between 12 and 15 years were recruited from six secondary schools in metropolitan Brisbane, Australia. Self-reported measures of physical activity and autonomy were obtained. Logistic regression with inverse probability weights was used to examine the association between autonomy and the odds of meeting youth physical activity guidelines. Autonomy (OR 0.61, 95% CI 0.49-0.76) and gender (OR 0.62, 95% CI 0.46-0.83) were negatively associated with meeting physical activity guidelines. However, the model explained only a small amount of the variation in whether youth in this sample met physical activity guidelines (R² = 0.023). For every 1-unit decrease in autonomy (on an index from 1 to 5), participants were 1.64 times more likely to meet physical activity guidelines. The findings, which are at odds with several previous studies, suggest that interventions designed to facilitate youth physical activity should limit opportunities for youth to make independent decisions about their engagement. However, the small amount of variation explained by the predictors in the model is a caveat that should be considered before applying such suggestions in practical settings. Future research should examine a larger age range, use longitudinal observational or intervention designs to test assertions of causality, and employ objective measurement of physical activity.
Abstract:
In the mining optimisation literature, most researchers have focused on two open-pit mine optimisation problems, at the strategic and tactical levels respectively: the ultimate pit limit (UPIT) problem and the constrained pit limit (CPIT) problem. However, many researchers note that the substantial numbers of variables and constraints in real-world instances (e.g., with 50-1000 thousand blocks) make the CPIT's mixed integer programming (MIP) model intractable in practice. It is therefore a considerable challenge to solve large-scale CPIT instances without relying on an exact MIP optimiser or on complicated MIP relaxation/decomposition methods. To address this challenge, two new graph-based algorithms, built on network-flow and conjunctive-graph theory, are developed by taking advantage of problem properties. The performance of the proposed algorithms is validated on the large-scale benchmark UPIT and CPIT instances of the MineLib (2013) dataset. In comparison with the best known results from MineLib, the proposed algorithms outperform other CPIT solution approaches in the literature. They also lead to a more self-contained mine-scheduling optimisation expert system, because a third-party MIP optimiser is no longer indispensable and random neighbourhood search is not necessary.
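The network-flow idea underlying UPIT can be sketched concretely. The classical reduction (Picard's maximum-weight-closure construction) turns the pit-limit problem into a minimum s-t cut. The following minimal sketch, using a made-up six-block model and a plain Edmonds-Karp max-flow, illustrates that reduction; it is not the paper's algorithm.

```python
from collections import deque

INF = float('inf')

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; cap is a dict-of-dicts residual graph, mutated in place."""
    total = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:            # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)  # bottleneck capacity
        for u, v in path:
            cap[u][v] -= push
            cap[v][u] = cap[v].get(u, 0) + push  # residual (reverse) arc
        total += push

def ultimate_pit(value, prec):
    """Picard's reduction: the ultimate pit is a maximum-weight closure,
    recovered from the source side of a minimum s-t cut."""
    cap = {b: {} for b in value}
    cap['S'], cap['T'] = {}, {}
    for b, v in value.items():
        if v > 0:
            cap['S'][b] = v          # source -> profitable block
        elif v < 0:
            cap[b]['T'] = -v         # waste block -> sink
        for p in prec.get(b, []):
            cap[b][p] = INF          # precedence: p must be mined if b is
    positives = sum(v for v in value.values() if v > 0)
    flow = max_flow(cap, 'S', 'T')
    seen, q = {'S'}, deque(['S'])    # blocks reachable in the residual graph
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in seen:
                seen.add(v)
                q.append(v)
    return seen - {'S'}, positives - flow

# 1-D toy model: ore b0 (+5) sits under waste t0..t2; ore b1 (+1) under t3 (-3).
value = {'t0': -1, 't1': -2, 't2': -1, 't3': -3, 'b0': 5, 'b1': 1}
prec  = {'b0': ['t0', 't1', 't2'], 'b1': ['t3']}
pit, profit = ultimate_pit(value, prec)   # b1 is not worth uncovering
```

Precedence arcs get infinite capacity so they can never be cut; the blocks on the source side of the minimum cut form the optimal pit, and the pit value equals the total positive block value minus the max-flow value.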
Abstract:
In this paper, we look at the concept of reversibility, that is, negating opposites, counterbalances, and actions that can be reversed. Piaget identified reversibility as an indicator of the ability to reason at a concrete operational level. We investigate to what degree novice programmers manifest the ability to work with this concept by providing them with a small piece of code and then asking them to write code that undoes its effect. On testing entire cohorts of students in their first year of learning to program, we found that an overwhelming majority could not cope with such a concept. We then conducted think-aloud studies in which we observed novices working on this task and analyzed their contrasting abilities to deal with it. The results of this study demonstrate the need for a better understanding of our students' reasoning abilities, and for a teaching model aimed at their actual level of development.
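A hypothetical task in the spirit of the study (the paper's actual instrument is not reproduced here) might give students a short snippet and ask for code that undoes its effect:

```python
# Given code: swap two values without a temporary variable.
def scramble(a, b):
    a = a + b      # a now holds the original a + b
    b = a - b      # b now holds the original a
    a = a - b      # a now holds the original b
    return a, b    # the two values, swapped

# "Undo" task: a swap is its own inverse, so applying the
# same transformation again restores the original values.
def unscramble(a, b):
    return scramble(a, b)

assert unscramble(*scramble(3, 7)) == (3, 7)
```

Solving it requires exactly the reasoning the paper probes: either recognising that the transformation is its own inverse, or reconstructing the inverse by reversing the steps one at a time.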
Abstract:
We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost. Since it is intractable to compete with the optimal policy for large scale problems, we pursue the more modest goal of competing with a low-dimensional family of policies. We use the dual linear programming formulation of the MDP average cost problem, in which the variable is a stationary distribution over state-action pairs, and we consider a neighborhood of a low-dimensional subset of the set of stationary distributions (defined in terms of state-action features) as the comparison class. We propose a technique based on stochastic convex optimization and give bounds that show that the performance of our algorithm approaches the best achievable by any policy in the comparison class. Most importantly, this result depends on the size of the comparison class, but not on the size of the state space. Preliminary experiments show the effectiveness of the proposed algorithm in a queuing application.
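The dual-LP viewpoint can be illustrated on a toy problem. In the dual formulation the decision variable is an occupancy measure μ(s, a) over state-action pairs, and the average cost of a policy is the inner product of μ with the cost vector. The sketch below uses made-up transition and cost numbers, and brute-force enumeration stands in for the paper's stochastic convex optimization:

```python
# Toy 2-state, 2-action MDP (numbers are illustrative, not from the paper).
# P[s][a] = transition distribution over next states, c[s][a] = cost.
P = {0: {0: [0.9, 0.1], 1: [0.2, 0.8]},
     1: {0: [0.5, 0.5], 1: [0.1, 0.9]}}
c = {0: {0: 1.0, 1: 4.0},
     1: {0: 2.0, 1: 0.5}}

def average_cost(policy, iters=2000):
    """Average cost of a stationary deterministic policy, computed via its
    stationary distribution mu over state-action pairs -- the dual-LP variable."""
    d = [0.5, 0.5]                        # distribution over states
    for _ in range(iters):                # power iteration to the fixed point
        d = [sum(d[s] * P[s][policy[s]][s2] for s in range(2)) for s2 in range(2)]
    mu = {(s, policy[s]): d[s] for s in range(2)}   # state-action occupancy
    return sum(m * c[s][a] for (s, a), m in mu.items())

# Brute-force the best deterministic policy in this tiny comparison class.
best = min(((a0, a1) for a0 in (0, 1) for a1 in (0, 1)), key=average_cost)
```

In the paper's setting the state space is far too large to enumerate; the point of the feature-based neighborhood of stationary distributions is precisely to replace this brute force with optimization whose cost scales with the comparison class, not the state space.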
Abstract:
Over the last few decades, there has been significant land cover (LC) change across the globe, driven by the demands of a burgeoning population and urban sprawl. To take account of this change, accurate and up-to-date LC maps are needed. Mapping and monitoring of LC in India is carried out at the national level using multi-temporal IRS AWiFS data. Multispectral data such as IKONOS, Landsat-TM/ETM+, IRS-1C/1D LISS-III/IV, AWiFS and SPOT-5 have adequate spatial resolution (~1 m to 56 m) for LC mapping at 1:50,000 scale. However, for developing countries and those with large geographical extent, seasonal LC mapping is prohibitive with data from commercial sensors of limited spatial coverage. Superspectral data from the MODIS sensor are freely available and offer better temporal (8-day composites) and spectral information. MODIS pixels typically contain a mixture of various LC types (due to the coarse spatial resolution of 250, 500 and 1000 m), especially in more fragmented landscapes. In this context, linear spectral unmixing is useful for mapping patchy land covers, such as those that characterise much of the Indian subcontinent. This work evaluates the existing unmixing technique for LC mapping using MODIS data, with end-members extracted through the Pixel Purity Index (PPI), scatter plots and n-dimensional visualisation. Abundance maps were generated for agriculture, built-up land, forest, plantations, waste land/others and water bodies. Assessment of the results using ground truth and a LISS-III classified map shows 86% overall accuracy, suggesting the potential for broad-scale applicability of the technique with superspectral data for natural resource planning and inventory applications. Index Terms-Remote sensing, digital
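Linear spectral unmixing models each pixel as a convex combination of end-member spectra. A minimal two-end-member sketch, with made-up reflectance values (not MODIS data) and a closed-form least-squares estimate standing in for PPI-extracted end-members:

```python
# Two-end-member linear unmixing: a pixel p is modelled as
#   p = a*e1 + (1 - a)*e2 + noise,  with abundance a in [0, 1].
# In this case the least-squares estimate has a closed form.
def unmix2(pixel, e1, e2):
    d = [x - y for x, y in zip(e1, e2)]        # e1 - e2
    r = [x - y for x, y in zip(pixel, e2)]     # pixel - e2
    a = sum(di * ri for di, ri in zip(d, r)) / sum(di * di for di in d)
    return min(1.0, max(0.0, a))               # clamp to the physical range

veg   = [0.05, 0.08, 0.45, 0.50]    # hypothetical "vegetation" end-member
soil  = [0.20, 0.25, 0.30, 0.35]    # hypothetical "bare soil" end-member
mixed = [0.125, 0.165, 0.375, 0.425]  # an exact 50/50 mix of these spectra
a = unmix2(mixed, veg, soil)          # estimated vegetation fraction
```

With more end-members the abundances come from constrained least squares (non-negative, sum-to-one); the two-end-member case reduces to a projection onto the line between the end-members plus a clamp.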
Abstract:
Combining the philosophies of nonlinear model predictive control and approximate dynamic programming, a new suboptimal control design technique named model predictive static programming (MPSP) is presented in this paper; it is applicable to finite-horizon nonlinear problems with terminal constraints. The technique is computationally efficient and hence can possibly be implemented online. The effectiveness of the proposed method is demonstrated by designing an ascent-phase guidance scheme for a ballistic missile propelled by solid motors. A comparison with a conventional gradient method shows that the MPSP solution is quite close to the optimal solution.
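The flavour of MPSP — one forward propagation, a backward sensitivity computation, then a static (algebraic) minimum-norm control correction that meets the terminal constraint — can be sketched on a toy scalar linear system. This is our illustration under simplified assumptions, not the paper's formulation or its missile-guidance application:

```python
# Toy system: x_{k+1} = x_k + dt*(a*x_k + b*u_k), terminal constraint x_N = target.
a, b, dt, N = -0.5, 1.0, 0.1, 20
target = 2.0

def simulate(u, x0=0.0):
    x = x0
    for k in range(N):
        x = x + dt * (a * x + b * u[k])
    return x

u = [0.0] * N                      # initial control-history guess
for _ in range(3):                 # one correction suffices for a linear system
    err = target - simulate(u)     # terminal constraint violation
    # sensitivity of x_N to u_k: dt*b * (1 + a*dt)^(N-1-k), computed in closed
    # form here; MPSP obtains the analogous quantities by backward recursion
    C = [dt * b * (1 + a * dt) ** (N - 1 - k) for k in range(N)]
    lam = err / sum(ck * ck for ck in C)
    u = [uk + lam * ck for uk, ck in zip(u, C)]   # minimum-norm static update

assert abs(simulate(u) - target) < 1e-9
```

For nonlinear dynamics the propagate-sensitivities-correct cycle is repeated until the terminal error is small; the absence of a full dynamic re-optimisation at each step is what keeps the computation cheap.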
Abstract:
The random early detection (RED) technique has seen a lot of research over the years. However, the functional relationship between RED performance and its parameters, viz., the queue weight (w_q), marking probability (max_p), minimum threshold (min_th) and maximum threshold (max_th), is not analytically available. In this paper, we formulate a probabilistic constrained optimization problem by assuming a nonlinear relationship between the RED average queue length and its parameters. This problem involves all the RED parameters as optimization variables. We use the barrier and penalty function approaches for its solution. However, as above, the exact functional relationship between the barrier and penalty objective functions and the optimization variables is not known, although noisy samples of these are available for different parameter values. Thus, to obtain the gradient and Hessian of the objective, we use certain recently developed simultaneous perturbation stochastic approximation (SPSA) based estimates. We propose two four-timescale stochastic approximation algorithms based on certain modified second-order SPSA updates for finding the optimum RED parameters. We present the results of detailed simulation experiments conducted over different network topologies and network/traffic conditions, comparing the performance of our algorithms with variants of RED and a few other well-known active queue management (AQM) techniques from the literature.
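For reference, the classic RED control law in which the four tuned parameters appear (the values below are commonly cited defaults, not the optimised settings the paper derives):

```python
import random

def red_drop(q, state, w_q=0.002, min_th=5, max_th=15, max_p=0.1):
    """Return True if the arriving packet should be dropped/marked.
    q: instantaneous queue length; state['avg'] holds the EWMA average."""
    state['avg'] = (1 - w_q) * state['avg'] + w_q * q   # average queue length
    avg = state['avg']
    if avg < min_th:
        return False                                    # no early drops
    if avg >= max_th:
        return True                                     # force drops
    p_b = max_p * (avg - min_th) / (max_th - min_th)    # linear drop ramp
    return random.random() < p_b
```

Full RED additionally scales p_b by the count of packets since the last drop (p_a = p_b / (1 - count*p_b)) and handles idle periods; those details are omitted here. The sketch shows why the performance surface over (w_q, min_th, max_th, max_p) is hard to characterise analytically: the parameters interact through both the EWMA filter and the drop ramp.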
Abstract:
Purpose Ethnic entrepreneurship is, and always has been, a means of survival. However, there is limited literature on ethnic entrepreneurship in Australia and, therefore, limited understanding of ethnic entrepreneurs' motivations to become self-employed. The purpose of this paper is to report the factors influencing the decision to engage in self-employment, through case studies of members of Melbourne's Sri Lankan community informed by the mixed embeddedness approach. Design/methodology/approach The mixed embeddedness approach frames the study, in which the authors examine the motivations for business of five Sri Lankan entrepreneurs. Narratives are used to construct individual case studies, which are then analyzed in terms of the motivations for, resources used in and challenges faced on the entrepreneurial journey. Findings For these ethnic entrepreneurs, entrepreneurial activity results from a dynamic match between local market opportunities and the specific ethnic resources available to them at the time of founding. The self-employment decision was prompted not by a lack of human capital but by an inability to use that human capital in alternative means of employment at specific points in time. Moreover, the authors highlight the importance of social and cultural capital as resources used to overcome challenges on the entrepreneurial journey. Originality/value In this community, entrepreneurship was not a result of a lack of human capital but of how that capital was utilized, in combination with social and cultural capital, in the given opportunity structure. The mixed embeddedness approach enables the uncovering of how ethnic network ties were used, in light of the opportunities available, to build entrepreneurial activity.
Abstract:
This paper presents the programming of an FPGA (Field Programmable Gate Array) to emulate the dynamics of DC machines. The FPGA allows high-speed real-time simulation with high precision. The described design includes a block-diagram representation of the DC machine, which contains all arithmetic and logical operations. The real-time simulation of the machine in the FPGA is controlled through user interfaces: a keypad, an on-line LCD display, and a digital-to-analog converter. This approach provides emulation of an electrical machine simply by changing its parameters. A separately excited DC machine is implemented, and experimental results are presented.
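The DC machine dynamics that such an FPGA design discretises can be sketched in a few lines. The parameter values below are illustrative, not taken from the paper; the FPGA would implement the same update with fixed-point arithmetic blocks rather than floating point:

```python
# Separately excited DC machine, armature and mechanical equations:
#   armature:    V = R*i + L*di/dt + K*w
#   mechanical:  J*dw/dt = K*i - B*w - T_load
R, L, K, J, B = 1.0, 0.05, 0.5, 0.02, 0.01   # illustrative parameters
dt = 1e-4                                    # simulation time step

def step(i, w, V, T_load=0.0):
    di = (V - R * i - K * w) / L        # armature current derivative
    dw = (K * i - B * w - T_load) / J   # rotor speed derivative
    return i + dt * di, w + dt * dw     # forward-Euler update

i, w = 0.0, 0.0
for _ in range(200000):                 # 20 s of simulated time at V = 100
    i, w = step(i, w, V=100.0)
```

With T_load = 0 the simulated speed settles at the analytic steady state K*V / (K² + R*B), which is a quick sanity check on the arithmetic blocks; changing R, L, K, J, B re-parameterises the emulated machine, as the abstract describes.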
Abstract:
We have investigated structural transitions in poly(dG-dC) and poly(dG-Me5dC) in order to understand the exact role of cations in stabilizing left-handed helical structures in specific sequences, and the biological role, if any, of these structures. A novel temperature-dependent transition shows that a minor fluctuation in Na+ concentration at ambient temperature can bring about the B to Z transition. For the first time, we have observed a novel double transition in poly(dG-Me5dC) as the Na+ concentration is gradually increased. This suggests that a minor fluctuation in Na+ concentration, in conjunction with methylation, may transform small stretches of CG sequences from one conformational state to another; such stretches could probably serve as sites for regulation. Supercoiled form V DNA reconstituted from pBR322 and pβG plasmids has been studied as a model system, in order to understand the nature and role of the left-handed helical conformation in natural sequences. A large portion of the DNA in form V, obtained by reannealing the two complementary single-stranded circles, is forced to adopt a left-handed double-helical structure due to topological constraints (Lk = 0). Binding studies with a Z-DNA-specific antibody and spectroscopic studies confirm the presence of the left-handed Z-structure in pβG and pBR322 form V DNA. Cobalt hexammine chloride, which induces the Z-form in poly(dG-dC), stabilizes the Z-conformation in form V DNA even in non-alternating purine-pyrimidine sequences; a reverse effect is observed with ethidium bromide. Interestingly, both topoisomerase I and II (from wheat germ) act effectively on form V DNA to give rise to a species having an electrophoretic mobility on agarose gel similar to that of open circular (form II) DNA. Whether this molecule is formed as a result of the left-handed helical segments of form V DNA undergoing a transition to the right-handed B-form during topoisomerase action remains to be resolved.
Abstract:
A new method of specifying the syntax of programming languages, known as hierarchical language specifications (HLS), is proposed. Efficient parallel algorithms for parsing languages generated by HLS are presented. These algorithms run on an exclusive-read exclusive-write parallel random-access machine. They require O(n) processors and O(log² n) time, where n is the length of the string to be parsed. The most important feature of these algorithms is that they do not use a stack.
Abstract:
We explore the use of Gittins indices to search for near optimality in sequential clinical trials. Some adaptive allocation rules are proposed to achieve the following two objectives as far as possible: (i) to reduce the expected successes lost, (ii) to minimize the error probability at the end. Simulation results indicate the merits of the rules based on Gittins indices for small trial sizes. The rules are generalized to the case when neither of the response densities is known. Asymptotic optimality is derived for the constrained rules. A simple allocation rule is recommended for one-stage models. The simulation results indicate that it works better than both equal allocation and Bather's randomized allocation. We conclude with a discussion of possible further developments.
Abstract:
With many innovations in process technology, forging is establishing itself as a precision manufacturing process. As forging is used to produce complex shapes in difficult materials, it requires dies of complex configuration, made of high-strength, wear-resistant materials. Extensive research and development work is being undertaken internationally to analyse the stresses in forging dies and the flow of material in forged components. Identification of the location, size and shape of dead-metal zones is required for component design. Further, knowledge of the strain distribution in the flowing metal indicates the degree to which the component is being work-hardened. Such information is helpful in the selection of process parameters such as dimensional allowances and interface lubrication, as well as in the determination of post-forging operations such as heat treatment and machining. In the presently reported work, the effect of aperture width and initial specimen height on the strain distribution in the plane-strain extrusion forging of machined lead billets is observed: the distortion of grids inscribed on the face of the specimen gives the strain distribution. The stress-equilibrium approach is used to optimise a model of flow in extrusion forging, which is found to be effective in estimating the size of the dead-metal zone. The work carried out so far indicates that the methodology of using the stress-equilibrium approach to develop models of flow in closed-die forging can be a useful tool in component, process and die design.
Abstract:
Learning mathematics is a complex and dynamic process. In this paper, the authors adopt a semiotic framework (Yeh & Nason, 2004) and highlight programming as one of the main aspects of the semiosis or meaning-making for the learning of mathematics. During a 10-week teaching experiment, mathematical meaning-making was enriched when primary students wrote Logo programs to create 3D virtual worlds. The analysis of results found deep learning in mathematics, as well as in technology and engineering areas. This prompted a rethinking about the nature of learning mathematics and a need to employ and examine a more holistic learning approach for the learning in science, technology, engineering, and mathematics (STEM) areas.
Abstract:
Numerically discretized dynamic optimization problems having active inequality and equality path constraints that, along with the dynamics, induce locally high-index differential-algebraic equations often cause the optimizer to fail to converge or to produce degraded control solutions. In many applications, regularizing the numerically discretized problem in direct transcription schemes by perturbing the high-index path constraints helps the optimizer converge to useful control solutions. For complex engineering problems with many constraints, however, it is often difficult to find effective nondegenerate perturbations that produce useful solutions in some neighborhood of the correct solution. In this paper we describe a numerical discretization that regularizes the numerically consistent discretized dynamics and does not perturb the path constraints. For all values of the regularization parameter, the discretization remains numerically consistent with the dynamics and the path constraints specified in the original problem. The regularization is quantifiable in terms of the time step size of the mesh and the regularization parameter. For fully regularized systems the scheme converges linearly in the time step size. The method is illustrated with examples.