291 results for Convex piecewise-linear costs
Abstract:
This paper formulates an analytically tractable problem for the wake generated by a long flat bottom ship by considering the steady free surface flow of an inviscid, incompressible fluid emerging from beneath a semi-infinite rigid plate. The flow is considered to be irrotational and two-dimensional so that classical potential flow methods can be exploited. In addition, it is assumed that the draft of the plate is small compared to the depth of the channel. The linearised problem is solved exactly using a Fourier transform and the Wiener-Hopf technique, and it is shown that there is a family of subcritical solutions characterised by a train of sinusoidal waves on the downstream free surface. The amplitude of these waves decreases as the Froude number increases. Supercritical solutions are also obtained, but, in general, these have infinite vertical velocities at the trailing edge of the plate. Consideration of further terms in the expansions suggests a way of canceling the singularity for certain values of the Froude number.
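For reference, the subcritical/supercritical distinction above is governed by the depth-based Froude number; the definition below is the standard one and is not quoted from the paper itself:

```latex
F = \frac{U}{\sqrt{gH}}, \qquad
\begin{cases}
F < 1 & \text{(subcritical): downstream sinusoidal wave train,}\\
F > 1 & \text{(supercritical): no downstream wave train,}
\end{cases}
```

where \(U\) is the upstream flow speed and \(H\) is the channel depth. The paper's subcritical family corresponds to \(F < 1\), with wave amplitude decreasing as \(F\) increases.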
Abstract:
This paper is a report of students' responses to instruction which was based on the use of concrete representations to solve linear equations. The sample consisted of 21 Grade 8 students from a middle-class suburban state secondary school with a reputation for high academic standards and innovative mathematics teaching. The students were interviewed before and after instruction. Interviews and classroom interactions were observed and videotaped. A qualitative analysis of the responses revealed that students did not use the materials in solving problems. The increased processing load caused by concrete representations is hypothesised as a reason.
Abstract:
This work reviews the rationale and processes for raising revenue and allocating funds to perform information-intensive activities that are pertinent to the work of democratic government. ‘Government of the people, by the people, for the people’ expresses the idea that democratic government has no higher authority than the people who agree to be bound by its rules. Democracy depends on continually learning how to develop understandings and agreements that can sustain the voting majorities on which democratic law-making and collective action depend. The objective, expressed in constitutional terms, is to deliver ‘peace, order and good government’. Meeting this objective requires a collective intellectual authority that can understand what is possible, and a collective moral authority to understand what ought to happen in practice. Facts of life determine that a society needs to retain its collective competence despite a continual turnover of its membership as people die but life goes on. Retaining this ‘collective competence’ in matters of self-government depends on each new generation:
• acquiring a collective knowledge of how to produce the goods and services needed to sustain a society and its capacity for self-government;
• learning how to defend society diplomatically and militarily against external forces to prevent overthrow of its self-governing capacity; and
• learning how to defend society against divisive internal forces to preserve the authority of representative legislatures, allow peaceful dispute resolution and maintain social cohesion.
Abstract:
A review of the literature related to issues involved in irrigation-induced agricultural development (IIAD) reveals that: (1) the magnitude, sensitivity and distribution of social welfare of IIAD is not fully analysed; (2) the impacts of excessive pesticide use on farmers’ health are not adequately explained; (3) no analysis estimates the relationship between farm-level efficiency and overuse of agro-chemical inputs under imperfect markets; and (4) the method of incorporating groundwater extraction costs is misleading. This PhD thesis investigates these issues by using primary data, along with secondary data from Sri Lanka. The overall findings of the thesis can be summarised as follows. First, the thesis demonstrates that Sri Lanka has gained a positive welfare change as a result of introducing new irrigation technology. The change in the consumer surplus is Rs. 48,236 million, while the change in the producer surplus is Rs. 14,274 million between 1970 and 2006. The results also show that the long-run benefits and costs of IIAD depend critically on the magnitude of the expansion of the irrigated area, as well as the competition faced by traditional farmers (agricultural crowding-out effects). The traditional sector’s ability to compete with the modern sector depends on productivity improvements, reducing production costs and future structural changes (spillover effects). Second, the thesis findings on pesticides used in agriculture show that, on average, a farmer incurs a cost of approximately Rs. 590 to 800 per month during a typical cultivation period due to exposure to pesticides. It is shown that the value of the average loss in earnings per farmer for the ‘hospitalised’ sample is Rs. 475 per month, while it is approximately Rs. 345 per month for the ‘general’ farmers group during a typical cultivation season. However, the average willingness to pay (WTP) to avoid exposure to pesticides is approximately Rs. 950 and Rs. 
620 for the ‘hospitalised’ and ‘general’ farmers’ samples respectively. The estimated percentage contributions to WTP from health costs, lost earnings, mitigating expenditure, and disutility are 29, 50, 5 and 16 per cent respectively for hospitalised farmers, while they are 32, 55, 8 and 5 per cent respectively for ‘general’ farmers. It is also shown that, given market imperfections for most agricultural inputs, farmers are overusing pesticides with the expectation of higher future returns. This has led to an increase in inefficiency in farming practices which is not understood by the farmers. Third, it is found that various groundwater depletion studies in the economics literature have provided misleading optimal water extraction quantity levels. This is due to a failure to incorporate all production costs in the relevant models. It is only by incorporating quality changes in addition to quantity deterioration that it is possible to derive socially optimal levels. Empirical results clearly show that the benefits per hectare per month, considering both the avoidance costs of deepening agro-wells by five feet from the existing average and the avoidance costs of maintaining the water salinity level at 1.8 mmhos/cm, are approximately Rs. 4,350 for farmers in the Anuradhapura district and Rs. 5,600 for farmers in the Matale district.
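As a quick arithmetic check on the WTP decomposition quoted above (figures taken from the abstract; the variable names are mine), the four percentage contributions sum to 100 for each group, and the lost-earnings share of the hospitalised WTP reproduces the Rs. 475 per month figure:

```python
# WTP per month (Rs.) and percentage contributions quoted in the abstract:
# (health costs, lost earnings, mitigating expenditure, disutility)
wtp = {"hospitalised": 950, "general": 620}
shares = {
    "hospitalised": (29, 50, 5, 16),
    "general": (32, 55, 8, 5),
}

for group, parts in shares.items():
    assert sum(parts) == 100                  # shares form a full decomposition
    earnings = wtp[group] * parts[1] / 100    # rupee value of the lost-earnings share
    print(group, round(earnings))
```

This gives Rs. 475 for hospitalised farmers, matching the abstract exactly, and Rs. 341 for general farmers, close to the abstract's approximate Rs. 345.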
Abstract:
Background This economic evaluation reports the results of a detailed study of the cost of major trauma treated at Princess Alexandra Hospital (PAH), Australia. Methods A bottom-up approach was used to collect and aggregate the direct and indirect costs generated by a sample of 30 inpatients treated for major trauma at PAH in 2004. Major trauma was defined as an admission for Multiple Significant Trauma with an Injury Severity Score >15. Direct and indirect costs were amalgamated from three sources: (1) PAH inpatient costs, (2) Medicare Australia, and (3) a survey instrument. Inpatient costs included the initial episode of inpatient care, including clinical and outpatient services, and any subsequent re-presentations for ongoing related medical treatment. Medicare Australia provided an itemized list of pharmaceutical and ambulatory goods and services. The survey instrument collected out-of-pocket expenses and the opportunity cost of employment forgone. Inpatient data obtained from a publicly funded trauma registry were used to control for any potential bias in our sample. Costs are reported in Australian dollars for 2004 and 2008. Results The average direct and indirect costs of major trauma incurred up to 1 year postdischarge were estimated to be A$78,577 and A$24,273, respectively. The aggregate costs, for the State of Queensland, were estimated to range from A$86.1 million to A$106.4 million in 2004 and from A$135 million to A$166.4 million in 2008. Conclusion These results demonstrate that (1) the costs of major trauma are significantly higher than previously reported estimates and (2) the cost of readmissions increased inpatient costs by 38.1%.
Abstract:
This paper develops a general theory of validation gating for non-linear non-Gaussian models. Validation gates are used in target tracking to cull very unlikely measurement-to-track associations, before remaining association ambiguities are handled by a more comprehensive (and expensive) data association scheme. The essential property of a gate is to accept a high percentage of correct associations, thus maximising track accuracy, but provide a sufficiently tight bound to minimise the number of ambiguous associations. For linear Gaussian systems, the ellipsoidal validation gate is standard, and possesses the statistical property whereby a given threshold will accept a certain percentage of true associations. This property does not hold for non-linear non-Gaussian models. As a system departs from linear-Gaussian, the ellipsoid gate tends to reject a higher than expected proportion of correct associations and permit an excess of false ones. In this paper, the concept of the ellipsoidal gate is extended to permit correct statistics for the non-linear non-Gaussian case. The new gate is demonstrated by a bearing-only tracking example.
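For context, the standard linear-Gaussian ellipsoidal gate that the paper generalises can be sketched in a few lines; this is a generic illustration with made-up numbers, not code from the paper. For a 2-D measurement the chi-square gate threshold has the closed form gamma = -2*ln(1 - P_G):

```python
import numpy as np

def gate_2d(z, z_pred, S, p_gate=0.99):
    """Ellipsoidal validation gate for a 2-D linear-Gaussian measurement.

    Accepts z when the squared Mahalanobis distance of the innovation
    lies inside the chi-square gate. For 2 degrees of freedom the
    chi-square quantile has the closed form gamma = -2*ln(1 - p_gate).
    """
    nu = np.asarray(z, float) - np.asarray(z_pred, float)  # innovation
    d2 = float(nu @ np.linalg.solve(S, nu))                # squared Mahalanobis distance
    gamma = -2.0 * np.log(1.0 - p_gate)                    # ~9.21 at p_gate = 0.99
    return d2 <= gamma

S = np.eye(2)                                 # illustrative innovation covariance
print(gate_2d([1.0, 1.0], [0.0, 0.0], S))     # d2 = 2.0, inside the gate
print(gate_2d([4.0, 4.0], [0.0, 0.0], S))     # d2 = 32.0, rejected
```

When the model departs from linear-Gaussian, the acceptance probability of this gate no longer matches p_gate, which is exactly the failure mode the paper addresses.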
Abstract:
Estimating and predicting the degradation processes of engineering assets is crucial for reducing costs and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided when an asset is still in a decent health state. A practical difficulty in condition-based maintenance (CBM) is that degradation indicators extracted from CM data can only partially reveal asset health states in most situations. Underestimating this uncertainty in the relationships between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using these indicators that can only partially reveal health states. However, existing state space models that describe asset degradation processes largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires that failures and inspections only happen at fixed intervals. The discrete state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes in most engineering assets. This research proposes a Gamma-based state space model that does not have discrete time, discrete state, linear and Gaussian assumptions to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. 
In addition, this research also develops a continuous state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model and is under various maintenance strategies. Optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies in MATLAB are performed; case studies using data from an accelerated life test of a gearbox and from the liquefied natural gas industry are also conducted. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately. The results also show that the proposed Gamma-based state space model has a better fit than linear and Gaussian state space models when used to process the monotonically increasing degradation data in the accelerated life test of the gearbox. Furthermore, both the simulation studies and the case studies show that the prediction algorithm based on the Gamma-based state space model can identify the mean value and confidence interval of asset remaining useful lives accurately. In addition, the simulation study shows that the proposed maintenance strategy optimisation method based on the POSMDP is more flexible than one that assumes a predetermined strategy structure and uses renewal theory. Moreover, the simulation study also shows that the proposed maintenance optimisation method can obtain more cost-effective strategies than a recently published maintenance strategy optimisation method, by optimising the next maintenance activity and the waiting time until the next maintenance activity simultaneously.
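The thesis's actual model, EM estimator and POSMDP solver are not reproduced here; the sketch below only illustrates the underlying idea of Monte Carlo remaining-useful-life estimation for a monotone gamma-increment degradation process. All parameter values, the threshold, and the function name are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rul(x0, threshold, shape_rate, scale, dt, n_paths=2000, t_max=500.0):
    """Monte Carlo remaining-useful-life estimate for a stationary gamma process.

    Degradation increments over dt are Gamma(shape_rate*dt, scale), so each
    path is monotonically increasing (irreversible degradation). The RUL of
    a path is the first time its level crosses the failure threshold.
    """
    n_steps = int(t_max / dt)
    ruls = np.full(n_paths, t_max)
    for i in range(n_paths):
        x = x0
        for k in range(1, n_steps + 1):
            x += rng.gamma(shape_rate * dt, scale)  # monotone gamma increment
            if x >= threshold:
                ruls[i] = k * dt                    # first-passage time
                break
    return ruls.mean(), np.percentile(ruls, [5, 95])

mean_rul, ci = simulate_rul(x0=2.0, threshold=10.0, shape_rate=1.0, scale=0.5, dt=1.0)
print(mean_rul, ci)
```

The mean and the 5th/95th percentiles of the simulated first-passage times play the role of the RUL point estimate and confidence interval discussed above.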
Abstract:
In 2008, a three-year pilot ‘pay for performance’ (P4P) program, known as the ‘Clinical Practice Improvement Payment’ (CPIP), was introduced into Queensland Health (QHealth). QHealth is a large public health sector provider of acute, community, and public health services in Queensland, Australia. The organisation has recently embarked on a significant reform agenda, including a review of existing funding arrangements (Duckett et al., 2008). Partly in response to this reform agenda, a casemix funding model has been implemented to reconnect health care funding with outcomes. CPIP was conceptualised as a performance-based scheme that rewarded quality with financial incentives. This is the first time such a scheme has been implemented in the public health sector in Australia with a focus on rewarding quality, and it is unique in that it has a large state-wide focus and includes 15 Districts. CPIP initially targeted five acute and community clinical areas: Mental Health, Discharge Medication, Emergency Department, Chronic Obstructive Pulmonary Disease, and Stroke. The CPIP scheme was designed around key concepts, including the identification of clinical indicators that met the set criteria of: high disease burden, a well-defined single diagnostic group or intervention, significant variations in clinical outcomes and/or practices, a good evidence base, and clinician control and support (Ward, Daniels, Walker & Duckett, 2007). This evaluative research targeted Phase One of the implementation of the CPIP scheme, from January 2008 to March 2009. A formative evaluation utilising a mixed methodology and complementarity analysis was undertaken. The research involved three research questions and aimed to determine the knowledge, understanding, and attitudes of clinicians; identify improvements to the design, administration, and monitoring of CPIP; and determine the financial and economic costs of the scheme. 
Three key studies were undertaken to ascertain responses to the key research questions. Firstly, a survey of clinicians was undertaken to examine their levels of knowledge and understanding and their attitudes to the scheme. Secondly, the study sought to apply Statistical Process Control (SPC) to the process indicators to assess whether this enhanced the scheme, and a third study examined a simple economic cost analysis. The CPIP survey elicited 192 clinician respondents. Over 70% of these respondents were supportive of the continuation of the CPIP scheme. This finding was also supported by the results of a quantitative attitude survey, which identified positive attitudes in 6 of the 7 domains, including impact, awareness and understanding, and clinical relevance, all scored positive across the combined respondent group. SPC as a trending tool may play an important role in the early identification of indicator weakness in the CPIP scheme. This evaluative research study supports a previously identified need in the literature for a phased introduction of pay for performance (P4P) type programs. It further highlights the value of undertaking a formal risk assessment of clinician, management, and systemic levels of literacy and competency with the measurement and monitoring of quality prior to a phased implementation. This phasing can then be guided by a P4P Design Variable Matrix, which provides a selection of program design options such as indicator targets and payment mechanisms. It became evident that a clear process is required to standardise how clinical indicators evolve over time and to direct movement towards more rigorous ‘pay for performance’ targets and the development of an optimal funding model. Use of this matrix will enable the scheme to mature and build the literacy and competency of clinicians and the organisation as implementation progresses. 
Furthermore, the research identified that CPIP created a spotlight on clinical indicators, and incentive payments of over $5 million, from a potential $10 million, were secured across the five clinical areas in the first 15 months of the scheme. This indicates that quality was rewarded in the new QHealth funding model and that, despite issues being identified with the payment mechanism, funding was distributed. The economic model used identified a relatively low cost of reporting (under $8,000) as opposed to funds secured of over $300,000 for mental health, as an example. Movement to a full cost-effectiveness study of CPIP is supported. Overall, the introduction of the CPIP scheme into QHealth has been a positive and effective strategy for engaging clinicians in quality and has been the catalyst for the identification and monitoring of valuable clinical process indicators. This research has highlighted that clinicians are supportive of the scheme in general; however, there are some significant risks, including the functioning of the CPIP payment mechanism. Given clinician support for the use of a pay-for-performance methodology in QHealth, the CPIP scheme has the potential to be a powerful addition to a multi-faceted suite of quality improvement initiatives within QHealth.
Abstract:
Soil organic carbon (C) sequestration rates based on the Intergovernmental Panel on Climate Change (IPCC) methodology were combined with local economic data to simulate the economic potential for C sequestration in response to conservation tillage in the six agro-ecological zones within the Southern Region of the Australian grains industry. The net C sequestration rate over 20 years for the Southern Region (which includes discounting for associated greenhouse gases) is estimated to be 3.6 or 6.3 Mg C/ha after converting to either minimum or no-tillage practices, respectively, with no-till practices estimated to return 75% more carbon on average than minimum tillage. The highest net gains in C per ha are realised when converting from conventional to no-tillage practices in the high-activity clay soils of the High Rainfall and Wimmera agro-ecological zones. On the basis of the total area available for change, the Slopes agro-ecological zone offers the highest net returns, potentially sequestering an additional 7.1 Mt C under a no-tillage scenario over 20 years. The economic analysis was summarised as C supply curves for each of the six zones, expressing the total additional C accumulated over 20 years for a price per Mg C sequestered ranging from zero to AU$200. For a price of $50/Mg C, a total of 427,000 Mg C would be sequestered over 20 years across the Southern Region, <5% of the simulated C sequestration potential of 9.1 Mt for the region. The Wimmera and Mid-North offer the largest gains in C under minimum tillage over 20 years of all zones for all C prices. For the no-tillage scenario, at a price of $50/Mg C, 1.74 Mt C would be sequestered over 20 years across the Southern Region, <10% of the simulated C sequestration potential of 18.6 Mt for the region over 20 years. The Slopes agro-ecological zone offers the best return in C over 20 years under no-tillage for all C prices. The Mallee offers the least return for both minimum and no-tillage scenarios. 
At a price of $200/Mg C, the transition from conventional tillage to minimum or no-tillage practices will only realise 19% and 33%, respectively, of the total biogeochemical sequestration potential of crop and pasture systems of the Southern Region over a 20-year period.
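The "<5%" and "<10%" shares quoted above can be verified from the abstract's own numbers (a trivial arithmetic check; the variable names are mine):

```python
# Figures quoted in the abstract, at a price of $50/Mg C over 20 years
minimum_till_seq = 427_000        # Mg C sequestered, minimum tillage
minimum_till_potential = 9.1e6    # Mg C simulated potential, minimum tillage
no_till_seq = 1.74e6              # Mg C sequestered, no-tillage
no_till_potential = 18.6e6        # Mg C simulated potential, no-tillage

min_share = minimum_till_seq / minimum_till_potential
no_share = no_till_seq / no_till_potential
print(f"minimum tillage: {min_share:.1%}, no-tillage: {no_share:.1%}")
```

This gives roughly 4.7% and 9.4% respectively, consistent with the stated bounds.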
Abstract:
Recently, a constraints-led approach has been promoted as a framework for understanding how children and adults acquire movement skills for sport and exercise (see Davids, Button & Bennett, 2008; Araújo et al., 2004). The aim of a constraints-led approach is to identify the nature of the interacting constraints that influence skill acquisition in learners. In this chapter the main theoretical ideas behind a constraints-led approach are outlined to assist practical applications by sports practitioners and physical educators in a non-linear pedagogy (see Chow et al., 2006, 2007). To achieve this goal, this chapter examines implications for some of the typical challenges facing sport pedagogists and physical educators in the design of learning programmes.
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space: classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labeled part of the data, one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
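Solving the SDP itself requires a semidefinite solver and is beyond a short sketch, but a common parameterisation in this line of work, with the kernel matrix a convex combination of fixed base kernels, which preserves positive semidefiniteness, can be illustrated with toy data (the function name, data and weights are invented):

```python
import numpy as np

def combined_kernel(kernels, mu):
    """Build K = sum_i mu_i * K_i, a convex combination of base kernels.

    In the SDP approach the weights mu are what gets optimised, subject to
    K remaining positive semidefinite; here we only build one candidate K
    and check its eigenvalues are non-negative.
    """
    mu = np.asarray(mu, float)
    assert np.all(mu >= 0) and abs(mu.sum() - 1.0) < 1e-12
    K = sum(m * Ki for m, Ki in zip(mu, kernels))
    return K, np.linalg.eigvalsh(K)

X = np.array([[0.0], [1.0], [2.0]])
K_lin = X @ X.T                          # linear kernel on three 1-D points
K_rbf = np.exp(-0.5 * (X - X.T) ** 2)    # Gaussian kernel on the same points
K, ev = combined_kernel([K_lin, K_rbf], [0.3, 0.7])
print(ev)                                # all eigenvalues non-negative
```

A convex combination of PSD matrices is PSD, which is why this parameterisation keeps the learning problem inside the feasible set of valid kernel matrices.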
Abstract:
Log-linear and maximum-margin models are two commonly-used methods in supervised machine learning, and are frequently used in structured prediction problems. Efficient learning of parameters in these models is therefore an important problem, and becomes a key factor when learning from very large data sets. This paper describes exponentiated gradient (EG) algorithms for training such models, where EG updates are applied to the convex dual of either the log-linear or max-margin objective function; the dual in both the log-linear and max-margin cases corresponds to minimizing a convex function with simplex constraints. We study both batch and online variants of the algorithm, and provide rates of convergence for both cases. In the max-margin case, O(1/ε) EG updates are required to reach a given accuracy ε in the dual; in contrast, for log-linear models only O(log(1/ε)) updates are required. For both the max-margin and log-linear cases, our bounds suggest that the online EG algorithm requires a factor of n less computation to reach a desired accuracy than the batch EG algorithm, where n is the number of training examples. Our experiments confirm that the online algorithms are much faster than the batch algorithms in practice. We describe how the EG updates factor in a convenient way for structured prediction problems, allowing the algorithms to be efficiently applied to problems such as sequence learning or natural language parsing. We perform extensive evaluation of the algorithms, comparing them to L-BFGS and stochastic gradient descent for log-linear models, and to SVM-Struct for max-margin models. The algorithms are applied to a multi-class problem as well as to a more complex large-scale parsing task. In all these settings, the EG algorithms presented here outperform the other methods.
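A single EG update of the kind analysed above, multiplicative in the dual variables followed by renormalisation so the iterate always satisfies the simplex constraints, can be sketched as follows; the toy objective and step size are illustrative and not taken from the paper:

```python
import numpy as np

def eg_update(w, grad, eta):
    """One exponentiated-gradient step on the probability simplex:
    w_i <- w_i * exp(-eta * grad_i), renormalised so sum(w) = 1."""
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

# Toy problem: minimise f(w) = 0.5 * ||w - t||^2 over the simplex.
# Its gradient is (w - t), and the minimiser is t itself since t is feasible.
t = np.array([0.7, 0.2, 0.1])
w = np.full(3, 1.0 / 3.0)     # start at the uniform distribution
for _ in range(500):
    w = eg_update(w, w - t, eta=0.5)
print(w)                      # converges toward t
```

Because the update is multiplicative and then normalised, every iterate stays strictly inside the simplex, which is what makes EG natural for the simplex-constrained duals of the log-linear and max-margin objectives.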