428 results for "Default penalties"
Abstract:
We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
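The selection rule described above can be sketched in a few lines: pick the model minimizing the empirical risk plus a complexity penalty. This is a minimal illustration only; the risk and penalty values below are made-up numbers, not results from the paper.

```python
# Complexity-penalized model selection: choose the model m minimizing
# hat_R(m) + pen(m). All numeric values below are illustrative.

def select_model(empirical_risks, penalties):
    """Return the index of the model minimizing risk + penalty."""
    scores = [r + p for r, p in zip(empirical_risks, penalties)]
    return min(range(len(scores)), key=scores.__getitem__)

# For nested models, empirical risk falls while the penalty grows.
risks = [0.30, 0.18, 0.15, 0.14]
pens = [0.01, 0.03, 0.08, 0.20]
best = select_model(risks, pens)
print(best)  # model 1 minimizes 0.18 + 0.03 = 0.21
```

The paper's contribution concerns what `pens` may be: for nested models it can be a tight bound on the estimation error rather than the (larger) maximal deviation between empirical and true risks.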
Abstract:
In this paper we advocate for the continued need for consumer protection and fair trading regulation, even in competitive markets. For the purposes of this paper a ‘competitive market’ is defined as one that has low barriers to entry and exit, with homogeneous products and services and numerous suppliers. Whilst competition is an important tool for providing consumer benefits, it will not be sufficient to protect at least some consumers, particularly vulnerable, low-income consumers. For this reason, we argue, setting competition as the ‘end goal’ and assuming that consumer protection and consumer benefits will always follow is a flawed regulatory approach. The ‘end goal’ should surely be consumer protection and fair markets, and a combination of competition law and consumer protection law should be applied in order to achieve those goals.
Abstract:
A classical condition for fast learning rates is the margin condition, first introduced by Mammen and Tsybakov. We tackle in this paper the problem of adaptivity to this condition in the context of model selection, in a general learning framework. Actually, we consider a weaker version of this condition that allows one to take into account that learning within a small model can be much easier than within a large one. Requiring this “strong margin adaptivity” makes the model selection problem more challenging. We first prove, in a general framework, that some penalization procedures (including local Rademacher complexities) exhibit this adaptivity when the models are nested. Contrary to previous results, this holds with penalties that only depend on the data. Our second main result is that strong margin adaptivity is not always possible when the models are not nested: for every model selection procedure (even a randomized one), there is a problem for which it does not demonstrate strong margin adaptivity.
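The data-dependent penalties mentioned above include local Rademacher complexities. As a simplified, non-localized sketch of the underlying quantity: the empirical Rademacher complexity of a finite function class can be estimated by Monte Carlo over random sign vectors. Function and variable names here are my own; the paper uses localized refinements of this idea.

```python
import random

def empirical_rademacher(loss_vectors, n_draws=2000, seed=0):
    """Monte-Carlo estimate of the empirical Rademacher complexity
    E_sigma sup_f (1/n) sum_i sigma_i * f(x_i) of a *finite* class,
    where loss_vectors[f][i] is the value of function f on point i."""
    rng = random.Random(seed)
    n = len(loss_vectors[0])
    total = 0.0
    for _ in range(n_draws):
        sigma = [rng.choice((-1, 1)) for _ in range(n)]
        total += max(sum(s * v for s, v in zip(sigma, f)) / n
                     for f in loss_vectors)
    return total / n_draws
```

A richer class (more loss vectors) yields a larger estimate, which is what lets such quantities serve as complexity penalties.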
Abstract:
This book explores the application of concepts of fiduciary duty or public trust in responding to the policy and governance challenges posed by policy problems that extend over multiple terms of government or even, as in the case of climate change, human generations. The volume brings together a range of perspectives including leading international thinkers on questions of fiduciary duty and public trust, Australia's most prominent judicial advocate for the application of fiduciary duty, top law scholars from several major universities, expert commentary from an influential climate policy think-tank and the views of long-serving highly respected past and present parliamentarians. The book presents a detailed examination of the nature and extent of fiduciary duty, looking at the example of Australia and having regard to developments in comparable jurisdictions. It identifies principles that could improve the accountability of political actors for their responses to major problems that may extend over multiple electoral cycles.
Abstract:
To facilitate the implementation of workflows, enterprise and workflow system vendors typically provide workflow templates for their software. Each of these templates depicts a variant of how the software supports a certain business process, allowing the user to save the effort of creating models and links to system components from scratch by selecting and activating the appropriate template. A combination of the strengths of different templates is, however, only achievable by manually adapting the templates, which is cumbersome. We therefore suggest in this paper combining different workflow templates into a single configurable workflow template. Using the workflow modeling language of SAP’s WebFlow engine, we show how such a configurable workflow modeling language can be created by identifying the configurable elements in the original language. Requirements imposed on configurations inhibit invalid configurations. Based on a default configuration, such configurable templates can be used as easily as the traditional templates. The suggested approach is also applicable to other workflow modeling languages.
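The configurable-template idea above can be sketched abstractly: each element carries a default activation and requirements that inhibit invalid configurations. The element names below are invented for illustration; the paper works with SAP WebFlow constructs, not this toy model.

```python
# Toy configurable template: defaults plus requirements. Selecting
# only the defaults reproduces one of the original templates.

TEMPLATE = {
    "check_stock":  {"default": True,  "requires": set()},
    "credit_check": {"default": False, "requires": {"check_stock"}},
    "ship_goods":   {"default": True,  "requires": {"check_stock"}},
}

def configure(template, overrides=None):
    """Apply overrides to the default configuration, rejecting any
    configuration that violates an element's requirements."""
    cfg = {name: props["default"] for name, props in template.items()}
    cfg.update(overrides or {})
    active = {name for name, on in cfg.items() if on}
    for name in active:
        missing = template[name]["requires"] - active
        if missing:
            raise ValueError(f"{name} requires {sorted(missing)}")
    return cfg
```

With no overrides, `configure(TEMPLATE)` behaves like a traditional template; overrides let a user combine strengths of several variants while the requirement check blocks invalid combinations.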
Abstract:
Traffic simulation models tend to have their own data input and output formats. In an effort to standardise the input for traffic simulations, we introduce in this paper a set of data marts that aim to serve as a common interface between the necessary data, stored in dedicated databases, and the software packages that require the input in a certain format. The data marts are developed based on real-world objects (e.g. roads, traffic lights, controllers) rather than abstract models and hence contain all necessary information, which can be transformed by the importing software package to its needs. The paper contains a full description of the data marts for network coding, simulation results, and scenario management, which have been discussed with industry partners to ensure sustainability.
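The core idea above is storing real-world objects once in a neutral form and letting each simulation package transform them to its own input format. A minimal sketch follows; all class and field names are assumptions for illustration, not the schema described in the paper.

```python
from dataclasses import dataclass

# Neutral representation of real-world network objects, stored once
# in the data mart (field names are illustrative assumptions).

@dataclass
class Road:
    road_id: str
    from_node: str
    to_node: str
    length_m: float
    lanes: int

def export_for_simulator(roads):
    """Transform the neutral objects into one (hypothetical)
    package-specific input format."""
    return [{"id": r.road_id, "length": r.length_m, "lanes": r.lanes}
            for r in roads]
```

The transformation lives with the importing package, so the same data mart can feed several simulators without duplicating the network coding.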
Abstract:
Clinical pathways for end-of-life care management are used widely around the world and have been regarded as the gold standard. The aim of this review was to assess the effects of end-of-life care pathways (EOLCP), compared with usual care (no pathway) or with care guided by a different end-of-life care pathway, across all healthcare settings (e.g. hospitals, residential aged care facilities, community). We searched the Cochrane Register of Controlled Trials (CENTRAL), the Pain, Palliative and Supportive Care Review group specialised register, MEDLINE, EMBASE, review articles and reference lists of relevant articles. The search was carried out in September 2009. All randomised controlled trials (RCTs), quasi-randomised trials or high-quality controlled before-and-after studies comparing use versus non-use of an EOLCP in caring for the dying were considered for inclusion. The search identified 920 potentially relevant titles, but no studies met the criteria for inclusion in the review. Without further available evidence, recommendations for the use of end-of-life pathways in caring for the dying cannot be made. Given recent concerns regarding the large-scale roll-out of EOLCP despite this lack of evidence, nurses should report any safety concerns or adverse effects associated with such pathways.
Abstract:
Travel time is an important network performance measure, and it quantifies congestion in a manner easily understood by all transport users. In urban networks, travel time estimation is challenging for a number of reasons, such as fluctuations in traffic flow caused by traffic signals and significant flow to/from mid-link sinks/sources. The classical analytical procedure utilizes cumulative plots at upstream and downstream locations to estimate travel time between the two locations. In this paper, we discuss the issues and challenges with the classical analytical procedure, such as its vulnerability to non-conservation of flow between the two locations. The complexity with respect to exit-movement-specific travel time is also discussed. Recently, we developed a methodology utilising the classical procedure to estimate average travel time and its statistics on urban links (Bhaskar, Chung et al. 2010), in which detector, signal and probe vehicle data are fused. In this paper we extend the methodology to route travel time estimation and test its performance using simulation. The originality lies in defining cumulative plots for each exit turning movement utilising a historical database that is self-updated after each estimation. The performance is also compared with a method based solely on probes (Probe-only). The performance of the proposed methodology has been found insensitive to different route flows, with an average accuracy of more than 94% given one probe per estimation interval, which is an increase in accuracy of more than 5% with respect to the Probe-only method.
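The classical cumulative-plot procedure mentioned above reads travel time as the horizontal distance between the upstream and downstream cumulative count curves. A minimal sketch, assuming FIFO and conservation of flow between the two detectors (the very assumptions whose violation the paper discusses); the curves below are toy data.

```python
import bisect

def time_at_count(curve, n):
    """First time at which the cumulative count reaches n.
    curve: list of (time, cumulative_count), both non-decreasing."""
    counts = [c for _, c in curve]
    i = bisect.bisect_left(counts, n)
    return curve[i][0]

def classical_travel_time(upstream, downstream, n):
    """Horizontal distance between upstream and downstream cumulative
    plots at count n: the classical travel time estimate."""
    return time_at_count(downstream, n) - time_at_count(upstream, n)

up = [(0, 0), (10, 5), (20, 10)]     # (time, cumulative count) upstream
down = [(30, 0), (40, 5), (50, 10)]  # same counts observed downstream
print(classical_travel_time(up, down, 5))  # 30
```

Mid-link sinks/sources break the count correspondence between the two curves, which is why the paper fuses detector, signal and probe vehicle data instead of relying on this estimate alone.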
Abstract:
This paper investigates the application of Particle Swarm Optimisation (PSO) coupled with game strategies to a complex aerodynamic design problem: the High Lift System (HLS). Two optimisation methods are used: the first is a standard PSO based on Pareto dominance; the second hybridises PSO with the well-known Nash game strategy (Hybrid-PSO). These optimisation techniques are coupled to the pre/post-processor GiD, which provides unstructured meshes during the optimisation procedure, and to the transonic analysis software PUMI. The computational efficiency and design quality obtained by PSO and Hybrid-PSO are compared. The numerical results for the multi-objective HLS design optimisation clearly show the benefits of hybridising PSO with the Nash game and make the above methodology promising for solving other, more complex multi-physics optimisation problems in aeronautics.
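The Pareto-dominance relation that drives the standard PSO variant above is simple to state. A minimal sketch for minimisation problems; the objective vectors used are toy values, unrelated to the HLS test case.

```python
def dominates(a, b):
    """Pareto dominance for minimisation: a dominates b if it is no
    worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

In a Pareto-based PSO, non-dominated particles form the archive that guides the swarm; the Nash-game hybrid instead splits the variables among players, each optimising its own objective given the others' strategies.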
Abstract:
We modified a commercial Hartmann-Shack aberrometer and used it to measure ocular aberrations across the central 42° horizontal × 32° vertical visual field of five young emmetropic subjects. Some Zernike aberration coefficients showed field distributions similar to the field dependence predicted by Seidel theory (astigmatism, oblique astigmatism, horizontal coma, vertical coma), but defocus did not demonstrate such similarity.
Abstract:
Statistical and anecdotal evidence suggests that truancy is a significant problem for Australian schools. This paper considers the efficacy of legislative attempts to curb truancy, focussing in particular on the Queensland experience. Both Queensland legislation and the Commonwealth Improving School Enrolment and Attendance Through Welfare Reform Measure (SEAM) pilot program are explained and evaluated. The paper considers in particular the utility of parental responsibility strategies as a response to truancy: under the Education (General Provisions) Act 2006 (Queensland), parents of persistent truants may be prosecuted and fined; under the SEAM initiative, parents may have their social security payments suspended. Despite the availability of these seemingly draconian penalties, there is a reluctance, in practice, to hold parents accountable. The paper attempts to explain this reluctance and asks whether parental responsibility legislation can deliver a solution to truancy.