399 results for default probability


Relevance: 10.00%

Abstract:

The complex transition from convict to free labour influenced state intervention in the employment relationship, and initiated the first minimum labour standards in Australia in 1828. Since then, two principal sets of tensions have affected the enforcement of such standards: tensions between government and employers, and tensions between the major political parties over industrial and economic issues. This article argues that these tensions have resulted in a sustained legacy affecting minimum labour standards’ enforcement in Australia. The article outlines broad historical developments and contexts of minimum labour standards’ enforcement in Australia since 1828, with more contemporary exploration focusing specifically on enforcement practices and policies in the Australian federal industrial relations jurisdiction. Current enforcement practices are an outcome of this volatile history, and past influences remain strong.

Relevance: 10.00%

Abstract:

A critical and historical account of the cultural and creative industries as a policy discourse. It argues that the emergence of a cultural policy discourse was part of a progressive democratic politics from the 1960s onwards, taking cognizance of the emergence of new kinds of commercial popular culture. It suggests that this period saw the merging of aesthetics and culture in particular ways. The creative industries come from a different source, which combined innovation theory, embedded economics and entrepreneurialism in ways that resulted in a much less progressive politics. The chapter ends by suggesting that the idea of the creative industries is now at an end.

Relevance: 10.00%

Abstract:

When it comes to the creative industries, rather than catching up with the West, China must find its own path.

Relevance: 10.00%

Abstract:

In an attempt to curb online copyright infringement, copyright owners are increasingly seeking to enlist the assistance of Internet Service Providers (‘ISPs’) to enforce copyright and impose sanctions on their users. Commonly termed ‘graduated response’ schemes, these measures generally require that the ISP take some action against users suspected of infringing copyright, ranging from issuing warnings, to collating allegations made against subscribers and reporting to copyright owners, to suspension and eventual termination of service.

Relevance: 10.00%

Abstract:

Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
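
The bound above depends on the per-unit weight norm A rather than on the number of weights. A minimal sketch of how that complexity term scales is given below; the values of A, n and m are purely illustrative and are not taken from the paper.

```python
import math

def weight_norm_bound_term(A, n, m):
    """Complexity term A^3 * sqrt(log(n) / m) from the stated bound
    (constants and the log A, log m factors are ignored, as in the text).

    A: bound on the sum of weight magnitudes per unit (illustrative)
    n: input dimension
    m: number of training patterns
    """
    return A**3 * math.sqrt(math.log(n) / m)

# Illustrative numbers: small per-unit weight norms keep the term small
# even when m is modest relative to the total number of weights.
for A in (1.0, 2.0, 5.0):
    print(A, weight_norm_bound_term(A, n=100, m=10_000))
```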

Relevance: 10.00%

Abstract:

There are many applications in aeronautics where strong couplings exist between disciplines. One practical example is within the context of Unmanned Aerial Vehicle (UAV) automation, where there is strong coupling between operational constraints, aerodynamics, vehicle dynamics, mission and path planning. UAV path planning can be done either online or offline. Online path planning optimisation on UAVs with high-performance computation is not yet at the level of its ground-based offline counterpart, mainly due to the volume, power and weight limitations of the UAV; some small UAVs do not have the computational power needed for some optimisation and path planning tasks. In this paper, we describe an optimisation method which can be applied to Multi-disciplinary Design Optimisation problems and UAV path planning problems. Hardware-based design optimisation techniques are used. The power and physical limitations of a UAV, which may not be a problem in PC-based solutions, can be addressed by utilising a Field Programmable Gate Array (FPGA) as an algorithm accelerator. The inevitable latency produced by the iterative process of an Evolutionary Algorithm (EA) is concealed by exploiting the parallelism within the dataflow paradigm of the EA on an FPGA architecture. Results compare software PC-based solutions and hardware-based solutions for benchmark mathematical problems as well as a simple real-world engineering problem. Results also indicate the practicality of the method, which can be used for more complex single- and multi-objective coupled problems in aeronautical applications.
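
To make the iterative structure concrete, here is a toy evolutionary loop on a standard benchmark (the sphere function). It is only a PC-based illustration of the kind of generation-by-generation evaluation whose latency the paper hides by mapping the data-parallel evaluations onto an FPGA; the mutation scheme and parameters are illustrative, not the paper's algorithm.

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares (minimum at the origin)."""
    return sum(v * v for v in x)

def evolve(obj, dim=4, pop_size=20, generations=100, sigma=0.3, seed=0):
    """Toy evolutionary loop: mutate the current best, keep the improvement.

    Each generation is a batch of independent evaluations -- the part that
    can be parallelised on hardware to conceal the EA's iterative latency.
    """
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    best_fit = obj(best)
    for _ in range(generations):
        pop = [[v + rng.gauss(0, sigma) for v in best] for _ in range(pop_size)]
        fits = [obj(ind) for ind in pop]          # evaluations are independent
        gen_best = min(range(pop_size), key=fits.__getitem__)
        if fits[gen_best] < best_fit:
            best, best_fit = pop[gen_best], fits[gen_best]
    return best, best_fit

print(evolve(sphere))
```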

Relevance: 10.00%

Abstract:

We investigate the behavior of the empirical minimization algorithm using various methods. We first analyze it by comparing the empirical (random) structure with the original structure on the class, either in an additive sense, via the uniform law of large numbers, or in a multiplicative sense, using isomorphic coordinate projections. We then show that a direct analysis of the empirical minimization algorithm yields a significantly better bound, and that the estimates we obtain are essentially sharp. The method of proof we use is based on Talagrand’s concentration inequality for empirical processes.

Relevance: 10.00%

Abstract:

We study sample-based estimates of the expectation of the function produced by the empirical minimization algorithm. We investigate the extent to which one can estimate the rate of convergence of the empirical minimizer in a data dependent manner. We establish three main results. First, we provide an algorithm that upper bounds the expectation of the empirical minimizer in a completely data-dependent manner. This bound is based on a structural result due to Bartlett and Mendelson, which relates expectations to sample averages. Second, we show that these structural upper bounds can be loose, compared to previous bounds. In particular, we demonstrate a class for which the expectation of the empirical minimizer decreases as O(1/n) for sample size n, although the upper bound based on structural properties is Ω(1). Third, we show that this looseness of the bound is inevitable: we present an example that shows that a sharp bound cannot be universally recovered from empirical data.

Relevance: 10.00%

Abstract:

We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected surrogate loss—the φ-risk—also minimizes the risk. We also study the rate at which the φ-risk approaches its minimum value. We show that fast rates are possible when the conditional probability P(Y=1|X) is unlikely to be close to certain critical values.
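
As an illustration of the kind of surrogate involved, the sketch below uses a piecewise-linear convex loss of hinge type together with a reject rule that abstains when the decision value falls inside a band around zero. The slope parameter a and the reject threshold tau are illustrative choices, not the paper's exact φ or constants.

```python
def double_hinge(z, a=2.0):
    """Piecewise-linear convex surrogate of hinge type (illustrative slopes).

    z is the margin y * f(x). The steeper slope (a > 1) on negative margins
    penalizes confident mistakes more heavily than the ordinary hinge loss.
    """
    return max(0.0, 1.0 - z, 1.0 - a * z)

def predict_with_reject(score, tau=0.3):
    """Abstain when the decision value is too close to zero (illustrative tau)."""
    if abs(score) <= tau:
        return 0          # reject: withhold a prediction at a fixed cost
    return 1 if score > 0 else -1

# Small margins incur large surrogate loss and trigger rejection at test time;
# large margins are classified normally.
print(double_hinge(-0.5), double_hinge(0.5), double_hinge(2.0))
print(predict_with_reject(0.1), predict_with_reject(0.9))
```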

Relevance: 10.00%

Abstract:

One of the nice properties of kernel classifiers such as SVMs is that they often produce sparse solutions. However, the decision functions of these classifiers cannot always be used to estimate the conditional probability of the class label. We investigate the relationship between these two properties and show that these are intimately related: sparseness does not occur when the conditional probabilities can be unambiguously estimated. We consider a family of convex loss functions and derive sharp asymptotic results for the fraction of data that becomes support vectors. This enables us to characterize the exact trade-off between sparseness and the ability to estimate conditional probabilities for these loss functions.
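
A small empirical illustration of this trade-off, using scikit-learn purely for convenience (it is not part of the paper): the hinge loss yields a sparse solution whose decision values are margins rather than calibrated probabilities, while the logistic loss estimates P(Y=1|X) directly but is not sparse in this sense.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Hinge loss (SVM): typically sparse, but decision_function values are
# margins, not conditional class probabilities.
svm = SVC(kernel="linear").fit(X, y)
print("support vectors:", len(svm.support_), "of", len(X))

# Logistic loss: no sparseness of this kind, but the conditional
# probability P(Y=1|X) is estimated directly via the sigmoid link.
logit = LogisticRegression().fit(X, y)
print("estimated P(Y=1|x) for the first point:", logit.predict_proba(X[:1])[0, 1])
```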

Relevance: 10.00%

Abstract:

The risk, or probability of error, of the classifier produced by the AdaBoost algorithm is investigated. In particular, we consider the stopping strategy to be used in AdaBoost to achieve universal consistency. We show that provided AdaBoost is stopped after n^(1-ε) iterations, for sample size n and ε ∈ (0,1), the sequence of risks of the classifiers it produces approaches the Bayes risk.
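
A minimal sketch of the stopping rule: run boosting for roughly n^(1-ε) rounds. scikit-learn's AdaBoostClassifier stands in for the boosting procedure here (an assumption for illustration; the paper's result concerns the stopping strategy, not this implementation), and ε = 0.5 is an arbitrary choice from (0, 1).

```python
from math import floor

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=2000, random_state=0)
n, eps = len(X), 0.5                  # eps in (0, 1); 0.5 is illustrative
T = max(1, floor(n ** (1 - eps)))     # stop after about n^(1 - eps) rounds
clf = AdaBoostClassifier(n_estimators=T).fit(X, y)
print(f"n={n}, stopped after T={T} rounds, training accuracy={clf.score(X, y):.3f}")
```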

Relevance: 10.00%

Abstract:

In this paper we advocate for the continued need for consumer protection and fair trading regulation, even in competitive markets. For the purposes of this paper a ‘competitive market’ is defined as one that has low barriers to entry and exit, with homogeneous products and services and numerous suppliers. Whilst competition is an important tool for providing consumer benefits, it will not be sufficient to protect at least some consumers, particularly vulnerable, low-income consumers. For this reason, we argue, setting competition as the ‘end goal’ and assuming that consumer protection and consumer benefits will always follow is a flawed regulatory approach. The ‘end goal’ should surely be consumer protection and fair markets, and a combination of competition law and consumer protection law should be applied in order to achieve those goals.

Relevance: 10.00%

Abstract:

Early models of bankruptcy prediction employed financial ratios drawn from pre-bankruptcy financial statements and performed well both in-sample and out-of-sample. Since then there has been an ongoing effort in the literature to develop models with even greater predictive performance. A significant innovation in the literature was the introduction into bankruptcy prediction models of capital market data such as excess stock returns and stock return volatility, along with the application of the Black–Scholes–Merton option-pricing model. In this note, we test five key bankruptcy models from the literature using an up-to-date data set and find that they each contain unique information regarding the probability of bankruptcy but that their performance varies over time. We build a new model comprising key variables from each of the five models and add a new variable that proxies for the degree of diversification within the firm. The degree of diversification is shown to be negatively associated with the risk of bankruptcy. This more general model outperforms the existing models in a variety of in-sample and out-of-sample tests.
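
For context, the Black–Scholes–Merton approach treats equity as a call option on the firm's assets and backs out a default probability from market data. The sketch below is the standard textbook calculation, not the specific model tested in the note; the input figures (equity value E, equity volatility sigma_E, debt face value D, rate r, horizon T) are illustrative, and SciPy is assumed for the root solver and normal CDF.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def merton_default_probability(E, sigma_E, D, r=0.03, T=1.0):
    """Back out asset value and volatility from equity, then return N(-d2)."""
    def equations(x):
        V, sigma_V = x
        d1 = (np.log(V / D) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
        d2 = d1 - sigma_V * np.sqrt(T)
        eq1 = V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2) - E  # equity = call on assets
        eq2 = V * norm.cdf(d1) * sigma_V - E * sigma_E                   # volatility link
        return [eq1, eq2]

    V, sigma_V = fsolve(equations, x0=[E + D, sigma_E * E / (E + D)])
    d2 = (np.log(V / D) + (r - 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
    return norm.cdf(-d2)   # risk-neutral probability of default over horizon T

print(merton_default_probability(E=4_000, sigma_E=0.6, D=6_000))
```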

Relevance: 10.00%

Abstract:

Gaussian mixture models (GMMs) have become an established means of modeling feature distributions in speaker recognition systems. It is useful for experimentation and practical implementation purposes to develop and test these models in an efficient manner, particularly when computational resources are limited. A method of combining vector quantization (VQ) with single multi-dimensional Gaussians is proposed to rapidly generate a robust model approximation to the Gaussian mixture model. A fast method of testing these systems is also proposed and implemented. Results on the NIST 1996 Speaker Recognition Database suggest comparable, and in some cases improved, verification performance relative to the traditional GMM-based analysis scheme. In addition, previous research for the task of speaker identification indicated similar system performance between the VQ Gaussian-based technique and GMMs.
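
A rough sketch of the general idea: use vector quantization (k-means) to place the components, then fit one diagonal Gaussian per cluster rather than running full EM. This follows the abstract's description only loosely; the component count, the variance floor and the synthetic feature frames are illustrative choices, and scikit-learn and SciPy are assumed.

```python
import numpy as np
from scipy.special import logsumexp
from sklearn.cluster import KMeans

def vq_gaussian_model(features, n_components=8, seed=0):
    """Return per-component weights, means and diagonal variances."""
    km = KMeans(n_clusters=n_components, n_init=10, random_state=seed).fit(features)
    weights, means, variances = [], [], []
    for k in range(n_components):
        cluster = features[km.labels_ == k]
        weights.append(len(cluster) / len(features))
        means.append(cluster.mean(axis=0))
        variances.append(cluster.var(axis=0) + 1e-6)   # floor to avoid zero variance
    return np.array(weights), np.array(means), np.array(variances)

def log_likelihood(features, weights, means, variances):
    """Average per-frame log-likelihood under the mixture approximation."""
    diff = features[:, None, :] - means[None, :, :]
    log_comp = -0.5 * (np.log(2 * np.pi * variances).sum(axis=1)
                       + (diff**2 / variances).sum(axis=2))
    return logsumexp(log_comp + np.log(weights), axis=1).mean()

rng = np.random.default_rng(0)
frames = rng.normal(size=(500, 12))   # stand-in for cepstral feature vectors
model = vq_gaussian_model(frames)
print(log_likelihood(frames, *model))
```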

Relevance: 10.00%

Abstract:

This book explores the application of concepts of fiduciary duty or public trust in responding to the policy and governance challenges posed by policy problems that extend over multiple terms of government or even, as in the case of climate change, human generations. The volume brings together a range of perspectives including leading international thinkers on questions of fiduciary duty and public trust, Australia's most prominent judicial advocate for the application of fiduciary duty, top law scholars from several major universities, expert commentary from an influential climate policy think-tank and the views of long-serving highly respected past and present parliamentarians. The book presents a detailed examination of the nature and extent of fiduciary duty, looking at the example of Australia and having regard to developments in comparable jurisdictions. It identifies principles that could improve the accountability of political actors for their responses to major problems that may extend over multiple electoral cycles.