994 results for Semiparametric efficiency bounds
Abstract:
In Australia, trials conducted as 'electronic trials' have ordinarily run with the assistance of commercial service providers, with the associated costs being borne by the parties. However, an innovative approach has been taken by the courts in Queensland. In October 2007 Queensland became the first Australian jurisdiction to develop its own court-provided technology to facilitate the conduct of an electronic trial. This technology was first used in the conduct of civil trials. The use of the technology in the civil sphere highlighted its benefits and, more significantly, demonstrated the potential to achieve much greater efficiencies. The Queensland courts have now gone further, using the court-provided technology in the high profile criminal trial of R v Hargraves, Hargraves and Stoten, in which the three accused were tried for conspiracy to defraud the Commonwealth of Australia of about $3.7 million in tax. This paper explains the technology employed in this case and reports on the perspectives of all of the participants in the process. The representatives for all parties involved in this trial acknowledged, without reservation, that the use of the technology at trial produced considerable overall efficiencies and cost savings. The experience in this trial also demonstrates that the benefits of trial technology for the criminal justice process are greater than those for civil litigation. It shows that, when skilfully employed, trial technology presents opportunities to enhance the fairness of trials for accused persons. The paper urges governments, courts and the judiciary in all jurisdictions to continue their efforts to promote change, and to introduce mechanisms that facilitate a broader shift from the entrenched paper-based approach to both criminal and civil procedure to one which embraces the enormous benefits trial technology has to offer.
Abstract:
In this study we propose a virtual index for measuring the relative innovativeness of countries. Using a multistage virtual benchmarking process, the best rational benchmark is extracted for inefficient innovation systems (ISs). Furthermore, Tobit and Ordinary Least Squares (OLS) regression models are used to investigate the likelihood of changes in inefficiencies by examining country-specific factors. The empirical results relating to the virtual benchmarking process suggest that the OLS regression model better explains changes in the performance of innovation-inefficient countries.
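As a rough illustration of the second-stage analysis described above, the sketch below regresses inefficiency scores on country-specific factors by OLS. All data, dimensions and variable names are hypothetical placeholders, not the study's dataset; the Tobit variant would require a censored-regression routine not shown here.

```python
import numpy as np

# Hypothetical second-stage data: inefficiency scores for 30 countries
# regressed on two country-specific factors (placeholders).
rng = np.random.default_rng(0)
factors = rng.normal(size=(30, 2))          # hypothetical country covariates
inefficiency = rng.uniform(0, 1, size=30)   # placeholder inefficiency scores

# OLS: add an intercept column and solve X beta ~= y by least squares.
X = np.column_stack([np.ones(len(factors)), factors])
beta, *_ = np.linalg.lstsq(X, inefficiency, rcond=None)
print("intercept and slope estimates:", beta)
```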
Abstract:
Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0–1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0–1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function—that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
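For reference, the central relationship has the following form, where R is the 0–1 risk, R_φ the surrogate risk, starred quantities their minimal values, and ψ the variational transform of the loss; the hinge and exponential instances below are standard examples recorded in this literature.

```latex
\[
  \psi\bigl(R(f) - R^{*}\bigr) \;\le\; R_{\phi}(f) - R_{\phi}^{*},
  \qquad
  \psi_{\mathrm{hinge}}(\theta) = |\theta|,
  \qquad
  \psi_{\mathrm{exp}}(\theta) = 1 - \sqrt{1 - \theta^{2}}.
\]
```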
Abstract:
We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities. In a decision theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinations of functions from basis classes and show how the Rademacher and Gaussian complexities of such a function class can be bounded in terms of the complexity of the basis classes. We give examples of the application of these techniques in finding data-dependent risk bounds for decision trees, neural networks and support vector machines.
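As a concrete illustration (not taken from the paper), the sketch below estimates the empirical Rademacher complexity of a norm-bounded linear class by Monte Carlo over the random signs; for this particular class the supremum over the class has a closed form, which the code exploits.

```python
import numpy as np

# Monte Carlo estimate of the empirical Rademacher complexity
#   R_hat(F) = E_sigma [ sup_{f in F} (1/n) sum_i sigma_i f(x_i) ]
# for the linear class F = { x -> <w, x> : ||w||_2 <= B }, where the
# supremum has the closed form (B/n) * || sum_i sigma_i x_i ||_2.
def empirical_rademacher_linear(X, B=1.0, n_draws=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)  # i.i.d. random signs
        total += B / n * np.linalg.norm(sigma @ X)
    return total / n_draws

X = np.random.default_rng(1).normal(size=(100, 5))
print(empirical_rademacher_linear(X))
```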
Abstract:
We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of n·choose(n−1, ≤d−1)/choose(n, ≤d) < d, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d as being d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling) that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
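The density bound is easy to check numerically for small parameters; the sketch below (an illustration, not part of the paper) evaluates n·choose(n−1, ≤d−1)/choose(n, ≤d) on a small grid, where choose(n, ≤d) denotes the partial binomial sum of C(n, i) for i up to d.

```python
from math import comb

def binom_leq(n, d):
    """Partial binomial sum choose(n, <= d) = sum_{i=0}^{d} C(n, i)."""
    return sum(comb(n, i) for i in range(d + 1))

# Check the density bound n * choose(n-1, <= d-1) / choose(n, <= d) < d
# over a small grid of n and d.
for d in range(1, 6):
    for n in range(d + 1, 25):
        density = n * binom_leq(n - 1, d - 1) / binom_leq(n, d)
        assert density < d, (n, d, density)
print("density bound verified on the tested grid")
```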
Abstract:
H. Simon and B. Szörényi have found an error in the proof of Theorem 52 of “Shifting: One-inclusion mistake bounds and sample compression”, Rubinstein et al. (2009). In this note we provide a corrected proof of a slightly weakened version of this theorem. Our new bound on the density of one-inclusion hypergraphs is again in terms of the capacity of the multilabel concept class. Simon and Szörényi have recently proved an alternate result in Simon and Szörényi (2009).
Abstract:
A number of learning problems can be cast as an Online Convex Game: on each round, a learner makes a prediction x from a convex set, the environment plays a loss function f, and the learner’s long-term goal is to minimize regret. Algorithms have been proposed by Zinkevich, when f is assumed to be convex, and Hazan et al., when f is assumed to be strongly convex, that have provably low regret. We consider these two settings and analyze such games from a minimax perspective, proving minimax strategies and lower bounds in each case. These results prove that the existing algorithms are essentially optimal.
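For orientation, a minimal sketch of the convex-case algorithm analyzed here (Zinkevich's online gradient descent) appears below; the loss functions, set radius and step-size schedule are illustrative choices, not the paper's minimax strategies.

```python
import numpy as np

# Online gradient descent on the Online Convex Game: play x_t from an L2
# ball of radius R, suffer a convex loss, then step against the gradient
# with rate ~ 1/sqrt(t).  The environment here plays illustrative linear
# losses f_t(x) = <z_t, x>; this scheme guarantees O(sqrt(T)) regret.
def project_ball(x, R):
    norm = np.linalg.norm(x)
    return x if norm <= R else x * (R / norm)

rng = np.random.default_rng(0)
R, d, T = 1.0, 3, 1000
x = np.zeros(d)
total_loss = 0.0
for t in range(1, T + 1):
    z = rng.normal(size=d)            # gradient of the linear loss f_t
    total_loss += z @ x
    eta = R / np.sqrt(t)              # decaying step size
    x = project_ball(x - eta * z, R)  # gradient step, then projection
print("average per-round loss:", total_loss / T)
```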
Abstract:
We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this report is a density bound of n·choose(n−1, ≤d−1)/choose(n, ≤d) < d, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d as being d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling) that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
Abstract:
This study determines whether the inclusion of low-cost airlines in a dataset of international and domestic airlines has an impact on the efficiency scores of so-called 'prestigious' and purportedly 'efficient' airlines. While many airline studies examine efficiency, none has included a combined sample of international, domestic and budget airlines. The present study employs the nonparametric technique of data envelopment analysis (DEA) to investigate the technical efficiency of 53 airlines in 2006. The findings reveal that the majority of budget airlines are efficient relative to their more prestigious counterparts. Moreover, most airlines identified as inefficient are so largely because of the overutilization of non-flight assets.
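A minimal sketch of the input-oriented, constant-returns DEA envelopment model used in studies of this kind is given below; the inputs, outputs and figures are hypothetical placeholders, not the paper's dataset of 53 airlines.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA model.  For decision-making unit (DMU) k:
#   minimise theta  s.t.  X @ lam <= theta * x_k,  Y @ lam >= y_k,  lam >= 0,
# where theta in (0, 1] is the technical efficiency score.
def dea_ccr_input(X, Y, k):
    m, J = X.shape        # m inputs, J DMUs
    s, _ = Y.shape        # s outputs
    c = np.r_[1.0, np.zeros(J)]            # variables: (theta, lam); minimise theta
    A_in = np.c_[-X[:, [k]], X]            # X lam - theta * x_k <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]    # -Y lam <= -y_k  <=>  Y lam >= y_k
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, k]],
                  bounds=[(None, None)] + [(0, None)] * J)
    return res.fun

# Toy data: 2 inputs (staff, non-flight assets), 1 output (passenger-km), 4 DMUs.
X = np.array([[4.0, 2.0, 5.0, 3.0],
              [3.0, 1.0, 6.0, 2.0]])
Y = np.array([[8.0, 6.0, 9.0, 7.0]])
print([round(dea_ccr_input(X, Y, k), 3) for k in range(4)])
```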
Abstract:
The current study was motivated by statements made by the Economic Strategies Committee that Singapore's recent productivity levels in services were well below those of countries such as the US, Japan and Hong Kong. Massive employment of foreign workers was cited as the reason for poor productivity levels. To shed more light on Singapore's falling productivity, a nonparametric Malmquist productivity index was employed, which provides measures of productivity change, technical change and efficiency change. The findings reveal that growth in total factor productivity (TFP) was attributable to technical change, with no improvement in efficiency change. Such results suggest that gains in TFP were input-driven rather than derived from a 'best-practice' approach such as improvements in operations or better resource allocation.
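In its standard output-distance-function form (as in Färe et al.), the Malmquist index decomposes into efficiency change (EC) and technical change (TC) as shown below; the finding above corresponds to TC > 1 with EC showing no improvement.

```latex
\[
  M(x^{t+1},y^{t+1},x^{t},y^{t})
  = \underbrace{\frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})}}_{\text{EC}}
    \underbrace{\left[
      \frac{D^{t}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t+1},y^{t+1})}\,
      \frac{D^{t}(x^{t},y^{t})}{D^{t+1}(x^{t},y^{t})}
    \right]^{1/2}}_{\text{TC}}
\]
```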
Abstract:
This paper presents the method and results of a survey of 27 of the 33 Australian universities teaching engineering education in late 2007, undertaken by The Natural Edge Project (hosted by Griffith University and the Australian National University) and supported by the National Framework for Energy Efficiency. This survey aimed to ascertain the extent of energy efficiency (EE) education, and to identify preferred methods to assist in increasing the extent to which EE education is embedded in engineering curricula. In this paper the context for the survey is supported by a summary of the key results from a variety of surveys undertaken internationally over the last decade. The paper concludes that EE education across universities and engineering disciplines in Australia is currently highly variable and ad hoc. Based on the results of the survey, this paper highlights a number of preferred options to help educators embed sustainability within engineering programs, and future opportunities for monitoring EE, within the context of engineering education for sustainable development (EESD).
Abstract:
In this paper, we investigate theoretically and numerically the efficiency of energy coupling from a plasmon generated by a grating coupler at one of the interfaces of a metal wedge into the plasmonic eigenmode (i.e., the symmetric or quasisymmetric plasmon) experiencing nanofocusing in the wedge. Thus the efficiency of energy coupling into a metallic nanofocusing structure is analyzed. Two different nanofocusing structures, with the metal wedge surrounded by a uniform dielectric (symmetric structure) and with the metal wedge enclosed between a substrate and a cladding with different dielectric permittivities (asymmetric structure), are considered by means of the geometrical optics (adiabatic) approximation. It is demonstrated that the efficiency of the energy coupling from the plasmon generated by the grating into the symmetric or quasisymmetric plasmon experiencing nanofocusing may vary between ∼50% and ∼100%. In particular, even a very small difference (of ∼1%–2%) between the permittivities of the substrate and the cladding may result in a significant increase in the efficiency of the energy coupling (from ∼50% up to ∼100%) into the plasmon experiencing nanofocusing. Distinct beat patterns produced by the interference of the symmetric (quasisymmetric) and antisymmetric (quasiantisymmetric) plasmons are predicted and analyzed, with significant oscillations of the magnetic and electric field amplitudes at both of the metal wedge interfaces. Physical interpretations of the predicted effects are based upon the behavior, dispersion, and dissipation of the symmetric (quasisymmetric) and antisymmetric (quasiantisymmetric) film plasmons in the nanofocusing metal wedge. The obtained results will be important for optimizing metallic nanofocusing structures and minimizing coupling and dissipative losses.
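For the beat patterns, the generic two-mode interference relation gives the spatial beat period; here q_s and q_a denote the (complex) propagation constants of the symmetric/quasisymmetric and antisymmetric/quasiantisymmetric plasmons, a notation introduced for this sketch rather than taken from the paper.

```latex
\[
  \Lambda_{\mathrm{beat}}
  = \frac{2\pi}{\bigl|\operatorname{Re} q_{s} - \operatorname{Re} q_{a}\bigr|}
\]
```

The field amplitude at either wedge interface oscillates on this length scale as the two co-propagating modes drift in and out of phase.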