121 results for Uniformly Convex
Abstract:
Australian privacy law regulates how government agencies and private sector organisations collect, store and use personal information. A coherent conceptual basis of personal information is an integral requirement of information privacy law as it determines what information is regulated. A 2004 report conducted on behalf of the UK’s Information Commissioner (the 'Booth Report') concluded that there was no coherent definition of personal information currently in operation because different data protection authorities throughout the world conceived the concept of personal information in different ways. The authors adopt the models developed by the Booth Report to examine the conceptual basis of statutory definitions of personal information in Australian privacy laws. Research findings indicate that the definition of personal information is not construed uniformly in Australian privacy laws and that different definitions rely upon different classifications of personal information. A similar situation is evident in a review of relevant case law. Despite this, the authors conclude the article by asserting that a greater jurisprudential discourse is required based on a coherent conceptual framework to ensure the consistent development of Australian privacy law.
Abstract:
This study investigated a novel drug delivery system (DDS), consisting of polycaprolactone (PCL) or polycaprolactone-20% tricalcium phosphate (PCL-TCP) biodegradable scaffolds, fibrin Tisseel sealant and recombinant bone morphogenetic protein-2 (rhBMP-2), for bone regeneration. The PCL- and PCL-TCP-fibrin composites displayed loading efficiencies of 70% and 43%, respectively. Fluorescence and scanning electron microscopy revealed sparse clumps of rhBMP-2 particles distributed non-uniformly over the rod surfaces of the PCL-fibrin composites. In contrast, individual rhBMP-2 particles were evident and uniformly distributed over the rod surfaces of the PCL-TCP-fibrin composites. PCL-fibrin composites loaded with 10 and 20 μg/ml rhBMP-2 demonstrated a triphasic release profile, as quantified by an enzyme-linked immunosorbent assay (ELISA), consisting of burst releases at 2 h and on days 7 and 16. A biphasic release profile was observed for PCL-TCP-fibrin composites loaded with 10 μg/ml rhBMP-2, consisting of burst releases at 2 h and on day 14. PCL-TCP-fibrin composites loaded with 20 μg/ml rhBMP-2 showed a triphasic release profile, consisting of burst releases at 2 h and on days 10 and 21. We conclude that the addition of TCP delayed rhBMP-2 release. Sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) and an alkaline phosphatase assay verified the stability and bioactivity of the eluted rhBMP-2 at all time points.
Abstract:
An investigation of cylindrical iron rods burning in pressurised oxygen under microgravity conditions is presented. It has been shown that, under similar experimental conditions, the melting rate of a burning cylindrical iron rod is higher in microgravity than in normal gravity by a factor of 1.8 ± 0.3. This paper presents microanalysis of quenched samples obtained in a microgravity environment in a 2.0 s duration drop tower facility in Brisbane, Australia. These images indicate that the solid/liquid interface is highly convex in reduced gravity, compared to the planar geometry typically observed in normal gravity, which increases the contact area between the liquid and solid phases by a factor of 1.7 ± 0.1. Thus, there is good agreement between the proportional increases in solid/liquid interface surface area and melting rate in microgravity. This indicates that the increased melting rates of cylindrical iron rods burning in microgravity are caused by the altered geometry of the solid/liquid interface.
Abstract:
Engineered tissue grafts, which mimic the spatial variations of cell density and extracellular matrix present in native tissues, could facilitate more efficient tissue regeneration and integration. We previously demonstrated that cells could be uniformly seeded throughout a 3D scaffold having a random pore architecture using a perfusion bioreactor. In this work, we aimed to generate 3D constructs with defined cell distributions based on rapid prototyped scaffolds manufactured with a controlled gradient in porosity. Computational models were developed to assess the influence of fluid flow, associated with pore architecture and perfusion regime, on the resulting cell distribution.
Abstract:
Stereotypes of salespeople are common currency in US media outlets and research suggests that these stereotypes are uniformly negative. However, there is no reason to expect that stereotypes will be consistent across cultures. The present paper provides the first empirical examination of salesperson stereotypes in an Asian country, specifically Taiwan. Using accepted psychological methods, Taiwanese salesperson stereotypes are found to be twofold, with a negative stereotype being quite congruent with existing US stereotypes, but also a positive stereotype, which may be related to the specific culture of Taiwan.
Abstract:
In the UK, Singapore, Canada, New Zealand and Australia, as in many other jurisdictions, charity law is rooted in the common law and anchored on the Statute of Charitable Uses 1601. The Pemsel classification of charitable purposes was uniformly accepted and, together with a shared and growing pool of judicial precedents aided by the 'spirit and intendment' rule, has subsequently allowed the law to develop along much the same lines. In recent years, all of the above jurisdictions have embarked on law reform processes designed to strengthen regulatory processes and to statutorily define and encode common law concepts. The reform outcomes are now to be found in a batch of national charity statutes which reflect interesting differences in the extent to which their respective governments have been prepared to balance the modernising of charitable purposes and other common law concepts against the customary concern to tighten the regulatory framework.
Abstract:
For fruit flies, fully ripe fruit is preferred for adult oviposition and is superior for offspring performance over unripe or ripening fruit. Because not all parts of a single fruit ripen simultaneously, the opportunity exists for adult fruit flies to selectively choose riper parts of a fruit for oviposition, and such selection, if it occurs, could positively influence offspring performance. Such fine-scale host variation is rarely considered in fruit fly ecology, however, especially for polyphagous species which are, by definition, considered to be generalist host users. Here we study the adult oviposition preference/larval performance relationship of the Oriental fruit fly, Bactrocera dorsalis (Hendel) (Diptera: Tephritidae), a highly polyphagous pest species, at the “within-fruit” level to see if such a host use pattern occurs. We recorded the number of oviposition attempts that female flies made into three fruit portions (top, middle and bottom), and larval behavior and development within different fruit portions, for ripening (color change) and fully-ripe mango, Mangifera indica L. (Anacardiaceae). Results indicate that female B. dorsalis do not oviposit uniformly across a mango fruit, but lay most often in the top (i.e., stalk end) of the fruit and least in the bottom portion, regardless of ripening stage. There was no evidence of larval feeding site preference or performance (development time, pupal weight, percent pupation) being influenced by fruit portion, within or across the fruit ripening stages. There was, however, a very significant effect on adult emergence rate from pupae: the emergence rate from pupae from the bottom of ripening mango was only approximately 50% of that from the top of ripening fruit, or from either the top or the bottom of fully-ripe fruit. Differences in mechanical (firmness) and chemical (total soluble solids, titratable acidity, total non-structural carbohydrates) traits between different fruit portions were correlated with adult fruit utilisation. Our results support a positive adult preference/offspring performance relationship at the within-fruit level for B. dorsalis. The fine level of host discrimination exhibited by B. dorsalis is at odds with the general perception that, as a polyphagous herbivore, the fly should show very little discrimination in its host use behavior.
Abstract:
The success rate of carrier phase ambiguity resolution (AR) is the probability that the ambiguities are successfully fixed to their correct integer values. In existing works, an exact success rate formula for the integer bootstrapping estimator has been used as a sharp lower bound for the integer least squares (ILS) success rate. Rigorous computation of the success rate for the more general ILS solutions has been considered difficult, because of the complexity of the ILS ambiguity pull-in region and the computational load of integrating the multivariate probability density function. The contributions of this work are twofold. First, the pull-in region, mathematically expressed via the vertices of a polyhedron, is represented by a multi-dimensional grid, over which the cumulative probability can be integrated with the multivariate normal cumulative density function (mvncdf) available in Matlab. The bivariate case is studied, where the pull-in region is usually defined as a hexagon and the probability is easily obtained using mvncdf at all the grid points within the convex polygon. Second, the paper compares the computed integer rounding and integer bootstrapping success rates, and the lower and upper bounds of the ILS success rates, to the actual ILS AR success rates obtained from a 24 h GPS data set for a 21 km baseline. The results demonstrate that the upper bound of the ILS AR probability given in the existing literature agrees well with the actual ILS success rate, while the success rate computed with the integer bootstrapping method is also a quite sharp approximation to the actual ILS success rate. The results also show that variations or uncertainty in the unit-weight variance estimates from epoch to epoch significantly affect the success rates computed with the different methods, and thus deserve more attention in order to obtain useful success probability predictions.
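As a rough illustration of the grid-based integration described in this abstract, the sketch below computes bivariate AR success rates in Python, using SciPy's multivariate normal CDF in place of Matlab's mvncdf; the float-ambiguity covariance matrix and the hexagon vertices are placeholder values chosen for illustration, not taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative float-ambiguity covariance matrix (an assumption, not the paper's data)
Q = np.array([[0.090, 0.045],
              [0.045, 0.070]])
mvn = multivariate_normal(mean=[0.0, 0.0], cov=Q)

def rect_prob(x0, x1, y0, y1):
    """P(x0 < X < x1, y0 < Y < y1) from four evaluations of the bivariate normal CDF."""
    return (mvn.cdf([x1, y1]) - mvn.cdf([x0, y1])
            - mvn.cdf([x1, y0]) + mvn.cdf([x0, y0]))

# Integer rounding: the pull-in region is the unit square centred at the correct integer.
p_rounding = rect_prob(-0.5, 0.5, -0.5, 0.5)

def inside_convex(pt, verts):
    """True if pt lies inside the convex polygon with counter-clockwise vertices verts."""
    v = np.vstack([verts, verts[:1]])
    edges = v[1:] - v[:-1]
    to_pt = pt - v[:-1]
    cross = edges[:, 0] * to_pt[:, 1] - edges[:, 1] * to_pt[:, 0]
    return bool(np.all(cross >= 0.0))

def polygon_prob(verts, h=0.05):
    """Sum cell probabilities over grid cells whose centres fall inside the polygon
    (cells straddling the boundary are included or excluded by their centre)."""
    xmin, ymin = verts.min(axis=0)
    xmax, ymax = verts.max(axis=0)
    total = 0.0
    for x in np.arange(xmin, xmax, h):
        for y in np.arange(ymin, ymax, h):
            if inside_convex(np.array([x + h / 2, y + h / 2]), verts):
                total += rect_prob(x, x + h, y, y + h)
    return total

# Placeholder hexagonal ILS pull-in region (illustrative vertices; the true vertices
# follow from the decorrelated ambiguity covariance matrix).
hexagon = np.array([[0.5, 0.2], [0.2, 0.5], [-0.3, 0.4],
                    [-0.5, -0.2], [-0.2, -0.5], [0.3, -0.4]])
p_ils_grid = polygon_prob(hexagon)
print(f"rounding success rate ~ {p_rounding:.4f}, grid-integrated hexagon ~ {p_ils_grid:.4f}")
```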
Abstract:
Detection of Regions of Interest (ROI) in a video leads to more efficient utilization of bandwidth, because any ROIs in a given frame can be encoded at higher quality than the rest of that frame with little or no perceived degradation of quality for viewers. Consequently, it is not necessary to uniformly encode the whole video in high quality. One approach to determining ROIs is to use saliency detectors to locate salient regions. This paper proposes a methodology for obtaining ground truth saliency maps to measure the effectiveness of ROI detection by considering the role of user experience during the labelling process of such maps. User perceptions can be captured and incorporated into the definition of salience in a particular video, taking advantage of human visual recall within a given context. Experiments with two state-of-the-art saliency detectors demonstrate the effectiveness of this approach to validating visual saliency in video. This paper also provides the relevant datasets associated with the experiments.
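The abstract does not specify a scoring method; as one plausible, hedged illustration, the snippet below evaluates a detector's saliency map against a user-labelled ground truth map using two standard saliency metrics, Pearson correlation (CC) and histogram intersection (SIM), on synthetic maps.

```python
import numpy as np

def normalize(m):
    """Normalise a map so its entries sum to one."""
    m = m.astype(float)
    m -= m.min()
    s = m.sum()
    return m / s if s > 0 else m

def cc(pred, gt):
    """Pearson correlation between predicted and ground-truth saliency maps."""
    p, g = pred.ravel() - pred.mean(), gt.ravel() - gt.mean()
    return float(p @ g / (np.linalg.norm(p) * np.linalg.norm(g) + 1e-12))

def sim(pred, gt):
    """Histogram intersection (SIM) between the two maps, each normalised to sum to 1."""
    return float(np.minimum(normalize(pred), normalize(gt)).sum())

# Toy example: a detector output and a user-labelled ground-truth map (illustrative data).
h, w = 72, 128
yy, xx = np.mgrid[0:h, 0:w]
gt = np.exp(-(((yy - 30) ** 2) / 200 + ((xx - 60) ** 2) / 400))    # labelled ROI
pred = np.exp(-(((yy - 34) ** 2) / 260 + ((xx - 66) ** 2) / 500))  # detector output
print(f"CC = {cc(pred, gt):.3f}, SIM = {sim(pred, gt):.3f}")
```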
Abstract:
We study the regret of optimal strategies for online convex optimization games. Using von Neumann's minimax theorem, we show that the optimal regret in this adversarial setting is closely related to the behavior of the empirical minimization algorithm in a stochastic process setting: it is equal to the maximum, over joint distributions of the adversary's action sequence, of the difference between a sum of minimal expected losses and the minimal empirical loss. We show that the optimal regret has a natural geometric interpretation, since it can be viewed as the gap in Jensen's inequality for a concave functional (the minimum over the player's actions of the expected loss) defined on a set of probability distributions. We use this expression to obtain upper and lower bounds on the regret of an optimal strategy for a variety of online learning problems. Our method provides upper bounds without the need to construct a learning algorithm; the lower bounds provide explicit optimal strategies for the adversary.
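In symbols (notation assumed here, not quoted from the paper), the characterization described above can be written as a minimax value of the game:

```latex
\mathcal{R}_n \;=\; \sup_{P}\; \mathbb{E}_{z_1,\dots,z_n \sim P}
\left[ \sum_{t=1}^{n} \inf_{a_t \in \mathcal{A}}
       \mathbb{E}\bigl[\ell(a_t, z_t) \,\bigm|\, z_1,\dots,z_{t-1}\bigr]
       \;-\; \inf_{a \in \mathcal{A}} \sum_{t=1}^{n} \ell(a, z_t) \right],
```

where the supremum ranges over joint distributions P of the adversary's action sequence, A is the player's action set and ℓ is the convex loss; the first term is the sum of minimal conditional expected losses and the second is the minimal empirical loss.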
Abstract:
Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0–1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0–1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function—that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
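As a sketch of the relationship described, and in notation commonly used for results of this type (assumed here rather than quoted), one can define for a margin-based surrogate φ:

```latex
H(\eta) = \inf_{\alpha \in \mathbb{R}} \bigl( \eta\,\phi(\alpha) + (1-\eta)\,\phi(-\alpha) \bigr),
\qquad
H^{-}(\eta) = \inf_{\alpha\,(2\eta-1) \le 0} \bigl( \eta\,\phi(\alpha) + (1-\eta)\,\phi(-\alpha) \bigr),
\qquad
\psi(\theta) = H^{-}\!\Bigl(\tfrac{1+\theta}{2}\Bigr) - H\!\Bigl(\tfrac{1+\theta}{2}\Bigr),
```

with ψ replaced by its convex closure when the difference is not convex. The resulting bound takes the form ψ(R(f) − R*) ≤ R_φ(f) − R_φ*, relating excess 0–1 risk to excess surrogate risk; for the hinge loss, for example, ψ(θ) = |θ|.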
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space, which are classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labeled part of the data, one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
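As an illustration (not the paper's exact formulation), the sketch below learns a kernel matrix over training and test points with cvxpy, restricting it to a linear span of fixed base kernels, imposing positive semidefiniteness, and maximizing the alignment of its training block with the label kernel; the data, base kernels and alignment objective are all assumptions chosen to keep the example small.

```python
import numpy as np
import cvxpy as cp

# Toy data: training points with labels plus unlabeled test points (illustrative values).
rng = np.random.default_rng(0)
n_train, n_test = 20, 10
n = n_train + n_test
X = rng.normal(size=(n, 4))
y = np.sign(X[:n_train, 0] + 0.2 * rng.normal(size=n_train))  # labels for the training part

def rbf(X, gamma):
    """Gaussian (RBF) kernel matrix for the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

bases = [X @ X.T, rbf(X, 0.1), rbf(X, 1.0)]   # fixed candidate kernel matrices

mu = cp.Variable(len(bases))                  # combination weights (may be negative)
K = cp.Variable((n, n), PSD=True)             # learned kernel over train + test points
target = np.outer(y, y)                       # ideal kernel y y^T on the labelled block

constraints = [
    K == sum(mu[i] * bases[i] for i in range(len(bases))),  # K lies in the span of the bases
    cp.trace(K) == n,                                       # scale normalisation
]
# Maximise alignment of the training block of K with the label kernel; the PSD
# constraint on the full (train + test) matrix is what makes this a semidefinite programme.
objective = cp.Maximize(cp.sum(cp.multiply(K[:n_train, :n_train], target)))
cp.Problem(objective, constraints).solve()
print("kernel weights:", np.round(mu.value, 3))
```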
Abstract:
We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
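Schematically, and in notation assumed here rather than taken from the paper, an oracle inequality of the kind described bounds the risk of the selected function f-hat as

```latex
R(\hat f) \;\le\; \min_{k} \Bigl( \inf_{f \in \mathcal{F}_k} R(f) \;+\; C\,\gamma_k(n) \Bigr),
```

where F_1 ⊆ F_2 ⊆ … is the nested sequence of models, γ_k(n) is the complexity penalty for the k-th model (here a tight bound on its estimation error rather than on the maximal deviation between empirical and true risks), and C is a constant.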
Abstract:
We propose new bounds on the error of learning algorithms in terms of a data-dependent notion of complexity. The estimates we establish give optimal rates and are based on a local and empirical version of Rademacher averages, in the sense that the Rademacher averages are computed from the data, on a subset of functions with small empirical error. We present some applications to classification and prediction with convex function classes, and with kernel classes in particular.
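For orientation (notation assumed, not quoted from the paper), the data-dependent quantity in question is the empirical Rademacher average restricted to functions with small empirical second moment,

```latex
\hat{\mathcal{R}}_n\bigl\{ f \in \mathcal{F} : \hat{P} f^2 \le r \bigr\}
\;=\; \mathbb{E}_{\sigma}\, \sup_{\substack{f \in \mathcal{F} \\ \hat{P} f^2 \le r}}
\frac{1}{n} \sum_{i=1}^{n} \sigma_i f(X_i),
```

where the σ_i are independent Rademacher signs and P-hat denotes the empirical measure; the rates referred to in the abstract are governed by the fixed point of a sub-root upper bound on this quantity as a function of r.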
Abstract:
We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected surrogate loss—the φ-risk—also minimizes the risk. We also study the rate at which the φ-risk approaches its minimum value. We show that fast rates are possible when the conditional probability P(Y=1|X) is unlikely to be close to certain critical values.
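For concreteness, the sketch below implements the classical Bayes-optimal rule with a reject option (Chow's rule) for rejection cost d, whose thresholds d and 1-d are the critical values of P(Y=1|X) referred to above; this is background context rather than the paper's surrogate-loss method, and the cost value is an assumption.

```python
import numpy as np

d = 0.2  # cost of abstaining (rejection); a misclassification costs 1 (assumed values)

def bayes_with_reject(eta, d):
    """Chow's rule: predict 1 if eta >= 1-d, predict 0 if eta <= d, otherwise reject.
    The thresholds d and 1-d are the critical values of the conditional probability."""
    if eta >= 1 - d:
        return 1
    if eta <= d:
        return 0
    return "reject"

def expected_cost(eta, decision, d):
    """Expected cost of a decision when P(Y=1|X) = eta."""
    if decision == "reject":
        return d
    return 1 - eta if decision == 1 else eta

# Toy check: across conditional probabilities, the rule never pays more than the best
# ordinary (non-rejecting) decision, and strictly less when eta is near 1/2.
for eta in np.linspace(0, 1, 11):
    dec = bayes_with_reject(eta, d)
    cost = expected_cost(eta, dec, d)
    plain = min(eta, 1 - eta)  # cost of the best decision without the reject option
    print(f"eta={eta:.1f}  decision={dec!s:>6}  cost={cost:.2f}  best-without-reject={plain:.2f}")
```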