932 results for Singleton bound
Abstract:
The functional properties of cartilaginous tissues are determined predominantly by the content, distribution, and organization of proteoglycan and collagen in the extracellular matrix. Extracellular matrix accumulates in tissue-engineered cartilage constructs by metabolism and transport of matrix molecules, processes that are modulated by physical and chemical factors. Constructs incubated under free-swelling conditions with freely permeable or highly permeable membranes exhibit symmetric surface regions of soft tissue. The variation in tissue properties with depth from the surfaces suggests the hypothesis that the transport processes mediated by the boundary conditions govern the distribution of proteoglycan in such constructs. A continuum model (DiMicco and Sah in Transport Porous Med 50:57-73, 2003) was extended to test the effects of membrane permeability and perfusion on proteoglycan accumulation in tissue-engineered cartilage. The concentrations of soluble, bound, and degraded proteoglycan were analyzed as functions of time, space, and non-dimensional parameters for several experimental configurations. The results of the model suggest that the boundary condition at the membrane surface and the rate of perfusion, described by non-dimensional parameters, are important determinants of the pattern of proteoglycan accumulation. With perfusion, the proteoglycan profile is skewed, and decreases or increases in magnitude depending on the level of flow-based stimulation. Utilization of a semi-permeable membrane with or without unidirectional flow may lead to tissues with depth-increasing proteoglycan content, resembling native articular cartilage.
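A minimal numerical sketch of this kind of one-dimensional transport-reaction model, assuming a first-order binding term and a Robin (semi-permeable membrane) boundary condition; all parameter values and the specific reaction terms below are illustrative assumptions, not the paper's:

```python
import numpy as np

# Illustrative parameters only (not taken from the paper)
L = 1.0           # construct depth, non-dimensional
D = 0.05          # diffusivity of soluble proteoglycan
k_bind = 1.0      # first-order binding rate (soluble -> bound)
synth = 1.0       # uniform cellular synthesis rate
h = 10.0          # membrane permeability coefficient (Robin BC)
nx = 101
dx = L / (nx - 1)
dt = 0.2 * dx**2 / D            # explicit-Euler stability margin
u = np.zeros(nx)                # soluble proteoglycan
b = np.zeros(nx)                # bound proteoglycan

for _ in range(40000):
    lap = np.empty(nx)
    lap[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
    # semi-permeable membrane at x=0: D du/dx = h u (ghost-node Robin BC)
    u_ghost = u[1] - 2*dx*(h/D)*u[0]
    lap[0] = (u[1] - 2*u[0] + u_ghost) / dx**2
    # no-flux (symmetry) condition at x=L
    lap[-1] = 2*(u[-2] - u[-1]) / dx**2
    u += dt*(D*lap + synth - k_bind*u)
    b += dt*k_bind*u

# b now holds the depth profile of bound (accumulated) proteoglycan
print(b[::20])
```

With a permeable boundary at one face only, soluble proteoglycan escapes near the membrane, so the bound content in this sketch increases with depth, qualitatively matching the depth-increasing profiles discussed above.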
Abstract:
This paper develops a general theory of validation gating for non-linear non-Gaussian models. Validation gates are used in target tracking to cull very unlikely measurement-to-track associations, before remaining association ambiguities are handled by a more comprehensive (and expensive) data association scheme. The essential property of a gate is to accept a high percentage of correct associations, thus maximising track accuracy, but provide a sufficiently tight bound to minimise the number of ambiguous associations. For linear Gaussian systems, the ellipsoidal validation gate is standard, and possesses the statistical property whereby a given threshold will accept a certain percentage of true associations. This property does not hold for non-linear non-Gaussian models. As a system departs from linear-Gaussian, the ellipsoid gate tends to reject a higher than expected proportion of correct associations and permit an excess of false ones. In this paper, the concept of the ellipsoidal gate is extended to permit correct statistics for the non-linear non-Gaussian case. The new gate is demonstrated by a bearing-only tracking example.
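For reference, a minimal sketch of the standard ellipsoidal validation gate for the linear-Gaussian case that the paper generalizes; the innovation covariance and gate probability below are illustrative:

```python
import numpy as np
from scipy.stats import chi2

def ellipsoidal_gate(z, z_pred, S, prob=0.99):
    """Accept measurement z if its Mahalanobis distance to the
    predicted measurement z_pred (innovation covariance S) falls
    inside the chi-square gate of the given probability."""
    nu = z - z_pred                        # innovation
    d2 = nu @ np.linalg.solve(S, nu)       # squared Mahalanobis distance
    return d2 <= chi2.ppf(prob, df=len(z))

# Example: 2-D position measurement
S = np.array([[2.0, 0.3], [0.3, 1.0]])
print(ellipsoidal_gate(np.array([1.0, 0.5]), np.zeros(2), S))
```

In the linear-Gaussian case the squared Mahalanobis distance of a correct association is chi-square distributed with dimension-many degrees of freedom, which is exactly the statistical property the paper shows fails under non-linearity.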
Abstract:
The Tourism, Racing and Fair Trading (Miscellaneous Provisions) Act 2002 (“the Act”), which was passed on 18 April 2002, contains a number of significant amendments relevant to the operation of the Property Agents and Motor Dealers Act 2000. The main changes relevant to property transactions are: (i) Changes to the process for appointment of a real estate agent and consolidation of the appointment forms; (ii) Additions to the disclosure obligations of agents and property developers; (iii) Simplification of the process for commencing the cooling-off period; (iv) Alteration of the common law position concerning when the parties are bound by a contract; (v) Removal of the requirement for a seller’s signature on the warning statement to be witnessed; (vi) Retrospective amendment of s 170 of the Body Corporate and Community Management Act 1997; (vii) Inclusion of a new power to allow inspectors to enter the place of business of a licensee or a marketeer without consent and without a warrant; and (viii) Inclusion of a new power for inspectors to require documents to be produced by marketeers. The majority of the amendments are effective from the date of assent, 24 April 2002; however, some of the amendments do not commence until a date fixed by proclamation. No proclamation has been made at the time of writing (2 May 2002). Where the amendments have not commenced, this will be noted in the article. Before providing clients with advice, practitioners should carefully check proclamation details.
Abstract:
One of the many difficulties associated with the drafting of the Property Agents and Motor Dealers Act 2000 (Qld) (‘the Act’) is the operation of s 365. If the requirements imposed by this section concerning the return of the executed contract are not complied with, the buyer and the seller will not be bound by the relevant contract and the cooling-off period will not commence. In these circumstances, it is clear that a buyer’s offer may be withdrawn. However, the drafting of the Act creates a difficulty in that the ability of the seller to withdraw from the transaction prior to the parties being bound by the contract is not expressly provided by s 365. On one view, if the buyer is able to withdraw an offer at any time before receiving the prescribed contract documentation, the seller also should not be bound by the contract until this time, notwithstanding that the seller may have been bound at common law. However, an alternative analysis is that the legislative omission to provide the seller with a right of withdrawal may be deliberate given the statutory focus on buyer protection. If this analysis were correct, the seller would be denied the right to withdraw from the transaction after the contract was formed at common law (that is, after the seller had signed and the fact of signing had been communicated to the buyer).
Abstract:
Against a background of already thin markets in some sectors of major public sector infrastructure in Australia and the desire of the Australian federal government to leverage private finance, concerns about ensuring sufficient levels of competition are prompting the federal government to seek new sources of in-bound foreign direct investment, as part of attracting more foreign contractors and consortia to bid for Australian public sector major infrastructure. As a first step towards attracting greater overseas interest in the Australian public sector infrastructure market, an improved understanding of the determinants of multinational contractors’ willingness to bid in this market is offered by Dunning’s eclectic paradigm, which has been a dominant approach in international business for over 20 years and yet has been little used in the context of international contracting. This paper aims to develop Dunning’s eclectic framework and gives a brief outline of a research plan to collect secondary and primary data from international contractors around the globe in pursuance of testing the eclectic framework.
Abstract:
A bioassay technique, based on surface-enhanced Raman scattering (SERS) tagged gold nanoparticles encapsulated with a biotin-functionalised polymer, has been demonstrated through the spectroscopic detection of a streptavidin binding event. A methodical series of steps preceded these results: synthesis of nanoparticles which were found to give a reproducible SERS signal; design and synthesis of polymers with RAFT-functional end groups able to encapsulate the gold nanoparticle. The polymer also enabled the attachment of a biotin molecule functionalised so that it could be attached to the hybrid nanoparticle through a modular process. Finally, a positive bioassay was demonstrated for this model construct using streptavidin/biotin binding. The synthesis of silver and gold nanoparticles was performed by using tri-sodium citrate as the reducing agent. The shape of the silver nanoparticles was quite difficult to control. Gold nanoparticles were able to be prepared in more regular shapes (spherical) and therefore gave a more consistent and reproducible SERS signal. The synthesis of gold nanoparticles with a diameter of 30 nm was the most reproducible and these were also stable over the longest periods of time. From the SERS results the optimal size of gold nanoparticles was found to be approximately 30 nm. Obtaining a consistent SERS signal with nanoparticles smaller than this was particularly difficult. Nanoparticles more than 50 nm in diameter were too large to remain suspended for longer than a day or two and formed a precipitate, rendering the solutions useless for our desired application. Gold nanoparticles dispersed in water were able to be stabilised by the addition of as-synthesised polymers dissolved in a water-miscible solvent. Polymer-stabilised AuNPs could not be formed from polymers synthesised by conventional free radical polymerization, i.e. polymers that did not possess a sulphur-containing end-group. This indicated that the sulphur-containing functionality present within the polymers was essential for the self-assembly process to occur. Polymer stabilization of the gold colloid was evidenced by a range of techniques including visible spectroscopy, transmission electron microscopy, Fourier transform infrared spectroscopy, thermogravimetric analysis and Raman spectroscopy. After treatment of the hybrid nanoparticles with a series of SERS tags, focussing on 2-quinolinethiol, the SERS signals were found to have signal intensity comparable to that of the citrate-stabilised gold nanoparticles. This finding illustrates that the stabilization process does not interfere with the ability of gold nanoparticles to act as substrates for the SERS effect. Incorporation of a biotin moiety into the hybrid nanoparticles was achieved through a ‘click’ reaction between an alkyne-functionalised polymer and an azido-functionalised biotin analogue. This functionalised biotin was prepared through a 4-step synthesis from biotin. Upon exposure of the surface-bound streptavidin to biotin-functionalised polymer hybrid gold nanoparticles, then washing, a SERS signal was obtained from the 2-quinolinethiol which was attached to the gold nanoparticles (positive assay). After exposure to functionalised polymer hybrid gold nanoparticles without biotin present, then washing, a SERS signal was not obtained as the nanoparticles did not bind to the streptavidin (negative assay). These results illustrate the applicability of SERS-active, functional-polymer encapsulated gold nanoparticles for bioassay applications.
Abstract:
This paper establishes practical stability results for an important range of approximate discrete-time filtering problems involving mismatch between the true system and the approximating filter model. Using a local consistency assumption, the practical stability established is in the sense of an asymptotic bound on the amount of bias introduced by the model approximation. Significantly, these practical stability results do not require the approximating model to be of the same model type as the true system. Our analysis applies to a wide range of estimation problems and justifies the common practice of approximating intractable infinite dimensional nonlinear filters by simpler computationally tractable filters.
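A toy sketch of the mismatch setting, assuming an invented mildly nonlinear scalar system tracked by a simpler linear Kalman filter; the time-averaged error acts as an empirical stand-in for the asymptotic bias that the paper bounds:

```python
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 0.9, 0.1, 0.5
x, xhat, P = 0.0, 0.0, 1.0
errs = []
for _ in range(5000):
    # True (nonlinear) system: x' = a*tanh(x) + w,  y = x + v
    x = a*np.tanh(x) + rng.normal(0, np.sqrt(q))
    y = x + rng.normal(0, np.sqrt(r))
    # Mismatched linear filter model: x' = a*x + w
    xhat, P = a*xhat, a*a*P + q                # predict
    K = P / (P + r)                            # Kalman gain
    xhat, P = xhat + K*(y - xhat), (1 - K)*P   # update
    errs.append(xhat - x)

print("empirical bias of the mismatched filter:", np.mean(errs))
```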
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
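A sketch of how the stated bound scales, treating it as the training error estimate plus A³√((log n)/m) with the constants and the log A, log m factors dropped; the numbers are indicative of scaling only, not a rigorous bound:

```python
import math

def misclassification_bound(train_err_estimate, A, n, m):
    """Indicative scaling of the bound: training error estimate plus
    A^3 * sqrt(log(n)/m); constants and log A, log m factors omitted."""
    return train_err_estimate + A**3 * math.sqrt(math.log(n) / m)

# The bound depends on the weight magnitude A, not on the number of weights:
for A in (1.0, 2.0, 4.0):
    print(A, misclassification_bound(0.05, A, n=100, m=10000))
```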
Abstract:
We consider the problem of prediction with expert advice in the setting where a forecaster is presented with several online prediction tasks. Instead of competing against the best expert separately on each task, we assume the tasks are related, and thus we expect that a few experts will perform well on the entire set of tasks. That is, our forecaster would like, on each task, to compete against the best expert chosen from a small set of experts. While we describe the “ideal” algorithm and its performance bound, we show that the computation required for this algorithm is as hard as computation of a matrix permanent. We present an efficient algorithm based on mixing priors, and prove a bound that is nearly as good for the sequential task presentation case. We also consider a harder case where the task may change arbitrarily from round to round, and we develop an efficient approximate randomized algorithm based on Markov chain Monte Carlo techniques.
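As background for the algorithms discussed, a minimal sketch of the standard exponentially weighted forecaster that competes with the best single expert on one task; the learning rate, losses, and data below are illustrative, and the paper's mixing-priors and MCMC algorithms build on this template:

```python
import numpy as np

def exp_weights(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted average forecaster over K experts."""
    K, T = expert_preds.shape
    w = np.ones(K) / K
    total_loss = 0.0
    for t in range(T):
        p = w @ expert_preds[:, t]                 # weighted prediction
        total_loss += (p - outcomes[t])**2         # square loss
        losses = (expert_preds[:, t] - outcomes[t])**2
        w *= np.exp(-eta * losses)                 # multiplicative update
        w /= w.sum()
    return total_loss

rng = np.random.default_rng(1)
preds = rng.random((5, 100))     # 5 experts, 100 rounds
outs = rng.random(100)
print(exp_weights(preds, outs))
```

For a suitable learning rate, this forecaster's regret against the best single expert grows only logarithmically with the number of experts, which is the guarantee the multitask setting refines.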
Abstract:
Despite the conventional wisdom that proactive security is superior to reactive security, we show that reactive security can be competitive with proactive security as long as the reactive defender learns from past attacks instead of myopically overreacting to the last attack. Our game-theoretic model follows common practice in the security literature by making worst-case assumptions about the attacker: we grant the attacker complete knowledge of the defender’s strategy and do not require the attacker to act rationally. In this model, we bound the competitive ratio between a reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense. Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the attacker’s incentives and knowledge.
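A toy sketch in the spirit of the reactive strategy described: the defender reallocates a fixed budget by multiplicative updates on observed attack losses rather than moving it wholesale to the last attacked point; the attack surfaces, loss model, and learning rate are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_surfaces, eta, budget = 4, 0.5, 1.0
w = np.ones(n_surfaces)

for _ in range(50):
    defense = budget * w / w.sum()          # proportional allocation
    attacked = rng.integers(n_surfaces)     # worst-case in the paper; random here
    # attacker's payoff shrinks with the defense on the attacked surface
    loss = np.zeros(n_surfaces)
    loss[attacked] = 1.0 / (1.0 + defense[attacked])
    w *= np.exp(eta * loss)                 # shift budget toward lossy surfaces

print("final allocation:", budget * w / w.sum())
```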
Abstract:
Recent research on multiple kernel learning has led to a number of approaches for combining kernels in regularized risk minimization. The proposed approaches include different formulations of objectives and varying regularization strategies. In this paper we present a unifying optimization criterion for multiple kernel learning and show how existing formulations are subsumed as special cases. We also derive the criterion’s dual representation, which is suitable for general smooth optimization algorithms. Finally, we evaluate multiple kernel learning in this framework analytically using a Rademacher complexity bound on the generalization error and empirically in a set of experiments.
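A minimal sketch of the basic object being optimized: a convex combination of base kernels supplied to a standard SVM as a precomputed Gram matrix. The paper's criterion learns the combination weights, which are hard-coded here, and the use of scikit-learn is an illustrative choice:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 5))
y = (X[:, 0] + X[:, 1]**2 > 0.5).astype(int)

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(A, B, weights=(0.7, 0.3)):
    # Fixed convex combination of an RBF and a linear kernel;
    # multiple kernel learning would optimize these weights.
    return weights[0] * rbf(A, B) + weights[1] * A @ B.T

clf = SVC(kernel="precomputed").fit(combined_kernel(X, X), y)
print(clf.predict(combined_kernel(X[:5], X)))
```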
Abstract:
We investigate the behavior of the empirical minimization algorithm using various methods. We first analyze it by comparing the empirical (random) structure and the original one on the class, either in an additive sense, via the uniform law of large numbers, or in a multiplicative sense, using isomorphic coordinate projections. We then show that a direct analysis of the empirical minimization algorithm yields a significantly better bound, and that the estimates we obtain are essentially sharp. The method of proof we use is based on Talagrand’s concentration inequality for empirical processes.
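A small numerical illustration of the additive comparison mentioned above, assuming an invented data distribution and a finite class of one-dimensional threshold classifiers: the uniform deviation between empirical and true risks is contrasted with the typically smaller excess risk of the empirical minimizer:

```python
import numpy as np

rng = np.random.default_rng(4)
thresholds = np.linspace(-2, 2, 41)    # finite class of 1-D threshold classifiers

def risks(x, y):
    # 0-1 risk of each classifier 1{x > t} on the sample (x, y)
    return np.array([np.mean((x > t).astype(int) != y) for t in thresholds])

def sample(n):
    x = rng.normal(size=n)
    noise = (rng.random(n) < 0.1).astype(int)
    y = ((x > 0.3).astype(int) + noise) % 2    # threshold 0.3 with 10% label noise
    return x, y

R = risks(*sample(200000))       # proxy for the true risks
Rhat = risks(*sample(200))       # empirical risks on a small sample

print("uniform deviation sup|Rhat - R|  :", np.max(np.abs(Rhat - R)))
print("excess risk of empirical minimizer:", R[np.argmin(Rhat)] - R.min())
```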
Abstract:
We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of n·(n−1 choose ≤ d−1)/(n choose ≤ d) < d, where (n choose ≤ d) denotes Σ_{i=0}^{d} (n choose i), which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d as being d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth (the second part to a conjectured proof of correctness for Peeling): that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
Abstract:
H. Simon and B. Szörényi have found an error in the proof of Theorem 52 of “Shifting: One-inclusion mistake bounds and sample compression”, Rubinstein et al. (2009). In this note we provide a corrected proof of a slightly weakened version of this theorem. Our new bound on the density of one-inclusion hypergraphs is again in terms of the capacity of the multilabel concept class. Simon and Szörényi have recently proved an alternate result in Simon and Szörényi (2009).
Abstract:
We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
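A minimal sketch of the complexity penalization recipe, assuming a nested sequence of polynomial models and a generic √(complexity/n) penalty with an invented constant; the paper's contribution is that a tighter, estimation-error-based penalty is valid when the models are ordered by inclusion, as these are:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
x = rng.uniform(-1, 1, n)
y = np.sin(3*x) + rng.normal(0, 0.3, n)

def empirical_risk(deg):
    # least-squares fit within the model of polynomials of degree <= deg
    coeffs = np.polyfit(x, y, deg)
    return np.mean((np.polyval(coeffs, x) - y)**2)

# choose the model minimizing empirical risk plus a complexity penalty
best = min(
    range(1, 11),
    key=lambda d: empirical_risk(d) + 2.0*np.sqrt((d + 1)/n)
)
print("selected model degree:", best)
```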