952 results for agent-oriented programming
Abstract:
In Moneywood Pty Ltd v Salamon Nominees Pty Ltd the High Court of Australia considered an appeal from the Queensland Court of Appeal concerning the correct interpretation of s76(1)(c) of the Auctioneers and Agents Act 1971 (Qld). In paraphrase, s76(1)(c) provides that a real estate agent shall not be entitled to sue for or recover any commission unless "the engagement or appointment to act as … real estate agent … in respect of such transaction is in writing signed by the person to be charged with such … commission … or the person's agent or representative" ("the statutory requirement").
Abstract:
There is an urgent need to develop safe, effective, dual-purpose contraceptive agents that combine the prevention of pregnancy with protection against sexually transmitted diseases. Here we report the identification of a group of compounds that on contact with human spermatozoa induce a state of "spermostasis," characterized by the extremely rapid inhibition of sperm movement without compromising cell viability. These spermostatic agents were more active and significantly less toxic than the reagent in current clinical use, nonoxynol-9, giving therapeutic indices (ratio of spermostatic to cytotoxic activity) that were orders of magnitude greater than this traditional spermicide. Although certain compounds could trigger reactive oxygen species generation by spermatozoa, this activity was not correlated with spermostasis. Rather, the latter was associated with alkylation of two major sperm tail proteins that were identified as A Kinase-Anchoring Proteins (AKAP3 and AKAP4) by mass spectrometry. As a consequence of disrupted AKAP function, the abilities of cAMP to drive protein kinase A-dependent activities in the sperm tail, such as the activation of SRC and the consequent stimulation of tyrosine phosphorylation, were suppressed. Furthermore, analysis of microbicidal activity using Chlamydia muridarum revealed powerful inhibitory effects at the same low micromolar doses that suppressed sperm movement. In this case, the microbicidal action was associated with alkylation of Major Outer Membrane Protein (MOMP), a major chlamydial membrane protein. Taken together, these results have identified for the first time a novel set of cellular targets and chemical principles capable of providing simultaneous defense against both fertility and the spread of sexually transmitted disease.
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space, which are classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labeled part of the data one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
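The abstract's central object, the kernel matrix, can be illustrated concretely. The sketch below (an illustrative example, not the paper's SDP method; the Gaussian RBF kernel and the variable names are my own choices) builds a Gram matrix from data and checks the two properties the abstract names: symmetry and positive semidefiniteness.

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    """Gram (kernel) matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).

    K implicitly encodes inner products between the embedded points,
    so it must be symmetric and positive semidefinite.
    """
    sq = np.sum(X ** 2, axis=1)
    # pairwise squared Euclidean distances via the expansion
    # ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 <x_i, x_j>
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

X = np.random.default_rng(0).normal(size=(5, 3))
K = rbf_kernel_matrix(X)

assert np.allclose(K, K.T)                      # symmetric
assert np.linalg.eigvalsh(K).min() > -1e-9      # positive semidefinite
```

The paper's contribution is to treat the entries of such a matrix (over training and test points jointly) as optimization variables, with the PSD property enforced as the semidefinite constraint of an SDP.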
Abstract:
We present an algorithm called Optimistic Linear Programming (OLP) for learning to optimize average reward in an irreducible but otherwise unknown Markov decision process (MDP). OLP uses its experience so far to estimate the MDP. It chooses actions by optimistically maximizing estimated future rewards over a set of next-state transition probabilities that are close to the estimates, a computation that corresponds to solving linear programs. We show that the total expected reward obtained by OLP up to time T is within C(P) log T of the reward obtained by the optimal policy, where C(P) is an explicit, MDP-dependent constant. OLP is closely related to an algorithm proposed by Burnetas and Katehakis, with four key differences: OLP is simpler; it does not require knowledge of the supports of the transition probabilities; the proof of its regret bound is simpler; but its regret bound is a constant factor larger than that of their algorithm. OLP is also similar in flavor to an algorithm recently proposed by Auer and Ortner, but OLP is simpler and its regret bound has a better dependence on the size of the MDP.
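The optimistic step the abstract describes, maximizing estimated future rewards over transition probabilities close to the empirical estimates, can be sketched for a single state-action pair. This is a minimal illustration, not OLP's exact linear program: it assumes an L1 confidence ball of radius `eps` around the estimate (as in UCRL-style analyses), and the function and variable names are hypothetical. For that inner LP a greedy solution is known: shift probability mass onto the highest-value next state, taking it from the lowest-value states.

```python
import numpy as np

def optimistic_transition(p_hat, u, eps):
    """Maximize p @ u over distributions p with ||p - p_hat||_1 <= eps.

    Greedy closed-form solution of this inner linear program: inflate
    the probability of the best next state, paying for it by deflating
    the worst next states.
    """
    p = p_hat.astype(float).copy()
    order = np.argsort(u)                 # next states, worst value first
    best = order[-1]
    add = min(eps / 2.0, 1.0 - p[best])
    p[best] += add                        # optimism: inflate the best outcome
    rem = add
    for s in order[:-1]:                  # remove the same mass from the worst
        take = min(p[s], rem)
        p[s] -= take
        rem -= take
        if rem <= 0.0:
            break
    return p

p_hat = np.full(4, 0.25)                  # empirical transition estimate
u = np.array([0.0, 1.0, 2.0, 3.0])        # estimated values of next states
p_opt = optimistic_transition(p_hat, u, eps=0.2)
```

The optimistic distribution `p_opt` yields an expected value at least as large as the empirical one, which is exactly the source of the exploration bonus that drives the log T regret analysis.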