989 results for Prove
Abstract:
Association rule mining has contributed to many advances in the area of knowledge discovery. However, the quality of the discovered association rules is a big concern and has drawn more and more attention recently. One problem with the quality of the discovered association rules is the huge size of the extracted rule set. Often for a dataset, a huge number of rules can be extracted, but many of them can be redundant to other rules and thus useless in practice. Mining non-redundant rules is a promising approach to solve this problem. In this paper, we first propose a definition for redundancy, then propose a concise representation, called a Reliable basis, for representing non-redundant association rules. The Reliable basis contains a set of non-redundant rules which are derived using frequent closed itemsets and their generators instead of using frequent itemsets that are usually used by traditional association rule mining approaches. An important contribution of this paper is that we propose to use the certainty factor as the criterion to measure the strength of the discovered association rules. Using this criterion, we can ensure the elimination of as many redundant rules as possible without reducing the inference capacity of the remaining extracted non-redundant rules. We prove that the redundancy elimination, based on the proposed Reliable basis, does not reduce the strength of belief in the extracted rules. We also prove that all association rules, their supports and confidences, can be retrieved from the Reliable basis without accessing the dataset. Therefore the Reliable basis is a lossless representation of association rules. Experimental results show that the proposed Reliable basis can significantly reduce the number of extracted rules. We also conduct experiments on the application of association rules to the area of product recommendation. 
The experimental results show that the non-redundant association rules extracted using the proposed method retain the same inference capacity as the entire rule set. This result indicates that using only the non-redundant rules is sufficient to solve real problems, without needing the entire rule set.
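The certainty-factor criterion mentioned above can be illustrated with a small sketch. This follows the standard certainty-factor definition (how much knowing the antecedent changes belief in the consequent); the toy transaction data and helper names are illustrative, not from the paper:

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def certainty_factor(antecedent, consequent, transactions):
    """Certainty factor CF(A -> B): how much knowing A changes belief in B.

    CF > 0 when A raises the probability of B, CF < 0 when it lowers it,
    and CF = 0 when A and B are independent.
    """
    conf = support(antecedent | consequent, transactions) / support(antecedent, transactions)
    supp_b = support(consequent, transactions)
    if conf > supp_b:
        return (conf - supp_b) / (1 - supp_b)  # positive evidence for B
    if conf < supp_b:
        return (conf - supp_b) / supp_b        # negative evidence for B
    return 0.0                                 # A says nothing about B

# Hypothetical transaction database.
transactions = [{"a", "b"}, {"a", "b"}, {"a"}, {"b"}]
cf = certainty_factor({"a"}, {"b"}, transactions)
# Here conf(a -> b) = 2/3 is below supp(b) = 3/4, so the CF is negative
# and the rule would be filtered out despite a non-trivial confidence.
```

Unlike confidence alone, this measure discounts rules whose consequent is frequent regardless of the antecedent, which is what allows aggressive redundancy elimination without losing inferential strength.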
Abstract:
As detailed in Whitehead, Bunker and Chung (2011), a congestion-charging scheme provides a mechanism to combat congestion whilst simultaneously generating revenue to improve both the road and public transport networks. The aim of this paper is to assess the feasibility of implementing a congestion-charging scheme in the city of Brisbane in Australia and determine the potential effects of this initiative. In order to do so, a congestion-charging scheme was designed for Brisbane and modelled using the Brisbane Strategic Transport Model with a baseline year of 2026. This paper argues that the implementation of this initiative would prove to be effective in reducing the city's road congestion and increasing the overall sustainability of the region.
Abstract:
The contributions of this thesis fall into three areas of certificateless cryptography. The first area is encryption, where we propose new constructions for both identity-based and certificateless cryptography. We construct an n-out-of-n group encryption scheme for identity-based cryptography that does not require any special means to generate the keys of the trusted authorities that are participating. We also introduce a new security definition for chosen ciphertext secure multi-key encryption. We prove that our construction is secure as long as at least one authority is uncompromised, and show that the existing constructions for chosen ciphertext security from identity-based encryption also hold in the group encryption case. We then consider certificateless encryption as the special case of 2-out-of-2 group encryption and give constructions for highly efficient certificateless schemes in the standard model. Among these is the first construction of a lattice-based certificateless encryption scheme. Our next contribution is a highly efficient certificateless key encapsulation mechanism (KEM), which we prove secure in the standard model. We introduce a new way of proving the security of certificateless schemes that are based on identity-based schemes. We leave the identity-based part of the proof intact and extend it only to cover the part that is introduced by the certificateless scheme. We show that our construction is more efficient than any instantiation of generic constructions for certificateless key encapsulation in the standard model. The third area where the thesis contributes to the advancement of certificateless cryptography is key agreement. Swanson showed that many certificateless key agreement schemes are insecure if considered in a reasonable security model. We propose the first provably secure certificateless key agreement schemes in the strongest model for certificateless key agreement.
We extend Swanson's definition for certificateless key agreement and give more power to the adversary. Our new schemes are secure as long as each party has at least one uncompromised secret. Our first construction is in the random oracle model and gives the adversary slightly more capabilities than our second construction in the standard model. Interestingly, our standard model construction is as efficient as the random oracle model construction.
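The n-out-of-n trust property described above (security as long as at least one authority is uncompromised) can be illustrated, at a far lower level of abstraction than the thesis's actual identity-based constructions, with a toy XOR combiner over independently generated partial keys: the combined key remains uniformly random to an adversary missing any single share.

```python
import secrets

def combine(shares):
    """XOR n equal-length partial keys into one combined key.

    Toy analogue of the n-out-of-n property: the result is uniformly
    random unless the adversary knows every share.  This is NOT the
    thesis's construction, just an illustration of the trust model.
    """
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

# Three "authorities" each contribute an independent 16-byte partial key.
shares = [secrets.token_bytes(16) for _ in range(3)]
key = combine(shares)
# Recombining the same shares reproduces the key deterministically.
assert combine(shares) == key
```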
Abstract:
Histories of representation of Blackness are quite distinct in Australia and in America. Indigenous Australian identities have been consistently 'fatal', in Baudrillard's use of that term. So, while Black American representation includes intensely banal images of middle-class, materialistic individuals, such histories are largely absent in the Australian context. This implies that the few such representations which do occur — and particularly those of everyday game shows such as Sale of the Century and Family Feud — are particularly important for presenting a trivial, unexciting version of Aboriginality. This also clarifies the distinction between American and Australian versions of Blackness, and suggests that the latter set of representations might be more usefully viewed in relation to Native American rather than Black American images. The status of indigeneity might prove to be more relevant to Australian Aboriginal representation than the previously favoured identity of skin colour (Blackness).
Abstract:
The requirement to prove a society united by a body of law and customs to establish native title rights has been identified as a major hurdle to achieving native title recognition. The recent appeal decision of the Federal Court in Sampi on behalf of the Bardi and Jawi People v Western Australia [2010] opens the potential for a new judicial interpretation of society based on the internal view of native title claimants. The decision draws on defining features of legal positivism to inform the court’s findings as to the existence of a single Bardi Jawi society of ‘one people’ living under ‘one law’. The case of Bodney v Bennell [2008] is analysed through a comparative study of how the application of the received positivist framework may limit native title recognition. This paper argues that the framing of Indigenous law by reference to Western legal norms is problematic due to the assumptions of legal positivism, and that an internal view based on Indigenous worldviews, which see law as intrinsically linked to the spiritual and ancestral connection to country, is more appropriate to determine proof in native title claims.
Abstract:
In this paper we consider the case of large cooperative communication systems where terminals use the slotted amplify-and-forward protocol to aid the source in its transmission. Using the perturbation expansion methods of resolvents and large deviation techniques, we obtain an expression for the Stieltjes transform of the asymptotic eigenvalue distribution of a sample covariance random matrix of the type HH†, where H is the channel matrix of the transmission model for the protocol we consider. We prove that the resulting expression is similar to the Stieltjes transform in its quadratic equation form for the Marcenko-Pastur distribution.
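For reference, the quadratic-equation form alluded to is the standard one for the Marcenko-Pastur law: with unit variance and aspect ratio y, its Stieltjes transform m(z) satisfies (this is the textbook statement, not the paper's channel-specific expression, which is shown to reduce to an equation of this shape):

```latex
y z\, m(z)^2 + (z + y - 1)\, m(z) + 1 = 0,
\qquad
m(z) = \frac{1 - y - z + \sqrt{(1 - y - z)^2 - 4 y z}}{2 y z}.
```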
Abstract:
The twists and turns in the ongoing development of the implied common law good faith obligation in the commercial contractual arena continue to prove fertile academic ground. Despite a lack of guidance from the High Court, the lower courts have been besieged by claims based, in part, on the implied obligation. Although lower court authority lacks consistency and the ‘decisions in which lower courts have recognised the legitimacy of implication of a term of good faith vary in their suggested rationales’, the implied obligation may provide some comfort to a party to ‘at least some commercial contracts’ faced with a contractual counterpart exhibiting symptoms of bad faith.
Abstract:
My aim in this paper is to challenge the increasingly common view in the literature that the law on end of life decision making is in disarray and is in need of urgent reform. My argument is that this assessment of the law is based on assumptions about the relationship between the identity of the defendant and their conduct, and about the nature of causation, which, on examination, prove to be indefensible. I then provide a clarification of the relationship between causation and omissions which proves that the current legal position does not need modification, at least on the grounds that are commonly advanced for the converse view. This enables me, in conclusion, to clarify important conceptual and moral differences between withholding, refusing and withdrawing life-sustaining measures on the one hand, and assisted suicide and euthanasia, on the other.
Abstract:
This study investigated depressive symptom and interpersonal relatedness outcomes from eight sessions of manualized narrative therapy for 47 adults with major depressive disorder. Post-therapy, depressive symptom improvement (d=1.36) and proportions of clients achieving reliable improvement (74%), movement to the functional population (61%), and clinically significant improvement (53%) were comparable to benchmark research outcomes. Post-therapy interpersonal relatedness improvement (d=.62) was less substantial than for symptoms. Three-month follow-up found maintenance of symptom, but not interpersonal, gains. Benchmarking and clinical significance analyses mitigated repeated-measures design limitations, providing empirical evidence to support narrative therapy for adults with major depressive disorder.
Abstract:
Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter β ∈ [0,1) (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter β is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains; continuous state, observation and control spaces; multiple agents; higher-order derivatives; and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward. ©2001 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.
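The eligibility-trace form of the estimator can be sketched on a toy problem. The setting below (a one-state "POMDP", i.e. a two-armed bandit with a sigmoid policy) and all names are illustrative, not from the paper; the core update is the trace z ← βz + ∇log μ(a; θ) averaged against subsequent rewards:

```python
import math, random

def gpomdp_gradient_estimate(theta, beta, T, rng):
    """GPOMDP-style estimate of the gradient of the average reward.

    Toy setting: one hidden state, two actions; action 0 pays reward 1,
    action 1 pays 0; the policy plays action 0 with probability
    sigmoid(theta).  The true gradient of the average reward is
    sigmoid(theta) * (1 - sigmoid(theta)).
    """
    p = 1.0 / (1.0 + math.exp(-theta))  # P(action 0)
    z, delta = 0.0, 0.0                 # eligibility trace, running average
    for t in range(T):
        if rng.random() < p:
            grad_log, reward = 1.0 - p, 1.0  # d/dtheta log P(a = 0)
        else:
            grad_log, reward = -p, 0.0       # d/dtheta log P(a = 1)
        z = beta * z + grad_log              # discounted trace of scores
        delta += (reward * z - delta) / (t + 1)
    return delta

est = gpomdp_gradient_estimate(theta=0.0, beta=0.5, T=100_000,
                               rng=random.Random(0))
# True gradient at theta = 0 is 0.25; the estimate should be close.
```

Note how only the trace and the running average are stored, matching the abstract's claim of storage on the order of the number of policy parameters; larger β lengthens the credit-assignment window at the cost of variance.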
Abstract:
We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities. In a decision theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinations of functions from basis classes and show how the Rademacher and Gaussian complexities of such a function class can be bounded in terms of the complexity of the basis classes. We give examples of the application of these techniques in finding data-dependent risk bounds for decision trees, neural networks and support vector machines.
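The data-dependent quantity in question, the empirical Rademacher complexity, can be estimated by Monte Carlo for any finite function class; the threshold class and sample below are an illustrative stand-in:

```python
import random

def empirical_rademacher(outputs, n_draws, rng):
    """Monte Carlo estimate of the empirical Rademacher complexity.

    `outputs` lists the vector (f(x_1), ..., f(x_n)) for each function f
    in a finite class F.  We estimate
        E_sigma [ sup_{f in F} (1/n) sum_i sigma_i f(x_i) ]
    over uniform random signs sigma_i in {-1, +1}.
    """
    n = len(outputs[0])
    total = 0.0
    for _ in range(n_draws):
        sigma = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        total += max(sum(s * o for s, o in zip(sigma, f)) / n
                     for f in outputs)
    return total / n_draws

# Class of threshold functions x -> sign(x - t) on a fixed sample.
sample = [-2.0, -1.0, 0.5, 1.5, 3.0]
thresholds = [-1.5, 0.0, 1.0, 2.0]
outputs = [[1.0 if x > t else -1.0 for x in sample] for t in thresholds]
r_hat = empirical_rademacher(outputs, n_draws=2000, rng=random.Random(0))
```

A richer class fits random signs better and so has a larger value; this is exactly the quantity that enters the risk bounds in place of a worst-case capacity measure like VC dimension.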
Abstract:
Bounded parameter Markov Decision Processes (BMDPs) address the issue of dealing with uncertainty in the parameters of a Markov Decision Process (MDP). Unlike the case of an MDP, the notion of an optimal policy for a BMDP is not entirely straightforward. We consider two notions of optimality based on optimistic and pessimistic criteria. These have been analyzed for discounted BMDPs. Here we provide results for average reward BMDPs. We establish a fundamental relationship between the discounted and the average reward problems, prove the existence of Blackwell optimal policies and, for both notions of optimality, derive algorithms that converge to the optimal value function.
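The optimistic criterion can be made concrete with the standard robust backup for interval transition probabilities: within the bounds, probability mass is pushed toward the highest-value successor states. The interval data below are hypothetical, and this is the generic interval-MDP backup rather than the paper's average-reward algorithms:

```python
def optimistic_distribution(lower, upper, values):
    """Pick transition probabilities p_s in [lower_s, upper_s], summing
    to 1, that maximize the expected successor value.

    Start every successor at its lower bound, then hand the remaining
    mass to successors in decreasing order of value, respecting the
    upper bounds.  (The pessimistic backup reverses the order.)
    """
    p = list(lower)
    mass = 1.0 - sum(lower)  # slack left to distribute
    for s in sorted(range(len(values)), key=lambda s: -values[s]):
        give = min(mass, upper[s] - lower[s])
        p[s] += give
        mass -= give
    return p

# Two successors: bounds [0.1, 0.8] and [0.2, 0.9], values 1.0 and 0.0.
# The optimist puts as much mass as allowed on the high-value state.
p = optimistic_distribution([0.1, 0.2], [0.8, 0.9], [1.0, 0.0])
```

Running this backup inside value iteration yields the optimistic value function; swapping the sort order yields the pessimistic one.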
Abstract:
We consider the problem of prediction with expert advice in the setting where a forecaster is presented with several online prediction tasks. Instead of competing against the best expert separately on each task, we assume the tasks are related, and thus we expect that a few experts will perform well on the entire set of tasks. That is, our forecaster would like, on each task, to compete against the best expert chosen from a small set of experts. While we describe the “ideal” algorithm and its performance bound, we show that the computation required for this algorithm is as hard as computation of a matrix permanent. We present an efficient algorithm based on mixing priors, and prove a bound that is nearly as good for the sequential task presentation case. We also consider a harder case where the task may change arbitrarily from round to round, and we develop an efficient approximate randomized algorithm based on Markov chain Monte Carlo techniques.
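The single-task building block behind such mixing-prior forecasters is the exponentially weighted average forecaster. A minimal sketch (the loss sequence and horizon are illustrative; with losses in [0,1] and η = √(8 ln N / T), its regret to the best expert is at most √(T ln N / 2)):

```python
import math

def hedge(loss_matrix, eta):
    """Exponentially weighted average forecaster over N experts.

    loss_matrix[t][i] is expert i's loss in [0, 1] at round t.  Returns
    the forecaster's cumulative (expected) loss under the weight
    distribution w_i proportional to exp(-eta * cumulative loss_i).
    """
    n = len(loss_matrix[0])
    log_w = [0.0] * n
    total = 0.0
    for losses in loss_matrix:
        m = max(log_w)
        w = [math.exp(lw - m) for lw in log_w]  # numerically stable
        z = sum(w)
        total += sum(wi * li for wi, li in zip(w, losses)) / z
        log_w = [lw - eta * li for lw, li in zip(log_w, losses)]
    return total

T, n = 500, 4
# Expert 0 is reliably good; the others alternate bad rounds.
losses = [[0.1] + [0.5 + 0.5 * ((t + i) % 2) for i in range(1, n)]
          for t in range(T)]
eta = math.sqrt(8 * math.log(n) / T)
alg_loss = hedge(losses, eta)
best = min(sum(row[i] for row in losses) for i in range(n))
# Regret alg_loss - best stays below sqrt(T * ln(n) / 2).
```

The multitask algorithms in the paper replace the fixed prior over experts with a mixture of priors across tasks; the exponential-weights core is unchanged.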
Abstract:
Online learning algorithms have recently risen to prominence due to their strong theoretical guarantees and an increasing number of practical applications for large-scale data analysis problems. In this paper, we analyze a class of online learning algorithms based on fixed potentials and nonlinearized losses, which yields algorithms with implicit update rules. We show how to efficiently compute these updates, and we prove regret bounds for the algorithms. We apply our formulation to several special cases where our approach has benefits over existing online learning methods. In particular, we provide improved algorithms and bounds for the online metric learning problem, and show improved robustness for online linear prediction problems. Results over a variety of data sets demonstrate the advantages of our framework.
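An implicit update solves the per-round proximal problem exactly instead of linearizing the loss. For the squared loss this has a closed form; the sketch below follows the standard proximal recipe and is not necessarily the paper's exact formulation:

```python
def implicit_sq_loss_update(w, x, y, eta):
    """Implicit (proximal) online update for the squared loss
    (1/2)(w.x - y)^2: minimizes ||w' - w||^2 / (2*eta) + loss(w')
    exactly.

    Closed form: the usual gradient step, but with the residual shrunk
    by 1 + eta * ||x||^2, so the update never overshoots the loss
    minimizer even for very large eta.
    """
    sq = sum(xi * xi for xi in x)
    resid = sum(wi * xi for wi, xi in zip(w, x)) - y
    scale = eta * resid / (1.0 + eta * sq)
    return [wi - scale * xi for wi, xi in zip(w, x)]

# With a huge step size the explicit gradient step would diverge;
# the implicit step stays between w and the minimizer.
w_new = implicit_sq_loss_update([0.0, 0.0], [1.0, 2.0], 1.0, eta=10.0)
```

This built-in damping is the intuition behind the robustness claims for implicit-update algorithms.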
Abstract:
A number of learning problems can be cast as an Online Convex Game: on each round, a learner makes a prediction x from a convex set, the environment plays a loss function f, and the learner’s long-term goal is to minimize regret. Algorithms have been proposed by Zinkevich, when f is assumed to be convex, and Hazan et al., when f is assumed to be strongly convex, that have provably low regret. We consider these two settings and analyze such games from a minimax perspective, proving minimax strategies and lower bounds in each case. These results prove that the existing algorithms are essentially optimal.
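Zinkevich's algorithm for the convex case is online projected gradient descent with step size proportional to 1/√t; a one-dimensional sketch with its O(√T) guarantee (the loss sequence is illustrative; regret is at most (3/2)·G·D·√T for G-Lipschitz losses over a set of diameter D):

```python
import math

def ogd_regret(zs, radius=1.0, G=1.0):
    """Online gradient descent (Zinkevich) on f_t(x) = |x - z_t| over
    [-radius, radius], with step eta_t = D / (G * sqrt(t)), D = 2*radius.

    Returns the regret against the best fixed point in hindsight.
    """
    D = 2.0 * radius
    x, loss = 0.0, 0.0
    for t, z in enumerate(zs, start=1):
        loss += abs(x - z)
        g = 1.0 if x > z else -1.0           # subgradient of |x - z|
        x -= (D / (G * math.sqrt(t))) * g
        x = max(-radius, min(radius, x))     # project back onto the set
    # Best fixed point is any median of the z_t; scan a grid for simplicity.
    best = min(sum(abs(c - z) for z in zs)
               for c in [i / 100.0 - radius for i in range(201)])
    return loss - best

T = 400
zs = [0.5 if t % 2 == 0 else -0.5 for t in range(T)]
regret = ogd_regret(zs)
# Guaranteed: regret <= 1.5 * G * D * sqrt(T) = 60 here.
```

The minimax analysis in the paper shows matching lower bounds, so up to constants no online algorithm can beat this √T rate in the merely-convex setting.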