927 results for Equivalence Proof
Abstract:
This article focuses on mental health assessment of refugees in clinical, educational and administrative-legal settings, synthesising research and practice to promote further development of culturally appropriate clinical assessment services during the refugee resettlement process. It specifically surveys research published over the last 25 years into the development, reliability measurement and validity testing of assessment instruments that have been used with children, adolescents and adults from refugee backgrounds, prior to or following their arrival in a resettlement country, to determine whether the instruments meet established cross-cultural standards of conceptual, functional, linguistic, technical and normative equivalence. The findings suggest that, although attempts have been made to develop internally reliable, appropriately normed tests for use with refugees from diverse cultural and linguistic backgrounds, matters of conceptual and linguistic equivalence and test–retest reliability are often overlooked. Implications of these oversights for underreporting refugees' mental health needs are considered. Efforts should also be directed towards developing culturally comparable, valid and reliable measures of refugee children's mental health and of refugee children's and adults' psychoeducational, neuropsychological and applied memory capabilities.
Abstract:
Aim: This paper reports a study designed to assess the psychometric properties (validity and reliability) of a Turkish version of the Australian Parents’ Fever Management Scale (PFMS). Background: Little is known about childhood fever management among Turkish parents, and no scales to measure parents’ fever management practices in Turkey are available. Design: This is a methodological study. Methods: Eighty parents of febrile children aged six months to five years were randomly selected from the paediatric hospital and two community family health centers in Sakarya, Turkey. The PFMS was back translated, and language equivalence and content validity were validated. PFMS and socio-demographic data were collected in 2009. Means and standard deviations were calculated for interval-level data, and p values less than 0.05 were considered statistically significant. Unrotated principal component analysis was used to determine construct validity, and Cronbach’s coefficient alpha determined the internal consistency reliability. Results: The PFMS was psychometrically sound in this population. Construct validity, confirmed by factor analysis [KMO = 0.812; Bartlett’s test of sphericity: χ² = 182.799, df = 28, p < 0.001], revealed the Turkish version to comprise the eight original PFMS items. The internal consistency reliability coefficient was 0.80, and the scale’s item-total correlation coefficients ranged from 0.15 to 0.66 and were significant (p < 0.001). Interestingly, parents reported high PFMS scores, 34.52 ± 4.60 (possible range 8–40, with 40 indicating a high burden of care for febrile children). Conclusion: The PFMS was as psychometrically robust in a Turkish population as in an Australian population and is, therefore, a useful tool for health professionals to identify parents’ practices and provide targeted education, thereby reducing the unnecessary burden of care parents place on themselves when caring for a febrile child. Relevance to clinical practice: Testing in different populations, cultures and healthcare systems will further establish the usefulness of the PFMS in clinical practice and research.
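The internal-consistency statistic reported above (Cronbach's coefficient alpha of 0.80) can be computed directly from an item-response matrix. A minimal sketch, using hypothetical data rather than the study's responses (which are not public):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's coefficient alpha for an (n_respondents, n_items) matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical sanity check: four items that track one latent score perfectly
# are maximally consistent, so alpha is exactly 1.0.
latent = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
perfect = np.column_stack([latent] * 4)
alpha = cronbach_alpha(perfect)   # -> 1.0
```

In practice alpha is run on the real 80 × 8 PFMS response matrix; values around 0.8, as reported here, are conventionally read as good internal consistency.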
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
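The scaling claimed in this bound is easy to probe numerically. A minimal sketch of the size-dependent penalty term A³√((log n)/m), with the constants and log A, log m factors that the abstract suppresses ignored, and with illustrative numbers that are not from the paper:

```python
import math

def size_penalty(A: float, n: int, m: int) -> float:
    """The weight-magnitude term A^3 * sqrt(log(n)/m) from the stated bound."""
    return A ** 3 * math.sqrt(math.log(n) / m)

# The penalty depends on the weight bound A and the sample count m, not on
# the number of weights: halving A shrinks it 8x; quadrupling m halves it.
p_big = size_penalty(A=2.0, n=100, m=10_000)
p_small = size_penalty(A=1.0, n=100, m=10_000)
p_more_data = size_penalty(A=2.0, n=100, m=40_000)
```

This is the sense in which weight decay and early stopping, which keep A small, can matter more for generalization than reducing the raw number of weights.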
Abstract:
We investigate the behavior of the empirical minimization algorithm using various methods. We first analyze it by comparing the empirical (random) structure of the class with the original one, either in an additive sense, via the uniform law of large numbers, or in a multiplicative sense, using isomorphic coordinate projections. We then show that a direct analysis of the empirical minimization algorithm yields a significantly better bound, and that the estimates we obtain are essentially sharp. The method of proof we use is based on Talagrand’s concentration inequality for empirical processes.
Abstract:
We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of n·choose(n−1, ≤d−1)/choose(n, ≤d) < d, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d as being d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth—the second part to a conjectured proof of correctness for Peeling—that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
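The density bound stated here can be checked numerically. A small sketch, reading choose(n, ≤d) as the partial binomial sum C(n, 0) + … + C(n, d) (an interpretation consistent with the companion report's notation, not spelled out in this abstract):

```python
from math import comb

def choose_leq(n: int, d: int) -> int:
    """Partial binomial sum C(n, <=d) = sum of C(n, i) for i = 0..d."""
    return sum(comb(n, i) for i in range(d + 1))

def density_bound(n: int, d: int) -> float:
    """The quantity n * C(n-1, <=d-1) / C(n, <=d), claimed to be < d."""
    return n * choose_leq(n - 1, d - 1) / choose_leq(n, d)

# Exhaustive check of the strict inequality for small n and d
assert all(density_bound(n, d) < d
           for n in range(2, 40)
           for d in range(1, n))
```

For instance, density_bound(3, 2) = 3·3/7 ≈ 1.29, comfortably below d = 2; the identity n·C(n−1, i) = (i+1)·C(n, i+1) is what makes the numerator a value-weighted version of the denominator, which is why the ratio stays under d.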
Abstract:
H. Simon and B. Szörényi have found an error in the proof of Theorem 52 of “Shifting: One-inclusion mistake bounds and sample compression”, Rubinstein et al. (2009). In this note we provide a corrected proof of a slightly weakened version of this theorem. Our new bound on the density of one-inclusion hypergraphs is again in terms of the capacity of the multilabel concept class. Simon and Szörényi have recently proved an alternate result in Simon and Szörényi (2009).
Abstract:
The paper "The importance of convexity in learning with squared loss" gave a lower bound on the sample complexity of learning with quadratic loss using a nonconvex function class. The proof contains an error. We show that the lower bound is true under a stronger condition that holds for many cases of interest.
Abstract:
We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube—one-inclusion graph. The first main result of this report is a density bound of n·choose(n−1, ≤d−1)/choose(n, ≤d) < d, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d as being d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth—the second part to a conjectured proof of correctness for Peeling—that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
Abstract:
In this paper we investigate the heuristic construction of bijective s-boxes that satisfy a wide range of cryptographic criteria, including algebraic complexity, high nonlinearity and low autocorrelation, and that have none of the known weaknesses, including linear structures, fixed points or linear redundancy. We demonstrate that the power mappings can be evolved (by iterated mutation operators alone) to generate bijective s-boxes with the best known tradeoffs among the considered criteria. The s-boxes found are suitable for use directly in modern encryption algorithms.
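The mutate-and-keep loop described here can be sketched at toy scale. A minimal illustration on a hypothetical 4-bit bijective s-box (the paper evolves power mappings against several criteria at realistic sizes; this sketch scores nonlinearity only, via the Walsh transform):

```python
import random

N = 4                 # toy 4-bit s-box; real designs use larger sizes
SIZE = 1 << N

def parity(x: int) -> int:
    return bin(x).count("1") & 1

def nonlinearity(sbox: list) -> int:
    """2^(n-1) - max|W|/2 over all non-zero output masks (Walsh transform)."""
    worst = 0
    for b in range(1, SIZE):              # non-zero output mask
        for a in range(SIZE):             # input mask
            w = sum(1 - 2 * parity((b & sbox[x]) ^ (a & x))
                    for x in range(SIZE))
            worst = max(worst, abs(w))
    return SIZE // 2 - worst // 2

def mutate(sbox: list) -> list:
    """Swap two entries: the child is bijective by construction."""
    child = list(sbox)
    i, j = random.sample(range(SIZE), 2)
    child[i], child[j] = child[j], child[i]
    return child

random.seed(1)
sbox = list(range(SIZE))                  # identity map: fully linear, score 0
best = nonlinearity(sbox)
for _ in range(300):                      # iterated mutation, keep non-worse
    cand = mutate(sbox)
    score = nonlinearity(cand)
    if score >= best:
        sbox, best = cand, score
```

Accepting equal-scoring mutants lets the search drift across fitness plateaus, which is part of what makes mutation-only evolution workable in this setting.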
Abstract:
We present an algorithm called Optimistic Linear Programming (OLP) for learning to optimize average reward in an irreducible but otherwise unknown Markov decision process (MDP). OLP uses its experience so far to estimate the MDP. It chooses actions by optimistically maximizing estimated future rewards over a set of next-state transition probabilities that are close to the estimates, a computation that corresponds to solving linear programs. We show that the total expected reward obtained by OLP up to time T is within C(P) log T of the reward obtained by the optimal policy, where C(P) is an explicit, MDP-dependent constant. OLP is closely related to an algorithm proposed by Burnetas and Katehakis with four key differences: OLP is simpler, it does not require knowledge of the supports of transition probabilities, the proof of the regret bound is simpler, but our regret bound is a constant factor larger than the regret of their algorithm. OLP is also similar in flavor to an algorithm recently proposed by Auer and Ortner. But OLP is simpler and its regret bound has a better dependence on the size of the MDP.
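The optimistic inner maximization has a simple closed form when the uncertainty set is an L1 ball around the estimated transition vector, a common simplification in this line of work. OLP itself solves linear programs over a related set, so treat this as a hedged stand-in rather than the paper's procedure:

```python
def optimistic_transition(p_hat, values, delta):
    """Maximize p . values over distributions p with ||p - p_hat||_1 <= delta.

    Greedy closed form: move mass onto the highest-value next state,
    taking it away from the lowest-value states first.
    """
    order = sorted(range(len(values)), key=lambda s: values[s], reverse=True)
    p = list(p_hat)
    best = order[0]
    p[best] = min(1.0, p[best] + delta / 2)      # add up to delta/2 of mass
    surplus = sum(p) - 1.0
    for s in reversed(order):                    # shed mass from worst states
        if surplus <= 1e-12:
            break
        if s == best:
            continue
        take = min(p[s], surplus)
        p[s] -= take
        surplus -= take
    return p

# Hypothetical 3-state example: optimism shifts mass toward the value-5 state
p = optimistic_transition([0.5, 0.3, 0.2], [1.0, 5.0, 3.0], delta=0.2)
```

Repeating this per state-action pair inside value iteration gives the "optimistically maximizing estimated future rewards" step the abstract describes, with the estimates p_hat tightening as more transitions are observed.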
Abstract:
"Bouncing Back: Resilient Design for Brisbane" was an opportunity for QUT students to communicate their inspiring design responses to adversity to the wider Brisbane community. The exhibition demonstrates new and innovative ways of thinking about our cities, and how they can be built to be resilient and to suit extreme environmental conditions. The challenge for architecture students is to address the state of architecture as a reflection of today's world and to consider how design fits into the 21st century. Students have explored notions of 'Urban Resilience' from multiple perspectives, including emergency design in the face of flooding, flood-proof housing and resilient urban design.
Abstract:
It was reported that the manuscript of Crash was returned to the publisher with a note reading ‘The author is beyond psychiatric help’. Ballard took the lay diagnosis as proof of complete artistic success. Crash conflates the Freudian tropes of libido and thanatos, overlaying these onto the twentieth-century erotic icon, the car. Beyond mere incompetent adolescent copulatory fumblings in the back seat of the parental sedan or the clichéd phallic locomotor of the mid-life Ferrari, Ballard engages the full potentialities of the automobile as the locus and sine qua non of a perverse, though functional, erotic. ‘Autoeroticism’ is transformed into automotive, traumatic or surgical paraphilia, driving Helmut Newton’s insipid photo-essays of BDSM and orthopædics into an entirely new dimension, dancing precisely where (but more crucially, because) the ‘body is bruised to pleasure soul’. The serendipity of quotidian accidental collisions is supplanted, in pursuit of the fetishised object, by contrived (though not simulated) recreations of iconographic celebrity deaths. Penetration remains as a guiding trope of sexuality, but it is confounded by a perversity of focus. Such an obsessive pursuit of this autoerotic-as-reality necessitates the rejection of the law of human sexual regulation, requiring the re-interpretation of what constitutes sex itself by looking beyond or through conventional sexuality into Ballard’s paraphiliac and nightmarish consensual Other. This Other allows for (if not demands) the tangled wreckage of a sportscar to function as a transformative sexual agent, creating, of woman, a being of ‘free and perverse sexuality, releasing within its dying chromium and leaking engine-parts, all the deviant possibilities of her sex’.
Abstract:
In this paper we pursue the task of aligning an ensemble of images in an unsupervised manner. This task has been commonly referred to as “congealing” in the literature. A form of congealing, using a least-squares criterion, has recently been demonstrated to have desirable properties over conventional congealing. Least-squares congealing can be viewed as an extension of the Lucas & Kanade (LK) image alignment algorithm. It is well understood that the alignment performance for the LK algorithm, when aligning a single image with another, is theoretically and empirically equivalent for additive and compositional warps. In this paper we: (i) demonstrate that this equivalence does not hold for the extended case of congealing, (ii) characterize the inherent drawbacks associated with least-squares congealing when dealing with large numbers of images, and (iii) propose a novel method for circumventing these limitations through the application of an inverse-compositional strategy that maintains the attractive properties of the original method while being able to handle very large numbers of images.
Abstract:
Tort law reform has resulted in legislation being passed by all Australian jurisdictions in the past decade implementing the recommendations contained in the Ipp Report. The report was a response to a perceived crisis in medical indemnity insurance, and its objective was to restrict and limit liability in negligence actions. This paper considers to what extent the reforms have impacted on the liability of health professionals in medical negligence actions. The reversal of the onus of proof through the obvious risk sections has attempted to extend the scope of the defence of voluntary assumption of risk. There is no liability for the materialisation of an inherent risk. Presumptions and mandatory reductions for contributory negligence have attempted to reduce the liability of defendants; reductions of 100% for contributory negligence are now possible. Apologies can be made with no admission of legal liability, to encourage them being made and thereby reduce the number of actions being commenced. The peer acceptance defence has been introduced and enacted by legislation. There is protection for Good Samaritans even though the Ipp Report recommended against such protection. Limitation periods have been amended. Provisions relating to mental harm have been introduced, re-instating the requirements of normal fortitude and direct perception. After an analysis of the legislation, it will be argued in this paper that, while there has been some limitation and restriction, courts have generally interpreted the civil liability reforms in compliance with the common law. It has been the impact of statutory limits on the assessment of damages which has limited the liability of health professionals in medical negligence actions.
Abstract:
Notwithstanding the obvious potential advantages of information and communications technology (ICT) in the enhanced provision of healthcare services, there are some concerns associated with integration of and access to electronic health records. A security violation in health records, such as an unauthorised disclosure or unauthorised alteration of an individual's health information, can significantly undermine both healthcare providers' and consumers' confidence and trust in e-health systems. A crisis in confidence in any national level e-health system could seriously degrade the realisation of the system's potential benefits. In response to the privacy and security requirements for the protection of health information, this research project investigated national and international e-health development activities to identify the necessary requirements for the creation of a trusted health information system architecture consistent with legislative and regulatory requirements and relevant health informatics standards. The research examined the appropriateness and sustainability of the current approaches for the protection of health information. It then proposed an architecture to facilitate the viable and sustainable enforcement of privacy and security in health information systems under the project title "Open and Trusted Health Information Systems (OTHIS)". OTHIS addresses necessary security controls to protect sensitive health information when such data is at rest, during processing and in transit with three separate and achievable security function-based concepts and modules: a) Health Informatics Application Security (HIAS); b) Health Informatics Access Control (HIAC); and c) Health Informatics Network Security (HINS). 
The outcome of this research is a roadmap for a viable and sustainable architecture for providing robust protection and security of health information, including elucidations of three achievable security control subsystem requirements within the proposed architecture. The successful completion of two proof-of-concept prototypes demonstrated the comprehensibility, feasibility and practicality of the HIAC and HIAS models for the development and assessment of trusted health systems. Meanwhile, the OTHIS architecture has provided guidance for technical and security design appropriate to the development and implementation of trusted health information systems, whilst simultaneously offering guidance for ongoing research projects. The socio-economic implications of this research can be summarised in the fact that it embraces the need for low-cost security strategies against economic realities by using open-source technologies for the overall test implementation. This allows the proposed architecture to be publicly accessible, providing a platform for interoperability to meet real-world application security demands. On the whole, the OTHIS architecture sets a high security standard for the establishment and maintenance of both current and future health information systems, thereby increasing healthcare providers’ and consumers’ trust in the adoption of electronic health records to realise the associated benefits.