967 results for Intractable Likelihood
Abstract:
BACKGROUND AND OBJECTIVE: To evaluate the long-term outcome of aqueous shunts in the treatment of infantile glaucoma refractory to conventional treatment. PATIENTS AND METHODS: The records of all patients up to 3 years of age managed with aqueous shunts for uncontrolled glaucoma between November 1990 and November 1996 were retrospectively reviewed. Ten eyes of 6 patients were included in the study. RESULTS: The mean preoperative intraocular pressure (IOP) was 29.75 ± 4.15 mm Hg (SD), with none of the eyes on antiglaucoma medication. Postoperatively, the mean IOP was 18.25 ± 5.34 mm Hg (SD) at a mean follow-up of 50 ± 25.6 (SD) months, with 7 eyes on topical antiglaucoma medication. At the final follow-up, 6 eyes were considered successfully controlled without reintervention, 2 more were controlled after shunt revision, and 2 were considered failures. CONCLUSIONS: Aqueous shunts were relatively effective in this series of infants with recalcitrant glaucoma.
Abstract:
Although remarriage is a relatively common transition, little is known about how nonresident fathers affect divorced mothers' entry into remarriage. Using the 1979–2010 rounds of the National Longitudinal Survey of Youth 1979, the authors examined the likelihood of remarriage for divorced mothers (N = 882) by nonresident father contact with children and payment of child support. The findings suggest that maternal remarriage is positively associated with nonresident father contact but not related to receiving child support.
Abstract:
In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of observed data. Such distribution functions usually rely on additional hyperparameters which need to be tuned in order to achieve optimum predictive performance; this operation can be performed efficiently in an empirical Bayes fashion by maximizing the marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that each evaluation is usually computationally intensive and scales poorly with the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We show that the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
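To make the flavor of such eigendecomposition-based identities concrete, here is a minimal Python sketch, assuming for simplicity that the only tunable hyperparameters are an overall kernel scale and a noise variance (the paper's identities address the general problem); all function names are illustrative, not the paper's code.

```python
import numpy as np

def precompute(K0, y):
    """One-time O(N^3) overhead: eigendecompose the fixed kernel matrix K0
    and rotate the targets into its eigenbasis."""
    lam, U = np.linalg.eigh(K0)       # K0 = U diag(lam) U^T
    y_tilde = U.T @ y                 # rotated targets, computed once
    return lam, y_tilde

def neg_log_marginal_likelihood(params, lam, y_tilde):
    """O(N) evaluation of the GP negative log marginal likelihood for the
    kernel s*K0 + sigma2*I, using only the precomputed eigenvalues."""
    s, sigma2 = np.exp(params)        # log-parametrization keeps both positive
    N = lam.shape[0]
    d = s * lam + sigma2              # eigenvalues of s*K0 + sigma2*I
    quad = np.sum(y_tilde**2 / d)     # y^T (s K0 + sigma2 I)^{-1} y
    logdet = np.sum(np.log(d))        # log |s K0 + sigma2 I|
    return 0.5 * (quad + logdet + N * np.log(2 * np.pi))

# Usage sketch: after precompute(), every objective call inside a global
# optimizer is O(N), e.g.:
#   from scipy.optimize import minimize
#   lam, y_tilde = precompute(K0, y)
#   res = minimize(neg_log_marginal_likelihood, x0=np.zeros(2),
#                  args=(lam, y_tilde))
```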
Abstract:
The chapter explores Bar-Tal's legacy in relation to key concepts, perspectives, and findings that comprise the growing field of peace psychology, specifically the promotion of sustainable peace through the indivisible constructs of harmonious relations and equitable wellbeing. Analyzed through a peace psychology lens, Bar-Tal's work highlights both the barriers to and the bridges toward achieving sustainable peace. Central concepts from his work, such as fear, insecurity, and an ethos of conflict, point to key obstacles to fostering harmonious intergroup relations based on social justice. Bar-Tal's work also identifies processes that can overcome these barriers, consistent with peace psychology's emphasis on the development of constructive responses to violence and conflict. For example, the chapter outlines how confidence-building mechanisms, mutually respectful identities, and reconciliation processes may help foster an ethos of peace that can be embedded in the structure of societies through peace education. The chapter concludes with implications and suggestions for future research, focusing on the role of young people in settings of prolonged intergroup division and on generational approaches to peacebuilding, as conceptualized through a peace psychology lens.
Abstract:
OBJECTIVE: The present study aimed to evaluate the precision, ease of use and likelihood of future use of portion size estimation aids (PSEA).
DESIGN: A range of PSEA were used to estimate the serving sizes of commonly eaten foods and were rated for ease of use and likelihood of future use.
SETTING: For each food, participants selected their preferred PSEA from a range of options including: quantities and measures; reference objects; measuring; and indicators on food packets. These PSEA were used to serve out various foods (e.g. liquid, amorphous, and composite dishes). Ease of use and likelihood of future use were noted. The foods were weighed to determine the precision of each PSEA.
SUBJECTS: Males and females aged 18–64 years (n = 120).
RESULTS: The quantities and measures were the most precise PSEA (lowest range of weights for estimated portion sizes). However, participants preferred household measures (e.g. a 200 ml disposable cup), which were deemed easy to use (median rating of 5), likely to be used again (all scored 4 or 5 on a scale from 1 = 'not very likely' to 5 = 'very likely to use again') and precise (narrow range of weights for estimated portion sizes). The majority indicated they would most likely use the PSEA when preparing a meal (94 %), particularly dinner (86 %), in the home (89 %; all P < 0.001) for amorphous grain foods.
CONCLUSIONS: Household measures may be precise, easy to use and acceptable aids for estimating the appropriate portion size of amorphous grain foods.
Abstract:
Purpose: This research investigates the relationship between students' entrepreneurial attitudes and traits and their classification of employment six months after university graduation. It aims to identify which specific attitudes and traits of entrepreneurial graduates are linked to employability in a professional or managerial field.
Design/Methodology: The research adopts a quantitative approach to measure the entrepreneurial drive of final-year undergraduate business school students and regresses this measurement against the employment level of the same students six months after their graduation. Each respondent's employment was classified as 'professional/managerial' or 'non-professional/non-managerial', in line with the Standard Occupational Classification (SOC) 2010.
Findings: The research found that both proactive disposition and achievement motivation were statistically linked to the likelihood of graduates being employed in a professional or managerial position six months after graduation.
Originality/Value: This research goes beyond existing literature linking entrepreneurship to employability by quantitatively examining which specific attitudes and traits can be linked to employability in recent graduates. By identifying the aspects of entrepreneurialism that are related to employability, it gives educators designing entrepreneurial education programs more information and allows greater focus on the aspects likely to be of greatest benefit to all students.
Abstract:
The effects of a complexly worded counterattitudinal appeal on laypeople's attitudes toward a legal issue were examined, using the Elaboration Likelihood Model (ELM) of persuasion as a theoretical framework. This model states that persuasion can result from the elaboration and scrutiny of the message arguments (i.e., central route processing), or can result from less cognitively effortful strategies, such as relying on source characteristics as a cue to message validity (i.e., peripheral route processing). One hundred and sixty-seven undergraduates (85 men and 81 women) listened to either a low status or high status source deliver a counterattitudinal speech on a legal issue. The speech was designed to contain strong or weak arguments. These arguments were worded in a simple and, therefore, easy to comprehend manner, or in a complex and, therefore, difficult to comprehend manner. Thus, there were three experimental manipulations: argument comprehensibility (easy to comprehend vs. difficult to comprehend), argument strength (weak vs. strong), and source status (low vs. high). After listening to the speech, participants completed a measure of their attitude toward the legal issue, a thought listing task, an argument recall task, manipulation checks, measures of motivation to process the message, and measures of mood. As a result of the failure of the argument strength manipulation, only the effects of the comprehensibility and source status manipulations were tested. There was, however, some evidence of more central route processing in the easy comprehension condition than in the difficult comprehension condition, as predicted. Significant correlations were found between attitude and favourable and unfavourable thoughts about the legal issue with easy to comprehend arguments; whereas there was a correlation only between attitude and favourable thoughts toward the issue with difficult to comprehend arguments, suggesting, perhaps, that central route processing, which involves argument scrutiny and elaboration, occurred under conditions of easy comprehension to a greater extent than under conditions of difficult comprehension. The results also revealed, among other findings, several significant effects of gender. Men had more favourable attitudes toward the legal issue than did women, men recalled more arguments from the speech than did women, men were less frustrated while listening to the speech than were women, and men put more effort into thinking about the message arguments than did women. When the arguments were difficult to comprehend, men had more favourable thoughts and fewer unfavourable thoughts about the legal issue than did women. Men and women may have had different affective responses to the issue of plea bargaining (with women responding more negatively than men), especially in light of a local and controversial plea bargain that occurred around the time of this study. Such pre-existing gender differences may have led to the lower frustration, the greater effort, the greater recall, and more positive attitudes for men than for women. Results from this study suggest that current cognitive models of persuasion may not be very applicable to controversial issues which elicit strong emotional responses. Finally, these data indicate that affective responses, the controversial and emotional nature of the issue, gender and other individual differences are important considerations when experts are attempting to persuade laypeople toward a counterattitudinal position.
Abstract:
Affiliation: Claudia Kleinman, Nicolas Rodrigue & Hervé Philippe : Département de biochimie, Faculté de médecine, Université de Montréal
Abstract:
Among the methods for estimating the parameters of probability distributions in statistics, maximum likelihood is one of the most popular techniques, since, under mild conditions, the resulting estimators are consistent and asymptotically efficient. Maximum likelihood problems can be treated as nonlinear, possibly nonconvex, programming problems, for which two major classes of solution methods are trust-region techniques and line-search methods. Moreover, it is possible to exploit the structure of these problems to try to accelerate the convergence of these methods, under certain assumptions. In this work, we revisit several classical or recently developed approaches in nonlinear optimization, in the particular context of maximum likelihood estimation. We also develop new algorithms for solving this problem, reconsidering various Hessian approximation techniques, and propose new step-size computation methods, in particular within line-search algorithms. These include algorithms that allow the Hessian approximation to be switched and the step length to be adapted along a fixed search direction. Finally, we evaluate the numerical efficiency of the proposed methods on the estimation of discrete choice models, in particular mixed logit models.
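As a hedged illustration of the general setting (not the thesis's algorithms), the following Python sketch treats maximum likelihood estimation of a plain binary logit model as a nonlinear program and solves it with a BFGS inverse-Hessian approximation and a backtracking Armijo line search; the model, tolerances, and constants are our assumptions.

```python
import numpy as np

def logit_nll_grad(beta, X, y):
    """Negative log-likelihood and gradient of a binary logit model."""
    z = np.clip(X @ beta, -30, 30)          # clip for numerical safety
    p = 1.0 / (1.0 + np.exp(-z))
    nll = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad = X.T @ (p - y)
    return nll, grad

def bfgs_line_search_mle(X, y, max_iter=100, tol=1e-8):
    """Maximum likelihood via BFGS with a backtracking (Armijo) line search."""
    n, d = X.shape
    beta = np.zeros(d)
    H = np.eye(d)                           # inverse-Hessian approximation
    f, g = logit_nll_grad(beta, X, y)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        direction = -H @ g                  # quasi-Newton search direction
        step, c = 1.0, 1e-4
        while True:                         # backtrack until Armijo holds
            f_new, g_new = logit_nll_grad(beta + step * direction, X, y)
            if f_new <= f + c * step * (g @ direction) or step < 1e-12:
                break
            step *= 0.5
        s = step * direction
        yk = g_new - g
        sy = s @ yk
        if sy > 1e-12:                      # BFGS update (curvature condition)
            rho = 1.0 / sy
            V = np.eye(d) - rho * np.outer(s, yk)
            H = V @ H @ V.T + rho * np.outer(s, s)
        beta, f, g = beta + s, f_new, g_new
    return beta
```

Swapping the Hessian approximation or the step-length rule amounts to replacing the `H` update or the backtracking loop, which is the design space the thesis explores.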
Abstract:
In this work, using the likelihood depths introduced by Mizera and Müller (2004), (outlier-)robust estimators and tests for the unknown parameter of a continuous density are developed, and the resulting procedures are applied to three different distributions. For one-dimensional parameters, the likelihood depth of a parameter in a dataset is computed as the minimum of the proportion of observations for which the derivative of the log-likelihood with respect to the parameter is non-negative and the proportion for which this derivative is non-positive. The parameter with the greatest depth is therefore the one for which both proportions are equal; it is initially taken as the estimator, since likelihood depth is intended to measure how well a parameter fits the dataset. Asymptotically, the parameter with maximum depth is the one for which the probability that the derivative of the log-likelihood is non-negative for an observation equals one half. If this does not hold for the underlying parameter, the estimator based on likelihood depth is biased. This work shows how that bias can be corrected so that the corrected estimators are consistent. To construct tests for the parameter, the simplex likelihood depth developed by Müller (2005), which is a U-statistic, is used. It turns out that for precisely those distributions for which the likelihood depth yields biased estimators, the simplex likelihood depth is an unbiased U-statistic; in particular, its asymptotic distribution is known, and tests for various hypotheses can be formulated. The shift in depth, however, leads to poor power for some hypotheses, so corrected tests are introduced and conditions are given under which they are consistent. The work consists of two parts. The first part presents the general theory of the estimators and tests and proves their consistency. The second part applies the theory to three distributions, the Weibull distribution and the Gaussian and Gumbel copulas, showing how the procedures of the first part can be used to derive (robust) consistent estimators and tests for the unknown parameter of each distribution. Overall, robust estimators and tests based on likelihood depths can be found for all three distributions. On uncontaminated data, existing standard methods are sometimes superior, but the advantage of the new methods shows on contaminated data and data with outliers.
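To make the depth notion concrete, here is a small Python sketch that computes likelihood depths over a parameter grid; the exponential distribution used below is purely illustrative (it is not one of the three distributions treated in the work), and all names are ours.

```python
import numpy as np

def likelihood_depth(theta_grid, score_fn, x):
    """Likelihood depth of each candidate parameter for dataset x.

    score_fn(theta, x) returns d/dtheta log f(x_i; theta) per observation.
    Depth = min(share of observations with score >= 0,
                share of observations with score <= 0).
    """
    depths = []
    for theta in theta_grid:
        s = score_fn(theta, x)
        depths.append(min(np.mean(s >= 0), np.mean(s <= 0)))
    return np.array(depths)

# Illustrative example: exponential density f(x; lam) = lam * exp(-lam * x)
# has score d/dlam log f = 1/lam - x, non-negative iff x <= 1/lam.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=500)            # true rate lam = 0.5
grid = np.linspace(0.1, 2.0, 400)
d = likelihood_depth(grid, lambda lam, x: 1.0 / lam - x, x)
lam_hat = grid[np.argmax(d)]                        # max-depth estimate ~ 1/median(x)
# This shows the bias the abstract describes: at the true parameter,
# P(score >= 0) = P(x <= 1/lam) = 1 - exp(-1) != 1/2, so lam_hat targets
# lam/ln(2); multiplying by ln(2) is the corresponding correction here.
```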
Abstract:
We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm.
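For a rough sense of what a tractable mean-field E-step can look like, here is a Python sketch of fully factorized fixed-point updates for a factorial HMM with Gaussian emissions; the shapes, update schedule, and variable names are illustrative assumptions, loosely following the standard mean-field form rather than the paper's exact algorithm.

```python
import numpy as np

def mean_field_e_step(y, W, C, logP, logpi, n_iters=20):
    """Approximate E-step for a factorial HMM with Gaussian emissions.

    y     : (T, D) observations
    W     : list of M emission weight matrices, W[m] is (D, K)
    C     : (D, D) emission covariance
    logP  : list of M (K, K) log transitions, logP[m][i, j] = log p(s_t=i | s_{t-1}=j)
    logpi : list of M (K,) log initial distributions
    Returns theta: list of M (T, K) variational state marginals q(s_t^m).
    """
    T, D = y.shape
    M = len(W)
    K = W[0].shape[1]
    Cinv = np.linalg.inv(C)
    theta = [np.full((T, K), 1.0 / K) for _ in range(M)]     # uniform init
    # quadratic correction: Delta[m][k] = W_k^T Cinv W_k for chain m
    Delta = [np.einsum('dk,de,ek->k', W[m], Cinv, W[m]) for m in range(M)]
    for _ in range(n_iters):
        for m in range(M):
            # residual after subtracting the other chains' expected output
            resid = y - sum(theta[l] @ W[l].T for l in range(M) if l != m)
            h = resid @ (Cinv @ W[m]) - 0.5 * Delta[m]        # data term
            # expected log-transition terms from the chain's neighbors
            fwd = np.zeros((T, K)); bwd = np.zeros((T, K))
            fwd[0] = logpi[m]
            fwd[1:] = theta[m][:-1] @ logP[m].T   # E_q[log p(s_t | s_{t-1})]
            bwd[:-1] = theta[m][1:] @ logP[m]     # E_q[log p(s_{t+1} | s_t)]
            logits = h + fwd + bwd
            theta[m] = np.exp(logits - logits.max(axis=1, keepdims=True))
            theta[m] /= theta[m].sum(axis=1, keepdims=True)   # row softmax
    return theta
```

The point of the approximation is visible in the cost: each sweep is linear in T, M, and K, whereas the exact E-step would run forward-backward over K^M joint states.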