909 results for Solving Problems for Evidence


Relevance:

30.00%

Publisher:

Abstract:

Джурджица Такачи - The paper discusses didactic approaches to solving problems and exercises and to proving theorems using dynamic software, in particular the now widely used GeoGebra system. Building on Pólya's concept, the use of GeoGebra as a cognitive tool for solving problems and for discussing their possible generalizations is analysed.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To analyze, in a general population sample, clustering of delusional and hallucinatory experiences in relation to environmental exposures and clinical parameters. METHOD: General population-based household surveys of randomly selected adults between 18 and 65 years of age were carried out. SETTING: 52 countries participating in the World Health Organization's World Health Survey were included. PARTICIPANTS: 225 842 subjects (55.6% women) from nationally representative samples participated, with an individual response rate of 98.5% within households. RESULTS: Compared with isolated delusions and hallucinations, co-occurrence of the two phenomena was associated with poorer outcome, including worse general health and functioning status (OR = 0.93; 95% CI: 0.92-0.93), greater severity of symptoms (OR = 2.5; 95% CI: 2.0-3.0), higher probability of lifetime diagnosis of psychotic disorder (OR = 12.9; 95% CI: 11.5-14.4), lifetime treatment for psychotic disorder (OR = 19.7; 95% CI: 17.3-22.5), and depression during the last 12 months (OR = 11.6; 95% CI: 10.9-12.4). Co-occurrence was also associated with adversity and hearing problems (OR = 2.0; 95% CI: 1.8-2.3). CONCLUSION: The results suggest that the co-occurrence of hallucinations and delusions in populations is not random but instead can be seen, compared with either phenomenon in isolation, as the result of greater etiologic loading leading to a more severe clinical state.
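As a point of reference for the odds ratios quoted above, the short sketch below computes an unadjusted odds ratio with a Wald 95% confidence interval from a 2x2 table. The counts are invented for illustration; the study's own estimates came from survey models, so this shows only the basic calculation, not a reproduction of the analysis.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a = exposed cases, b = exposed non-cases,
    # c = unexposed cases, d = unexposed non-cases
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# hypothetical counts, for illustration only
print(odds_ratio_ci(120, 80, 40, 160))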

Relevance:

30.00%

Publisher:

Abstract:

Koopmans set about developing the linear activity-analysis model by generalizing the experience he had gained in solving practical problems. He was surprised to find that the economics of his day lacked a unified, sufficiently exact theory of production and the accompanying conceptual apparatus. In his pioneering paper he therefore also laid down, as a theoretical framework for the linear activity-analysis model, the foundations of an axiomatic production theory built on the concept of technology sets. He gave the first exact definitions of productive efficiency and of efficiency prices, and proved their mutually conditioning relationship within the linear activity-analysis model. Koopmans treated the purely technical definition of efficiency in use today only as a special case; his aim was to introduce and analyse the concept of economic efficiency. In this paper we use the duality theorems of linear programming to reconstruct his results on economic efficiency. We show, first, that his proofs are equivalent to proving the duality theorems of linear programming and, second, that the economic efficiency prices are in fact shadow prices in today's sense. We also point out that the model he formulated for interpreting economic efficiency can be regarded as a direct forerunner of the Arrow–Debreu–McKenzie general equilibrium models, containing almost all of their essential elements and concepts: the equilibrium prices are none other than Koopmans's efficiency prices. Finally, we reinterpret Koopmans's model as a possible tool for the microeconomic description of the firm's technology. Journal of Economic Literature (JEL) codes: B23, B41, C61, D20, D50.
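As background for the duality argument invoked above, the following generic primal-dual pair is written in standard textbook notation (not Koopmans's own formulation); the dual optimum supplies the resource prices that play the role of efficiency (shadow) prices.

\[
\textbf{(P)}\;\; \max_{x \ge 0} \; c^{\top}x \;\; \text{s.t.}\;\; Ax \le b
\qquad\qquad
\textbf{(D)}\;\; \min_{y \ge 0} \; b^{\top}y \;\; \text{s.t.}\;\; A^{\top}y \ge c
\]
\[
c^{\top}x^{*} \;=\; b^{\top}y^{*} \quad \text{(strong duality at the optima } x^{*}, y^{*}\text{)}
\]

The dual solution y* prices the resource vector b; these are the shadow prices, which in Koopmans's framework correspond to the efficiency prices.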

Relevance:

30.00%

Publisher:

Abstract:

This thesis extends previous research on critical decision making and problem solving by refining and validating a self-report measure designed to assess the use of critical decision making and problem solving in making life choices. Two studies were conducted, yielding two sets of data on the psychometric properties of the measure. Psychometric analyses included item analysis, internal consistency reliability, interrater reliability, and an exploratory factor analysis. The study also included a regression analysis with the Wonderlic, an established measure of general intelligence, to provide preliminary evidence for the construct validity of the measure.
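To make one of the listed analyses concrete, the sketch below computes internal consistency (Cronbach's alpha) for a simulated item-response matrix. The data and the eight-item scale are invented for illustration; this is not the thesis's measure or code.

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) matrix of item scores
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# simulated Likert-style responses (purely illustrative)
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = np.clip(np.round(3 + true_score + rng.normal(scale=0.8, size=(200, 8))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")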

Relevance:

30.00%

Publisher:

Abstract:

Police investigators rely heavily on eliciting confessions from suspects to solve crimes and prosecute offenders. It is therefore essential to develop evidence-based interrogation techniques that motivate guilty suspects to confess while minimizing false confessions from the innocent. Currently, there is little scientific support for specific interrogation techniques that may increase true confessions and decrease false confessions. Rapport building is a promising possibility. Despite its recommendation in police interrogation guidelines, there is no scientific evidence showing the effect of rapport building in police interrogations. The current study examined, experimentally, whether using rapport as an interrogation technique would influence participants’ decisions to confess to a wrongdoing. It was hypothesized that building rapport with participants would lead to more true confessions and fewer false confessions than not building rapport. One hundred and sixty-nine undergraduates participated in the study. Participants worked on logic problems with a study confederate, some jointly and some individually. The confederate asked half of the participants for help on one of the individual problems, effectively breaking the rules of the study. After working on these problems, a research assistant playing the role of interviewer came into the room, built rapport or not with participants, accused all participants of cheating by sharing answers on the individual problems, and asked them to sign a statement admitting their guilt. Results indicated that guilty participants were more likely to sign the confession statement than innocent participants. However, there were no significant differences in participants’ confession decisions based on the level of rapport they experienced. The results do not support the hypothesis that building rapport increases the likelihood of obtaining true confessions and decreases the likelihood of obtaining false confessions. These findings suggest that, despite the overwhelming recommendation for the use of rapport with suspects, its actual implementation may not have a direct impact on the outcome of interrogations.

Relevance:

30.00%

Publisher:

Abstract:

Peer reviewed

Relevance:

30.00%

Publisher:

Abstract:

Peer reviewed

Relevance:

30.00%

Publisher:

Abstract:

Peer reviewed

Relevance:

30.00%

Publisher:

Abstract:

Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication, theoretical guarantees and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm for solving these issues. The algorithm applies feature selection in parallel for each subset using regularized regression or Bayesian variable selection methods, calculates the 'median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves very minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments to show excellent performance in feature selection, estimation, prediction, and computation time relative to usual competitors.
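A schematic sketch of the subset-selection-and-aggregation idea described above is given below. It uses scikit-learn's LassoCV as the per-subset selector and a simple majority (median) vote on inclusion, runs serially rather than in parallel, and simplifies the algorithm considerably; it is not the thesis's implementation.

import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def message_style_fit(X, y, n_subsets=5, seed=0):
    # Schematic 'message'-style estimator: lasso selection per subset,
    # majority ('median') vote on feature inclusion, OLS refit per subset,
    # then averaging of the refitted coefficients.
    rng = np.random.default_rng(seed)
    subsets = np.array_split(rng.permutation(len(y)), n_subsets)
    # 1) feature selection on each subset (in parallel in the real algorithm)
    inclusion = np.array([LassoCV(cv=5).fit(X[s], y[s]).coef_ != 0 for s in subsets])
    # 2) 'median' inclusion index: keep features selected by a majority of subsets
    keep = np.median(inclusion, axis=0) >= 0.5
    # 3) refit the selected features on each subset and average the estimates
    coefs = [LinearRegression().fit(X[s][:, keep], y[s]).coef_ for s in subsets]
    beta = np.zeros(X.shape[1])
    beta[keep] = np.mean(coefs, axis=0)
    return beta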

While sample space partitioning is useful in handling datasets with large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In the thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
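The sketch below illustrates the flavour of the decorrelation step: the rows of the design matrix are whitened so that blocks of features become nearly orthogonal before each block is fitted separately. The exact scaling and ridge term used in DECO differ, and the blocks are fitted serially here rather than on separate workers, so treat this as an approximation of the idea rather than the published algorithm.

import numpy as np
from scipy.linalg import sqrtm
from sklearn.linear_model import LassoCV

def deco_style_fit(X, y, n_blocks=4, ridge=1.0):
    # Rough DECO-flavoured sketch: whiten the rows so feature blocks become
    # nearly decorrelated, then fit each block independently.
    n, p = X.shape
    F = np.real(sqrtm(np.linalg.inv(X @ X.T / p + ridge * np.eye(n))))
    Xt, yt = F @ X, F @ y
    beta = np.zeros(p)
    for block in np.array_split(np.arange(p), n_blocks):
        beta[block] = LassoCV(cv=5).fit(Xt[:, block], yt).coef_
    return beta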

For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework DEME (DECO-message) by leveraging both the DECO and the message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of the cubes using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted in a computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.

Relevance:

30.00%

Publisher:

Abstract:

Background: Many breast cancer survivors continue to have a broad range of physical and psychosocial problems after breast cancer treatment. As cancer centres move forward with earlier discharge of stable breast cancer survivors to primary care follow-up, it is important that comprehensive evidence-based breast cancer survivorship care is implemented to effectively address these needs. Research suggests primary care providers are willing to provide breast cancer survivorship care, but many lack the knowledge and confidence to provide evidence-based care.

Purpose: The overall purpose of this thesis was to determine the challenges, strengths and opportunities related to implementing comprehensive evidence-based breast cancer survivorship guidelines by primary care physicians and nurse practitioners in southeastern Ontario.

Methods: This mixed-methods research was conducted in three phases: (1) synthesis and appraisal of clinical practice guidelines relevant to provision of breast cancer survivorship care within the primary care practice setting; (2) a brief quantitative survey of primary care providers to determine actual practices related to provision of evidence-based breast cancer survivorship care; and (3) individual interviews with primary care providers about the challenges, strengths and opportunities related to provision of comprehensive evidence-based breast cancer survivorship care.

Results and Conclusions: In the first phase, a comprehensive clinical practice framework was created to guide provision of breast cancer survivorship care; it consisted of a one-page checklist outlining breast cancer survivorship issues relevant to primary care, a three-page summary of key recommendations, and a one-page list of guideline sources. The second phase identified several knowledge and practice gaps: guideline implementation rates were higher for recommendations related to the prevention and surveillance aspects of survivorship care and lowest for screening for and management of long-term effects. The third phase identified three major challenges to providing breast cancer survivorship care (inconsistent educational preparation, provider anxieties, and primary care burden) and three major strengths or opportunities to facilitate implementation of survivorship care guidelines (tools and technology, empowering survivors, and optimizing nursing roles). A better understanding of these challenges, strengths and opportunities will inform development of targeted knowledge translation interventions to provide support and education to primary care providers.

Relevance:

30.00%

Publisher:

Abstract:

This paper first summarizes the results of an empirical investigation of borrowing and repayment patterns of post-secondary graduates, then addresses a number of related policy issues, including i) the need for further research to generate the information needed to fully evaluate the student loan system, ii) the advantages of extending the assistance available to those facing problems with their debt burdens in the post-schooling period, iii) the need to increase borrowing limits, iv) the efficiency and equity advantages of providing assistance to post-secondary students through loans rather than the grants many have been calling for, and v) a proposal for revitalising the cash-strapped post-secondary system with equal infusions from the federal government, provincial governments and students themselves, the latter facilitated by appropriate changes in the loan system (higher limits and more support for those who run into trouble with repayment).

Relevance:

30.00%

Publisher:

Abstract:

While most students seem to solve information problems effortlessly, research shows that the cognitive skills for effective information problem solving are often underdeveloped. Students manage to find information and formulate solutions, but the quality of their process and product is questionable. It is therefore important to develop instruction for fostering these skills. In this research, a 2-h online intervention was presented to first-year university students with the goal of improving their information problem solving skills, while investigating the effects of different types of built-in task support. A training design containing completion tasks was compared to a design using emphasis manipulation. A third variant of the training combined both approaches. In two experiments, these conditions were compared to a control condition receiving conventional tasks without built-in task support. Results of both experiments show that students' information problem solving skills are underdeveloped, which underlines the necessity for formal training. While the intervention improved students’ skills, no differences were found between conditions. The authors hypothesize that the effective presentation of supportive information in the form of a modeling example at the start of the training caused a strong learning effect, which masked effects of task support. Limitations and directions for future research are presented.

Relevance:

30.00%

Publisher:

Abstract:

For a structural engineer, the value of effective communication and interaction with architects as a key skill throughout their professional career cannot be overstated. Structural engineers and architects have to share a common language and an understanding of each other in order to achieve the most desirable architectural and structural designs. This interaction and engagement develops during their professional career but needs to be nurtured during their undergraduate studies. The objective of this paper is to present the strategies employed to engage higher order thinking in structural engineering students in order to help them solve complex problem-based learning (PBL) design scenarios presented by architecture students. The strategies were applied in the experimental setting of an undergraduate module in structural engineering at Queen’s University Belfast in the UK. They comprised active learning to engage with content knowledge, the use of physical conceptual structural models to reinforce key concepts and, finally, reinforcing the need for hand sketching of ideas to promote higher order problem solving. The strategies were evaluated through student survey, student feedback and module facilitator (this author) reflection. They were qualitatively perceived by the tutor, and quantitatively evaluated by students in a cross-sectional study, to help interaction with the architecture students, aid interdisciplinary learning and help students solve problems creatively (through higher order thinking). The students clearly enjoyed this module and, in particular, interacting with structural engineering tutors and students from another discipline.

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

Constraint programming is a powerful technique for solving, among other things, large-scale scheduling problems. Scheduling aims to allocate tasks to resources over time. While it executes, a task consumes a resource at a constant rate. Generally, one seeks to optimize an objective function such as the total duration of a schedule. Solving a scheduling problem means deciding when each task starts and which resource executes it. Most scheduling problems are NP-hard; consequently, no known algorithm can solve them in polynomial time. However, there exist special cases of scheduling problems that are not NP-complete and can be solved in polynomial time with dedicated algorithms. Our objective is to explore these scheduling algorithms in a variety of contexts. Filtering techniques in constraint-based scheduling have evolved considerably in recent years. The prominence of filtering algorithms rests on their ability to reduce the search tree by excluding domain values that do not take part in any solution to the problem. We propose improvements and present more efficient filtering algorithms for solving classical scheduling problems. In addition, we present adaptations of filtering techniques for the case where tasks can be delayed. We also consider various properties of industrial problems and solve more efficiently problems where the optimization criterion is not necessarily the time at which the last task finishes. For example, we present polynomial-time algorithms for the case where the amount of resource fluctuates over time, or where the cost of executing a task at time t depends on t.
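To make the setting concrete, the sketch below models a small cumulative scheduling instance (tasks consuming a shared resource at a constant rate, minimizing the makespan) with Google OR-Tools CP-SAT. The data are invented, and the off-the-shelf solver merely stands in for the kind of model the specialized filtering algorithms of the thesis target.

from ortools.sat.python import cp_model

durations = [3, 2, 4, 2]   # illustrative task lengths
demands = [2, 1, 2, 1]     # resource consumed at a constant rate by each task
capacity = 3               # available amount of the cumulative resource
horizon = sum(durations)

model = cp_model.CpModel()
starts, ends, intervals = [], [], []
for i, d in enumerate(durations):
    s = model.NewIntVar(0, horizon, f"start_{i}")
    e = model.NewIntVar(0, horizon, f"end_{i}")
    intervals.append(model.NewIntervalVar(s, d, e, f"task_{i}"))
    starts.append(s)
    ends.append(e)

# tasks may overlap as long as their total demand never exceeds the capacity
model.AddCumulative(intervals, demands, capacity)

# minimize the makespan, i.e. the time at which the last task finishes
makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print("makespan =", solver.Value(makespan))
    print("starts   =", [solver.Value(s) for s in starts])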