969 results for computer evidence
Abstract:
Dealing with uncertainty problems in intelligent systems has attracted a lot of attention in the AI community. Quite a few techniques have been proposed. Among them, the Dempster-Shafer theory of evidence (DS theory) has been widely appreciated. In DS theory, Dempster's combination rule plays a major role. However, it has been pointed out that the application domains of the rule are rather limited and the application of the theory sometimes gives unexpected results. We have previously explored the problem with Dempster's combination rule and proposed an alternative combination mechanism in generalized incidence calculus. In this paper we give a comprehensive comparison between generalized incidence calculus and the Dempster-Shafer theory of evidence. We first prove that these two theories have the same ability in representing evidence and combining DS-independent evidence. We then show that the new approach can deal with some dependent situations while Dempster's combination rule cannot. Various examples in the paper show the ways of using generalized incidence calculus in expert systems.
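The combination rule at issue can be sketched in a few lines. The following minimal Python sketch (function name and example values are illustrative, not taken from the paper) implements Dempster's rule and reproduces Zadeh's classic example of the counter-intuitive behaviour the abstract alludes to:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            combined[a] = combined.get(a, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    # normalize away the conflicting mass
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Zadeh's example: two sources agree only on a hypothesis both consider unlikely.
m1 = {frozenset({"meningitis"}): 0.99, frozenset({"tumour"}): 0.01}
m2 = {frozenset({"concussion"}): 0.99, frozenset({"tumour"}): 0.01}
print(dempster_combine(m1, m2))  # all mass lands on {"tumour"}
```

With near-total conflict (K = 0.9999), normalization forces all belief onto the diagnosis both sources considered nearly impossible, which is the kind of unexpected result that motivates alternative combination mechanisms such as the one proposed in the paper.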
Abstract:
The rejoining kinetics of double-stranded DNA fragments, along with measurements of residual damage after postirradiation incubation, are often used as indicators of the biological relevance of the damage induced by ionizing radiation of different qualities. Although it is widely accepted that high-LET radiation-induced double-strand breaks (DSBs) tend to rejoin with kinetics slower than low-LET radiation-induced DSBs, possibly due to the complexity of the DSB itself, the nature of a slowly rejoining DSB-containing DNA lesion remains unknown. Using an approach that combines pulsed-field gel electrophoresis (PFGE) of fragmented DNA from human skin fibroblasts and a recently developed Monte Carlo simulation of radiation-induced DNA breakage and rejoining kinetics, we have tested the role of DSB-containing DNA lesions in the 8-kbp-5.7-Mbp fragment size range in determining the DSB rejoining kinetics. It is found that with low-LET X rays or high-LET alpha particles, DSB rejoining kinetics data obtained with PFGE can be computer-simulated assuming that DSB rejoining kinetics does not depend on spacing of breaks along the chromosomes. After analysis of DNA fragmentation profiles, the rejoining kinetics of X-ray-induced DSBs could be fitted by two components: a fast component with a half-life of 0.9 +/- 0.5 h and a slow component with a half-life of 16 +/- 9 h. For alpha particles, a fast component with a half-life of 0.7 +/- 0.4 h and a slow component with a half-life of 12 +/- 5 h along with a residual fraction of unrepaired breaks accounting for 8% of the initial damage were observed. In summary, it is shown that genomic proximity of breaks along a chromosome does not determine the rejoining kinetics, so the slowly rejoining breaks induced with higher frequencies after exposure to high-LET radiation (0.37 +/- 0.12) relative to low-LET radiation (0.22 +/- 0.07) can be explained on the basis of lesion complexity at the nanometer scale, known as locally multiply damaged sites.
(c) 2005 by Radiation Research Society.
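The two-component rejoining model described above is a sum of exponentials with the reported half-lives plus a non-repairable residual. A minimal sketch follows; only the 0.7 h and 12 h half-lives, the 0.37 slow fraction, and the 8% residual come from the abstract, while the fast-component fraction is an assumption chosen so that the fractions sum to one:

```python
import math

def unrejoined_fraction(t, components, residual=0.0):
    """Fraction of DSBs still unrejoined at time t (hours), given a list of
    (fraction, half_life_h) components plus a non-repairable residual."""
    return residual + sum(f * math.exp(-math.log(2) * t / t_half)
                          for f, t_half in components)

# Illustrative alpha-particle parameters: fast fraction 0.55 is an assumption;
# the slow fraction (0.37), half-lives, and 8% residual follow the abstract.
alpha = [(0.55, 0.7), (0.37, 12.0)]
print(unrejoined_fraction(0.0, alpha, residual=0.08))   # ~1.0 at t = 0
print(unrejoined_fraction(24.0, alpha, residual=0.08))  # mostly residual + slow tail
```

After 24 h the fast pool has vanished and the slow pool has halved twice, so roughly 17% of breaks remain, dominated by the residual and the slow tail.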
Abstract:
This article reports results of an experiment designed to analyze the link between risky decisions made by couples and risky decisions made separately by each spouse. We estimate both the spouses' and the couples' degrees of risk aversion, we assess how the risk preferences of the two spouses aggregate when they make risky decisions, and we shed light on the dynamics of the decision process that takes place when couples make risky decisions. We find that, far from being fixed, the balance of power within the household is malleable. In most couples, men initially have more decision-making power than women, but women who ultimately implement the joint decisions gain more and more power over the course of decision making.
Abstract:
This article presents a systematic review of research on the achievement outcomes of all types of approaches to teaching science in elementary schools. Study inclusion criteria included use of randomized or matched control groups, a study duration of at least 4 weeks, and use of achievement measures independent of the experimental treatment. A total of 23 studies met these criteria. Among studies evaluating inquiry-based teaching approaches, programs that used science kits did not show positive outcomes on science achievement measures (weighted ES=+0.02 in 7 studies), but inquiry-based programs that emphasized professional development but not kits did show positive outcomes (weighted ES=+0.36 in 10 studies). Technological approaches integrating video and computer resources with teaching and cooperative learning showed positive outcomes in a few small, matched studies (ES=+0.42 in 6 studies). The review concludes that science teaching methods focused on enhancing teachers’ classroom instruction throughout the year, such as cooperative learning and science-reading integration, as well as approaches that give teachers technology tools to enhance instruction, have significant potential to improve science learning.
Abstract:
Distinct neural populations carry signals from short-wave (S) cones. We used individual differences to test whether two types of pathways, those that receive excitatory input (S+) and those that receive inhibitory input (S-), contribute independently to psychophysical performance. We also conducted a genome-wide association study (GWAS) to look for genetic correlates of the individual differences. Our psychophysical test was based on the Cambridge Color Test, but detection thresholds were measured separately for S-cone spatial increments and decrements. Our participants were 1060 healthy adults aged 16-40. Test-retest reliabilities for thresholds were good (ρ=0.64 for S-cone increments, 0.67 for decrements and 0.73 for the average of the two). "Regression scores," isolating variability unique to incremental or decremental sensitivity, were also reliable (ρ=0.53 for increments and ρ=0.51 for decrements). The correlation between incremental and decremental thresholds was ρ=0.65. No genetic markers reached genome-wide significance (p < 10⁻⁷). We identified 18 "suggestive" loci (p < 10⁻⁵). The significant test-retest reliabilities show stable individual differences in S-cone sensitivity in a normal adult population. Though a portion of the variance in sensitivity is shared between incremental and decremental sensitivity, over 26% of the variance is stable across individuals but unique to increments or decrements, suggesting distinct neural substrates. Some of the variability in sensitivity is likely to be genetic. We note that four of the suggestive associations found in the GWAS are with genes that are involved in glucose metabolism or have been associated with diabetes.
Abstract:
Mental illness is common amongst young people living in residential care, many of whom are reluctant to avail of therapeutic help. The potential value of computer games as therapeutic tools for these young people has received very little attention, despite indications of their potential for promoting engagement in therapeutic work and improving mental health outcomes. This study aimed to fill this research gap through the development, introduction, and preliminary evaluation of a therapeutic intervention in group care settings. The intervention incorporated a commercially available computer game (The SIMS Life Stories™) and emotion regulation skill coaching. Qualified residential social workers were trained to deliver it to young people in three children's homes in Northern Ireland, where therapeutic approaches to social work had been introduced. The research was framed as an exploratory case study which aimed to determine the acceptability and potential therapeutic value of this intervention. The evidence suggests that computer-game based interventions of this type may have value as therapeutic tools in group care settings and deserve further development and empirical investigation to determine their effectiveness in improving mental health outcomes.
Abstract:
In this paper we present a new event recognition framework, based on the Dempster-Shafer theory of evidence, which combines the evidence from multiple atomic events detected by low-level computer vision analytics. The proposed framework employs evidential network modelling of composite events. This approach can effectively handle the uncertainty of the detected events, whilst inferring high-level events that have semantic meaning with high degrees of belief. Our scheme has been comprehensively evaluated against various scenarios that simulate passenger behaviour on public transport platforms such as buses and trains. The average accuracy of our method is 81%, compared with 76% for a standard rule-based method.
Abstract:
Combination rules proposed so far in the Dempster-Shafer theory of evidence, especially Dempster's rule, rely on a basic assumption: the pieces of evidence being combined are on a par, i.e. they play the same role. When a source of evidence is less reliable than another, it is possible to discount it, and a symmetric combination operation is then still used. In the case of revision, the idea is to let the prior knowledge of an agent be altered by some input information. The change problem is thus intrinsically asymmetric: assuming the input information is reliable, it should be retained, whilst the prior information should be changed minimally to that effect. Although belief revision is already an important subfield of artificial intelligence, it has so far been little addressed in evidence theory. In this paper, we define the notion of revision for the theory of evidence and propose several different revision rules, called the inner and outer revisions, and a modified adaptive outer revision, which better corresponds to the idea of revision. Properties of these revision rules are also investigated.
Abstract:
This case study examines the impact of a computer information system as it was being implemented in one Ontario hospital. The attitudes of a cross-section of the hospital staff acted as a barometer of their perceptions of the implementation process. With The Mississauga Hospital in the early stages of an extensive computer implementation project, the opportunity existed to identify staff attitudes toward the computer system and their overall knowledge of it, and to compare the findings with the literature. The goal of the study was to build a stronger knowledge base about the affective domain in the relationship between people and the computer system. Eight exploratory questions shaped the focus of the investigation. Data were collected from three sources: a survey questionnaire, focused interviews, and internal hospital documents. Both quantitative and qualitative data were analyzed. Instrumentation in the study consisted of a survey distributed at two points in time to randomly selected hospital employees who represented all staff levels. Other sources of data included hospital documents and twenty-five focused interviews with staff who replied to both surveys. Leavitt's socio-technical system, with its four subsystems (task, structure, technology, and people), was used to classify staff responses to the research questions. The study findings revealed that the majority of respondents felt positive about using the computer as part of their jobs. No apparent correlations were found between sex, age, or staff group and feelings about using the computer. Differences in attitudes, and changes in attitude, appeared to be related to the passage of time. Another difference was found between staff groups in the perception of being involved in the decision-making process. These findings, together with other evidence about the role of change agents in this change process, underline that planning change is one thing; managing the transition is another.
Abstract:
With the great advancement of computer technologies, electronic information plays an increasingly important role in modern business transactions. Electronic data, such as e-mail, is therefore frequently required in the process of litigation. Companies, on the one hand, have a legal obligation to produce this kind of e-mail evidence; on the other hand, they bear a high cost of e-mail evidence preservation because of the great volume generated daily. This Article first analyzes the features of e-mail evidence in comparison with paper evidence. It then discusses how e-mail is authenticated and admitted into evidence. Drawing on case law from different legal areas and current Canadian legislation, the Author demonstrates the importance of preserving e-mail evidence in the ordinary course of business. The Article then focuses on the practical dilemma companies face between their legal obligation and the expensive cost of preserving e-mail evidence. Finally, the Author proposes suggestions to both companies and courts on how to reconcile the obligation and the cost: while companies should adopt a document management policy to implement e-mail evidence preservation, courts need to take into consideration the high cost of e-mail evidence preservation in electronic discovery.
Abstract:
With the development and increasing availability of the Internet, the way information is provided and obtained has changed markedly. The former separation between publisher and consumer is dissolved by collaborative applications of the so-called Web 2.0, in which every participant can both provide and consume information. In addition, entries by other participants can be extended, commented on, or discussed. With the Social Web, the social relationships and interactions of participants finally move to the foreground. Thanks to mobile devices, messages can be sent and read, new acquaintances made, or one's current status shared with a virtual circle of friends at any time and in almost any place. With every activity within such an application, a participant relates to data objects and/or other participants. This can happen explicitly, e.g. when an article is written and sent to friends by e-mail. Relationships between data objects and users also arise implicitly, e.g. when the profile page of another participant is viewed or when different participants rate an article similarly. In this thesis, a formal approach to the analysis and exploitation of relationship structures is developed, building on such explicit and implicit data traces. The first part of the thesis is devoted to the analysis of relationships between users in Social Web applications using methods of social network analysis. Within a typical social web application, users have various ways to interact. From each interaction pattern, relationship structures between users are derived. The advantage of implicit user interactions is that they occur frequently and accrue as a by-product of operating the system.
However, it can be assumed that an explicitly stated friendship relation carries more weight than corresponding implicit interactions. A first focus of this thesis is accordingly the comparison of different relationship structures within a social web application. The second part of the thesis is devoted to the analysis of one of the most widespread profile attributes of users in social web applications: the first name. Here, the methods and analyses presented in the first part are applied, i.e. relationship networks for names are derived from data of social web applications and examined with methods of social network analysis. Using external descriptions of first names, semantic similarities between names are determined and compared with the corresponding structural similarities in the various relationship networks. In a practical application, determining similar names corresponds to expectant parents' search for a suitable first name. The results of the analysis of name relationships form the basis for the implementation of the name search engine Nameling, which was developed as part of this thesis. More than 35,000 users accessed Nameling within the first six months of operation. The resulting usage data, in turn, provide insight into users' individual first-name preferences. In this thesis, these usage data are presented and used to compute and evaluate personalized first-name recommendations. Finally, approaches to diversifying personalized name recommendations are presented, which combine static relationship networks for names with individual usage data.
Abstract:
The role of migration in the Anglo-Saxon transition in England remains controversial. Archaeological and historical evidence is inconclusive, but current estimates of the contribution of migrants to the English population range from less than 10 000 to as many as 200 000. In contrast, recent studies based on Y-chromosome variation posit a considerably higher contribution to the modern English gene pool (50-100%). Historical evidence suggests that following the Anglo-Saxon transition, people of indigenous ethnicity were at an economic and legal disadvantage compared to those having Anglo-Saxon ethnicity. It is likely that such a disadvantage would lead to differential reproductive success. We examine the effect of differential reproductive success, coupled with limited intermarriage between distinct ethnic groups, on the spread of genetic variants. Computer simulations indicate that a social structure limiting intermarriage between indigenous Britons and an initially small Anglo-Saxon immigrant population provides a plausible explanation of the high degree of Continental male-line ancestry in England.
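A toy Wright-Fisher-style sketch illustrates the mechanism such simulations rely on: a modest reproductive advantage for a small immigrant group compounds over generations. All parameter values here (initial frequency, selective advantage, population size, number of generations) are illustrative assumptions, not those used in the study:

```python
import random

def simulate_y_frequency(p0=0.05, advantage=1.5, generations=15,
                         n=10_000, seed=1):
    """Toy Wright-Fisher sketch: frequency of an immigrant Y-chromosome lineage
    whose carriers have a reproductive advantage. Parameters are illustrative."""
    random.seed(seed)
    p = p0
    for _ in range(generations):
        # expected frequency after selection favouring immigrant-lineage fathers
        w = p * advantage / (p * advantage + (1 - p))
        # binomial resampling of n males models genetic drift
        p = sum(random.random() < w for _ in range(n)) / n
    return p

freq = simulate_y_frequency()
print(f"immigrant Y-lineage frequency after 15 generations: {freq:.2f}")
```

Starting from a 5% immigrant lineage, a 1.5x advantage drives the lineage past 50% within about eight generations in this toy model, which is the qualitative point the abstract makes about high Continental male-line ancestry.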
Abstract:
The meltabilities of 14 process cheese samples were determined at 2 and 4 weeks after manufacture using sensory analysis, a computer vision method, and the Olson and Price test. Sensory analysis meltability correlated with both computer vision meltability (R² = 0.71, P < 0.001) and Olson and Price meltability (R² = 0.69, P < 0.001). There was a marked lack of correlation between the computer vision method and the Olson and Price test. This study showed that the Olson and Price test gave greater repeatability than the computer vision method. Results showed process cheese meltability decreased with increasing inorganic salt content and with lower moisture/fat ratios. There was very little evidence in this study to show that process cheese meltability changed between 2 and 4 weeks after manufacture.