886 results for computer based experiments
Abstract:
Two experiments examined the effects of interpersonal and group-based similarity on perceived self-other differences in persuasibility (i.e. on third-person effects, Davison, 1983). Results of Experiment 1 (N = 121), based on experimentally-created groups, indicated that third-person perceptions with respect to the impact of televised product ads were accentuated when the comparison was made with interpersonally different others. Contrary to predictions, third-person perceptions were not affected by group-based similarity (i.e. ingroup or outgroup other). Results of Experiment 2 (N = 102), based on an enduring social identity, indicated that both interpersonal and group-based similarity moderated perceptions of the impact on self and other of least-liked product ads. Overall, third-person effects were more pronounced with respect to interpersonally dissimilar others. However, when social identity was salient, information about interpersonal similarity of the target did not affect perceived self-other differences with respect to ingroup targets. Results also highlighted significant differences in third-person perceptions according to the perceiver's affective evaluation of the persuasive message. (C) 1998 John Wiley & Sons, Ltd.
Abstract:
Multiple sampling is widely used in vadose zone percolation experiments to investigate the extent to which soil structure heterogeneities influence the spatial and temporal distributions of water and solutes. In this note, a simple, robust, mathematical model, based on the beta-statistical distribution, is proposed as a method of quantifying the magnitude of heterogeneity in such experiments. The model relies on fitting two parameters, alpha and zeta, to the cumulative elution curves generated in multiple-sample percolation experiments. The model does not require knowledge of the soil structure. A homogeneous or uniform distribution of a solute and/or soil-water is indicated by alpha = zeta = 1. Using these parameters, a heterogeneity index (HI) is defined as sqrt(3) times the ratio of the standard deviation to the mean. Uniform or homogeneous flow of water or solutes is indicated by HI = 1 and heterogeneity is indicated by HI > 1. A large value for this index may indicate preferential flow. The heterogeneity index relies only on knowledge of the elution curves generated from multiple sample percolation experiments and is, therefore, easily calculated. The index may also be used to describe and compare the differences in solute and soil-water percolation from different experiments. The use of this index is discussed for several different leaching experiments. (C) 1999 Elsevier Science B.V. All rights reserved.
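The index defined in this abstract follows directly from the moments of the beta distribution. A minimal sketch (assuming the fitted parameters alpha and zeta are already available; the function name is illustrative):

```python
import math

def heterogeneity_index(alpha: float, zeta: float) -> float:
    """HI = sqrt(3) * (std / mean) of a Beta(alpha, zeta) distribution.

    HI = 1 indicates uniform/homogeneous flow (alpha = zeta = 1);
    HI > 1 indicates heterogeneity. Names follow the note's notation."""
    s = alpha + zeta
    mean = alpha / s
    std = math.sqrt(alpha * zeta / (s * s * (s + 1.0)))
    return math.sqrt(3.0) * std / mean

# Uniform case: Beta(1, 1) has mean 1/2 and variance 1/12, so HI = 1.
print(round(heterogeneity_index(1.0, 1.0), 6))  # 1.0
```

The sqrt(3) factor is exactly what normalizes the uniform case to 1, since a Beta(1, 1) distribution has std/mean = 1/sqrt(3).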
Abstract:
This paper presents the unique collection of additional features of Qu-Prolog, a variant of the AI programming language Prolog, and illustrates how they can be used for implementing DAI applications. By this we mean applications comprising communicating information servers, expert systems, or agents, with sophisticated reasoning capabilities and internal concurrency. Such an application exploits the key features of Qu-Prolog: support for the programming of sound non-clausal inference systems, multi-threading, and high level inter-thread message communication between Qu-Prolog query threads anywhere on the internet. The inter-thread communication uses email style symbolic names for threads, allowing easy construction of distributed applications using public names for threads. How threads react to received messages is specified by a disjunction of reaction rules which the thread periodically executes. A communications API allows smooth integration of components written in C, which, to Qu-Prolog, look like remote query threads.
Abstract:
In this paper we present a model of specification-based testing of interactive systems. This model provides the basis for a framework to guide such testing. Interactive systems are traditionally decomposed into a functionality component and a user interface component; this distinction is termed dialogue separation and is the underlying basis for conceptual and architectural models of such systems. Correctness involves both proper behaviour of the user interface and proper computation by the underlying functionality. Specification-based testing is one method used to increase confidence in correctness, but it has had limited application to interactive system development to date.
Abstract:
Interactive health communication using Internet technologies is expanding the range and flexibility of intervention and teaching options available in preventive medicine and the health sciences. Advantages of interactive health communication include the enhanced convenience, novelty, and appeal of computer-mediated communication; its flexibility and interactivity; and automated processing. We outline some of these fundamental aspects of computer-mediated communication as it applies to preventive medicine. Further, a number of key pathways of information technology evolution are creating new opportunities for the delivery of professional education in preventive medicine and other health domains, as well as for delivering automated, self-instructional health behavior-change programs through the Internet. We briefly describe several of these key evolutionary pathways. We describe some examples from work we have done in Australia. These demonstrate how we have creatively responded to the challenges of these new information environments, and how they may be pursued in the education of preventive medicine and other health care practitioners and in the development and delivery of health behavior change programs through the Internet. Innovative and thoughtful applications of this new technology can increase the consistency, reliability, and quality of information delivered.
Abstract:
Cpfg is a program for simulating and visualizing plant development, based on the theory of L-systems. A special-purpose programming language, used to specify plant models, is an essential feature of cpfg. We review postulates of L-system theory that have influenced the design of this language. We then present the main constructs of this language, and evaluate it from a user's perspective.
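The parallel-rewriting postulate at the core of L-system theory can be illustrated independently of cpfg's own modelling language. A minimal sketch of a deterministic, context-free (D0L) system, using Lindenmayer's classic algae example (the function name is illustrative, not part of cpfg):

```python
def derive(axiom: str, rules: dict[str, str], steps: int) -> str:
    """Apply a D0L-system: at each step, every symbol is rewritten
    simultaneously; symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae model: A -> AB, B -> A.
print(derive("A", {"A": "AB", "B": "A"}, 4))  # ABAABABA
```

Real plant models add bracketed branching symbols and turtle-graphics interpretation, which is precisely the machinery cpfg's special-purpose language provides.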
Abstract:
The World Wide Web (WWW) is useful for distributing scientific data. Most existing web data resources organize their information either in structured flat files or relational databases with basic retrieval capabilities. For databases with one or a few simple relations, these approaches are successful, but they can be cumbersome when there is a data model involving multiple relations between complex data. We believe that knowledge-based resources offer a solution in these cases. Knowledge bases have explicit declarations of the concepts in the domain, along with the relations between them. They are usually organized hierarchically, and provide a global data model with a controlled vocabulary. We have created the OWEB architecture for building online scientific data resources using knowledge bases. OWEB provides a shell for structuring data, providing secure and shared access, and creating computational modules for processing and displaying data. In this paper, we describe the translation of the online immunological database MHCPEP into an OWEB system called MHCWeb. This effort involved building a conceptual model for the data, creating a controlled terminology for the legal values for different types of data, and then translating the original data into the new structure. The OWEB environment allows for flexible access to the data by both users and computer programs.
Abstract:
Spatial data has now been used extensively in the Web environment, providing online customized maps and supporting map-based applications. The full potential of Web-based spatial applications, however, has yet to be achieved due to performance issues related to the large sizes and high complexity of spatial data. In this paper, we introduce a multiresolution approach to spatial data management and query processing such that the database server can choose spatial data at the right resolution level for different Web applications. One highly desirable property of the proposed approach is that the server-side processing cost and network traffic can be reduced when the level of resolution required by applications is low. Another advantage is that our approach pushes complex multiresolution structures and algorithms into the spatial database engine. That is, the developer of spatial Web applications need not be concerned with such complexity. This paper explains the basic idea, technical feasibility and applications of multiresolution spatial databases.
Abstract:
This article deals with the efficiency of fractional integration parameter estimators. This study was based on Monte Carlo experiments involving simulated stochastic processes with integration orders in the range ]-1, 1[. The evaluated estimation methods were classified into two groups: heuristics and semiparametric/maximum likelihood (ML). The study revealed that the comparative efficiency of the estimators, measured by the smaller mean squared error, depends on the stationary/non-stationary and persistency/anti-persistency conditions of the series. The ML estimator was shown to be superior for stationary persistent processes; the wavelet spectrum-based estimators were better for non-stationary mean-reverting and invertible anti-persistent processes; the weighted periodogram-based estimator was shown to be superior for non-invertible anti-persistent processes.
Abstract:
Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
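Of the prediction methods this abstract lists, the quantitative-matrix approach is the simplest to illustrate: each peptide position contributes an independent, residue-specific score, and the sum ranks candidate binders. A minimal sketch with made-up weights (the matrix values and function name are illustrative only, not a real MHC allele matrix):

```python
def qm_score(peptide: str, matrix: list[dict[str, float]]) -> float:
    """Sum position-specific residue scores for a peptide; a higher
    total indicates a stronger predicted binder under the matrix method."""
    assert len(peptide) == len(matrix)
    return sum(pos.get(res, 0.0) for res, pos in zip(peptide, matrix))

# Toy 3-position matrix: matrix[i][residue] -> score contribution.
matrix = [{"L": 1.2, "M": 0.8}, {"A": 0.5}, {"V": 1.0, "L": 0.4}]
print(round(qm_score("LAV", matrix), 2))  # 2.7
```

Real matrices cover all 20 residues over 8-11 positions, with weights fitted to measured binding affinities; the abstract's point about careful data selection and validation applies to how those weights are derived.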
Abstract:
Prospective memory (ProM) is the memory for future actions. It requires retrieving the content of an action in response to an ambiguous cue. Currently, it is unclear if ProM is a distinct form of memory, or merely a variant of retrospective memory (RetM). While content retrieval in ProM appears analogous to conventional RetM, less is known about the process of cue detection. Using a modified version of the standard ProM paradigm, three experiments manipulated stimulus characteristics known to influence RetM, in order to examine their effects on ProM performance. Experiment 1 (N = 80) demonstrated that low frequency stimuli elicited significantly higher hit rates and lower false alarm rates than high frequency stimuli, comparable to the mirror effect in RetM. Experiment 2 (N = 80) replicated these results, and showed that repetition of distracters during the test phase significantly increased false alarm rates to second and subsequent presentations of low frequency distracters. Building on these results, Experiment 3 (N = 40) showed that when the study list was strengthened, the repeated presentation of targets and distracters did not significantly affect response rates. These experiments demonstrate more overlap between ProM and RetM than has previously been acknowledged. The implications for theories of ProM are considered.
Abstract:
Test templates and a test template framework are introduced as useful concepts in specification-based testing. The framework can be defined using any model-based specification notation and used to derive tests from model-based specifications; in this paper, it is demonstrated using the Z notation. The framework formally defines test data sets and their relation to the operations in a specification and to other test data sets, providing structure to the testing process. Flexibility is preserved, so that many testing strategies can be used. Important application areas of the framework are discussed, including refinement of test data, regression testing, and test oracles.
Abstract:
Evidence for expectancy-based priming in the pronunciation task was provided in three experiments. In Experiments 1 and 2, a high proportion of associatively related trials produced greater associative priming and superior retrieval of primes in a subsequent test of memory for primes, whereas high- and low-proportion groups showed comparable repetition benefits in perceptual identification of previously presented primes. In Experiment 2, the low-proportion condition had few associatively related pairs but many identity pairs. In Experiment 3, identity priming was greater in a high- than a low-identity proportion group, with similar repetition benefits and prime retrieval responses for the two groups. These results indicate that when the prime-target relationship is salient, subjects strategically vary their processing of the prime according to the nature of the prime-target relationship.
Abstract:
We propose a simulated-annealing-based genetic algorithm for solving model parameter estimation problems. The algorithm incorporates advantages of both genetic algorithms and simulated annealing. Tests on computer-generated synthetic data that closely resemble optical constants of a metal were performed to compare the efficiency of plain genetic algorithms against the simulated-annealing-based genetic algorithms. These tests assess the ability of the algorithms to find the global minimum and the accuracy of values obtained for model parameters. Finally, the algorithm with the best performance is used to fit the model dielectric function to data for platinum and aluminum. (C) 1997 Optical Society of America.
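The abstract does not give the hybrid's details, but the general idea of combining genetic operators with a simulated-annealing acceptance rule can be sketched as follows. This is a generic one-parameter illustration under assumed design choices (arithmetic crossover, Gaussian mutation, Metropolis acceptance with geometric cooling), not the paper's actual algorithm:

```python
import math
import random

def sa_ga_minimize(f, bounds, pop_size=20, generations=80,
                   t0=1.0, cooling=0.95, seed=0):
    """Hybrid sketch: crossover/mutation propose children; a Metropolis
    rule decides whether each child replaces its parent, with the
    temperature t lowered geometrically each generation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=f)
    t = t0
    for _ in range(generations):
        for i in range(pop_size):
            mate = rng.choice(pop)
            child = 0.5 * (pop[i] + mate)                  # arithmetic crossover
            child += rng.gauss(0.0, 0.05 * t * (hi - lo))  # mutation shrinks with t
            child = min(max(child, lo), hi)
            delta = f(child) - f(pop[i])
            # Metropolis rule: always keep improvements; occasionally
            # accept worse children so the search can escape local minima.
            if delta < 0 or rng.random() < math.exp(-delta / t):
                pop[i] = child
                if f(child) < f(best):
                    best = child
        t *= cooling
    return best

# Toy objective with a known global minimum at x = 2 (illustrative only).
best = sa_ga_minimize(lambda x: (x - 2.0) ** 2, (-10.0, 10.0))
```

The annealing component supplies the escape mechanism from local minima, while the population and crossover supply parallel exploration, which is the complementarity the abstract refers to.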
Abstract:
Nursing diagnoses associated with alterations of urinary elimination require different interventions. Nurses who are not specialists require support to diagnose and manage patients with disturbances of urine elimination. The aim of this study was to present a model based on fuzzy logic for differential diagnosis of alterations in urinary elimination, considering nursing diagnoses approved by the North American Nursing Diagnosis Association, 2001-2002. Fuzzy relations and the maximum-minimum composition approach were used to develop the system. The model performance was evaluated with 195 cases from the database of a previous study, resulting in 79.0% total concordance and 19.5% partial concordance when compared with the panel of experts. Total discordance was observed in only three cases (1.5%). The agreement between model and experts was excellent (kappa = 0.98, P < .0001) or substantial (kappa = 0.69, P < .0001) when considering the overestimated concordance (concordance when at least one diagnosis matched) and the underestimated discordance (discordance when at least one diagnosis differed), respectively. The model herein presented showed good performance and a simple theoretical structure, therefore demanding few computational resources.
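The maximum-minimum composition this abstract mentions is a standard fuzzy-relational operation: given membership degrees for the observed signs and a relation matrix linking signs to diagnoses, each diagnosis receives the max over signs of the min of the two memberships. A minimal sketch (the relation values below are illustrative only, not the study's actual matrix):

```python
def max_min_composition(mu: list[float], R: list[list[float]]) -> list[float]:
    """Fuzzy max-min composition: output[j] = max_i min(mu[i], R[i][j]).
    mu holds membership degrees of observed signs; R relates each sign
    to each candidate diagnosis."""
    n_diag = len(R[0])
    return [max(min(mu[i], R[i][j]) for i in range(len(mu)))
            for j in range(n_diag)]

# Toy example: two signs vs. three candidate diagnoses.
R = [[0.9, 0.2, 0.4],
     [0.3, 0.8, 0.6]]
mu = [0.7, 0.5]
print(max_min_composition(mu, R))  # [0.7, 0.5, 0.5]
```

Because the operation reduces to element-wise min and max, it needs no floating-point arithmetic beyond comparisons, which is consistent with the abstract's note that the model demands few computational resources.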