916 results for information processing model


Relevance: 90.00%

Abstract:

The more information that is available, and the more predictable events are, the better forecasts ought to be. In this paper, forecasts by bookmakers, prediction markets and tipsters are evaluated for a range of events with varying degrees of predictability and information availability. The three types of forecast represent different structures of information processing and as such would be expected to perform differently. By and large, events that are more predictable, and for which more information is available, do tend to be forecast better.
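
As an illustration of the kind of forecast evaluation described above, here is a minimal sketch (not the paper's actual method) that compares forecast sources by Brier score; the outcomes, probabilities and source names are hypothetical.

```python
import numpy as np

# Hypothetical probability forecasts for the same set of events (1 = event occurred).
# These numbers are illustrative only, not data from the paper.
outcomes = np.array([1, 0, 1, 1, 0, 1, 0, 0])
forecasts = {
    "bookmaker":         np.array([0.80, 0.30, 0.65, 0.70, 0.20, 0.55, 0.40, 0.25]),
    "prediction_market": np.array([0.75, 0.20, 0.70, 0.80, 0.15, 0.60, 0.35, 0.30]),
    "tipster":           np.array([0.90, 0.50, 0.40, 0.60, 0.30, 0.70, 0.55, 0.20]),
}

# Brier score: mean squared difference between forecast probability and outcome
# (lower is better). One common way of comparing forecast accuracy across sources.
for source, p in forecasts.items():
    brier = np.mean((p - outcomes) ** 2)
    print(f"{source}: Brier score = {brier:.3f}")
```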

Relevance: 90.00%

Abstract:

Studies on learning management systems have largely been technical in nature, with an emphasis on the evaluation of the human-computer interaction (HCI) processes involved in using the LMS. This paper reports a study that evaluates the information interaction processes on an eLearning course used in teaching an applied Statistics course. The eLearning course is treated as a stand-in for information systems in general. The study explores issues of missing context in information stored in information systems. Using the semiotic framework as a guide, the researchers evaluated an existing eLearning course with a view to proposing a model for designing improved eLearning courses for future eLearning programmes. In this exploratory study, a survey questionnaire is used to collect data from 160 participants on an eLearning course in Statistics in Applied Climatology. The views of the participants are analysed with a focus on the human information interaction issues only. Using the semiotic framework as a guide, syntactic, semantic, pragmatic and social context gaps or problems were identified. The information interaction problems identified include ambiguous instructions, inadequate information, lack of sound, and interface design problems, among others. These problems affected the quality of new knowledge created by the participants. The researchers thus highlighted the challenges of missing information context when data is stored in an information system. The study concludes by proposing a human information interaction model for improving information interaction quality in the design of eLearning courses on learning management platforms and other information systems.

Relevance: 90.00%

Abstract:

This study investigated the orienting of visual attention in rats using a 3-hole nose-poke task analogous to Posner's (1980) covert attention task for humans (Information Processing in Cognition: The Loyola Symposium, Erlbaum, Hillsdale). The effects of non-predictive (50% valid and 50% invalid) and predictive (80% valid and 20% invalid) peripheral visual cues on reaction times and response accuracy to a target stimulus were investigated, using stimulus-onset asynchronies (SOAs) varying between 200 and 1,200 ms. The results showed shorter reaction times in valid trials relative to invalid trials for subjects trained in both the non-predictive and predictive conditions, particularly when the SOAs were 200 and 400 ms. However, the magnitude of this validity effect was significantly greater for subjects exposed to predictive cues when the SOA was 800 ms. Subjects exposed to invalid predictive cues exhibited an increase in omission errors relative to subjects exposed to invalid non-predictive cues. In contrast, valid cues reduced the proportion of omission errors for subjects trained in the predictive condition relative to subjects trained in the non-predictive condition. These results are congruent with those usually reported for humans and indicate that, in addition to the exogenous capture of attention promoted by both predictive and non-predictive peripheral cues, rats exposed to predictive cues engaged an additional, slower process equivalent to humans' endogenous orienting of attention. To our knowledge, this is the first demonstration of an endogenous-like process of covert orienting of visual attention in rats.
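
To make the validity effect mentioned above concrete, here is a minimal sketch of how it could be computed from trial-level data (mean invalid-trial RT minus mean valid-trial RT, per cue condition and SOA). This is not the paper's analysis pipeline; the column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical trial-level data; column names and values are assumptions, not from the paper.
trials = pd.DataFrame({
    "condition": ["predictive"] * 4 + ["non_predictive"] * 4,
    "soa_ms":    [200, 200, 800, 800, 200, 200, 800, 800],
    "validity":  ["valid", "invalid"] * 4,
    "rt_ms":     [310, 365, 300, 380, 320, 355, 315, 340],
})

# Validity effect = mean RT on invalid trials minus mean RT on valid trials,
# computed separately for each cue condition and SOA.
mean_rt = trials.groupby(["condition", "soa_ms", "validity"])["rt_ms"].mean().unstack("validity")
mean_rt["validity_effect_ms"] = mean_rt["invalid"] - mean_rt["valid"]
print(mean_rt)
```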

Relevance: 90.00%

Abstract:

We develop a job-market signaling model in which signals may convey two pieces of information. This model is employed to study the GED exam and countersignaling (signals that are non-monotonic in ability). A result of the model is that countersignaling is more likely to occur in jobs that require a combination of skills differing from the combination used in the schooling process. The model also produces testable implications consistent with evidence on the GED: (i) it signals both high cognitive and low non-cognitive skills and (ii) it does not affect wages. Additionally, it suggests modifications that would make the GED a more informative signal.

Relevance: 90.00%

Abstract:

In recent decades, changes in the telecommunications industry, combined with competition driven by privatization and concession policies, have reshaped the world market and given rise to a new reality. The effects in Brazil have become evident in significant growth rates: in 2012 the sector reached a net operating income of 128 billion dollars, placing the country among the five largest mobile communications markets in the world. In this context, an issue of increasing importance to the financial health of companies is their ability to retain their customers and to turn them into loyal customers. Customer churn has been generating disconnection rates of about two to four percent per month, making retention one of the biggest challenges for business management, since acquiring a new customer costs more than five times as much as retaining an existing one. To address this, models have been developed by means of structural equation modeling to identify the relationships between the various determinants of customer loyalty in the context of services. The original contribution of this thesis is to develop a loyalty model based on the identification of relationships between determinants of satisfaction (latent variables) and the inclusion of attributes that shape perceptions of service quality in the mobile communications industry, such as quality, satisfaction, value, trust, expectation and loyalty. This is a qualitative study conducted with customers of the operators through a simple random sampling technique, using structured questionnaires. As a result, the proposed model and the statistical evaluations should enable operators to conclude that customer loyalty is directly influenced by the technical and operational quality of the services offered, as well as provide a satisfaction index for the mobile communication segment.
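
As a rough illustration of how relations among the loyalty determinants named above might be estimated, here is a minimal sketch using simple path regressions with ordinary least squares. This is a simplified stand-in for the full structural equation model the thesis proposes, and the survey columns, path structure and values are hypothetical assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey scores (1-10) for the constructs named in the abstract;
# a real SEM would treat these as latent variables measured by multiple items.
df = pd.DataFrame({
    "quality":      [8, 6, 9, 5, 7, 8, 4, 9, 6, 7],
    "expectation":  [7, 6, 8, 6, 7, 9, 5, 8, 6, 7],
    "value":        [7, 5, 9, 4, 6, 8, 4, 9, 5, 7],
    "trust":        [8, 5, 9, 5, 7, 8, 3, 9, 6, 6],
    "satisfaction": [8, 5, 9, 4, 7, 8, 4, 9, 5, 7],
    "loyalty":      [9, 4, 9, 3, 7, 8, 3, 9, 5, 6],
})

# Two structural paths estimated by OLS as a stand-in for the SEM:
# satisfaction driven by quality, value and expectation; loyalty driven by
# satisfaction and trust.
sat_model = smf.ols("satisfaction ~ quality + value + expectation", data=df).fit()
loy_model = smf.ols("loyalty ~ satisfaction + trust", data=df).fit()
print(sat_model.params)
print(loy_model.params)
```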

Relevance: 90.00%

Abstract:

The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research attracts most of the investment in the area. The acquisition, processing and interpretation of seismic data are the parts that make up a seismic study. Seismic processing in particular is focused on imaging the geological structures in the subsurface. Seismic processing has evolved significantly in recent decades due to the demands of the oil industry, and also due to hardware advances that provided greater storage and digital information processing capabilities, enabling the development of more sophisticated processing algorithms such as those that make use of parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section image that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be a very time-consuming process, due to the heuristics of the mathematical algorithms and the large amount of data input and output involved; it may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods unviable. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, due to the large computational effort required by this migration technique. Furthermore, analyses such as speedup and efficiency were performed and, ultimately, the degree of algorithmic scalability was assessed with respect to the technological advances expected in future processors.
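
The RTM core in the work above is parallelized with OpenMP in a compiled language; as a rough Python analogue of the same shared-memory, loop-level parallelization idea, here is a minimal sketch of one finite-difference time step of the 2D acoustic wave equation parallelized with Numba's prange. The kernel, grid size and coefficients are illustrative assumptions, not the thesis code.

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def wave_step(p_prev, p, courant2):
    """One explicit finite-difference time step of the 2D acoustic wave equation.

    The outer spatial loop is distributed across threads with prange, which is
    the same loop-level, shared-memory parallelization idea OpenMP provides.
    """
    ny, nx = p.shape
    p_next = np.zeros_like(p)
    for i in prange(1, ny - 1):
        for j in range(1, nx - 1):
            lap = p[i + 1, j] + p[i - 1, j] + p[i, j + 1] + p[i, j - 1] - 4.0 * p[i, j]
            p_next[i, j] = 2.0 * p[i, j] - p_prev[i, j] + courant2 * lap
    return p_next

# Illustrative usage: a small grid, a point source, and an arbitrary stable Courant factor.
ny, nx = 512, 512
p_prev = np.zeros((ny, nx))
p = np.zeros((ny, nx))
p[ny // 2, nx // 2] = 1.0
p_next = wave_step(p_prev, p, courant2=0.25)
```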

Relevance: 90.00%

Abstract:

Reasoning within the context of the neomechanist programme for Biology, we study the nature of information processing in the living system in general, and in the human brain in particular, where an application of the Self-Organization model leads us to the Supercode hypothesis. This would be a mental program, molecularly encoded, responsible for innate competences such as linguistic competence. We also compare our hypothesis with the Language of Thought hypothesis proposed by Jerry Fodor.

Relevance: 90.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 90.00%

Abstract:

Nowadays there is great interest in damage identification using non-destructive tests. Predictive maintenance is one of the most important techniques based on vibration analysis, and it consists basically of monitoring the condition of structures or machines. A complete procedure should be able to detect the damage, foresee its probable time of occurrence and diagnose the type of fault, in order to plan the maintenance operation at a convenient time and in a convenient form. In practical problems, it is frequently necessary to solve non-linear equations. These processes have been studied for a long time due to their great utility. Among the available methods there are different approaches, for instance classical numerical methods, intelligent methods (artificial neural networks), evolutionary methods (genetic algorithms), and others. For better understanding, the characterization of damage can be classified into levels. A newer classification uses seven levels: detecting the existence of damage; detecting and locating the damage; detecting, locating and quantifying the damage; predicting the equipment's working life; self-diagnosis; control for automatic structural repair; and simultaneous control and monitoring. Neural networks are computational models or systems for information processing that, in a general way, can be thought of as black-box devices that accept an input and produce an output. Artificial neural networks (ANNs) are inspired by biological neural networks and are well suited to function identification and pattern classification. In this paper a methodology for locating structural damage is presented. The procedure is divided into two phases: the first uses system norms to localize the damage positions; the second uses ANNs to quantify the severity of the damage. The paper concludes with a numerical application to a beam-like structure with five cases of structural damage at different severity levels. The results show the applicability of the presented methodology. A great advantage is the possibility of applying this approach to the identification of simultaneous damage.
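
As a minimal sketch of the second phase described above (an ANN mapping vibration-based features to a damage severity estimate), here is a small multilayer perceptron trained with scikit-learn. The feature layout, training data and network size are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: each row holds vibration-based features for one
# damage scenario (e.g. shifts in the first natural frequencies), and the
# target is the damage severity (e.g. fractional stiffness reduction).
n_samples, n_features = 200, 5
severity = rng.uniform(0.0, 0.5, size=n_samples)
features = severity[:, None] * rng.uniform(0.5, 1.5, size=(n_samples, n_features)) \
           + 0.01 * rng.normal(size=(n_samples, n_features))

# Small multilayer perceptron trained to quantify severity from the features.
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
ann.fit(features, severity)

# Estimate the severity of one damage case and compare with the known value.
print("true:", severity[0], "estimated:", ann.predict(features[:1])[0])
```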

Relevance: 90.00%

Abstract:

Molecular neurobiology has provided an explanation of the mechanisms supporting mental functions such as learning, memory, emotion and consciousness. However, an explanatory gap remains between two levels of description: the molecular mechanisms determining cellular and tissue functions, and cognitive functions. In this paper we review molecular and cellular mechanisms that determine brain activity, and then hypothesize about their relation to cognition and consciousness. The brain is conceived of as a dynamic system that exchanges information with the whole body and the environment. Three explanatory hypotheses are presented, stating that: a) brain tissue function is coordinated by macromolecules controlling ion movements, b) structured (amplitude-, frequency- and phase-modulated) local field potentials generated by organized ionic movement embody cognitive information patterns, and c) conscious episodes are constructed by a large-scale mechanism that uses oscillatory synchrony to integrate local field patterns. © by São Paulo State University.

Relevance: 90.00%

Abstract:

The post-processing of association rules is a difficult task, since a huge number of the rules generated are of no interest to the user. To overcome this problem many approaches have been developed, such as objective measures and clustering. However, objective measures neither reduce nor organize the collection of rules, making the understanding of the domain difficult. On the other hand, clustering neither reduces the exploration space nor directs the user toward interesting knowledge, making the search for relevant knowledge harder than it needs to be. In this context this paper presents the PAR-COM methodology which, by combining clustering and objective measures, reduces the association rule exploration space and directs the user to what is potentially interesting. An experimental study demonstrates the potential of PAR-COM to minimize the user's effort during post-processing. © 2012 Springer-Verlag.
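
The following is not the PAR-COM methodology itself (its details are not given here), but a minimal sketch of the general idea of combining clustering with an objective measure: cluster rules by item overlap and keep only the highest-lift rule per cluster. The rules, items and lift values are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical association rules: (antecedent items, consequent items, lift).
rules = [
    ({"bread", "butter"}, {"milk"},  1.8),
    ({"bread"},           {"milk"},  1.5),
    ({"beer"},            {"chips"}, 2.3),
    ({"beer", "chips"},   {"salsa"}, 2.0),
    ({"diapers"},         {"beer"},  1.2),
]

# One-hot encode the items appearing in each rule so that similar rules cluster together.
items = sorted(set().union(*[a | c for a, c, _ in rules]))
X = np.array([[1.0 if it in (a | c) else 0.0 for it in items] for a, c, _ in rules])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Within each cluster, keep only the rule with the highest lift (the objective measure),
# which both shrinks the rule set and points the user to its most interesting members.
for cluster in sorted(set(labels)):
    best = max((r for r, l in zip(rules, labels) if l == cluster), key=lambda r: r[2])
    print(f"cluster {cluster}: {best[0]} -> {best[1]} (lift={best[2]})")
```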

Relevance: 90.00%

Abstract:

Basic research is fundamental for discovering potential diagnostic and therapeutic tools, including drugs, vaccines and new diagnostic techniques. On this basis, diagnosis and treatment methods for many diseases have been developed. Presently, discovering new candidate molecules and testing them in animals are relatively easy tasks that require modest resources and responsibility. However, crossing the animal-to-human barrier is still a great challenge that most researchers tend to avoid. Thus, bridging this current gap between clinical and basic research must be encouraged and elucidated in training programmes for health professionals. This project clearly shows the challenges faced by a group of Brazilian researchers who, after discovering a new fibrin sealant through 20 years of painstaking basic work, insisted on having the product applied clinically. The Brazilian government has recently become aware of this challenge and has accordingly defined the product as strategic to the public health of the country. Thus, in addition to financing research and development laboratories, resources were invested in clinical trials and in the development of a virtual platform termed the Virtual System to Support Clinical Research (SAVPC); this platform imparts speed, reliability and visibility to advances in product development, fostering interactions among sponsors, physicians, students and, ultimately, the research subjects themselves. This pioneering project may become a future model for other public institutions in Brazil, principally in overcoming neglected diseases, which unfortunately continue to afflict this tropical country. © 2013 Elsevier Ltd.

Relevance: 90.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 90.00%

Abstract:

We consider a fully model-based approach for the analysis of distance sampling data. Distance sampling has been widely used to estimate abundance (or density) of animals or plants in a spatially explicit study area. There is, however, no readily available method of making statistical inference on the relationships between abundance and environmental covariates. Spatial Poisson process likelihoods can be used to simultaneously estimate detection and intensity parameters by modeling distance sampling data as a thinned spatial point process. A model-based spatial approach to distance sampling data has three main benefits: it allows complex and opportunistic transect designs to be employed, it allows estimation of abundance in small subregions, and it provides a framework to assess the effects of habitat or experimental manipulation on density. We demonstrate the model-based methodology with a small simulation study and analysis of the Dubbo weed data set. In addition, a simple ad hoc method for handling overdispersion is also proposed. The simulation study showed that the model-based approach compared favorably to conventional distance sampling methods for abundance estimation. In addition, the overdispersion correction performed adequately when the number of transects was high. Analysis of the Dubbo data set indicated a transect effect on abundance via Akaike’s information criterion model selection. Further goodness-of-fit analysis, however, indicated some potential confounding of intensity with the detection function.
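
A minimal sketch of the thinned spatial Poisson process likelihood idea described above: a half-normal detection function thins a log-linear intensity, and detection and intensity parameters are estimated jointly by maximizing the likelihood over a quadrature grid. The covariate, grid and data are simulated illustrations, not the Dubbo weed data or the paper's exact model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Quadrature grid over a strip of half-width w around a transect: perpendicular
# distance d and an environmental covariate x at each grid cell.
w, n_grid = 30.0, 300
d_grid = rng.uniform(0.0, w, n_grid)
x_grid = rng.normal(size=n_grid)
cell_area = w / n_grid                      # illustrative area weight per cell

# Simulated detections (distances and covariate values at detected points).
d_obs = rng.uniform(0.0, w, 40)
x_obs = rng.normal(size=40)

def neg_log_lik(theta):
    b0, b1, log_sigma = theta
    sigma = np.exp(log_sigma)
    # Log-linear intensity of the underlying point process, thinned by a
    # half-normal detection function of perpendicular distance.
    lam_obs = np.exp(b0 + b1 * x_obs) * np.exp(-d_obs**2 / (2 * sigma**2))
    lam_grid = np.exp(b0 + b1 * x_grid) * np.exp(-d_grid**2 / (2 * sigma**2))
    # Poisson process log-likelihood: sum of log intensities at the detections
    # minus the integral of the thinned intensity, approximated on the grid.
    return -(np.sum(np.log(lam_obs)) - np.sum(lam_grid * cell_area))

fit = minimize(neg_log_lik, x0=np.array([0.0, 0.0, np.log(10.0)]), method="Nelder-Mead")
print("estimated (b0, b1, log_sigma):", fit.x)
```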