991 results for Observed information
Resumo:
Existing theories of semantic cognition propose models of cognitive processing occurring in a conceptual space, where ‘meaning’ is derived from the spatial relationships between concepts’ mapped locations within the space. Information visualisation is a growing area of research within the field of information retrieval, and methods for presenting database contents visually in the form of spatial data management systems (SDMSs) are being developed. This thesis combined these two areas of research to investigate the benefits of employing spatial-semantic mapping (documents, represented as objects in two- and three-dimensional virtual environments, are mapped in proximity according to the semantic similarity of their content) as a tool for improving retrieval performance and navigational efficiency when browsing for information within such systems. Positive effects associated with the quality of document mapping were observed; improved retrieval performance and browsing behaviour were witnessed when mapping was optimal. It was also shown that using a third dimension for virtual environment (VE) presentation provides sufficient additional information regarding the semantic structure of the environment that performance is increased in comparison to using two dimensions for mapping. A model that describes the relationship between retrieval performance and browsing behaviour was proposed on the basis of these findings. Individual differences were not found to have any observable influence on retrieval performance or browsing behaviour when mapping quality was good. The findings from this work have implications both for cognitive modelling of semantic information and for designing and testing information visualisation systems. These implications are discussed in the conclusions of this work.
Resumo:
This Thesis addresses the problem of automated false-positive-free detection of epileptic events by the fusion of information extracted from simultaneously recorded electro-encephalographic (EEG) and electrocardiographic (ECG) time series. The approach relies on a biomedical case for the coupling of the Brain and Heart systems through the central autonomic network during temporal lobe epileptic events: neurovegetative manifestations associated with temporal lobe epileptic events consist of alterations to the cardiac rhythm. From a neurophysiological perspective, epileptic episodes are characterised by a loss of complexity of the state of the brain. The probabilistic description of arrhythmias observed during temporal lobe epileptic events and the information-theoretic description of the complexity of the state of the brain are integrated in a fusion-of-information framework towards temporal lobe epileptic seizure detection. The main contributions of the Thesis include the introduction of a biomedical case for the coupling of the Brain and Heart systems during temporal lobe epileptic seizures, partially reported in the clinical literature; the investigation of measures for the characterisation of ictal events from the EEG time series towards their integration in a fusion-of-knowledge framework; the probabilistic description of arrhythmias observed during temporal lobe epileptic events towards their integration in a fusion-of-knowledge framework; and the investigation of the different levels of the fusion-of-information architecture at which to perform the combination of information extracted from the EEG and ECG time series. The method designed in the Thesis for the false-positive-free automated detection of epileptic events achieved a false-positive rate of zero on the dataset of long-term recordings used in the Thesis.
Resumo:
Background: Currently, no review has been completed regarding the information-gathering process for the provision of medicines for self-medication in community pharmacies in developing countries. Objective: To review the rate of information gathering and the types of information gathered when patients present for self-medication requests. Methods: Six databases were searched for studies that described the rate of information gathering and/or the types of information gathered in the provision of medicines for self-medication in community pharmacies in developing countries. The types of information reported were classified as: signs and symptoms, patient identity, action taken, medications, medical history, and others. Results: Twenty-two studies met the inclusion criteria. Variations in the study populations, types of scenarios, research methods, and data reporting were observed. The reported rate of information gathering varied from 18% to 97%, depending on the research methods used. Information on signs and symptoms and patient identity was more frequently reported to be gathered compared with information on action taken, medications, and medical history. Conclusion: Evidence showed that the information-gathering process for the provision of medicines for self-medication via community pharmacies in developing countries is inconsistent. There is a need to determine the barriers to appropriate information-gathering practice as well as to develop strategies to implement effective information-gathering processes. It is also recommended that international and national pharmacy organizations, including pharmacy academics and pharmacy researchers, develop a consensus on the types of information that should be reported in the original studies. This will facilitate comparison across studies so that areas that need improvement can be identified. © 2013 Elsevier Inc.
Resumo:
Computer modeling is a promising method for the optimal design of prostheses and orthoses. The study is oriented towards developing a modular ankle foot orthosis (MAFO) to assist the very frequently observed gait abnormalities relating to the human ankle-foot complex using CAD modeling. The main goal is to assist the ankle-foot flexors and extensors during the gait cycle (stance and swing) using a torsion spring. Utilizing 3D modeling and animation open-source software (Blender 3D), it is possible to artificially generate different kinds of normal and abnormal gaits and to investigate and adjust the assistive modular spring-driven ankle foot orthosis.
Resumo:
DNA-binding proteins are crucial for various cellular processes and hence have become an important target for both basic research and drug development. With the avalanche of protein sequences generated in the postgenomic age, it is highly desirable to establish an automated method for rapidly and accurately identifying DNA-binding proteins based on their sequence information alone. Owing to the fact that all biological species have developed from a very limited number of ancestral species, it is important to take evolutionary information into account when developing such a high-throughput tool. In view of this, a new predictor was proposed by incorporating evolutionary information into the general form of pseudo amino acid composition via the top-n-gram approach. Comparing the new predictor with existing methods via both the jackknife test and an independent data-set test showed that the new predictor outperformed its counterparts. It is anticipated that the new predictor may become a useful vehicle for identifying DNA-binding proteins. It has not escaped our notice that the novel approach of extracting evolutionary information into the formulation of statistical samples can be used to identify many other protein attributes as well.
Resumo:
The focus of this thesis is the extension of topographic visualisation mappings to allow for the incorporation of uncertainty. Few visualisation algorithms in the literature are capable of mapping uncertain data, and fewer still are able to represent observation uncertainties in visualisations. As such, modifications are made to NeuroScale, Locally Linear Embedding, Isomap and Laplacian Eigenmaps to incorporate uncertainty in the observation and visualisation spaces. The proposed mappings are then called Normally-distributed NeuroScale (N-NS), T-distributed NeuroScale (T-NS), Probabilistic LLE (PLLE), Probabilistic Isomap (PIso) and Probabilistic Weighted Neighbourhood Mapping (PWNM). These algorithms generate a probabilistic visualisation space, with each latent visualised point transformed to a multivariate Gaussian or T-distribution using a feed-forward RBF network. Two types of uncertainty are then characterised, dependent on the data and mapping procedure. Data-dependent uncertainty is the inherent observation uncertainty, whereas mapping uncertainty is defined by the Fisher information of a visualised distribution; this indicates how well the data have been interpolated, offering a level of ‘surprise’ for each observation. These new probabilistic mappings are tested on three datasets of vectorial observations and three datasets of real-world time series observations for anomaly detection. In order to visualise the time series data, a method for analysing observed signals and noise distributions, Residual Modelling, is introduced. The performance of the new algorithms on the tested datasets is compared qualitatively with the latent space generated by the Gaussian Process Latent Variable Model (GPLVM). A quantitative comparison using existing evaluation measures from the literature allows the performance of each mapping function to be compared. Finally, the mapping uncertainty measure is combined with NeuroScale to build a deep learning classifier, the Cascading RBF. This new structure is tested on the MNIST dataset, achieving world-record performance whilst avoiding the flaws seen in other deep learning machines.
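As an illustration of the mapping-uncertainty idea above: for a Gaussian distribution, the Fisher information with respect to the mean is the inverse covariance, so a broadly spread (poorly interpolated) visualised point carries low information and correspondingly high ‘surprise’. The sketch below uses hypothetical covariance matrices and is not the thesis's actual RBF-network formulation:

```python
import numpy as np

def fisher_information_mean(sigma):
    """Fisher information matrix of a Gaussian w.r.t. its mean: I(mu) = Sigma^{-1}."""
    return np.linalg.inv(sigma)

# Hypothetical covariances for two visualised points
well_interpolated = np.array([[0.1, 0.0], [0.0, 0.1]])    # tight distribution
poorly_interpolated = np.array([[2.0, 0.0], [0.0, 2.0]])  # broad distribution

# A scalar summary (here the determinant) lets points be ranked by 'surprise':
# the tight point carries more information, the broad point less.
for name, sigma in [("tight", well_interpolated), ("broad", poorly_interpolated)]:
    info = np.linalg.det(fisher_information_mean(sigma))
    print(f"{name}: |I(mu)| = {info:.2f}")
```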
Resumo:
The purpose of this study was to test Lotka’s law of scientific publication productivity, using the methodology outlined by Pao (1985), in the field of Library and Information Studies (LIS). Lotka’s law has been sporadically tested in the field over the past 30+ years, but the results of these studies are inconclusive due to the varying methods employed by the researchers. A data set of 1,856 citations found using the ISI Web of Knowledge databases was studied. The values of n and c were calculated to be 2.1 and 0.6418 (64.18%) respectively. The Kolmogorov-Smirnov (K-S) one-sample goodness-of-fit test was conducted at the 0.10 level of significance. The Dmax value is 0.022758 and the calculated critical value is 0.026562. It was determined that the null hypothesis, stating that there is no difference between the observed distribution of publications and the distribution obtained using Lotka’s and Pao’s procedure, could not be rejected. This study finds that literature in the field of Library and Information Studies does conform to Lotka’s law with reliable results. As a result, Lotka’s law can be used in LIS as a standardized means of measuring author publication productivity, which will lead to findings that are comparable on many levels (e.g., department, institution, national). Lotka’s law can be employed as an empirically proven analytical tool to establish publication productivity benchmarks for faculty and faculty librarians. Recommendations for further study include (a) exploring the characteristics of the high and low producers; (b) finding a way to successfully account for collaborative contributions in the formula; and (c) a detailed study of institutional policies concerning publication productivity and their impact on the appointment, tenure and promotion process of academic librarians.
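The goodness-of-fit check described above can be sketched as follows. The exponent n, constant c and critical value are those reported in the abstract; the observed author proportions below are hypothetical illustration data, not the study's data set:

```python
# Lotka's law: the expected proportion of authors with x publications
# is f(x) = c / x^n.  The K-S statistic Dmax is the maximum absolute
# difference between the observed and expected cumulative distributions.

def lotka_expected(c, n, max_pubs):
    """Expected proportions of authors with 1..max_pubs publications."""
    return [c / x ** n for x in range(1, max_pubs + 1)]

def ks_dmax(observed_props, expected_props):
    """Maximum absolute difference between cumulative distributions."""
    d, cum_obs, cum_exp = 0.0, 0.0, 0.0
    for o, e in zip(observed_props, expected_props):
        cum_obs += o
        cum_exp += e
        d = max(d, abs(cum_obs - cum_exp))
    return d

# Parameters reported in the abstract
n, c = 2.1, 0.6418
expected = lotka_expected(c, n, max_pubs=10)

# Hypothetical observed proportions of authors publishing 1..10 papers
observed = [0.650, 0.145, 0.062, 0.035, 0.023,
            0.016, 0.012, 0.009, 0.008, 0.006]

dmax = ks_dmax(observed, expected)
critical = 0.026562  # critical value reported in the abstract (alpha = 0.10)
print(f"Dmax = {dmax:.6f}; reject H0: {dmax > critical}")
```

When Dmax falls below the critical value, as in the study, the null hypothesis of conformity to Lotka's law cannot be rejected.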
Resumo:
Climate change is thought to be one of the most pressing environmental problems facing humanity. However, due in part to failures in political communication and how the issue has been historically defined in American politics, discussions of climate change remain gridlocked and polarized. In this dissertation, I explore how climate change has been historically constructed as a political issue, how conflicts between climate advocates and skeptics have been communicated, and what effects polarization has had on political communication, particularly on the communication of climate change to skeptical audiences. I use a variety of methodological tools to consider these questions, including evolutionary frame analysis, which uses textual data to show how issues are framed and constructed over time; Kullback-Leibler divergence content analysis, which allows for comparison of advocate and skeptical framing over time; and experimental framing methods to test how audiences react to and process different presentations of climate change. I identify six major portrayals of climate change from 1988 to 2012, but find that no single construction of the issue has dominated the public discourse defining the problem. In addition, the construction of climate change may be associated with changes in public political sentiment, such as greater pessimism about climate action when the electorate becomes more conservative. 
As the issue of climate change has become more polarized in American politics, one proposed causal pathway for the observed polarization is that advocate and skeptic framing of climate change focuses on different facets of the issue and ignores rival arguments, a practice known as “talking past.” However, I find no evidence of increased talking past in 25 years of popular news media reporting on the issue, suggesting either that talking past has not driven public polarization or that polarization is occurring in venues outside of the mainstream public discourse, such as blogs. To examine how polarization affects political communication on climate change, I test the cognitive processing of a variety of messages and sources that promote action against climate change among Republican individuals. Rather than identifying frames that are powerful enough to overcome polarization, I find that Republicans exhibit telltale signs of motivated skepticism on the issue; that is, they reject framing that runs counter to their party line and political identity. This result suggests that polarization constrains political communication on polarized issues, overshadowing traditional message and source effects of framing and increasing the difficulty communicators experience in reaching skeptical audiences.
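The Kullback-Leibler divergence content analysis mentioned above compares how two corpora distribute their attention over terms. A minimal sketch, with hypothetical term counts standing in for the advocate and skeptic texts:

```python
import math
from collections import Counter

def kl_divergence(p_counts, q_counts, smoothing=1.0):
    """D_KL(P || Q) over the shared vocabulary, with additive smoothing
    so terms absent from one corpus do not yield infinite divergence."""
    vocab = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values()) + smoothing * len(vocab)
    q_total = sum(q_counts.values()) + smoothing * len(vocab)
    kl = 0.0
    for w in vocab:
        p = (p_counts.get(w, 0) + smoothing) / p_total
        q = (q_counts.get(w, 0) + smoothing) / q_total
        kl += p * math.log(p / q)
    return kl

# Hypothetical term counts for the two framings
advocate = Counter({"warming": 9, "emissions": 7, "science": 5, "policy": 4})
skeptic = Counter({"uncertainty": 8, "models": 6, "science": 5, "costs": 3})

print(f"D_KL(advocate || skeptic) = {kl_divergence(advocate, skeptic):.4f}")
```

A larger divergence indicates that the two sides emphasise different facets of the issue; tracking it over time is one way to operationalise “talking past.”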
Resumo:
The business model of an organization is an important strategic tool for its success and should therefore be understood by both business professionals and information technology professionals. In this context, and considering the importance of information technology in contemporary business models, this article aims to verify the use of business model components in the information technology (IT) project management process in enterprises. To achieve this goal, this exploratory research investigated the use of the Business Model concept in information technology project management through a survey of 327 professionals conducted from February to April 2012. It was observed that the business model concept, as well as its practices and building blocks, is not yet explored to its full potential, possibly because it is relatively new. One of the benefits of this conceptual tool is to provide an understanding of the core business across different areas, enabling a higher level of knowledge of the enterprise's essential activities among IT professionals and the business area.
Resumo:
An inference task is one in which some known set of information is used to produce an estimate about an unknown quantity. Existing theories of how humans make inferences include specialized heuristics that allow people to make these inferences in familiar environments quickly and without unnecessarily complex computation. Specialized heuristic processing may be unnecessary, however; other research suggests that the same patterns in judgment can be explained by existing patterns in encoding and retrieving memories. This dissertation compares and attempts to reconcile three alternate explanations of human inference. After justifying three hierarchical Bayesian versions of existing inference models, the three models are compared on simulated, observed, and experimental data. The results suggest that the three models capture different patterns in human behavior but, based on posterior prediction using laboratory data, potentially ignore important determinants of the decision process.
Resumo:
This thesis examines the short-term impact of credit rating announcements on daily stock returns of 41 European banks indexed in STOXX Europe 600 Banks. The time period of this study is 2002–2015 and the ratings represent long-term issuer ratings provided by S&P, Moody’s and Fitch. Bank ratings are significant for a bank’s operation costs so it is interesting to investigate how investors react to changes in creditworthiness. The study objective is achieved by conducting an event study. The event study is extended with a cross-sectional linear regression to investigate other potential determinants surrounding rating changes. The research hypotheses and the motivation for additional tests are derived from prior research. The main hypotheses are formed to explore whether rating changes have an effect on stock returns, when this possible reaction occurs and whether it is asymmetric between upgrades and downgrades. The findings provide evidence that rating announcements have an impact on stock returns in the context of European banks. The results also support the existence of an asymmetry in capital market reaction to rating upgrades and downgrades. The rating downgrades are associated with statistically significant negative abnormal returns on the event day although the reaction is rather modest. No statistically significant reaction is found associated with the rating upgrades on the event day. These results hold true with both rating changes and rating watches. No anticipation is observed in the case of rating changes but there is a statistically significant cumulative negative (positive) price reaction occurring before the event day for negative (positive) watch announcements. The regression provides evidence that the stock price reaction is stronger for rating downgrades occurring within below investment grade class compared with investment grade class. This is intuitive as investors are more concerned about their investments in lower-rated companies. 
In addition, the price reaction of larger banks is more muted than that of smaller banks in the case of rating downgrades. The reason may be that larger banks are usually more widely followed by the public. However, the results may also provide evidence of the so-called “too big to fail” subsidy, which dampens the negative returns of larger banks.
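The event-study machinery described above can be sketched with a standard market model: alpha and beta are estimated by OLS over an estimation window, and abnormal returns are cumulated over the event window. The return series below are simulated illustration data, not the thesis's bank sample:

```python
import numpy as np

# Market-model event study: AR_t = R_t - (alpha + beta * R_m,t),
# with alpha, beta fitted over a pre-event estimation window.

rng = np.random.default_rng(0)

# Estimation window: 200 days of simulated market and stock returns
market_est = rng.normal(0.0004, 0.01, 200)
stock_est = 0.0002 + 1.1 * market_est + rng.normal(0.0, 0.005, 200)

# OLS fit (degree-1 polynomial): slope = beta, intercept = alpha
beta, alpha = np.polyfit(market_est, stock_est, 1)

# Event window [-1, +1] around a simulated downgrade announcement
market_evt = np.array([0.001, -0.002, 0.0005])
stock_evt = np.array([0.000, -0.025, -0.003])  # drop on the event day

abnormal = stock_evt - (alpha + beta * market_evt)
car = abnormal.sum()  # cumulative abnormal return over the window
print(f"AR by day: {np.round(abnormal, 4)}; CAR = {car:.4f}")
```

Significance is then assessed by comparing the (cumulative) abnormal returns against their estimation-window variability, which is where an asymmetry between upgrades and downgrades would show up.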
Resumo:
Master's dissertation, Marine Biology, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2016
Resumo:
Interventions targeting cognitive enhancement are of growing interest in many fields, including neuropsychology. Although numerous methods exist for maximising a person's cognitive potential, they are rarely supported by scientific research. This thesis first briefly reviews the state of cognitive-enhancement interventions. It describes the weaknesses observed in these practices and consequently establishes a standard model against which the various cognitive-enhancement techniques could and should be evaluated. A research study is then presented that considers a new cognitive-enhancement tool, a perceptual-cognitive training task: 3-dimensional multiple object tracking (3D-MOT). It examines the current evidence for 3D-MOT against the proposed standard model. The results of this project demonstrate gains in attention, visual working memory and information-processing speed. This study represents the first step towards establishing 3D-MOT as a cognitive-enhancement tool.
Resumo:
On most if not all evaluatively relevant dimensions such as the temperature level, taste intensity, and nutritional value of a meal, one range of adequate, positive states is framed by two ranges of inadequate, negative states, namely too much and too little. This distribution of positive and negative states in the information ecology results in a higher similarity of positive objects, people, and events to other positive stimuli as compared to the similarity of negative stimuli to other negative stimuli. In other words, there are fewer ways in which an object, a person, or an event can be positive as compared to negative. Oftentimes, there is only one way in which a stimulus can be positive (e.g., a good meal has to have an adequate temperature level, taste intensity, and nutritional value). In contrast, there are many different ways in which a stimulus can be negative (e.g., a bad meal can be too hot or too cold, too spicy or too bland, or too fat or too lean). This higher similarity of positive as compared to negative stimuli is important, as similarity greatly impacts speed and accuracy on virtually all levels of information processing, including attention, classification, categorization, judgment and decision making, and recognition and recall memory. Thus, if the difference in similarity between positive and negative stimuli is a general phenomenon, it predicts and may explain a variety of valence asymmetries in cognitive processing (e.g., positive as compared to negative stimuli are processed faster but less accurately). In my dissertation, I show that the similarity asymmetry is indeed a general phenomenon that is observed in thousands of words and pictures. Further, I show that the similarity asymmetry applies to social groups. 
Groups stereotyped as average on the two dimensions agency / socio-economic success (A) and conservative-progressive beliefs (B) are stereotyped as positive or high on communion (C), while groups stereotyped as extreme on A and B (e.g., managers, homeless people, punks, and religious people) are stereotyped as negative or low on C. As average groups are more similar to one another than extreme groups, according to this ABC model of group stereotypes, positive groups are mentally represented as more similar to one another than negative groups. Finally, I discuss implications of the ABC model of group stereotypes, pointing to avenues for future research on how stereotype content shapes social perception, cognition, and behavior.