980 results for Link prediction
Abstract:
Although most research on cognitive radio focuses on communication bands above the HF upper limit (30 MHz), cognitive radio principles can also be applied to HF communications to use the extremely scarce spectrum more efficiently. In this work we treat legacy users as primary users, since these users transmit without resorting to any smart procedure, and our stations using the HFDVL (HF Data+Voice Link) architecture as secondary users. Our goal is to promote efficient use of the HF band by detecting the presence of uncoordinated primary users and avoiding collisions with them while transmitting in different HF channels using our broad-band HF transceiver. We develop a model of primary-user activity dynamics in the HF band to make short-term predictions of the sojourn time of a primary user in the band and thus avoid collisions. It is based on Hidden Markov Models (HMMs), a powerful tool for modelling stochastic processes, trained with real measurements of the 14 MHz band. Using the proposed HMM-based model, the predictor achieves an average 10.3% prediction error rate with one minute of channel knowledge; the error falls as this knowledge is extended, reaching an average 5.8% with the previous 8 min of observations. These results suggest that the resulting activity model for the HF band could be used to predict primary users' activity and be included in a future cognitive-radio-based HF station.
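The short-term prediction the abstract describes can be sketched with a tiny two-state HMM (idle/busy channel) filtered with the forward algorithm; the transition and emission probabilities below are illustrative placeholders, not the values the paper trains on 14 MHz measurements.

```python
# Minimal sketch of HMM-based occupancy prediction for one HF channel.
# States and probabilities are assumptions for illustration only.
STATES = ("idle", "busy")
TRANS = {"idle": {"idle": 0.9, "busy": 0.1},   # assumed transition matrix
         "busy": {"idle": 0.3, "busy": 0.7}}
EMIT = {"idle": {0: 0.95, 1: 0.05},            # P(detection | state)
        "busy": {0: 0.10, 1: 0.90}}
PRIOR = {"idle": 0.8, "busy": 0.2}

def filter_states(observations):
    """Forward algorithm: P(state_t | obs_1..t), normalised each step."""
    belief = dict(PRIOR)
    for obs in observations:
        # predict one step ahead, then weight by the emission likelihood
        predicted = {s: sum(belief[p] * TRANS[p][s] for p in STATES)
                     for s in STATES}
        unnorm = {s: predicted[s] * EMIT[s][obs] for s in STATES}
        z = sum(unnorm.values())
        belief = {s: unnorm[s] / z for s in STATES}
    return belief

def predict_busy(observations):
    """P(channel busy at t+1) given binary detections up to t."""
    belief = filter_states(observations)
    return sum(belief[p] * TRANS[p]["busy"] for p in STATES)

# After a run of busy detections the predicted occupancy is high,
# so a secondary user would defer transmission on this channel.
p_next = predict_busy([1, 1, 1, 0, 1, 1])
```

In the paper's setting the model parameters are trained on real band measurements and the prediction horizon grows with the amount of channel knowledge; this sketch only shows the one-step filtering/prediction mechanics.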
Abstract:
This paper presents an innovative background modeling technique that accurately segments foreground regions in RGB-D imagery (RGB plus depth). The technique is based on a Bayesian framework that efficiently fuses different sources of information to segment the foreground. In particular, the final segmentation is obtained by combining a prediction of the foreground regions, carried out by a novel Bayesian network with a depth-based dynamic model, with two independent mixture-of-Gaussians background models over depth and color. The efficient Bayesian combination of these data reduces the noise and uncertainty introduced by the color and depth features and their corresponding models. As a result, more compact segmentations and more refined foreground object silhouettes are obtained. Experimental results on several databases suggest that the proposed technique outperforms existing state-of-the-art algorithms.
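A drastically simplified per-pixel version of this idea can illustrate the fusion of color and depth cues: the paper uses full mixtures of Gaussians per cue plus a Bayesian network, whereas this sketch keeps one Gaussian per pixel per cue and a simple OR-fusion rule; all thresholds and learning rates are assumptions.

```python
# Toy per-pixel background model: one Gaussian per cue (color, depth),
# fused with a simple rule. Not the paper's full Bayesian combination.
class PixelGaussian:
    def __init__(self, mean, var=25.0, lr=0.05):
        self.mean, self.var, self.lr = mean, var, lr

    def is_foreground(self, x, k=2.5):
        # flag values further than k standard deviations from the model
        return (x - self.mean) ** 2 > (k ** 2) * self.var

    def update(self, x):
        # running update of mean and variance (background adaptation)
        d = x - self.mean
        self.mean += self.lr * d
        self.var += self.lr * (d * d - self.var)

def segment(color_model, depth_model, color, depth):
    """Flag a pixel as foreground if either cue disagrees with its model."""
    fg = color_model.is_foreground(color) or depth_model.is_foreground(depth)
    if not fg:  # only adapt the background where no object is present
        color_model.update(color)
        depth_model.update(depth)
    return fg
```

Combining the two cues is what reduces the single-cue failure modes the abstract mentions (e.g. color camouflage, depth noise at object borders): a pixel close to the background in one cue can still be caught by the other.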
Abstract:
Clinicians could model a patient's traumatic brain injury (TBI) through their brain activity. However, how this model is defined and how it changes as the patient recovers are questions that remain unanswered. In this paper, the use of the MedVir framework is proposed with the aim of answering these questions. Based on complex data mining techniques, it provides not only the differentiation between TBI patients and control subjects (with 72% accuracy using 0.632 bootstrap validation), but also the ability to detect whether a patient may recover or not, all in a quick and easy way through an interactive visualization technique.
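The 72% figure is reported under 0.632 bootstrap validation; a minimal sketch of that estimator, which blends the optimistic resubstitution (training) error with the pessimistic out-of-bag error, is shown below. The error values used are illustrative, not from the paper.

```python
# 0.632 bootstrap error estimate: a weighted blend of the training error
# and the out-of-bag (leave-out) error over bootstrap resamples.
def bootstrap_632_error(resubstitution_error, out_of_bag_error):
    """err_632 = 0.368 * err_train + 0.632 * err_oob."""
    return 0.368 * resubstitution_error + 0.632 * out_of_bag_error

# e.g. a classifier with 10% training error and 35% out-of-bag error:
err = bootstrap_632_error(0.10, 0.35)  # about 0.258
```

The 0.632 weight comes from the expected fraction of distinct samples in a bootstrap resample (1 - 1/e ≈ 0.632), which is why the out-of-bag term dominates the estimate.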
Abstract:
Species distribution models (SDMs) predict species occurrence from statistical relationships with environmental conditions. This study used the R package biomod2, which includes 10 different SDM techniques and 10 different evaluation methods. Macroalgae are the main biomass producers in Potter Cove, King George Island (Isla 25 de Mayo), Antarctica, and they are sensitive to climate change factors such as suspended particulate matter (SPM). Macroalgae presence and absence data were used to test the suitability of SDMs and, simultaneously, to assess the environmental response of macroalgae as well as to model four scenarios of distribution shifts under SPM conditions varied by climate change. According to the evaluation scores of the Relative Operating Characteristic (ROC) and True Skill Statistic (TSS) averaged over models, methods based on a multitude of decision trees, such as Random Forest and Classification Tree Analysis, reached the highest predictive power, followed by generalized boosted models (GBM) and maximum-entropy approaches (Maxent). The final ensemble model used 135 of 200 calculated models (TSS > 0.7) and identified hard substrate and SPM as the most influential parameters, followed by distance to glacier, total organic carbon (TOC), bathymetry and slope. The climate change scenarios show an invasive expansion of the macroalgae under lower SPM and a retreat of the macroalgae under higher assumed SPM values.
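The TSS > 0.7 cut-off used to admit models into the ensemble is computed from a presence/absence confusion matrix; a minimal sketch (the paper works in R with biomod2, but the statistic itself is language-agnostic, and the counts below are illustrative):

```python
# True Skill Statistic from a presence/absence confusion matrix:
# TSS = sensitivity + specificity - 1, ranging from -1 to +1.
def true_skill_statistic(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # correctly predicted presences
    specificity = tn / (tn + fp)   # correctly predicted absences
    return sensitivity + specificity - 1.0

# A model just meeting the ensemble's TSS > 0.7 threshold, e.g.:
tss = true_skill_statistic(tp=45, fn=5, tn=40, fp=10)  # 0.9 + 0.8 - 1 = 0.7
```

Unlike raw accuracy, TSS is insensitive to the prevalence of presences versus absences, which is why it is a standard evaluation score in SDM work alongside ROC.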
Abstract:
This investigation explores the hypothesis, derived from observation and practice, that there is a strong relationship between the development of literacy skills and the growth of confidence in adult literacy students. Implicit in the developmental approach is the notion of progression towards some cognitive goal. Such a goal necessitates establishing a baseline of existing attainment, together with subsequent assessment, so that progress and development can be measured. The study includes an evaluation of the formal and informal methods of initial and subsequent assessment and diagnosis available at the time to Adult Literacy Scheme Co-ordinators. Underlying Cheshire County Council's funding of the project is the assumption that the results will be available to all practitioners and that the measurement tools may be used by other Adult Literacy Co-ordinators in the County. It is intended, therefore, that this research should result in practical outcomes in which methods of assessment involve active participation by students as well as by tutors, becoming part of the learning process. It is hypothesised that this kind of co-operation could ultimately lead to self-directed learning and student independence. For the purposes of this research, a balance is attempted between standardised tests and informal methods of assessment. The study provides facts about students' reading habits, as well as their reading levels, spelling levels, handwriting, writing skills and writing habits. It seeks to show the students' feelings towards education, their educational attainments and the type of school they attended. It also attempts to measure those aspects of student personality that relate to confidence, by means of tests and questionnaires. The study concludes with an examination of the link between cognitive and affective progress.
Abstract:
Computer based discrete event simulation (DES) is one of the most commonly used aids for the design of automotive manufacturing systems. However, DES tools represent machines in extensive detail, while only representing workers as simple resources. This presents a problem when modelling systems with a highly manual work content, such as an assembly line. This paper describes research at Cranfield University, in collaboration with the Ford Motor Company, founded on the assumption that human variation is the cause of a large percentage of the disparity between simulation predictions and real world performance. The research aims to improve the accuracy and reliability of simulation prediction by including models of human factors.
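The core claim, that human variation degrades real-world performance relative to simulations that treat workers as fixed-time resources, can be illustrated with a toy discrete-event model. This sketch assumes a 3-station unbuffered serial line (not a model from the Cranfield/Ford research): each cycle advances at the pace of the slowest station, so mean-preserving variation in worker task times still lowers throughput.

```python
# Toy serial-line simulation: fixed "machine-like" task times versus
# variable "human" task times with the same mean. Illustrative only.
import random

def simulate_cycle_time(n_cycles, n_stations=3, mean_time=1.0, spread=0.0):
    random.seed(42)  # reproducible illustration
    total = 0.0
    for _ in range(n_cycles):
        # each station's time this cycle, uniform around the same mean
        times = [random.uniform(mean_time - spread, mean_time + spread)
                 for _ in range(n_stations)]
        total += max(times)  # unbuffered line: wait for the slowest worker
    return total / n_cycles

fixed = simulate_cycle_time(1000, spread=0.0)  # exactly the station mean
human = simulate_cycle_time(1000, spread=0.2)  # longer average cycles
```

Because the line's cycle time is the maximum of the station times, variability raises the expected cycle time even though every station's mean is unchanged, which is one mechanism behind the simulation-versus-reality disparity the paper addresses.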
Abstract:
Purpose: Phonological accounts of reading implicate three aspects of phonological awareness tasks that underlie the relationship with reading: a) the language-based nature of the stimuli (words or nonwords), b) the verbal nature of the response, and c) the complexity of the stimuli (words can be segmented into units of speech). Yet it is uncertain which task characteristics matter most, as they are typically confounded. By systematically varying response type and stimulus complexity across speech and non-speech stimuli, the current study seeks to isolate the characteristics of phonological awareness tasks that drive the prediction of early reading. Method: Four sets of tasks were created: tone stimuli (simple non-speech) requiring a non-verbal response, phonemes (simple speech) requiring a non-verbal response, phonemes requiring a verbal response, and nonwords (complex speech) requiring a verbal response. Tasks were administered to 570 second-grade children along with standardized tests of reading and non-verbal IQ. Results: Three structural equation models comparing matched sets of tasks were built. Each model consisted of two 'task' factors with a direct link to a reading factor. The following factors predicted unique variance in reading: a) simple speech and non-speech stimuli, b) simple speech requiring a verbal response but not simple speech requiring a non-verbal response, and c) complex and simple speech stimuli. Conclusions: The results suggest that the prediction of reading by phonological tasks is driven by the verbal nature of the response rather than by the complexity or 'speechness' of the stimuli. The findings highlight the importance of phonological output processes to early reading.
Abstract:
Learning user interests from online social networks helps to better understand user behaviors and provides useful guidance for designing user-centric applications. Apart from analyzing users' online content, it is also important to consider users' social connections in the social Web. Graph regularization methods, which can leverage the graph structure information extracted from data, have been widely used in various text mining tasks. Previous graph regularization methods operate under the cluster assumption: nearby nodes are more similar, and nodes on the same structure (typically referred to as a cluster or a manifold) are likely to be similar. We argue that learning user interests from complex, sparse, and dynamic social networks should instead be based on a link structure assumption, under which node similarities are evaluated from local link structures rather than from explicit links between two nodes. We propose a regularization framework based on the relation bipartite graph, which can be constructed from any type of relation. Using Twitter as our case study, we evaluate the proposed framework on social networks built from retweet relations. Both quantitative and qualitative experiments show that our method outperforms several competitive baselines in learning user interests over a set of predefined topics. It also gives superior results to the baselines on retweet prediction and topical authority identification. © 2014 ACM.
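To make the notion of graph regularization concrete, here is a minimal sketch of the classical cluster-assumption variant the paper contrasts itself with: each user's topic score is pulled toward the average of its neighbors' scores while staying anchored to its own content-based prior. The graph, scores, and smoothing weight are illustrative; the paper's relation-bipartite-graph regularizer under the link structure assumption is more elaborate.

```python
# Minimal graph-regularised interest propagation over a user graph.
# Illustrative sketch only, not the paper's bipartite-graph framework.
def regularize(neighbors, prior, alpha=0.5, iters=50):
    """Iterate f = (1 - alpha) * prior + alpha * neighbour average."""
    f = dict(prior)
    for _ in range(iters):
        f = {u: (1 - alpha) * prior[u]
                + alpha * (sum(f[v] for v in nbrs) / len(nbrs))
             for u, nbrs in neighbors.items()}
    return f

# u3 never posted about the topic (prior 0.0) but retweets users who
# did, so regularisation assigns it a positive interest score.
graph = {"u1": ["u2", "u3"], "u2": ["u1", "u3"], "u3": ["u1", "u2"]}
scores = regularize(graph, {"u1": 1.0, "u2": 1.0, "u3": 0.0})
```

The trade-off parameter alpha balances fidelity to the content-based prior against smoothness over the social graph; the paper's argument is about *which* graph and similarity to smooth over, not about this basic mechanism.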
Abstract:
C.-W.W. is supported by a studentship funded by the College of Physical Sciences, University of Aberdeen. M.S.B. acknowledges EPSRC grant no. EP/I032606/1.
Abstract:
The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components, consisting of an atmospheric model, an ocean model and a land-ice model. Its distinctive features are the use of unstructured Voronoi meshes and C-grid discretisation to address the shortcomings, with respect to parallel scalability, numerical accuracy and physical consistency, of global models on regular grids and of limited-area models nested in a forcing data set. This concept makes it possible to include the feedback of regional land-use information on weather and climate at local and global scales in a consistent way, which is impossible to achieve with traditional limited-area modelling approaches. Here we present an in-depth evaluation of MPAS with regard to the technical aspects of performing model runs and scalability, for three medium-size meshes on four high-performance computing (HPC) sites with different architectures and compilers. We uncover model limitations and identify new aspects of model optimisation introduced by the use of unstructured Voronoi meshes. We further demonstrate the performance of MPAS in terms of its capability to reproduce the dynamics of the West African monsoon (WAM) and its associated precipitation in a pilot study. Constrained by the available computational resources, we compare 11-month runs for two meshes with observations and with a reference simulation from the Weather Research and Forecasting (WRF) model. We show that MPAS can reproduce the atmospheric dynamics on global and local scales in this experiment, but identify a precipitation excess for the West African region. Finally, we conduct extreme scaling tests on a global 3 km mesh with more than 65 million horizontal grid cells on up to half a million cores. We discuss the modifications of the model code necessary to improve its parallel performance, both in general and specific to the HPC environment.
We confirm good scaling (70 % parallel efficiency or better) of the MPAS model and provide numbers on the computational requirements for experiments with the 3 km mesh. In doing so, we show that global, convection-resolving atmospheric simulations with MPAS are within reach of current and next generations of high-end computing facilities.
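The 70 % figure is a strong-scaling parallel efficiency; a minimal sketch of how such a number is computed from run timings (the timings and core counts below are illustrative, not MPAS measurements):

```python
# Strong-scaling parallel efficiency relative to a reference run:
# efficiency = (t_ref / t_n) / (n / n_ref), i.e. achieved speedup
# divided by the ideal speedup from the extra cores.
def parallel_efficiency(t_ref, n_ref, t_n, n):
    speedup = t_ref / t_n
    return speedup / (n / n_ref)

# e.g. a run that is ~7x faster on 10x the cores is ~70 % efficient:
eff = parallel_efficiency(t_ref=1000.0, n_ref=1024, t_n=142.9, n=10240)
```

Efficiency below 100 % reflects communication and load-imbalance overheads that grow with the core count, which is why extreme-scaling tests such as the half-million-core runs above are needed to establish where a model stops scaling usefully.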
Abstract:
The length of stay of preterm infants in a neonatology service has become an issue of growing concern, considering, on the one hand, the mothers' and infants' health conditions and, on the other hand, the scarcity of healthcare facilities' own resources. Thus, a pro-active problem-solving strategy has to be put in place, either to improve the quality of service provided or to reduce the inherent financial costs. This work therefore focuses on the development of a diagnosis decision support system in terms of a formal agenda built on a Logic Programming approach to knowledge representation and reasoning, complemented with a case-based problem-solving methodology for computing, which caters for the handling of incomplete, unknown, or even contradictory information. The proposed model has been quite accurate in predicting the length of stay (overall accuracy of 84.9%), while reducing computational time by around 21.3%.
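The case-based part of such a system can be sketched as retrieve-and-reuse: find the most similar past cases and average their outcome to predict a new infant's length of stay. The features, weights, and case base below are hypothetical illustrations (with feature values normalised to [0, 1]), not the paper's knowledge base; note how unknown values are simply skipped, one simple way to tolerate the incomplete information the abstract mentions.

```python
# Toy case-based reasoning: weighted similarity retrieval plus k-NN reuse.
def similarity(case, query, weights):
    """Weighted similarity over the features both cases actually have."""
    score, total = 0.0, 0.0
    for feat, w in weights.items():
        if case.get(feat) is None or query.get(feat) is None:
            continue  # unknown/incomplete information is ignored
        score += w * (1.0 - abs(case[feat] - query[feat]))
        total += w
    return score / total if total else 0.0

def predict_stay(case_base, query, weights, k=2):
    """Average length of stay over the k most similar retrieved cases."""
    ranked = sorted(case_base,
                    key=lambda c: similarity(c, query, weights),
                    reverse=True)
    return sum(c["stay_days"] for c in ranked[:k]) / k

# Hypothetical case base with normalised features; one case has an
# unknown birth weight and is still usable.
weights = {"gest_age": 2.0, "weight": 1.0}
cases = [{"gest_age": 0.80, "weight": 0.70, "stay_days": 10},
         {"gest_age": 0.30, "weight": 0.20, "stay_days": 40},
         {"gest_age": 0.75, "weight": None, "stay_days": 12}]
los = predict_stay(cases, {"gest_age": 0.78, "weight": 0.70}, weights)
```

The paper layers this kind of case retrieval on top of a Logic Programming representation that additionally handles contradictory knowledge; this sketch only shows the retrieval/reuse mechanics.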
Abstract:
Waiting time at an intensive care unit is a key feature in the assessment of healthcare quality. Nevertheless, its estimation is a difficult task, not only because of the many factors with intricate relations among them, but also because of the available data, which may be incomplete, self-contradictory or even unknown. Its prediction not only improves patients' satisfaction but also enhances the quality of the healthcare provided. To fulfil this goal, this work aims at developing a decision support system that predicts how long a patient should remain at an emergency unit, taking into consideration the remarks stated above. It is built on top of a Logic Programming approach to knowledge representation and reasoning, complemented with a case-based approach to computing.