302 results for Electrical impedance tomography, Calderon problem, factorization method
Abstract:
Many websites use rating systems that allow customers to rate available items according to their own experience. Reputation models then aggregate the available ratings to generate a reputation score for each item. A problem with current reputation models is that they focus on enhancing accuracy over sparse datasets without considering how they perform over dense datasets. In this paper, we propose a novel reputation model that generates more accurate reputation scores for items on any dataset, whether dense or sparse. The proposed model is a weighted average method in which the weights are generated using the normal distribution. Experiments show promising results for the proposed model over state-of-the-art ones on both sparse and dense datasets.
Abstract:
Many websites offer customers the opportunity to rate items and then use these ratings to generate item reputations, which other users can later draw on for decision-making purposes. The aggregated value of the ratings per item represents that item's reputation. The accuracy of the reputation scores is important because they are used to rank items. Most aggregation methods do not consider the frequency of distinct ratings, nor have they been tested for accuracy over datasets of different sparsity. In this work we propose a new aggregation method, described as a weighted average in which the weights are generated using the normal distribution. Evaluation results show that the proposed method outperforms state-of-the-art methods over datasets of different sparsity.
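As a concrete, necessarily speculative reading of the weighting scheme these two abstracts describe, the sketch below weights each rating by a Gaussian kernel. The choice of kernel centre (the median rating) and spread are assumptions for illustration; the abstracts do not specify them.

```python
import math

def normal_weights(ratings, sigma=1.0):
    """Weight each rating by a Gaussian kernel centred on the (upper) median.

    Illustrative reading of 'weights generated using the normal
    distribution'; the papers' exact scheme may differ.
    """
    mid = sorted(ratings)[len(ratings) // 2]  # median rating as the centre
    return [math.exp(-((r - mid) ** 2) / (2 * sigma ** 2)) for r in ratings]

def reputation(ratings, sigma=1.0):
    """Weighted-average reputation score for a single item."""
    w = normal_weights(ratings, sigma)
    return sum(wi * ri for wi, ri in zip(w, ratings)) / sum(w)
```

Under this reading, outlying ratings (e.g. a single 1 among many 4s) receive exponentially smaller weights, pulling the score toward the consensus rather than the plain mean.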
Abstract:
Twitter is a very popular social network website that allows users to publish short posts called tweets. Users on Twitter can follow other users, called followees, and a user sees the posts of his followees on his Twitter profile home page. As the number of followees grows, so does the number of tweets on the user's page, creating an information overload problem. Like other social network websites, Twitter attempts to surface the tweets a user is expected to be interested in, to increase overall user engagement; however, Twitter still ranks tweets in chronological order. The tweet-ranking problem has been addressed in much recent research; a sub-problem is ranking the tweets of a single followee. In this paper we represent tweets using several features and propose a weighted version of the well-known Borda-Count (BC) voting system to combine several ranked lists into one. A gradient descent method and a collaborative filtering method are employed to learn the optimal weights. We also employ the Baldwin voting system for blending features (or predictors). Finally, we use a greedy feature selection algorithm to select the combination of features that ensures the best results.
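A weighted Borda-Count combination of per-feature rankings can be sketched as follows. The point scheme (n-1 points for first place down to 0 for last) is the classic Borda rule; the fixed weights stand in for ones the paper learns by gradient descent or collaborative filtering.

```python
def weighted_borda(rankings, weights):
    """Combine several ranked lists of tweet ids into one list.

    rankings: list of lists, each ordered best-first (one per feature).
    weights:  one non-negative weight per ranking; fixed here, learned
              (e.g., by gradient descent) in the paper.
    """
    scores = {}
    for rank_list, w in zip(rankings, weights):
        n = len(rank_list)
        for pos, item in enumerate(rank_list):
            # Classic Borda points: n-1 for first place, 0 for last.
            scores[item] = scores.get(item, 0.0) + w * (n - 1 - pos)
    return sorted(scores, key=scores.get, reverse=True)
```

With two rankings that disagree on the top item, shifting weight from one list to the other flips the final order, which is exactly the degree of freedom the learned weights exploit.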
Abstract:
After attending this presentation, attendees will gain awareness of the ontogeny of cranial maturation, specifically: (1) the fusion timings of primary ossification centers in the basicranium; and (2) the temporal pattern of closure of the anterior fontanelle, used to develop new population-specific age standards for the medicolegal death investigation of Australian subadults. This presentation will impact the forensic science community by demonstrating the potential of a contemporary forensic subadult Computed Tomography (CT) database of cranial scans and population data to recalibrate existing standards for age estimation and to quantify growth and development of Australian children. This research presents a study design applicable to all countries faced with a paucity of skeletal repositories. Accurate assessment of age-at-death of skeletal remains represents a key element of forensic anthropology methodology. In Australian casework, age standards derived from American reference samples are applied in light of the scarcity of documented Australian skeletal collections. Currently, practitioners rely on antiquated standards, such as the Scheuer and Black1 compilation, for age estimation, despite the implications of secular trends and population variation. Skeletal maturation standards are population specific and should not be extrapolated from one population to another, while secular changes in skeletal dimensions and accelerated maturation underscore the importance of establishing modern standards to estimate age in modern subadults. Despite CT imaging becoming the gold standard for skeletal analysis in Australia, practitioners caution against the application of forensic age standards derived from macroscopic inspection to a CT medium, suggesting a need for revised methodologies. Multi-slice CT scans of subadult crania and cervical vertebrae 1 and 2 were acquired from 350 Australian individuals (males: n=193, females: n=157) aged birth to 12 years.
The CT database, projected at 920 individuals upon completion (January 2014), comprises thin-slice DICOM data (resolution: 0.5/0.3mm) of patients scanned since 2010 at major Brisbane Children's Hospitals. DICOM datasets were subject to manual segmentation, followed by the construction of multi-planar and volume-rendering cranial models for subsequent scoring. The union of the primary ossification centers of the occipital bone was scored as open, partially closed, or completely closed, while the fontanelles and vertebrae were scored according to two stages. Transition analysis was applied to elucidate age at transition between union states for each center, and robust age parameters were established using Bayesian statistics. In comparison to the reported literature, closure of the fontanelles and contiguous sutures in Australian infants occurs earlier, with the anterior fontanelle transitioning from open to closed at 16.7±1.1 months. The metopic suture is closed prior to 10 weeks post-partum and completely obliterated by 6 months of age, independent of sex. Utilizing reverse engineering capabilities, an alternate method for infant age estimation based on quantification of fontanelle area and non-linear regression with variance component modeling will be presented. Closure models indicate that the greatest rate of change in anterior fontanelle area occurs prior to 5 months of age. This study complements the work of Scheuer and Black1, providing more specific age intervals for union and temporal maturity of each primary ossification center of the occipital bone. For example, dominant fusion of the sutura intra-occipitalis posterior occurs before 9 months of age, followed by persistence of a hyaline cartilage tongue posterior to the foramen magnum until 2.5 years, with obliteration at 2.9±0.1 years.
Recalibrated age parameters for the atlas and axis are presented, with the anterior arch of the atlas appearing at 2.9 months in females and 6.3 months in males, while the dentoneural, dentocentral, and neurocentral junctions of the axis transitioned from non-union to union at 2.1±0.1 years in females and 3.7±0.1 years in males. These results are an exemplar of significant sexual dimorphism in maturation (p<0.05), with girls exhibiting union earlier than boys, justifying the need for sex-segregated standards for age estimation. Studies such as this are imperative for providing updated standards for Australian forensic and pediatric practice and provide insight into the skeletal development of this population. During this presentation, the utility of novel regression models for age estimation of infants will be discussed, with emphasis on the three-dimensional modeling capabilities of complex structures such as fontanelles for the development of new age estimation methods.
Abstract:
Determination of sequence similarity is a central issue in computational biology, a problem addressed primarily through BLAST, an alignment-based heuristic which has underpinned much of the analysis and annotation of the genomic era. Despite their success, alignment-based approaches scale poorly with increasing dataset size and are not robust under structural sequence rearrangements. Successive waves of innovation in sequencing technologies – so-called Next Generation Sequencing (NGS) approaches – have led to an explosion in data availability, challenging existing methods and motivating novel approaches to sequence representation and similarity scoring, including the adaptation of existing methods from other domains such as information retrieval. In this work, we investigate locality-sensitive hashing of sequences through binary document signatures, applying the method to a bacterial protein classification task. Here, the goal is to predict the gene family to which a given query protein belongs. Experiments carried out on a pair of small but biologically realistic datasets (the full protein repertoires of families of Chlamydia and Staphylococcus aureus genomes, respectively) show that a measure of similarity obtained by locality-sensitive hashing gives highly accurate results while offering a number of avenues for substantial performance improvements over BLAST.
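Locality-sensitive hashing with binary signatures can be illustrated with a SimHash-style sketch over protein k-mers. The hash function (MD5), signature width (64 bits), and k=3 are illustrative choices, not the paper's; the point is that similar k-mer sets yield signatures with small Hamming distance.

```python
import hashlib

def kmers(seq, k=3):
    """Overlapping k-mers of a protein sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def signature(seq, bits=64, k=3):
    """SimHash-style binary signature: each k-mer votes +1/-1 per bit
    according to its hash; the sign of each tally gives the signature bit."""
    counts = [0] * bits
    for kmer in kmers(seq, k):
        h = int(hashlib.md5(kmer.encode()).hexdigest(), 16)
        for b in range(bits):
            counts[b] += 1 if (h >> b) & 1 else -1
    return sum(1 << b for b in range(bits) if counts[b] > 0)

def similarity(sig_a, sig_b, bits=64):
    """Fraction of matching bits between two signatures."""
    return 1.0 - bin(sig_a ^ sig_b).count("1") / bits
```

Comparing 64-bit integers replaces full alignment with a constant-time XOR and popcount, which is the scaling advantage over BLAST that the abstract alludes to.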
Abstract:
In a tag-based recommender system, the multi-dimensional
A tag-based personalized item recommendation system using tensor modeling and topic model approaches
Abstract:
This research falls in the area of enhancing the quality of tag-based item recommendation systems. It aims to achieve this by employing a multi-dimensional user profile approach and by analyzing the semantic aspects of tags. Tag-based recommender systems have two characteristics that need to be carefully studied in order to build a reliable system. Firstly, the multi-dimensional correlation, called the tag assignment
Abstract:
In this paper, a method of thrust allocation based on a linearly constrained quadratic cost function capable of handling rotating azimuths is presented. The problem formulation accounts for magnitude and rate constraints on both thruster forces and azimuth angles. The advantage of this formulation is that the solution can be found with a finite number of iterations for each time step. Experiments with a model ship are used to validate the thrust allocation system.
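The paper's formulation is a linearly constrained quadratic program solved at each time step; the fragment below is not that solver, only a minimal sketch of how per-thruster magnitude and rate constraints bound the command at one time step. All names and numbers here are illustrative assumptions.

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def step_allocation(u_prev, u_des, u_min, u_max, du_max):
    """Apply magnitude and rate constraints to desired thruster commands
    for one time step (parallel lists, one entry per thruster).

    The paper solves a constrained QP over forces and azimuth angles;
    this sketch shows only the constraint structure, not the optimization.
    """
    out = []
    for up, ud, lo, hi, dmax in zip(u_prev, u_des, u_min, u_max, du_max):
        u = clamp(ud, lo, hi)               # magnitude constraint
        u = clamp(u, up - dmax, up + dmax)  # rate constraint vs. last step
        out.append(u)
    return out
```

Because both constraint sets are box bounds per time step, the feasible region of the QP stays a polytope, which is what allows the solution in a finite number of iterations.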
Abstract:
Network Real-Time Kinematic (NRTK) is a technology that can provide centimeter-level-accuracy positioning services in real time, enabled by a network of Continuously Operating Reference Stations (CORS). The location-oriented CORS placement problem is an important problem in the design of an NRTK, as it directly affects not only the installation and operational cost of the NRTK, but also the quality of the positioning services it provides. This paper presents a Memetic Algorithm (MA) for the location-oriented CORS placement problem, which hybridizes the powerful explorative search capacity of a genetic algorithm with the efficient and effective exploitative search capacity of a local optimization procedure. Experimental results show that the MA performs better than existing approaches. In this paper we also conduct an empirical study of the scalability of the MA, the effectiveness of the hybridization technique, and the selection of the crossover operator in the MA.
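The hybrid structure of a memetic algorithm (genetic exploration followed by local exploitation of each offspring) can be sketched generically. The fitness, mutation, and local-search operators below are placeholders, not the paper's CORS-specific operators, and a single mutation stands in for crossover.

```python
import random

def memetic(fitness, pop, mutate, local_search, generations=30, seed=0):
    """Generic memetic loop: genetic exploration + local exploitation.

    `fitness`, `mutate`, and `local_search` are problem-specific (for CORS
    placement they would operate on candidate station sites); toy versions
    are used here for illustration.
    """
    rng = random.Random(seed)
    for _ in range(generations):
        # Explorative step: perturb a random parent (stand-in for crossover).
        child = mutate(rng.choice(pop), rng)
        # Exploitative step: refine the child with local optimization.
        child = local_search(child)
        # Steady-state replacement: evict the worst if the child is better.
        worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
    return max(pop, key=fitness)
```

The division of labour is the point of the hybridization the abstract describes: the genetic step jumps between basins of attraction, while local search drives each candidate to the bottom of its basin.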
Abstract:
Demand response can be used to provide regulation services in the electricity markets. Retailers can bid in a day-ahead market and respond to the real-time regulation signal by load control. This paper proposes a new stochastic ranking method to provide regulation services via demand response. A pool of thermostatically controllable appliances (TCAs), such as air conditioners and water heaters, is adjusted using a direct load control method. The selection of appliances is based on a probabilistic ranking technique utilizing attributes such as the temperature variation and statuses of the TCAs. These attributes are stochastically forecasted for the next time step using day-ahead information. System performance is analyzed with a sample regulation signal. The network's capability to provide regulation services across various seasons is analyzed, and the effect of network size on the regulation services is also investigated.
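A ranking of TCAs for direct load control might look like the toy scoring rule below. The attribute names and the preference order (appliances that are on and close to their set-point first) are illustrative assumptions; the paper's attributes are stochastically forecasted and its ranking is probabilistic.

```python
def rank_tcas(tcas):
    """Order thermostatically controllable appliances for load control.

    Each TCA is a dict with a forecast temperature deviation from its
    set-point ('temp_dev') and an on/off status ('on'). This illustrative
    rule prefers appliances that are on and have the most thermal headroom
    (smallest deviation), since switching them disturbs comfort least.
    """
    def score(t):
        return (t["on"], -abs(t["temp_dev"]))
    return sorted(tcas, key=score, reverse=True)
```

In a real controller the ranking would be recomputed each time step from the forecast attributes, and the top of the list switched until the regulation signal is met.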
Abstract:
A description of a patient's injuries is recorded in narrative text form by hospital emergency departments. For statistical reporting, this text data needs to be mapped to pre-defined codes. Existing research in this field uses the Naïve Bayes probabilistic method to build classifiers for the mapping. In this paper, we focus on providing guidance on the selection of a classification method. We build a number of classifiers belonging to different classification families, such as decision-tree, probabilistic, neural-network, instance-based, ensemble-based, and kernel-based linear classifiers. Extensive pre-processing is carried out to ensure the quality of the data and, hence, of the classification outcome. Records with a null entry in the injury description are removed. Misspelling correction is carried out by finding and replacing each misspelt word with a sound-alike word. Meaningful phrases have been identified and kept, instead of removing parts of phrases as stop words. Abbreviations appearing in many entry forms are manually identified, and only one form of each abbreviation is used. Clustering is utilised to discriminate between non-frequent and frequent terms. This process reduced the number of text features dramatically, from about 28,000 to 5,000. The medical narrative-text injury dataset under consideration is composed of many short documents. The data can be characterized as high-dimensional and sparse, i.e., few features are irrelevant, but features are correlated with one another. Therefore, matrix factorization techniques such as Singular Value Decomposition (SVD) and Non-Negative Matrix Factorization (NNMF) have been used to map the processed feature space to a lower-dimensional feature space, and classifiers have been built on this reduced feature space. In experiments, a set of tests is conducted to determine which classification method is best for medical text classification.
The Non-Negative Matrix Factorization with Support Vector Machine method can achieve 93% precision, which is higher than all the tested traditional classifiers. We also found that TF/IDF weighting, which works well for long-text classification, is inferior to binary weighting for short-document classification. Another finding is that the top-n terms should be removed in consultation with medical experts, as their removal affects the classification performance.
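The finding that binary weighting beats TF/IDF on short injury narratives can be illustrated with a minimal binary vectorizer and cosine similarity. This is a simplified stand-in, not the paper's pipeline (which also applies SVD/NNMF and an SVM); the vocabulary and documents are invented for illustration.

```python
import math

def binary_vector(doc, vocab):
    """Binary term weighting: 1 if the term occurs, regardless of count.

    For documents of only a few words, term counts add little signal,
    which is why binary weighting can outperform TF/IDF here.
    """
    terms = set(doc.lower().split())
    return [1.0 if t in terms else 0.0 for t in vocab]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```

Under binary weighting, "burn to hand" and "hand burn" map to identical vectors, which is the behaviour one wants when coding short, telegraphic injury descriptions.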
Abstract:
Gaining invariance to camera and illumination variations has been a well-investigated topic in the Active Appearance Model (AAM) fitting literature. The major problem lies in the inability of the appearance parameters of the AAM to generalize to unseen conditions. An attractive approach for gaining invariance is to fit an AAM to a multiple-filter-response (e.g., Gabor) representation of the input image. Naively applying this concept with a traditional AAM is computationally prohibitive, especially as the number of filter responses increases. In this paper, we present a computationally efficient AAM fitting algorithm, based on the Lucas-Kanade (LK) algorithm posed in the Fourier domain, that affords invariance to both expression and illumination. We refer to this as a Fourier AAM (FAAM), and show that this method gives substantial improvement in person-specific AAM fitting performance over traditional AAM fitting methods.
Abstract:
The problem of clustering a large document collection is challenged not only by the number of documents and the number of dimensions, but also by the number and sizes of the clusters. Traditional clustering methods fail to scale when they need to generate a large number of clusters. Furthermore, when the cluster sizes in the solution are heterogeneous, i.e. some of the clusters are large, the similarity measures tend to degrade. A ranking-based clustering method is proposed to deal with these issues in the context of the Social Event Detection task. Ranking scores are used to select a small number of the most relevant clusters against which to compare and place a document. Additionally, instead of conventional cluster centroids, cluster patches, which are hub-like sets of documents, are proposed to represent clusters. Text, temporal, spatial, and visual content information collected from the social event images is utilized in calculating similarity. Results show that these strategies strike a balance between the performance and the accuracy of the clustering solution produced by the method.
Abstract:
A tag-based item recommendation method generates an ordered list of items likely to interest a particular user, using the user's past tagging behaviour. However, users' tagging behaviour varies across tagging systems. A potential problem in generating quality recommendations is how to build user profiles that interpret user behaviour so it can be used effectively in recommendation models. Generally, recommendation methods are designed to work with specific types of user profiles and may not work well with different datasets. In this paper, we investigate several tagging-data interpretation and representation schemes that can lead to building an effective user profile. We discuss the various benefits a scheme brings to a recommendation method by highlighting the representative features of user tagging behaviour on a specific dataset. Empirical analysis shows that each interpretation scheme forms a distinct data representation, which eventually affects the recommendation result. Results on various datasets show that an interpretation scheme should be selected based on the dominant usage in the tagging data (i.e. whether more tags or more items are present); this usage represents the characteristic of user tagging behaviour in the system. The results also demonstrate how the scheme is able to address the cold-start user problem.