11 results for automatic music analysis
in Aston University Research Archive
Abstract:
A novel approach to watermarking of audio signals using Independent Component Analysis (ICA) is proposed. It exploits the statistical independence of components obtained by practical ICA algorithms to provide a robust watermarking scheme with high information rate and low distortion. Numerical simulations have been performed on audio signals, showing good robustness of the watermark against common attacks with unnoticeable distortion, even for high information rates. An important aspect of the method is its domain independence: it can be used to hide information in other types of data, with minor technical adaptations.
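The embed-and-extract idea can be sketched as follows. This is a minimal illustration, not the paper's actual scheme: PCA-style decorrelation via SVD stands in for a practical ICA algorithm such as FastICA, and the frame size, embedding strength, and non-blind detection are all assumptions made for the sketch.

```python
# Sketch: hide one bit per frame by nudging one decorrelated component,
# then recover the bits by comparing components of the received and
# original signals. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
audio = rng.standard_normal(4096)            # stand-in for a real audio signal
frames = audio.reshape(-1, 8)                # 512 frames of 8 samples each

# "Unmixing": project frames onto decorrelated directions (SVD basis).
_, _, Vt = np.linalg.svd(frames, full_matrices=False)
comps = frames @ Vt.T                        # per-frame component coefficients

bits = rng.integers(0, 2, size=comps.shape[0])
alpha = 0.05                                 # embedding strength vs. distortion
marked = comps.copy()
marked[:, -1] += alpha * (2 * bits - 1)      # +alpha for bit 1, -alpha for bit 0

watermarked = (marked @ Vt).reshape(-1)      # reconstruct the audio

# Non-blind extraction: re-project and read the sign of the perturbation.
diff = (watermarked.reshape(-1, 8) @ Vt.T)[:, -1] - comps[:, -1]
decoded = (diff > 0).astype(int)
```

Because the SVD basis is orthonormal, the per-sample distortion is bounded by alpha, which is the knob trading robustness against audibility.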
Abstract:
Derivational morphology proposes meaningful connections between words and is largely unrepresented in lexical databases. This thesis presents a project to enrich a lexical database with morphological links and to evaluate their contribution to disambiguation. A lexical database with sense distinctions was required. WordNet was chosen because of its free availability and widespread use. Its suitability was assessed through critical evaluation with respect to specifications and criticisms, using a transparent, extensible model. The identification of serious shortcomings suggested a portable enrichment methodology, applicable to alternative resources. Although 40% of the most frequent words are prepositions, they have been largely ignored by computational linguists, so the addition of prepositions was also required. The preferred approach to morphological enrichment was to infer relations from phenomena discovered algorithmically. Both existing databases and existing algorithms can capture regular morphological relations, but cannot capture exceptions correctly; neither provides any semantic information. Some morphological analysis algorithms are subject to the fallacy that morphological analysis can be performed simply by segmentation. Morphological rules, grounded in observation and etymology, govern the association and attachment of suffixes and contribute to defining the meaning of morphological relationships. Specifying character substitutions circumvents the segmentation fallacy. Morphological rules are prone to undergeneration, minimised through a variable lexical validity requirement, and to overgeneration, minimised by rule reformulation and by restricting monosyllabic output. The rules take into account the morphology of ancestor languages through co-occurrences of morphological patterns. When multiple rules are applicable to an input suffix, their precedence must be established.
The resistance of prefixations to segmentation has been addressed by identifying linking-vowel exceptions and irregular prefixes. The automatic affix discovery algorithm applies heuristics to identify meaningful affixes and is combined with the morphological rules into a hybrid model, fed only with empirical data collected without supervision. Further algorithms apply the rules optimally to automatically pre-identified suffixes and break words into their component morphemes. To handle exceptions, stoplists were created in response to initial errors and fed back into the model through iterative development, leading to 100% precision, contestable only on lexicographic criteria. Stoplist length is minimised by special treatment of monosyllables and by reformulation of rules. 96% of words and phrases are analysed. 218,802 directed derivational links have been encoded in the lexicon rather than in the wordnet component of the model, because the lexicon provides the optimal clustering of word senses. Both the links and the analyser are portable to an alternative lexicon. The evaluation uses the extended gloss overlaps disambiguation algorithm. The enriched model outperformed WordNet in terms of recall without loss of precision. The failure of all experiments to outperform disambiguation by frequency reflects on WordNet's sense distinctions.
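The contrast between naive segmentation and character-substitution rules with a lexical validity check can be sketched as follows. The rule inventory, the tiny lexicon, and the precedence ordering are invented for illustration; the thesis's actual rules are far richer.

```python
# Hypothetical character-substitution rules: each maps a word-final pattern
# to a derived form, rather than naively segmenting at a suffix boundary
# (which would wrongly produce e.g. "deny" + "al"). Candidates are accepted
# only if lexically valid; rules are tried in precedence order.
import re

RULES = [
    (re.compile(r"y$"), "ial"),      # deny -> denial (substitution, not split)
    (re.compile(r"e$"), "ation"),    # derive -> derivation
    (re.compile(r"$"),  "er"),       # teach -> teacher (plain attachment)
]
LEXICON = {"denial", "derivation", "teacher"}   # toy lexical validity check

def derive(word):
    """Return lexically valid derivations, trying rules in precedence order."""
    out = []
    for pattern, replacement in RULES:
        candidate = pattern.sub(replacement, word, count=1)
        if candidate != word and candidate in LEXICON:
            out.append(candidate)
    return out
```

The validity check is what keeps substitution rules from overgenerating: "deny" also matches the attachment rule, but "denyer" is rejected because it is not in the lexicon.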
Abstract:
The present work describes the development of a proton-induced X-ray emission (PIXE) analysis system, especially designed and built for routine quantitative multi-elemental analysis of a large number of samples. The historical and general development of the analytical technique and the physical processes involved are discussed. The philosophy, design, constructional details and evaluation of a versatile vacuum chamber, an automatic multi-sample changer, an on-demand beam pulsing system and an ion beam current monitoring facility are described. The system calibration using thin standard foils of Si, P, S, Cl, K, Ca, Ti, V, Fe, Cu, Ga, Ge, Rb, Y and Mo was undertaken at proton beam energies of 1 to 3 MeV in steps of 0.5 MeV and compared with theoretical calculations. An independent calibration check using bovine liver Standard Reference Material was performed. The minimum detectable limits have been experimentally determined at detector positions of 90° and 135° with respect to the incident beam for the above range of proton energies as a function of atomic number Z. The system has detection limits of typically 10⁻⁷ to 10⁻⁹ g for elements 14
Abstract:
Progressive addition spectacle lenses (PALs) have become the method of choice for many presbyopic individuals to alleviate the visual problems of middle age. Such lenses are difficult to assess and characterise because they lack discrete geographical locators for their key features. A review of the literature (mostly patents) describing the different designs of these lenses indicates the range of approaches to solving the visual problem of presbyopia. However, very little has been published about the comparative optical performance of these lenses. A method based on interferometry for the assessment of PALs is described here, with a comparison against measurements made on an automatic focimeter. The relative merits of these techniques are discussed. Although the measurements are comparable, the interferometry method is considered more readily automated and ultimately capable of producing a more rapid result.
Abstract:
The primary objective of this research was to understand what kinds of knowledge and skills people use in 'extracting' relevant information from text, and to assess the extent to which expert systems techniques could be applied to automate the process of abstracting. The approach adopted in this thesis is based on research in cognitive science, information science, psycholinguistics and text linguistics. The study addressed the significance of domain knowledge and heuristic rules by developing an information extraction system called INFORMEX. This system, implemented partly in SPITBOL and partly in PROLOG, used a set of heuristic rules to analyse five scientific papers of expository type, to interpret the content in relation to the key abstract elements, and to extract a set of sentences recognised as relevant for abstracting purposes. The analysis of these extracts revealed that an adequate abstract could be generated. Furthermore, INFORMEX showed that a rule-based system is a suitable computational model for representing experts' knowledge and strategies. This computational technique provided the basis for a new approach to the modelling of cognition: it showed how experts tackle the task of abstracting by integrating formal knowledge with experiential learning. This thesis demonstrated that empirical and theoretical knowledge can be effectively combined in expert systems technology to provide a valuable starting approach to automatic abstracting.
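The heuristic-rule sentence-extraction idea can be sketched as follows. This is a toy illustration only: the cue phrases, weights and threshold are invented, and the original system was written in SPITBOL and PROLOG, not Python.

```python
# Toy rule-based extractor: score each sentence against weighted cue-phrase
# rules and keep the sentences whose score reaches a threshold.
CUES = {"this paper": 2, "we propose": 3, "results show": 3, "in conclusion": 2}

def extract(text, threshold=2):
    """Score sentences by heuristic cue rules; keep those above threshold."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    kept = []
    for sentence in sentences:
        score = sum(w for cue, w in CUES.items() if cue in sentence.lower())
        if score >= threshold:
            kept.append(sentence)
    return kept
```

A real system would add rules for position in the document, rhetorical role, and domain terms, but the control structure stays the same: declarative rules applied by a generic matcher.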
Abstract:
Summary writing is an important part of many English language examinations. As grading students' summaries is very time-consuming, computer-assisted assessment can help teachers carry out the grading more effectively. Several techniques, such as latent semantic analysis (LSA), n-gram co-occurrence and BLEU, have been proposed to support automatic evaluation of summaries, but their performance in assessing summary writing is not satisfactory. To improve performance, this paper proposes an ensemble approach that integrates LSA and n-gram co-occurrence. The proposed ensemble approach achieves high accuracy and improves performance quite substantially compared with current techniques. A summary assessment system based on the proposed approach has also been developed.
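The ensemble idea can be sketched as a weighted combination of the two component scores. The weight, the bigram choice, and the toy two-document LSA (a truncated SVD over just the student and reference texts) are assumptions for illustration; the paper's actual feature engineering and combination rule may differ.

```python
# Sketch: combine an n-gram co-occurrence score with an LSA cosine
# similarity. Weight w and n=2 are illustrative choices.
import numpy as np

def ngrams(tokens, n=2):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_score(student, reference, n=2):
    s, r = ngrams(student.split(), n), ngrams(reference.split(), n)
    return len(s & r) / len(r) if r else 0.0

def lsa_score(student, reference, k=2):
    vocab = sorted(set(student.split()) | set(reference.split()))
    def vec(text):
        tokens = text.split()
        return np.array([tokens.count(w) for w in vocab], float)
    X = np.stack([vec(student), vec(reference)])      # doc-term matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    P = U[:, :k] * s[:k]                              # docs in k-dim latent space
    a, b = P[0], P[1]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def ensemble_score(student, reference, w=0.5):
    return w * lsa_score(student, reference) + (1 - w) * ngram_score(student, reference)
```

In practice the LSA space would be built from a training corpus rather than from the two texts alone, and the weight w would be tuned against human grades.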
Abstract:
Based on the assumption that a steady state exists in the full-memory multidestination automatic repeat request (ARQ) scheme, we propose a novel analytical method, the steady-state function method (SSFM), to evaluate the performance of the scheme for any receiver buffer size. For a wide range of system parameters, SSFM estimates throughput more accurately than conventional analytical methods.
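For orientation, the idealized infinite-buffer baseline that finite-buffer analyses such as SSFM refine can be computed in closed form: a packet is rebroadcast until all K receivers, each with independent erasure probability p, have received it, so throughput is the reciprocal of the expected number of broadcasts (the mean of the maximum of K geometric random variables). This is a standard textbook baseline, not SSFM itself.

```python
# Throughput of idealized full-memory multidestination ARQ with unlimited
# receiver buffers: E[transmissions] = sum_{n>=0} P(max of K geometrics > n)
#                                    = sum_{n>=0} 1 - (1 - p**n)**K.
def throughput(p, K, terms=10_000):
    """p: per-receiver packet error probability; K: number of receivers."""
    expected_tx = sum(1 - (1 - p ** n) ** K for n in range(terms))
    return 1.0 / expected_tx
```

For K = 1 this reduces to the familiar selective-repeat result 1 - p; adding receivers can only lower throughput, since every receiver must get each packet.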
Abstract:
Spamming has been a widespread problem for social networks. In recent years there has been increasing interest in the analysis of anti-spamming for microblogs, such as Twitter. In this paper we present a systematic study of spamming on the Sina Weibo platform, currently the dominant microblogging service provider in China. Our research objectives are to understand the specific spamming behaviors in Sina Weibo and to find approaches to identify and block spammers based on spamming behavior classifiers. To begin the analysis of spamming behaviors, we devised several effective methods to collect a large set of spammer samples, including the use of proactive honeypots and crawlers, keyword-based searching, and buying spammer samples directly from online merchants. We processed the database associated with these spammer samples and found three representative spamming behaviors: aggressive advertising, repeated duplicate reposting and aggressive following. We extracted various features and compared the behaviors of spammers and legitimate users with regard to these features, finding that spamming behaviors and normal behaviors have distinct characteristics. Based on these findings, we designed an automatic online spammer identification system. Tests with real data demonstrate that the system can effectively detect spamming behaviors and identify spammers in Sina Weibo.
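A behaviour-based identification pipeline along these lines can be sketched as follows. The feature definitions, thresholds and voting rule are invented stand-ins; the paper builds classifiers trained on real Sina Weibo data rather than hand-set cutoffs.

```python
# Toy behaviour-based spammer check built on the three reported behaviours:
# duplicate reposting, advertising (URL-heavy posts), aggressive following.
def features(user):
    posts = user["posts"]
    dup = 1 - len(set(posts)) / len(posts) if posts else 0.0   # duplicate reposting
    url = sum("http" in p for p in posts) / len(posts) if posts else 0.0
    ff = user["following"] / max(user["followers"], 1)          # following/followers
    return {"dup_ratio": dup, "url_ratio": url, "ff_ratio": ff}

def is_spammer(user):
    """Flag a user when at least two behavioural indicators fire."""
    f = features(user)
    votes = (f["dup_ratio"] > 0.5) + (f["url_ratio"] > 0.8) + (f["ff_ratio"] > 10)
    return votes >= 2
```

Requiring two of three indicators is a crude stand-in for a trained classifier, but it illustrates why the distinct behavioural signatures reported above make automatic identification feasible.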
Abstract:
OBJECTIVES: Pregnancy may provide a 'teachable moment' for positive health behaviour change, as a time when women are both motivated towards health and in regular contact with health care professionals. This study aimed to investigate whether women's experiences of pregnancy indicate that they would be receptive to behaviour change during this period. DESIGN: Qualitative interview study. METHODS: Using interpretative phenomenological analysis, this study details how seven women made decisions about their physical activity and dietary behaviour during their first pregnancy. RESULTS: Two women had required fertility treatment to conceive. Their behaviour was driven by anxiety and a desire to minimize potential risks to the pregnancy. This included detailed information seeking and strict adherence to diet and physical activity recommendations. However, the majority of women described behaviour change as 'automatic', adopting a new lifestyle immediately upon discovering their pregnancy. Diet and physical activity were influenced by what these women perceived to be normal or acceptable during pregnancy (largely based on observations of others) and internal drivers, including bodily signals and a desire to retain some of their pre-pregnancy self-identity. More reasoned assessments regarding benefits for them and their baby were less prevalent and influential. CONCLUSIONS: Findings suggest that for women who conceived relatively easily, diet and physical activity behaviour during pregnancy is primarily based upon a combination of automatic judgements, physical sensations, and perceptions of what pregnant women are supposed to do. Health professionals and other credible sources appear to exert less influence. As such, pregnancy alone may not create a 'teachable moment'. Statement of contribution What is already known on this subject? Significant life events can be cues to action in relation to health behaviour change.
However, much of the empirical research in this area has focused on negative health experiences such as receiving a false-positive screening result and hospitalization, and in relation to unequivocally negative behaviours such as smoking. It is often suggested that pregnancy, as a major life event, is a 'teachable moment' (TM) for lifestyle behaviour change due to an increase in motivation towards health and regular contact with health professionals. However, there is limited evidence for the utility of the TM model in predicting or promoting behaviour change. What does this study add? Two groups of women emerged from our study: the women who had experienced difficulties in conceiving and had received fertility treatment, and those who had conceived without intervention. The former group's experience of pregnancy was characterized by a sense of vulnerability and anxiety over sustaining the pregnancy which influenced every choice they made about their diet and physical activity. For the latter group, decisions about diet and physical activity were made immediately upon discovering their pregnancy, based upon a combination of automatic judgements, physical sensations, and perceptions of what is normal or 'good' for pregnancy. Among women with relatively trouble-free conception and pregnancy experiences, the necessary conditions may not be present to create a 'teachable moment'. This is due to a combination of a reliance on non-reflective decision-making, perception of low risk, and little change in affective response or self-concept.
Abstract:
Queuing is a key efficiency criterion in any service industry, including healthcare. Almost all queue management studies are dedicated to improving an existing appointment system. In developing countries such as Pakistan, there are no appointment systems for outpatients, resulting in excessive wait times. Additionally, excessive overloading, limited resources and cumbersome procedures lead to overwhelming queues. Despite numerous healthcare applications, Data Envelopment Analysis (DEA) has not been applied to queue assessment. The current study aims to extend DEA modelling and demonstrate its usefulness by evaluating the queue system of a busy public hospital in a developing country, Pakistan, where all outpatients are walk-in, along with the construction of a dynamic framework dedicated to the implementation of the model. The inadequate allocation of doctors/personnel was observed to be the most critical cause of long queues. Hence, the Queuing-DEA model has been developed to determine the 'required' number of doctors/personnel. The results indicated that extensive wait times, long queues, or both led to high target values for doctors/personnel. This crucial information allows administrators to ensure optimal staff utilization and to control the queue pre-emptively, minimizing wait times. The dynamic framework specifically targets practical implementation of the Queuing-DEA model in resource-poor public hospitals of developing countries such as Pakistan, continuously monitoring the rapidly changing queue situation and displaying the latest required number of personnel. Consequently, the wait times of subsequent patients can be minimized, along with dynamic staff scheduling in the absence of appointments. The framework has been designed in Excel, requiring minimal training and work from users, with automatic update features and the complex technical aspects running in the background.
The proposed model and dynamic framework have the potential to be applied in similar public hospitals, even in other developing countries, where appointment systems for outpatients are non-existent.
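As a much simpler stand-in for the Queuing-DEA model (whose linear programs are beyond an abstract), the core output, a 'required' number of doctors for the observed load, can be illustrated with a basic queueing stability calculation. The arrival rate, service rate and target utilization below are invented for illustration.

```python
# Stand-in illustration (NOT the Queuing-DEA model): the smallest number of
# doctors c that keeps a multi-server queue stable at a target utilization,
# i.e. lambda / (c * mu) <= max_util.
import math

def required_doctors(arrival_rate, service_rate, max_util=0.85):
    """arrival_rate: patients/hour; service_rate: patients/hour per doctor."""
    return math.ceil(arrival_rate / (service_rate * max_util))
```

For example, with 40 walk-in patients per hour and each doctor seeing 6 per hour, 8 doctors are needed at 85% target utilization; the DEA approach refines this kind of target by benchmarking against observed wait times and queue lengths.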