877 results for Classify
Abstract:
An efficient numerical method to compute nonlinear solutions for two-dimensional steady free-surface flow over an arbitrary channel bottom topography is presented. The approach is based on a boundary integral equation technique similar to that of Vanden-Broeck (1996, J. Fluid Mech., 330, 339-347). The typical approach for this problem is to prescribe the shape of the channel bottom topography, with the free surface provided as part of the solution. Here we take an inverse approach and prescribe the shape of the free surface a priori while solving for the corresponding bottom topography. We show how this inverse approach is particularly useful when studying topographies that give rise to wave-free solutions, allowing us to easily classify eleven basic flow types. Finally, the inverse approach is also adapted to calculate a distribution of pressure on the free surface, given the free-surface shape itself.
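For orientation, a minimal sketch of the dimensionless dynamic (Bernoulli) condition typically imposed on the free surface in boundary-integral formulations of this kind, in one common normalisation (an assumption for illustration; the abstract does not reproduce the paper's equations):

\[ \tfrac{1}{2}F^{2}q^{2} + y = \tfrac{1}{2}F^{2} \quad \text{on the free surface}, \]

where \(F\) is the upstream Froude number, \(q\) the flow speed on the free surface and \(y\) the surface elevation above the upstream level. In the inverse approach the surface shape \(y(x)\) is prescribed and the bottom topography consistent with this condition is computed.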
Abstract:
Bridges are currently rated individually for maintenance and repair action according to the structural condition of their elements. Dealing with thousands of bridges and the many factors that cause deterioration makes this rating process extremely complicated. The current simplified but practical methods are not accurate enough, while the sophisticated, more accurate methods are only used for a single bridge or a particular bridge type. It is therefore necessary to develop a practical and accurate rating system for a network of bridges. The first and most important step towards this aim is to classify the bridges in a network based on the nature and unique characteristics of the critical factors that affect them, and on the relationships between those factors. Critical factors and vulnerable elements will be identified and placed in different categories. This classification method will be used to develop a new practical rating method for a network of railway bridges based on criticality and vulnerability analysis. The resulting rating system will be more accurate and economical, and will improve the safety and serviceability of railway bridges.
Abstract:
Visuals are a central feature of STEM at all levels of education and in many areas of employment. The wide variety of visuals that students are expected to master in STEM prevents an approach that aims to teach students about every type of visual they may encounter. This paper proposes a pedagogy that can be applied across year levels and learning areas, allowing a school-wide, cross-curricular approach to teaching about visuals that enhances learning in STEM and all other learning areas. Visuals are classified into six categories based on their properties, unlike traditional methods that classify visuals according to purpose. As visuals in the same category share common properties, students are able to transfer their knowledge from the familiar to the unfamiliar within each category. The paper details the classification and proposes some strategies that can be incorporated into existing methods of teaching students about visuals in all learning areas. The approach may also assist students to see the connections between the different learning areas within and outside STEM.
Abstract:
Highly sensitive infrared (IR) cameras provide high-resolution diagnostic images of the temperature and vascular changes of breasts. These images can be processed to emphasize hot spots that exhibit early and subtle changes owing to pathology. The resulting images show clusters that appear random in shape and spatial distribution but carry class-dependent information in shape and texture. Automated pattern recognition techniques are challenged by changes in the location, size and orientation of these clusters. Higher-order spectral invariant features provide robustness to such transformations and are suited to extracting texture- and shape-dependent information from noisy images. In this work, the effectiveness of bispectral invariant features in the diagnostic classification of breast thermal images into malignant, benign and normal classes is evaluated, and a phase-only variant of these features is proposed. High-resolution IR images of breasts, captured with a measurement accuracy of ±0.4% (full scale) and a temperature resolution of 0.1 °C (black body), depicting malignant, benign and normal pathologies, are used in this study. Breast images are registered using their lower boundaries, which are extracted automatically using landmark points whose locations are learned during training. Boundaries are extracted using Canny edge detection and elimination of inner edges. Breast images are then segmented using fuzzy c-means clustering and the hottest regions are selected for feature extraction. Bispectral invariant features are extracted from Radon projections of these images. An AdaBoost classifier is used to select and fuse the best features during training and then to classify unseen test images into malignant, benign and normal classes. A data set comprising 9 malignant, 12 benign and 11 normal cases is used for the evaluation of performance. Malignant cases are detected with 95% accuracy. A variant of the features using the normalized bispectrum, which discards all magnitude information, is shown to perform better for classification between benign and normal cases, with 83% accuracy compared to 66% for the original features.
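A minimal sketch of the segmentation and Radon-projection steps under stated assumptions: KMeans stands in for fuzzy c-means, the synthetic image and all parameters are illustrative, and the bispectral feature computation itself is not reproduced:

```python
# Sketch: segment the hottest regions of a thermal image and take Radon
# projections, from which bispectral invariant features would then be computed.
# KMeans is a stand-in for fuzzy c-means; all parameters are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from skimage.transform import radon

def hottest_region_projections(thermal_image, n_clusters=4, n_angles=180):
    """Cluster pixel temperatures, keep the hottest cluster, project with Radon."""
    pixels = thermal_image.reshape(-1, 1)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixels)
    # Identify the cluster with the highest mean temperature.
    hottest = max(range(n_clusters), key=lambda k: pixels[labels == k].mean())
    mask = (labels == hottest).reshape(thermal_image.shape)
    hot_only = np.where(mask, thermal_image, 0.0)
    # Radon projections at evenly spaced angles; each column is one projection.
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    return radon(hot_only, theta=theta, circle=False)

# Example with a synthetic 64x64 "thermal" image containing one hot spot:
rng = np.random.default_rng(0)
image = rng.normal(30.0, 0.5, (64, 64))
image[20:30, 25:40] += 3.0
sinogram = hottest_region_projections(image)
print(sinogram.shape)   # (projection length, 180 angles)
```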
Abstract:
Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection for messages. This approach is more efficient than the two-step process of encrypting a message for confidentiality and, in a separate pass, generating a Message Authentication Code (MAC) for integrity protection. AE using symmetric ciphers can be provided either by stream ciphers with built-in authentication mechanisms or by block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and a smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that considers these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms, analysing the mechanisms for providing confidentiality and integrity in such algorithms. There is a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on this in the context of authenticated encryption. The thesis has four main contributions, as follows. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place. The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal states of the cipher to generate a MAC. The third classification is based on whether the sequence used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure against this type of algebraic attack. We conclude that using a key-dependent S-box twice in the NLF, and using two different S-boxes in the NLF of ZUC, prevents this type of algebraic attack. The third contribution is a new general matrix-based model for MAC generation where the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers, namely SSS, NLSv2 and SOBER-128, can be considered as instances of this model. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated.
It is shown that using a nonlinear filter in the accumulation process of the input message, when either the input message or the initial state of the register is unknown, prevents forgery attacks based on collisions. The last contribution is a new general matrix-based model for MAC generation where the input message is injected indirectly into the internal state. This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers, namely ZUC, Grain-128a and Sfinks, can be considered as instances of this model. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
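As a purely illustrative, hedged instance of the direct-injection matrix model described above: the toy below updates a small binary state linearly while message bits are injected directly, and reads the final state out as the MAC. The state size and matrices are arbitrary assumptions, not parameters of SSS, NLSv2 or SOBER-128.

```python
# Toy sketch of a direct-injection MAC accumulator over GF(2):
# state <- A @ state XOR m * B for each message bit m; MAC = final state.
# A, B and the state size are illustrative assumptions, not any real cipher's.
import numpy as np

N = 8                                             # toy state size in bits
rng = np.random.default_rng(1)
A = rng.integers(0, 2, (N, N), dtype=np.uint8)    # state-transition matrix
B = rng.integers(0, 2, N, dtype=np.uint8)         # message-injection vector

def accumulate(message_bits, initial_state):
    state = np.array(initial_state, dtype=np.uint8)
    for m in message_bits:
        # Linear state update plus direct injection of the message bit, mod 2.
        state = (A @ state + m * B) % 2
    return state                                  # the (unkeyed, toy) MAC

mac = accumulate([1, 0, 1, 1, 0], initial_state=[0] * N)
print(mac)
```

Because this toy omits any nonlinear filter, colliding messages (and hence forgeries) are easy to construct, which is precisely the weakness the thesis shows a nonlinear filter in the accumulation path prevents.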
Abstract:
The decentralisation reform in Indonesia has mandated the Central Government to transfer some functions and responsibilities to local governments, including the transfer of human resources, assets and budgets. Local governments became giant asset holders almost overnight, and most were ill prepared to handle this transformation. Assets were transferred without analysing local government need, ability or capability to manage them, and no local government was provided with an asset management framework. The aim of this research is therefore to develop a Public Asset Management Framework for provincial governments in Indonesia, especially for infrastructure and real property assets. This framework will enable provincial governments to develop integrated asset management procedures throughout the asset lifecycle. Achieving the research aim means answering the following three research questions: 1) How do provincial governments in Indonesia currently manage their public assets? 2) What factors influence the provincial governments in managing these public assets? 3) How can a Public Asset Management Framework be developed that is specific to the Indonesian provincial governments' situation? This research applied a case study approach: after a literature review, documents, interviews and observations were collated. Data were collected in June 2009 (preliminary data collection) and from January to July 2010 in the major eastern Indonesian provinces. Once the public asset management framework was developed, a focus group was used to verify it. Results are threefold. First, Indonesian provincial governments need to improve the effectiveness and efficiency of their current public asset management practice in order to improve public service quality. Second, five major concerns influence local government public asset management processes: asset identification and inventory systems; public asset holding; asset guidance and legal arrangements; asset management efficiency and effectiveness; and human resources and their organisational arrangements. Third, the framework was applied to assets already transferred to local governments, and so included a system of asset identification and a needs analysis to classify the importance of these assets to local governments and to their functions and responsibilities in delivering public services. Assets that support local government functions and responsibilities will then be managed using suitable asset lifecycle processes. Those categorised as surplus assets should be disposed of, and functions and responsibilities that do not need an asset solution should be performed directly by local governments. These processes must be measured using performance measurement indicators, and all stages should be guided and regulated by sufficient laws and regulations. Constant improvement of the quality and quantity of human resources plays an important role in successful public asset management processes. This research focuses on developing countries and contributes to knowledge of Public Asset Management Frameworks at the local government level, particularly in Indonesia. The framework provides local governments with a foundation to improve their effectiveness and efficiency in managing public assets, which could lead to improved public service quality.
The framework will also ensure that sound decisions are made throughout asset ownership and will support a better asset lifecycle process, leading to the selection of the most appropriate assets, an improved acquisition and delivery process, optimised asset performance, and an appropriate disposal program.
Abstract:
In a classification problem we typically face two challenging issues: the diverse characteristics of negative documents, and the fact that many negative documents are close to positive documents. It is therefore hard for a single classifier to clearly classify incoming documents into classes. This paper proposes a novel gradual problem-solving approach that creates a two-stage classifier. The first stage identifies reliable negatives (negative documents with weak positive characteristics) and concentrates on minimizing the number of false negative documents (it is recall-oriented). We use Rocchio, an existing recall-based classifier, for this stage. The second stage is a precision-oriented “fine tuning” that concentrates on minimizing the number of false positive documents by applying pattern (statistical phrase) mining techniques. In this stage a pattern-based scoring is followed by threshold setting (thresholding). Experiments show that our statistical-phrase-based two-stage classifier is promising.
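A minimal sketch of the two-stage idea under stated assumptions: scikit-learn's NearestCentroid stands in for the Rocchio-style, recall-oriented first stage, and a simple cosine-score threshold stands in for the precision-oriented second stage (the paper's pattern-based scoring is not reproduced; documents, labels and the threshold are illustrative):

```python
# Sketch of a two-stage classifier: a Rocchio-like recall-oriented first stage
# (NearestCentroid over TF-IDF vectors) filters out reliable negatives, then a
# precision-oriented second stage thresholds a score over the survivors.
# The scoring function is a placeholder for the paper's pattern-based scoring.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestCentroid

train_docs = ["good positive example", "another relevant text",
              "irrelevant negative text", "off topic document"]
train_labels = np.array([1, 1, 0, 0])

vec = TfidfVectorizer()
X = vec.fit_transform(train_docs).toarray()
stage1 = NearestCentroid().fit(X, train_labels)     # recall-oriented filter
pos_centroid = X[train_labels == 1].mean(axis=0)
THRESHOLD = 0.3                                     # tuned on held-out data

def stage2_score(x):
    # Placeholder precision-oriented score: cosine similarity to the positive
    # centroid (the paper instead uses statistical-phrase pattern mining).
    return x @ pos_centroid / (np.linalg.norm(x) * np.linalg.norm(pos_centroid) + 1e-12)

def classify(doc):
    x = vec.transform([doc]).toarray()[0]
    if stage1.predict([x])[0] == 0:                 # stage 1: drop reliable negatives
        return 0
    return int(stage2_score(x) >= THRESHOLD)        # stage 2: fine tuning

print(classify("a relevant positive text"))
```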
Abstract:
A Neutral cluster and Air Ion Spectrometer (NAIS) was used to monitor the concentration of airborne ions on 258 full days between November 2011 and December 2012 in Brisbane, Australia. The air was sampled from outside a window on the sixth floor of a building close to the city centre, approximately 100 m away from a busy freeway. The NAIS detects all ions and charged particles smaller than 42 nm. It was operated in a 4 min measurement cycle, with ion data recorded at 10 s intervals over 2 min during each cycle. The data were analysed to derive the diurnal variation of small, large and total ion concentrations in the environment. We adapt the definition of Horrak et al. (2000) and classify small ions as molecular clusters smaller than 1.6 nm and large ions as charged particles larger than this size...
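A brief sketch of the size-based classification and diurnal aggregation under stated assumptions (the 10 s record layout, with a DatetimeIndex and 'diameter_nm'/'conc' columns, is hypothetical):

```python
# Sketch: classify ion records into small (< 1.6 nm) and large (>= 1.6 nm) ions,
# following the cutoff adapted from Horrak et al. (2000) quoted above, then
# derive the diurnal cycle. The DataFrame layout is an illustrative assumption.
import pandas as pd

def diurnal_ion_cycle(df: pd.DataFrame) -> pd.DataFrame:
    df = df.assign(ion_class=df["diameter_nm"].lt(1.6)
                     .map({True: "small", False: "large"}))
    # Mean concentration for each hour of day, per ion class.
    return (df.groupby([df.index.hour.rename("hour"), "ion_class"])["conc"]
              .mean()
              .unstack("ion_class"))
```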
Abstract:
Mathematical models of mosquito-borne pathogen transmission originated in the early twentieth century to provide insights into how to most effectively combat malaria. The foundations of the Ross–Macdonald theory were established by 1970. Since then, there has been a growing interest in reducing the public health burden of mosquito-borne pathogens and an expanding use of models to guide their control. To assess how theory has changed to confront evolving public health challenges, we compiled a bibliography of 325 publications from 1970 through 2010 that included at least one mathematical model of mosquito-borne pathogen transmission and then used a 79-part questionnaire to classify each of 388 associated models according to its biological assumptions. As a composite measure to interpret the multidimensional results of our survey, we assigned a numerical value to each model that measured its similarity to 15 core assumptions of the Ross–Macdonald model. Although the analysis illustrated a growing acknowledgement of geographical, ecological and epidemiological complexities in modelling transmission, most models during the past 40 years closely resemble the Ross–Macdonald model. Modern theory would benefit from an expansion around the concepts of heterogeneous mosquito biting, poorly mixed mosquito-host encounters, spatial heterogeneity and temporal variation in the transmission process.
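For reference, a sketch of the canonical Ross–Macdonald basic reproduction number in one common notation (the precise symbols vary across the surveyed literature and are an assumption here):

\[ R_{0} = \frac{m\,a^{2}\,b\,c\,e^{-gn}}{g\,r}, \]

where \(m\) is the ratio of mosquitoes to humans, \(a\) the human-biting rate, \(b\) and \(c\) the mosquito-to-human and human-to-mosquito transmission efficiencies, \(g\) the mosquito death rate, \(n\) the extrinsic incubation period and \(r\) the human recovery rate. The 15 core assumptions used in the survey characterise models of essentially this form.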
Abstract:
The International Classification of Diseases, Version 10, Australian Modification (ICD-10-AM) is commonly used to classify diseases in hospital patients. ICD-10-AM defines malnutrition as “BMI < 18.5 kg/m2 or unintentional weight loss of ≥ 5% with evidence of suboptimal intake resulting in subcutaneous fat loss and/or muscle wasting”. The Australasian Nutrition Care Day Survey (ANCDS) is the most comprehensive survey to evaluate malnutrition prevalence in acute care patients from Australian and New Zealand hospitals. This study determined whether malnourished participants were assigned malnutrition-related codes as per ICD-10-AM. The ANCDS recruited acute care patients from 56 hospitals. Hospital-based dietitians evaluated participants' nutritional status using BMI and Subjective Global Assessment (SGA). In keeping with the ICD-10-AM definition, malnutrition was defined as BMI < 18.5 kg/m2, SGA-B (moderately malnourished) or SGA-C (severely malnourished). After three months, in this prospective cohort study, hospitals' health information/medical records departments provided coding results for the malnourished participants. Although malnutrition was prevalent in 32% (n = 993) of the cohort (N = 3122), a significantly smaller number were coded for malnutrition (n = 162, 16%, p < 0.001). In 21 hospitals, none of the malnourished participants were coded. This is the largest study to provide a snapshot of malnutrition coding in Australian and New Zealand hospitals. The findings highlight gaps in malnutrition documentation and/or subsequent coding, which could potentially result in a significant loss of casemix-related revenue for hospitals. Dietitians must lead the way in developing structured processes for malnutrition identification, documentation and coding.
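A minimal sketch of the study's malnutrition criterion as a decision rule; the thresholds come from the abstract, while the function name and interface are illustrative:

```python
# Sketch of the ANCDS/ICD-10-AM-aligned malnutrition criterion described above:
# malnourished if BMI < 18.5 kg/m2, or SGA rating B (moderate) or C (severe).
def is_malnourished(bmi_kg_m2: float, sga_rating: str) -> bool:
    return bmi_kg_m2 < 18.5 or sga_rating.upper() in ("B", "C")

assert is_malnourished(17.9, "A")       # underweight by BMI alone
assert is_malnourished(22.0, "C")       # severely malnourished by SGA
assert not is_malnourished(22.0, "A")   # well nourished
```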
Abstract:
PURPOSE Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients may contribute to and help manage, especially in primary care. METHODS Eleven nominal group interviews of patients and primary health care professionals were held in Auckland, New Zealand, during late 2007. Group members reported and helped to classify types of potential error by patients. We synthesized the ideas that emerged from the nominal groups into a taxonomy of patient error. RESULTS Our taxonomy is a 3-level system encompassing 70 potential types of patient error. The first level classifies 8 categories of error into 2 main groups: action errors and mental errors. The action errors, which result in part or whole from patient behavior, are attendance errors, assertion errors, and adherence errors. The mental errors, which are errors in patient thought processes, comprise memory errors, mindfulness errors, misjudgments, and, more distally, knowledge deficits and attitudes not conducive to health. CONCLUSION The taxonomy is an early attempt to understand and recognize how patients may err and what clinicians should aim to influence so they can help patients act safely. This approach begins to balance perspectives on error but requires further research. There is a need to move beyond seeing patient, clinician, and system errors as separate categories of error. An important next step may be research that attempts to understand how patients, clinicians, and systems interact to cocreate and reduce errors.
Abstract:
This dissertation seeks to define and classify potential forms of Nonlinear structure and to explore the possibilities they afford for the creation of new musical works. It provides the first comprehensive framework for the discussion of Nonlinear structure in musical works and a detailed overview of the rise of Nonlinearity in music during the 20th century. Nonlinear events are shown to emerge through significant parametrical discontinuity at the boundaries between regions of relatively strong internal cohesion. The dissertation situates Nonlinear structures in relation to linear structures and unstructured sonic phenomena, and provides a means of evaluating Nonlinearity in a musical structure by considering the degree to which the structure is integrated, contingent, compressible and determinate as a whole. It is proposed that Nonlinearity can be classified as a three-dimensional space described by three continua: the temporal continuum, encompassing sequential and multilinear forms of organization; the narrative continuum, encompassing processual, game-structure and developmental narrative forms; and the referential continuum, encompassing stylistic allusion, adaptation and quotation. The use of spectrograms of recorded musical works is proposed as a means of evaluating Nonlinearity in a musical work through the visual representation of parametrical divergence in pitch, duration, timbre and dynamic over time. Spectral and structural analysis of repertoire works is undertaken as part of an exploration of musical Nonlinearity and the compositional and performative features that characterize it. The contribution of cultural, ideological, scientific and technological shifts to the emergence of Nonlinearity in music is discussed, and a range of compositional factors that contributed to its emergence is examined. The evolution of notational innovations from the mobile score to the screen score is traced, and a novel framework for the discussion of these forms of musical transmission is proposed. A computer-coordinated performative model is discussed, in which a computer synchronises the screening of notational information, provides temporal coordination of the performers through click-tracks or similar methods, and synchronises the audio processing and synthesized elements of the work. It is proposed that such a model constitutes a highly effective means of realizing complex Nonlinear structures. A creative folio comprising 29 original works that explore Nonlinearity is presented, discussed and categorised using the proposed classifications. Spectrograms of these works are employed where appropriate to illustrate the instantiation of parametrically divergent substructures and examples of structural openness through multiple versioning.
Abstract:
In most intent recognition studies, annotations of query intent are created post hoc by external assessors who are not the searchers themselves. It is important for the field to get a better understanding of the quality of this process as an approximation for determining the searcher's actual intent. Some studies have investigated the reliability of the query intent annotation process by measuring the interassessor agreement. However, these studies did not measure the validity of the judgments, that is, to what extent the annotations match the searcher's actual intent. In this study, we asked both the searchers themselves and external assessors to classify queries using the same intent classification scheme. We show that of the seven dimensions in our intent classification scheme, four can reliably be used for query annotation. Of these four, only the annotations on the topic and spatial sensitivity dimension are valid when compared with the searcher's annotations. The difference between the interassessor agreement and the assessor-searcher agreement was significant on all dimensions, showing that the agreement between external assessors is not a good estimator of the validity of the intent classifications. Therefore, we encourage the research community to consider using query intent classifications by the searchers themselves as test data.
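A hedged sketch of how such agreement comparisons are commonly quantified, here with Cohen's kappa from scikit-learn (the abstract does not name the paper's exact agreement statistic, and the labels below are placeholders):

```python
# Sketch: compare interassessor agreement with assessor-searcher agreement on
# one intent dimension (e.g. spatial sensitivity: yes/no) using Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

assessor_a = ["yes", "no", "yes", "yes", "no", "no"]   # external assessor 1
assessor_b = ["yes", "no", "no",  "yes", "no", "no"]   # external assessor 2
searcher   = ["yes", "no", "no",  "no",  "no", "yes"]  # the searchers themselves

print("interassessor kappa (reliability):", cohen_kappa_score(assessor_a, assessor_b))
print("assessor-searcher kappa (validity):", cohen_kappa_score(assessor_a, searcher))
```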
Abstract:
There is increasing momentum in cancer care to implement a two-stage assessment process that accurately determines the ability of older patients to cope with, and benefit from, chemotherapy. The two-step approach aims to ensure that patients who are clearly fit for chemotherapy can be accurately identified and referred for treatment without undergoing a time- and resource-intensive comprehensive geriatric assessment (CGA). Ideally, this process removes the uncertainty of how to classify, and then appropriately treat, the older cancer patient. After trialling a two-stage screen and CGA process in the Division of Cancer Services at Princess Alexandra Hospital (PAH) in 2011-2012, we implemented a model of oncogeriatric care based on our findings. In this paper, we explore the methodological and practical aspects of implementing the PAH model and outline further work needed to refine the process in our treatment context.
Abstract:
Near-infrared spectroscopy (NIRS) calibrations were developed for the discrimination of Chinese hawthorn (Crataegus pinnatifida Bge. var. major) fruit from three geographical regions, as well as for the estimation of total sugar, total acid, total phenolic content and total antioxidant activity. Principal component analysis (PCA) was used to discriminate the fruit on the basis of geographical origin. Three pattern recognition methods (linear discriminant analysis, partial least-squares discriminant analysis and back-propagation artificial neural networks) were applied to classify and compare the samples. Furthermore, three multivariate calibration models based on the first-derivative NIR spectra (partial least-squares regression, back-propagation artificial neural networks and least-squares support vector machines) were constructed for quantitative analysis of the four analytes (total sugar, total acid, total phenolic content and total antioxidant activity) and validated with prediction data sets.
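A minimal sketch of this chemometric workflow with scikit-learn, under stated assumptions: synthetic spectra stand in for the NIR measurements, a crude gradient stands in for the first-derivative preprocessing, and PLS regression is shown for a single analyte:

```python
# Sketch of the NIRS workflow: PCA for geographic discrimination and PLS
# regression for a quantitative calibration (here, one analyte such as total
# sugar). Spectra, reference values and component counts are all illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 500))          # 90 samples x 500 wavelengths (assumed)
X = np.gradient(X, axis=1)              # crude first-derivative preprocessing
y = rng.normal(10.0, 2.0, size=90)      # reference values, e.g. total sugar

# Unsupervised look at geographic grouping via the first two principal components.
scores = PCA(n_components=2).fit_transform(X)

# Supervised calibration with PLS, validated on a held-out prediction set.
X_cal, X_pred, y_cal, y_ref = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8).fit(X_cal, y_cal)
print("R^2 on prediction set:", pls.score(X_pred, y_ref))
```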