Abstract:
Norms regulate the behaviour of their subjects and define what is legal and what is illegal. Norms typically describe the conditions under which they are applicable and the normative effects that follow as a result of their application. Process models, on the other hand, specify how a business operation or service is to be carried out to achieve a desired outcome. Norms can have a significant impact on how business operations are conducted, and they can apply to the whole or part of a business process. For example, they may impose conditions on different aspects of a process (e.g., perform tasks in a specific sequence (control-flow), at a specific time or within a certain time frame (temporal aspect), by specific people (resources)). We propose a framework that provides the formal semantics of the normative requirements for determining whether a business process complies with a normative document (where a normative document can be understood in a very broad sense, ranging from internal policies to best practice policies, to statutory acts). We also present a classification of normative requirements based on the notion of different types of obligations and the effects of violating these obligations.
Abstract:
Next Generation Sequencing (NGS) has revolutionised molecular biology, allowing routine clinical sequencing. NGS data consists of short sequence reads, given context through downstream assembly and annotation, a process requiring reads consistent with the assumed species or species group. The common bacterium Staphylococcus aureus may cause severe and life-threatening infections in humans, with some strains exhibiting antibiotic resistance. Here we apply an SVM classifier to the important problem of distinguishing S. aureus sequencing projects from other pathogens, including closely related Staphylococci. Using a sequence k-mer representation, we achieve precision and recall above 95%, implicating features with important functional associations.
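The k-mer representation mentioned above can be sketched as follows; the toy read and the choice k=2 are illustrative only, and in the actual pipeline such fixed-length count vectors over whole sequencing projects would be fed to an SVM.

```python
from collections import Counter
from itertools import product

def kmer_features(seq, k=3):
    """Count occurrences of each length-k substring (k-mer) in a DNA read.
    Returns a fixed-length vector ordered over all 4**k possible k-mers,
    suitable as input to a linear classifier such as an SVM."""
    alphabet = "ACGT"
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts.get("".join(p), 0) for p in product(alphabet, repeat=k)]

# Toy read (hypothetical); real reads would come from an NGS run.
vec = kmer_features("ACGTACGT", k=2)
print(len(vec), sum(vec))  # 16 possible 2-mers; 7 overlapping 2-mers in an 8-base read
```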
Abstract:
Time plays an important role in norms. In this paper we start from our previously proposed classification of obligations and point out some shortcomings of the Event Calculus (EC) in representing obligations. We propose an extension of EC that avoids these shortcomings and show how to use it to model the various types of obligations.
Abstract:
Cardiomyopathies represent a group of diseases of the myocardium of the heart and include diseases both primarily of the cardiac muscle and systemic diseases leading to adverse effects on the heart muscle size, shape, and function. Traditionally, cardiomyopathies were defined according to phenotypical appearance. Now, as our understanding of the pathophysiology of the different entities classified under each of the different phenotypes improves and our knowledge of the molecular and genetic basis for these entities progresses, the traditional classifications seem oversimplified and do not reflect current understanding of this myriad of diseases and disease processes. Although our knowledge of the exact basis of many of the disease processes of cardiomyopathies is still in its infancy, it is important to have a classification system that has the ability to incorporate the coming tide of molecular and genetic information. This paper discusses how the traditional classification of cardiomyopathies based on morphology has evolved due to rapid advances in our understanding of the genetic and molecular basis for many of these clinical entities.
Abstract:
Highly sensitive infrared cameras can produce high-resolution diagnostic images of the temperature and vascular changes of breasts. Wavelet transform based features are suitable for extracting the texture difference information of these images due to their scale-space decomposition. The objective of this study is to investigate the potential of extracted features in differentiating between breast lesions by comparing the two corresponding pectoral regions of two breast thermograms. The pectoral regions of breasts are important because nearly 50% of all breast cancer is located in this region. In this study, the pectoral region of the left breast is selected. Then the corresponding pectoral region of the right breast is identified. Texture features based on first- and second-order statistics are extracted from wavelet decomposed images of the pectoral regions of two breast thermograms. Principal component analysis is used to reduce dimension and an Adaboost classifier to evaluate classification performance. A number of different wavelet features are compared and it is shown that complex non-separable 2D discrete wavelet transform features perform better than their real separable counterparts.
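The wavelet texture-feature step can be sketched with a single-level separable Haar transform followed by first-order statistics per subband. This is a simplified illustrative stand-in (real separable wavelets, toy 4x4 patch), not the complex non-separable transform the study ultimately favours.

```python
import numpy as np

def haar_level1(img):
    """One level of a separable 2-D Haar transform: returns the four
    subbands (approximation LL and details LH, HL, HH)."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row-wise averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row-wise differences
    LL = (a[0::2, :] + a[1::2, :]) / 2.0
    LH = (a[0::2, :] - a[1::2, :]) / 2.0
    HL = (d[0::2, :] + d[1::2, :]) / 2.0
    HH = (d[0::2, :] - d[1::2, :]) / 2.0
    return LL, LH, HL, HH

def texture_features(img):
    """Mean and variance of each subband: a simplified stand-in for the
    wavelet texture features compared in the study."""
    return [s for band in haar_level1(img) for s in (band.mean(), band.var())]

img = np.arange(16, dtype=float).reshape(4, 4)  # toy "thermogram" patch
print(len(texture_features(img)))  # 8 features: 2 statistics x 4 subbands
```

In the study such feature vectors from the two pectoral regions would then pass through PCA before the AdaBoost classifier.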
Abstract:
Numbers, rates and proportions of those remanded in custody have increased significantly in recent decades across a range of jurisdictions. In Australia they have doubled since the early 1980s, such that close to one in four prisoners is currently unconvicted. Taking NSW as a case study and drawing on the recent New South Wales Law Reform Commission Report on Bail (2012), this article will identify the key drivers of this increase in NSW, predominantly a form of legislative hyperactivity involving constant changes to the Bail Act 1978 (NSW), changes which remove or restrict the presumption in favour of bail for a wide range of offences. The article will then examine some of the conceptual, cultural and practice shifts underlying the increase. These include: a shift away from a conception of bail as a procedural issue predominantly concerned with securing the attendance of the accused at trial and the integrity of the trial, to the use of bail for crime prevention purposes; the diminishing force of the presumption of innocence; the framing of a false opposition between an individual interest in liberty and a public interest in safety; a shift from determination of the individual case by reference to its own particular circumstances to determination by its classification within pre‐set legislative categories of offence types and previous convictions; a double jeopardy effect arising in relation to people with previous convictions for which they have already been punished; and an unacknowledged preventive detention effect arising from the increased emphasis on risk. Many of these conceptual shifts are apparent in the explosion in bail conditions and the KPI‐driven policing of bail conditions and consequent rise in revocations, especially in relation to juveniles. The paper will conclude with a note on the NSW Government’s response to the NSW LRC Report in the form of a Bail Bill (2013) and brief speculation as to its likely effects.
Abstract:
This research has been conducted to ascertain the validity of existing videogame reward categorisations. An overview of current videogame reward types is provided and the need for further research in the area of videogame reward systems is identified. Possible limitations of the primary existing reward taxonomy are identified. We propose a definition of videogame rewards and present initial findings on a partially validated videogame reward taxonomy. Future games and gamified applications stand to benefit from a categorisation of videogame rewards, as videogame rewards play a pivotal role in player motivation.
Abstract:
Internationally, transit oriented development (TOD) is characterised by moderate to high density development with diverse land use patterns and well connected street networks centred around high frequency transit stops (bus and rail). Although different TOD typologies have been developed in different contexts, they are based on subjective evaluation criteria derived from the context in which they are built and typically lack a validation measure. Arguably there exist sets of TOD characteristics that perform better in certain contexts, and being able to optimise TOD effectiveness would facilitate planning and supporting policy development. This research utilises data from census collection districts (CCDs) in Brisbane with different sets of TOD attributes measured across six objectively quantified built environmental indicators: net employment density, net residential density, land use diversity, intersection density, cul-de-sac density, and public transport accessibility. Using these measures, a Two Step Cluster Analysis was conducted to identify natural groupings of the CCDs with similar profiles, resulting in four unique TOD clusters: (a) residential TODs, (b) activity centre TODs, (c) potential TODs, and (d) TOD non-suitability. The typologies are validated by estimating a multinomial logistic regression model in order to understand the mode choice behaviour of 10,013 individuals living in these areas. Results indicate that, in comparison to people living in areas classified as residential TODs, people who reside in non-TOD clusters were significantly less likely to use public transport (PT) (1.4 times) and active transport (4 times) compared to the car. People living in areas classified as potential TODs were 1.3 times less likely to use PT, and 2.5 times less likely to use active transport, compared to using the car.
Little difference in mode choice behaviour was evident between people living in areas classified as residential TODs and activity centre TODs. The results suggest that: (a) two types of TODs may be suitable for classification and affect mode choice in Brisbane; (b) TOD typologies should be developed based on their TOD profile and performance metrics; and (c) both bus stop and train station based TODs are suitable for development in Brisbane.
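The "x times less likely" figures above are odds ratios from the multinomial logit, where each coefficient is a log-odds relative to the car-use base category. A minimal sketch of that conversion (the coefficient value is hypothetical, chosen to reproduce the 1.4 figure):

```python
import math

def odds_ratio(coef):
    """Convert a multinomial-logit coefficient (log-odds relative to the
    base category) to an odds ratio; a negative coefficient means the
    alternative is that many times *less* likely than the base."""
    return math.exp(coef)

# Hypothetical coefficient for non-TOD residents choosing PT over car.
beta_pt = -math.log(1.4)                  # corresponds to "1.4 times less likely"
print(round(1 / odds_ratio(beta_pt), 1))  # 1.4
```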
Abstract:
Microvessel density (MVD) is a widely used surrogate measure of angiogenesis in pathological specimens and tumour models. Measurement of MVD can be achieved by several methods. Automation of counting methods aims to increase the speed, reliability and reproducibility of these techniques. The image analysis system described here enables MVD measurement to be carried out with minimal expense in any reasonably equipped pathology department or laboratory. It is demonstrated that the system translates easily between tumour types which are suitably stained with minimal calibration. The aim of this paper is to offer this technique to a wider field of researchers in angiogenesis.
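Automated vessel counting of the kind described can be sketched as connected-component counting on a thresholded stain mask. This is a minimal stand-in for the image analysis system, not its actual algorithm; 4-connectivity and the toy mask are assumptions.

```python
import numpy as np

def count_vessels(mask):
    """Count connected components (candidate microvessels) in a binary
    stain mask via iterative flood fill, using 4-connectivity."""
    mask = mask.copy().astype(bool)
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:  # clear this whole component
                    y, x = stack.pop()
                    if 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1] and mask[y, x]:
                        mask[y, x] = False
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

# Toy thresholded immunostain image (hypothetical).
stained = np.array([[1, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 1, 0, 1]])
print(count_vessels(stained))  # 3 distinct vessel candidates
```

MVD would then be this count divided by the field area, averaged over hotspot fields.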
Abstract:
Background: The expansion of cell colonies is driven by a delicate balance of several mechanisms including cell motility, cell-to-cell adhesion and cell proliferation. New approaches that can be used to independently identify and quantify the role of each mechanism will help us understand how each mechanism contributes to the expansion process. Standard mathematical modelling approaches to describe such cell colony expansion typically neglect cell-to-cell adhesion, despite the fact that cell-to-cell adhesion is thought to play an important role. Results: We use a combined experimental and mathematical modelling approach to determine the cell diffusivity, D, cell-to-cell adhesion strength, q, and cell proliferation rate, λ, in an expanding colony of MM127 melanoma cells. Using a circular barrier assay, we extract several types of experimental data and use a mathematical model to independently estimate D, q and λ. In our first set of experiments, we suppress cell proliferation and analyse three different types of data to estimate D and q. We find that standard types of data, such as the area enclosed by the leading edge of the expanding colony and more detailed cell density profiles throughout the expanding colony, do not provide sufficient information to uniquely identify D and q. We find that additional data relating to the degree of cell-to-cell clustering is required to provide independent estimates of q, and in turn D. In our second set of experiments, where proliferation is not suppressed, we use data describing temporal changes in cell density to determine the cell proliferation rate. In summary, we find that our experiments are best described using the range D = 161-243 μm2 hour-1, q = 0.3-0.5 (low to moderate strength) and λ = 0.0305-0.0398 hour-1, and with these parameters we can accurately predict the temporal variations in the spatial extent and cell density profile throughout the expanding melanoma cell colony.
Conclusions: Our systematic approach to identify the cell diffusivity, cell-to-cell adhesion strength and cell proliferation rate highlights the importance of integrating multiple types of data to accurately quantify the factors influencing the spatial expansion of melanoma cell colonies.
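The proliferation-rate estimate can be illustrated with a logistic growth curve for scaled cell density, a common single-population model for temporal density data. The carrying capacity K, the initial density, and the specific rate within the reported 0.0305-0.0398 per hour range are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Logistic growth of scaled cell density C(t): dC/dt = lam * C * (1 - C/K),
# with the closed-form solution evaluated on a 96-hour grid.
lam, K = 0.035, 1.0                      # lam chosen inside the reported range
t = np.linspace(0.0, 96.0, 97)           # hours
C0 = 0.05                                # initial (scaled) density, assumed
C = K * C0 * np.exp(lam * t) / (K + C0 * (np.exp(lam * t) - 1.0))
print(float(C[-1]))                      # density after 96 h, still below K
```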
Abstract:
Development of design guides to estimate the difference in speech interference level due to road traffic noise between a reference position and balcony position or façade position is explored. A previously established and validated theoretical model incorporating direct, specular and diffuse reflection paths is used to create a database of results across a large number of scenarios. Nine balcony types with variable acoustic treatments are assessed to provide acoustic design guidance on optimised selection of balcony acoustic treatments based on location and street type. In total, the results database contains 9720 scenarios on which multivariate linear regression is conducted in order to derive an appropriate design guide equation. The best fit regression derived is a multivariable linear equation including modified exponential equations on each of nine deciding variables: (1) diffraction path difference, (2) ratio of total specular energy to direct energy, (3) distance loss between reference position and receiver position, (4) distance from source to balcony façade, (5) height of balcony floor above street, (6) balcony depth, (7) height of opposite buildings, (8) diffusion coefficient of buildings, and (9) balcony average absorption. Overall, the regression correlation coefficient, R2, is 0.89 with 95% confidence standard error of ±3.4 dB.
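The fitting step, multivariate linear regression with an R2 check, can be sketched with ordinary least squares on synthetic data. The three predictors, coefficients and noise level below are placeholders for illustration, not the paper's nine deciding variables or its modified exponential terms.

```python
import numpy as np

# Synthetic stand-ins for predictor columns (e.g. path difference,
# energy ratio, balcony depth) and a noisy level-difference response.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_beta = np.array([1.5, -0.8, 0.3])
y = X @ true_beta + 2.0 + rng.normal(scale=0.1, size=200)

A = np.column_stack([np.ones(200), X])         # prepend intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares
r2 = 1 - np.sum((y - A @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
print(beta.round(2), round(float(r2), 3))      # recovers the coefficients
```

The paper's ±3.4 dB figure corresponds to the residual standard error of such a fit at 95% confidence.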
Abstract:
Textual document sets have become an important and rapidly growing information source on the web. Text classification is one of the crucial technologies for information organisation and management, and it has attracted wide attention from researchers in different fields. This paper first introduces the main feature selection methods, implementation algorithms and applications of text classification. However, because the knowledge extracted by current data-mining techniques for text classification contains much noise, considerable uncertainty arises in the text classification process, from both knowledge extraction and knowledge usage; more innovative techniques and methods are therefore needed to improve text classification performance. Further improving the knowledge extraction process and the effective utilization of the extracted knowledge remains a critical and challenging step. A Rough Set decision-making approach is proposed that applies Rough Set decision techniques to classify more precisely those textual documents that are difficult to separate with classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate Rough Set concepts and a Rough Set based decision-making approach for building a more reliable and effective text classification framework with higher precision; to set up an innovative evaluation metric, named CEI, which is effective for performance assessment in similar research; and to propose a promising research direction for addressing the challenging problems in text classification, text mining and other related fields.
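The Rough Set decision step rests on lower and upper approximations of a target class under an indiscernibility partition; documents in the boundary (upper minus lower) are exactly those a classic classifier cannot separate. A minimal sketch on a toy document set (the partition blocks and labels are hypothetical):

```python
def rough_approximations(blocks, target):
    """Rough-set lower and upper approximations of `target`, given the
    partition `blocks` induced by indiscernible attribute values.
    Lower: blocks certainly in the target; upper: blocks possibly in it."""
    lower, upper = set(), set()
    for b in blocks:
        if b <= target:      # block entirely inside the target class
            lower |= b
        if b & target:       # block overlaps the target class
            upper |= b
    return lower, upper

# Toy partition of documents 1-6 by shared feature values (hypothetical).
blocks = [{1, 2}, {3, 4}, {5, 6}]
target = {1, 2, 3}                # documents labelled "relevant"
lo, up = rough_approximations(blocks, target)
print(sorted(lo), sorted(up))     # [1, 2] certain; [1, 2, 3, 4] possible
```

Documents 3 and 4 fall in the boundary region, where the proposed decision techniques would apply.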
Abstract:
The detection and correction of defects remains among the most time consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data is often extremely noisy, with enormous imbalances in the size of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved or at worst comparable performance to earlier approaches for standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort to suit different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) Classifiers, and with our own comprehensive evaluation of these methods.
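The rank sum representation can be sketched as replacing each raw software-metric column with its within-column ranks and summing across metrics, which damps the extreme skew typical of metrics data. This is a simplified reading for illustration; the paper's exact tie handling and scaling are not reproduced here.

```python
import numpy as np

def rank_sum_features(X):
    """Replace each metric column with 0-based within-column ranks and
    sum across columns, giving one robust score per module (row)."""
    ranks = X.argsort(axis=0).argsort(axis=0)  # column-wise ranks
    return ranks.sum(axis=1)

# Hypothetical metric matrix: rows are modules, columns are metrics
# such as lines of code and cyclomatic complexity.
X = np.array([[10.0, 2.0],
              [500.0, 25.0],
              [40.0, 7.0]])
print(rank_sum_features(X))  # largest module gets the highest score
```

A classifier threshold on such scores can then be slid to trade precision against recall, matching inspection effort to the testing environment.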