20 results for Hierarchical partitioning analysis
in Aston University Research Archive
Abstract:
The thesis describes an investigation into methods for the specification, design and implementation of computer control systems for flexible manufacturing machines comprising multiple, independent, electromechanically driven mechanisms. An analysis is made of the elements of conventional mechanically-coupled machines in order that the operational functions of these elements may be identified. This analysis is used to define the scope of requirements necessary to specify the format, function and operation of a flexible, independently driven mechanism machine. A discussion of how this type of machine can accommodate modern manufacturing needs for high speed and flexibility is presented. A sequential method of capturing requirements for such machines is detailed, based on a hierarchical partitioning of machine requirements from product to independent drive mechanism (IDM). A classification of mechanisms using notations, including data-flow diagrams and Petri-nets, is described which supports capture and allows validation of requirements. A generic design for a modular IDM machine controller is derived, based upon the hierarchy of control identified in these machines. A two-mechanism experimental machine is detailed which is used to demonstrate the application of the specification, design and implementation techniques. A computer controller prototype and a fully flexible implementation for the IDM machine, based on Petri-net models described using the concurrent programming language Occam, are detailed. The ability of this modular computer controller to support flexible, safe and fault-tolerant operation of the two intermittent-motion, discrete-synchronisation independent drive mechanisms is presented. The application of the machine development methodology to industrial projects is established.
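The controller design above rests on Petri-net models of mechanism behaviour (implemented in the thesis in Occam). Purely as an illustration of the underlying idea, the Python sketch below shows the basic token-firing rule of a Petri net for a hypothetical two-place, one-transition fragment; the place names and the net itself are invented, not the thesis's actual controller.

```python
# Minimal Petri-net sketch: a transition fires when every input place holds
# a token, consuming those tokens and producing tokens in its output places.
# The places and transition below are hypothetical examples.
marking = {"drive_idle": 1, "sync_granted": 1, "drive_running": 0}

# transition = (input places consumed, output places produced)
start_cycle = (["drive_idle", "sync_granted"], ["drive_running"])

def enabled(transition, marking):
    inputs, _ = transition
    return all(marking[p] >= 1 for p in inputs)

def fire(transition, marking):
    inputs, outputs = transition
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

if enabled(start_cycle, marking):
    fire(start_cycle, marking)
print(marking)  # {'drive_idle': 0, 'sync_granted': 0, 'drive_running': 1}
```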
Abstract:
The concept of a task is fundamental to the discipline of ergonomics. Approaches to the analysis of tasks began in the early 1900s. These approaches have evolved and developed to the present day, when there is a vast array of methods available. Some of these methods are specific to particular contexts or applications; others are more general. However, whilst many of these analyses allow tasks to be examined in detail, they do not act as tools to aid the design process or the designer. The present thesis examines the use of task analysis in a process control context, and in particular the use of task analysis to specify operator information and display requirements in such systems. The first part of the thesis examines the theoretical aspects of task analysis and presents a review of the methods, issues and concepts relating to task analysis. A review of over 80 methods of task analysis was carried out to form a basis for the development of a task analysis method to specify operator information requirements in industrial process control contexts. Of the methods reviewed, Hierarchical Task Analysis was selected to provide such a basis and developed to meet the criteria outlined for such a method of task analysis. The second section outlines the practical application and evolution of the developed task analysis method. Four case studies were used to examine the method in an empirical context. The case studies represent a range of plant contexts and types: complex and simple, batch and continuous, and high-risk and low-risk processes. The theoretical and empirical issues are drawn together and a method developed to provide a task analysis technique to specify operator information requirements and to provide the first stages of a tool to aid the design of VDU displays for process control.
Abstract:
Aims - To develop a method that prospectively assesses adherence rates in paediatric patients with acute lymphoblastic leukaemia (ALL) who are receiving the oral thiopurine treatment 6-mercaptopurine (6-MP). Methods - A total of 19 paediatric patients with ALL who were receiving 6-MP therapy were enrolled in this study. A new objective tool (hierarchical cluster analysis of drug metabolite concentrations) was explored as a novel approach to assess non-adherence to oral thiopurines, in combination with other objective measures (the pattern of variability in 6-thioguanine nucleotide erythrocyte concentrations and 6-thiouric acid plasma levels) and the subjective measure of self-reported adherence questionnaire. Results - Parents of five ALL patients (26.3%) reported at least one aspect of non-adherence, with the majority (80%) citing “carelessness at times about taking medication” as the primary reason for non-adherence followed by “forgetting to take the medication” (60%). Of these patients, three (15.8%) were considered non-adherent to medication according to the self-reported adherence questionnaire (scored ≥ 2). Four ALL patients (21.1%) had metabolite profiles indicative of non-adherence (persistently low levels of metabolites and/or metabolite levels clustered variably with time). Out of these four patients, two (50%) admitted non-adherence to therapy. Overall, when both methods were combined, five patients (26.3%) were considered non-adherent to medication, with higher age representing a risk factor for non-adherence (P < 0.05). Conclusions - The present study explored various ways to assess adherence rates to thiopurine medication in ALL patients and highlighted the importance of combining both objective and subjective measures as a better way to assess adherence to oral thiopurines.
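As an illustration of the objective measure described above, the sketch below clusters serial metabolite measurements hierarchically and flags a patient whose profile is persistently low or splits into well-separated clusters over time. The synthetic visit data, the low-6-TGN cutoff and the two-cluster cut are assumptions for demonstration, not the study's actual criteria.

```python
# Hedged sketch: hierarchical clustering of serial thiopurine metabolite
# measurements as a non-adherence signal. All numbers are illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# Hypothetical per-visit measurements for one patient:
# columns = [6-TGN erythrocyte level, 6-TU plasma level] (arbitrary units).
visits = np.array([
    [240.0, 12.0],
    [255.0, 11.5],
    [ 60.0,  2.0],   # suspiciously low visit
    [248.0, 12.4],
    [ 55.0,  1.8],   # another low visit
])

# Standardise each metabolite so clustering is not dominated by scale.
profile = zscore(visits, axis=0)

# Agglomerative (hierarchical) clustering of visits by metabolite profile.
tree = linkage(profile, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")

# Flag if visits split into distinct clusters (levels "clustered variably
# with time") or sit persistently below an assumed reference cutoff.
low_fraction = np.mean(visits[:, 0] < 100.0)   # assumed low-6-TGN cutoff
variable = len(set(labels)) > 1
print("possible non-adherence:", variable or low_fraction > 0.5)
```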
Abstract:
We proposed and tested a multilevel model, underpinned by empowerment theory, that examines the processes linking high-performance work systems (HPWS) and performance outcomes at the individual and organizational levels of analysis. Data were obtained from 37 branches of 2 banking institutions in Ghana. Results of hierarchical regression analysis revealed that branch-level HPWS relates to empowerment climate. Additionally, results of hierarchical linear modeling that examined the hypothesized cross-level relationships revealed 3 salient findings. First, experienced HPWS and empowerment climate partially mediate the influence of branch-level HPWS on psychological empowerment. Second, psychological empowerment partially mediates the influence of empowerment climate and experienced HPWS on service performance. Third, service orientation moderates the psychological empowerment-service performance relationship such that the relationship is stronger for those high rather than low in service orientation. Last, ordinary least squares regression results revealed that branch-level HPWS influences branch-level market performance through cross-level and individual-level influences on service performance that emerges at the branch level as aggregated service performance. © 2011 American Psychological Association.
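For readers who want to see the shape of such an analysis, here is a minimal sketch of a two-level (employee-within-branch) random-intercept model in Python with statsmodels, as a stand-in for the paper's hierarchical linear modeling; the variable names and the file hpws_survey.csv are hypothetical.

```python
# Hedged sketch: a random-intercept mixed model with branches as the
# grouping factor, approximating the abstract's cross-level design.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: branch_id, experienced_hpws, empowerment_climate,
# psych_empowerment, service_orientation, service_performance.
df = pd.read_csv("hpws_survey.csv")  # assumed file

# Individual service performance on psychological empowerment and its
# interaction with service orientation (the reported moderation), plus
# experienced HPWS and empowerment climate, grouped by branch.
model = smf.mixedlm(
    "service_performance ~ psych_empowerment * service_orientation"
    " + experienced_hpws + empowerment_climate",
    data=df,
    groups=df["branch_id"],
)
result = model.fit()
print(result.summary())
```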
Abstract:
Jaccard has been the choice similarity metric in ecology and forensic psychology for comparison of sites or offences, by species or behaviour. This paper applies a more powerful hierarchical measure - taxonomic similarity (s), recently developed in marine ecology - to the task of behaviourally linking serial crime. Forensic case linkage attempts to identify behaviourally similar offences committed by the same unknown perpetrator (called linked offences). s considers progressively higher-level taxa, such that two sites show some similarity even without shared species. We apply this index by analysing 55 specific offence behaviours classified hierarchically. The behaviours are taken from 16 sexual offences by seven juveniles, where each offender committed two or more offences. We demonstrate that both Jaccard and s show linked offences to be significantly more similar than unlinked offences. With up to 20% of the specific behaviours removed in simulations, s is equally or more effective at distinguishing linked offences than Jaccard applied to the full data set. Moreover, s retains a significant difference between linked and unlinked pairs with up to 50% of the specific behaviours removed. As police decision-making often depends upon incomplete data, s has clear advantages and its application may extend to other crime types. Copyright © 2007 John Wiley & Sons, Ltd.
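To make the contrast concrete, the sketch below implements Jaccard alongside one simple way to define a hierarchical (taxonomic) similarity, in which behaviours sharing a higher-level taxon score above zero even when the specific behaviours differ. The three-level behaviour paths are invented and the index is a generic illustration, not necessarily the exact s used in the paper.

```python
# Hedged sketch: Jaccard vs a simple hierarchical behaviour similarity.

def jaccard(a: set, b: set) -> float:
    """Shared specific behaviours over all behaviours in either offence."""
    return len(a & b) / len(a | b) if a | b else 0.0

def pair_sim(x: str, y: str) -> float:
    """Fraction of taxonomy levels shared as a common prefix, so two
    behaviours in the same higher-level taxon score above zero."""
    px, py = x.split("/"), y.split("/")
    shared = 0
    for u, v in zip(px, py):
        if u != v:
            break
        shared += 1
    return shared / max(len(px), len(py))

def taxonomic_similarity(a: set, b: set) -> float:
    """Symmetric average of best-match behaviour similarities."""
    def one_way(src, dst):
        return sum(max(pair_sim(x, y) for y in dst) for x in src) / len(src)
    return 0.5 * (one_way(a, b) + one_way(b, a))

# Hypothetical hierarchical codings: level1/level2/specific behaviour.
offence1 = {"approach/con/offered_lift", "control/verbal/threats"}
offence2 = {"approach/con/asked_directions", "control/verbal/orders"}

print("Jaccard:  ", jaccard(offence1, offence2))               # 0.0
print("taxonomic:", taxonomic_similarity(offence1, offence2))  # ~0.67
```

No specific behaviour is shared, so Jaccard is zero, yet the hierarchical measure still registers the shared higher-level taxa - the property that makes it robust when behaviours are missing from police data.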
Abstract:
Visualization has proven to be a powerful and widely applicable tool for the analysis and interpretation of data. Most visualization algorithms aim to find a projection from the data space down to a two-dimensional visualization space. However, for complex data sets living in a high-dimensional space it is unlikely that a single two-dimensional projection can reveal all of the interesting structure. We therefore introduce a hierarchical visualization algorithm which allows the complete data set to be visualized at the top level, with clusters and sub-clusters of data points visualized at deeper levels. The algorithm is based on a hierarchical mixture of latent variable models, whose parameters are estimated using the expectation-maximization algorithm. We demonstrate the principle of the approach first on a toy data set, and then apply the algorithm to the visualization of a synthetic data set in 12 dimensions obtained from a simulation of multi-phase flows in oil pipelines and to data in 36 dimensions derived from satellite images.
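A rough feel for the two-level idea can be had with off-the-shelf tools: project the whole data set once, fit a mixture by EM, and then give each component its own local projection. The sketch below uses PCA and scikit-learn's GaussianMixture as simple stand-ins for the paper's hierarchical mixture of latent variable models, on synthetic 12-dimensional data.

```python
# Hedged sketch: top-level 2-D view of all data, then per-cluster local
# 2-D views. PCA + GaussianMixture approximate the hierarchical idea only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))          # stand-in for the 12-D pipeline data

# Top level: a single 2-D projection of the complete data set.
top_view = PCA(n_components=2).fit_transform(X)

# Fit a mixture with EM and assign each point to its responsible component.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gmm.predict(X)

# Second level: one local 2-D projection per cluster, which can reveal
# structure a single global projection misses.
sub_views = {
    k: PCA(n_components=2).fit_transform(X[labels == k])
    for k in range(gmm.n_components)
}
print({k: v.shape for k, v in sub_views.items()})
```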
Abstract:
It has been argued that a single two-dimensional visualization plot may not be sufficient to capture all of the interesting aspects of complex data sets, and therefore a hierarchical visualization system is desirable. In this paper we extend an existing locally linear hierarchical visualization system, PhiVis [Bishop98a], in several directions: (1) we allow for non-linear projection manifolds; the basic building block is the Generative Topographic Mapping (GTM). (2) We introduce a general formulation of hierarchical probabilistic models consisting of local probabilistic models organized in a hierarchical tree. General training equations are derived, regardless of the position of the model in the tree. (3) Using tools from differential geometry we derive expressions for local directional curvatures of the projection manifold. Like PhiVis, our system is statistically principled and is built interactively in a top-down fashion using the EM algorithm. It enables the user to interactively highlight those data in the ancestor visualization plots which are captured by a child model. We also incorporate into our system a hierarchical, locally selective representation of magnification factors and directional curvatures of the projection manifolds. Such information is important for further refinement of the hierarchical visualization plot, as well as for controlling the amount of regularization imposed on the local models. We demonstrate the principle of the approach on a toy data set and apply our system to two more complex 12- and 18-dimensional data sets.
Abstract:
Cellular mobile radio systems will be of increasing importance in the future. This thesis describes research work concerned with the teletraffic capacity and the computer control requirements of such systems. The work involves theoretical analysis and experimental investigations using digital computer simulation. New formulas are derived for the congestion in single-cell systems in which there are both land-to-mobile and mobile-to-mobile calls and in which mobile-to-mobile calls go via the base station. Two approaches are used: the first yields modified forms of the familiar Erlang and Engset formulas, while the second gives more complicated but more accurate formulas. The results of computer simulations to establish the accuracy of the formulas are described. New teletraffic formulas are also derived for the congestion in multi-cell systems. Fixed, dynamic and hybrid channel assignments are considered. The formulas agree with previously published simulation results. Simulation programs are described for the evaluation of the speech traffic of mobiles and for the investigation of a possible computer network for the control of the speech traffic. The programs were developed according to the structured programming approach, leading to programs of modular construction. Two simulation methods are used for the speech traffic: the roulette method and the time-true method. The first is economical but has some restrictions, while the second is expensive but gives comprehensive answers. The proposed control network operates at three hierarchical levels performing various control functions, which include: the setting-up and clearing-down of calls, the hand-over of calls between cells and the address-changing of mobiles travelling between cities. The results demonstrate the feasibility of the control network and indicate that small mini-computers inter-connected via voice-grade data channels would be capable of providing satisfactory control.
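As background for the "modified forms of the familiar Erlang and Engset formulas", here is the classical Erlang B blocking probability computed with its standard numerically stable recurrence; this is textbook material, not the thesis's modified land-to-mobile/mobile-to-mobile variants.

```python
# Hedged sketch: classical Erlang B blocking probability via the stable
# recurrence B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1)).
def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Probability that a call is blocked in an n-channel loss system
    offered `traffic_erlangs` of traffic (illustrative baseline only)."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

# Example: a single cell with 10 channels offered 6 Erlangs of traffic.
print(f"blocking probability: {erlang_b(6.0, 10):.4f}")
```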
Abstract:
This study considers the application of image analysis in petrography and investigates the possibilities for advancing existing techniques by introducing feature extraction and analysis capabilities of a higher level than those currently employed. The aim is to construct relevant, useful descriptions of crystal form and inter-crystal relations in polycrystalline igneous rock sections. Such descriptions cannot be derived until the 'ownership' of boundaries between adjacent crystals has been established: this is the fundamental problem of crystal boundary assignment. An analysis of this problem establishes key image features which reveal boundary ownership; a set of explicit analysis rules is presented. A petrographic image analysis scheme based on these principles is outlined and the implementation of key components of the scheme considered. An algorithm for the extraction and symbolic representation of image structural information is developed. A new multiscale analysis algorithm which produces a hierarchical description of the linear and near-linear structure on a contour is presented in detail. Novel techniques for symmetry analysis are developed. The analyses considered contribute both to the solution of the boundary assignment problem and to the construction of geologically useful descriptions of crystal form. The analysis scheme which is developed employs grouping principles such as collinearity, parallelism, symmetry and continuity, so providing a link between this study and more general work in perceptual grouping and intermediate-level computer vision. Consequently, the techniques developed in this study may be expected to find wider application beyond the petrographic domain.
Abstract:
Sponsorship fit is frequently mentioned and empirically examined as a success factor of sponsorship. While sponsorship fit has been considered a determinant of sponsorship success, little knowledge exists about the antecedents of sponsorship fit. In the present paper, individual and firm-level antecedents of sponsorship fit are examined in a single hierarchical linear model. Results show that sponsorship fit is influenced by the perception of benefits, the firm’s regional identification, its sincerity, its relatedness to the sponsored activity, and its dominance. On a partnership level, results show that contract length contributes to sponsorship fit while contract value is found to be unrelated.
Abstract:
We analyze a Big Data set of geo-tagged tweets for a year (Oct. 2013–Oct. 2014) to understand regional linguistic variation in the U.S. Prior work on regional linguistic variation usually took a long time to collect data and focused on either rural or urban areas. Geo-tagged Twitter data offers an unprecedented database with rich linguistic representation of fine spatiotemporal resolution and continuity. From the one-year Twitter corpus, we extract lexical characteristics for Twitter users by summarizing the frequencies of a set of lexical alternations that each user has used. We spatially aggregate and smooth each lexical characteristic to derive county-based linguistic variables, from which orthogonal dimensions are extracted using principal component analysis (PCA). Finally, a regionalization method is used to discover hierarchical dialect regions using the PCA components. The regionalization results reveal interesting regional linguistic variations in the U.S. The discovered regions not only confirm past research findings in the literature but also provide new insights and a more detailed understanding of very recent linguistic patterns in the U.S.
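One plausible rendering of the last two steps (PCA, then regionalization) is spatially constrained agglomerative clustering, sketched below on synthetic county data; the contiguity graph, cluster counts and scikit-learn stand-in are assumptions rather than the paper's exact method.

```python
# Hedged sketch: PCA on county-level lexical variables, then hierarchical
# clustering constrained to spatial neighbours so clusters form coherent
# dialect regions. All data below are synthetic.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.decomposition import PCA
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(1)
n_counties = 300
coords = rng.uniform(size=(n_counties, 2))     # stand-in county centroids
lexical = rng.normal(size=(n_counties, 40))    # smoothed lexical variables

# Orthogonal dimensions of lexical variation.
components = PCA(n_components=5).fit_transform(lexical)

# Contiguity constraint: counties may only merge with spatial neighbours.
contiguity = kneighbors_graph(coords, n_neighbors=6, include_self=False)
regions = AgglomerativeClustering(
    n_clusters=8, connectivity=contiguity, linkage="ward"
).fit_predict(components)
print("counties per region:", np.bincount(regions))
```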
Abstract:
In Statnote 9, we described a one-way analysis of variance (ANOVA) ‘random effects’ model in which the objective was to estimate the degree of variation of a particular measurement and to compare different sources of variation in space and time. The illustrative scenario involved the role of computer keyboards in a University communal computer laboratory as a possible source of microbial contamination of the hands. The study estimated the aerobic colony count of ten selected keyboards with samples taken from two keys per keyboard determined at 9am and 5pm. This type of design is often referred to as a ‘nested’ or ‘hierarchical’ design and the ANOVA estimated the degree of variation: (1) between keyboards, (2) between keys within a keyboard, and (3) between sample times within a key. An alternative to this design is a ‘fixed effects’ model in which the objective is not to measure sources of variation per se but to estimate differences between specific groups or treatments, which are regarded as ‘fixed’ or discrete effects. This Statnote describes two scenarios utilizing this type of analysis: (1) measuring the degree of bacterial contamination on 2p coins collected from three types of business property, viz., a butcher’s shop, a sandwich shop, and a newsagent and (2) the effectiveness of drugs in the treatment of a fungal eye infection.
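To make the fixed-effects contrast concrete, the sketch below runs a one-way ANOVA on invented colony counts for the three shop types; the numbers are illustrative only.

```python
# Hedged sketch: one-way fixed-effects ANOVA comparing mean bacterial
# counts on 2p coins from three deliberately chosen shop types.
from scipy.stats import f_oneway

# Aerobic colony counts per coin (hypothetical data).
butcher   = [132, 145, 160, 151, 138]
sandwich  = [118, 122, 130, 127, 115]
newsagent = [ 95, 102,  88,  99, 105]

f_stat, p_value = f_oneway(butcher, sandwich, newsagent)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p suggests the fixed shop-type means differ; the random-effects
# model of Statnote 9 instead estimates variance components.
```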
Abstract:
Objective: Recently, much research has proposed nature-inspired algorithms to perform complex machine learning tasks. Ant colony optimization (ACO) is one such algorithm, based on swarm intelligence and derived from a model inspired by the collective foraging behavior of ants. Taking advantage of ACO traits such as self-organization and robustness, this paper investigates ant-based algorithms for gene expression data clustering and associative classification. Methods and material: An ant-based clustering algorithm (Ant-C) and an ant-based association rule mining algorithm (Ant-ARM) are proposed for gene expression data analysis. The proposed algorithms make use of natural ant behaviors such as cooperation and adaptation to allow for a flexible, robust search for a good candidate solution. Results: Ant-C has been tested on three datasets selected from the Stanford Genomic Resource Database and achieved relatively high accuracy compared to other classical clustering methods. Ant-ARM has been tested on the acute lymphoblastic leukemia (ALL)/acute myeloid leukemia (AML) dataset and generated about 30 classification rules with high accuracy. Conclusions: Ant-C can generate an optimal number of clusters without incorporating any other algorithms such as K-means or agglomerative hierarchical clustering. For associative classification, while a few of the well-known algorithms such as Apriori, FP-growth and Magnum Opus are unable to mine any association rules from the ALL/AML dataset within a reasonable period of time, Ant-ARM is able to extract associative classification rules.
Abstract:
Web document cluster analysis plays an important role in information retrieval by organizing large amounts of documents into a small number of meaningful clusters. Traditional web document clustering is based on the Vector Space Model (VSM), which takes into account only two-level (document and term) knowledge granularity but ignores the bridging paragraph granularity. However, this two-level granularity may lead to unsatisfactory clustering results with “false correlation”. In order to deal with the problem, a Hierarchical Representation Model with Multi-granularity (HRMM), which consists of a five-layer representation of data and a two-phase clustering process, is proposed based on granular computing and article structure theory. To deal with the zero-valued similarity problem resulting from the sparse term-paragraph matrix, an ontology-based strategy and a tolerance-rough-set-based strategy are introduced into HRMM. By using granular computing, structural knowledge hidden in documents can be more efficiently and effectively captured in HRMM, and thus web document clusters with higher quality can be generated. Extensive experiments show that HRMM, HRMM with the tolerance-rough-set strategy, and HRMM with ontology all significantly outperform VSM and a representative non-VSM-based algorithm, WFP, in terms of the F-score.