531 results for Value Stream Mapping
Abstract:
Description of a patient's injuries is recorded in narrative text form by hospital emergency departments. For statistical reporting, this text data needs to be mapped to pre-defined codes. Existing research in this field uses the Naïve Bayes probabilistic method to build classifiers for this mapping. In this paper, we focus on providing guidance on the selection of a classification method. We build a number of classifiers belonging to different classification families, such as decision tree, probabilistic, neural network, instance-based, ensemble-based and kernel-based linear classifiers. Extensive pre-processing is carried out to ensure the quality of the data and, hence, the quality of the classification outcome. Records with a null entry in the injury description are removed. Misspelling correction is carried out by finding each misspelt word and replacing it with a sound-alike word. Meaningful phrases are identified and kept, instead of removing parts of phrases as stop words. Abbreviations that appear in many forms of entry are manually identified and normalised to a single form. Clustering is utilised to discriminate between non-frequent and frequent terms. This process reduced the number of text features dramatically, from about 28,000 to 5,000. The medical narrative text injury dataset under consideration is composed of many short documents. The data can be characterised as high-dimensional and sparse, i.e., few features are irrelevant but features are correlated with one another. Therefore, matrix factorization techniques such as Singular Value Decomposition (SVD) and Non-Negative Matrix Factorization (NNMF) have been used to map the processed feature space to a lower-dimensional feature space, and classifiers have been built on these reduced feature spaces. In the experiments, a set of tests is conducted to determine which classification method is best suited to this medical text classification task. The combination of Non-Negative Matrix Factorization and Support Vector Machine achieves 93% precision, which is higher than all of the tested traditional classifiers. We also found that TF/IDF weighting, which works well for long text classification, is inferior to binary weighting for short document classification. Another finding is that top-n terms should be removed in consultation with medical experts, as their removal affects classification performance.
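To make the best-performing configuration reported above concrete, the following is a minimal sketch that chains binary term weighting, Non-Negative Matrix Factorization and a linear Support Vector Machine with scikit-learn. The example narratives, target codes and parameter values (such as the number of NMF components) are assumptions for demonstration, not the authors' data or settings.

```python
# Minimal sketch: binary term weighting, NMF for dimensionality reduction, then an SVM.
# The narratives, codes and parameter values below are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import NMF
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

narratives = [
    "laceration to left hand from broken glass",
    "cut finger on kitchen knife",
    "fall from ladder fracture of right wrist",
    "slipped on wet floor and fractured ankle",
]
codes = ["cut_pierce", "cut_pierce", "fall", "fall"]

model = Pipeline([
    ("binary_bow", CountVectorizer(binary=True)),               # binary weighting, not TF/IDF
    ("nmf", NMF(n_components=2, init="nndsvd", max_iter=500)),  # reduced feature space
    ("svm", LinearSVC()),                                       # linear SVM classifier
])

model.fit(narratives, codes)
print(model.predict(["fell off a ladder and broke the wrist"]))
```

In practice, the spelling correction, phrase preservation, abbreviation normalisation and term clustering steps described in the abstract would precede the vectoriser.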
Abstract:
This thesis is a morphological study of the settlement patterns of the diverse hill groups in the Chittagong Hill Tracts (CHT) – a mountainous borderland of Bangladesh in South Asia. It examines the settlement morphology of a hill town, using a combination of quantitative and qualitative methods, and explains the recurrent neighbourhood types of the highland groups in relation to their urbanisation. The research findings related to the settlements of diverse cultural groups in a cross-border region of the Asian uplands are also relevant to similar contexts and enquiries. Furthermore, the methodological framework developed to facilitate data collection in the CHT's culturally diverse regions is applicable to the investigation of geographic areas with similar socio-cultural complexities. Finally, this research contributes specifically to the literature on cross-cultural studies of highland towns and vernacular settlements in the Asian context.
Abstract:
The functions of the volunteer functions inventory were combined with the constructs of the theory of planned behaviour (i.e., attitudes, subjective norms, and perceived behavioural control) to establish whether a stronger, single explanatory model prevailed. Undertaken in the context of episodic, skilled volunteering by individuals who were retired or approaching retirement (N = 186), the research advances prior studies, which either examined the predictive capacity of each model independently or compared their explanatory value. Using hierarchical regression analysis, the functions of the volunteer functions inventory (when controlling for demographic variables) explained an additional 7.0% of the variability in individuals’ willingness to volunteer over and above that accounted for by the theory of planned behaviour. Significant predictors in the final model included attitudes, subjective norms and perceived behavioural control from the theory of planned behaviour, and the understanding function from the volunteer functions inventory. It is proposed that the items comprising the understanding function may represent a deeper psychological construct (e.g., self-actualisation) not accounted for by the theory of planned behaviour. The findings highlight the potential benefit of combining these two prominent models for improving the understanding of volunteerism and providing a single, parsimonious model for raising rates of this important behaviour.
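The 7.0% figure above is the increase in explained variance (delta R-squared) between two nested regression models: one containing only the theory of planned behaviour constructs, and one adding a volunteer functions inventory predictor. A minimal sketch of that comparison follows; the variable names and simulated data are hypothetical placeholders, not the study's measures or results.

```python
# Hierarchical (nested) regression sketch: Step 1 uses only theory-of-planned-behaviour
# predictors; Step 2 adds a volunteer-functions-inventory predictor. The R-squared
# increase at Step 2 is the "additional variance explained". All data are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 186  # sample size reported in the abstract
df = pd.DataFrame({
    "attitude": rng.normal(size=n),
    "subjective_norm": rng.normal(size=n),
    "pbc": rng.normal(size=n),            # perceived behavioural control
    "understanding": rng.normal(size=n),  # VFI understanding function
})
df["willingness"] = (0.4 * df["attitude"] + 0.3 * df["subjective_norm"]
                     + 0.3 * df["pbc"] + 0.25 * df["understanding"]
                     + rng.normal(size=n))

step1 = smf.ols("willingness ~ attitude + subjective_norm + pbc", data=df).fit()
step2 = smf.ols("willingness ~ attitude + subjective_norm + pbc + understanding",
                data=df).fit()
print(f"Delta R-squared: {step2.rsquared - step1.rsquared:.3f}")
```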
Abstract:
The impact of simulation methods for social research in the Information Systems (IS) research field remains low. A concern is that our field is inadequately leveraging the unique strengths of simulation methods. Although this low impact is frequently attributed to methodological complexity, we offer an alternative explanation – the poor construction of research value. We argue that a more intuitive value construction, better connected to the knowledge base, will facilitate increased value and broader appreciation. A meta-analysis of studies published in IS journals over the last decade evidences this low impact. To facilitate value construction, we synthesize four common types of simulation research contribution: Analyzer, Tester, Descriptor, and Theorizer. To illustrate, we employ the proposed typology to describe how each type of value is structured in simulation research and connect each type to instances from the IS literature, thereby making these value types and their construction visible and readily accessible to the general IS community.
Abstract:
Information Technology (IT) value is amongst the most important concepts in the Information Systems (IS) research field. Yet a clear, well-formulated conceptualization of IT value that is cumulatively built upon is lacking. Drawing from the Facet Theory literature, this paper broaches several meta-theoretical considerations addressing an “ideal” conceptualization of IT value. We argue that these considerations may shed light on the advancement of IT value conceptualization methodology.
Abstract:
The shift in the last twenty years from an industrialised economy to a knowledge economy demands new modes of education in which individuals can effectively acquire 21st century competencies. This article builds on the findings and recommendations of a Knowledge Economy Market Development Mapping Study (KEMDMS), conducted in Queensland, Australia, to identify the value of design education programs from primary school through to the professional development level. The article considers the ability of design education, as a framework, to deliver the 21st century competencies required for the three defining features of the creative knowledge economy – Innovation, Transdisciplinarity and Networks. This is achieved by contextualising key findings from the KEMDMS, including current design education initiatives, and outlining the current and future challenges faced. From this, the article focuses on the role of the tertiary education sector, as the central actor in the creative economy, in the development of generic design/design education capabilities. Through the unpacking of the study's three key observation themes for change, a holistic design education framework is proposed and further research directions are discussed.
Abstract:
While social media research has provided detailed cumulative analyses of selected social media platforms and content, especially Twitter, newer platforms, apps, and visual content have been studied less extensively so far. This paper proposes a methodology for studying Instagram activity, building on established methods for Twitter research by initially examining hashtags, as structural features common to both platforms. In doing so, we outline methodological challenges to studying Instagram, especially in comparison to Twitter. Finally, we address critical questions around ethics and privacy for social media users and researchers alike, setting out key considerations for future social media research.
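As a small illustration of the hashtag-based entry point this methodology starts from, the sketch below counts hashtags in post captions. The captions and the simple pattern are illustrative assumptions and stand in for whatever collection tooling a study would actually use.

```python
# Minimal hashtag extraction from post captions, as a structural feature shared by
# Twitter and Instagram. The captions and the regex are illustrative only.
import re
from collections import Counter

captions = [
    "Sunset over the river #sunset #nofilter",
    "Morning run #fitness #sunset",
]

hashtag_pattern = re.compile(r"#(\w+)")
counts = Counter(tag.lower() for caption in captions
                 for tag in hashtag_pattern.findall(caption))
print(counts.most_common())  # e.g. [('sunset', 2), ('nofilter', 1), ('fitness', 1)]
```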
Abstract:
Project work can involve multiple people from varying disciplines coming together to solve problems as a group. Large-scale interactive displays present new opportunities to support such interactions with interactive and semantically enabled cooperative work tools, such as intelligent mind maps. In this paper, we present a novel digital, touch-enabled mind-mapping tool as a first step towards achieving such a vision. This first prototype allows an evaluation of the benefits of a digital environment for a task that would otherwise be performed on paper or flat interactive surfaces. Observations and surveys of 12 participants in 3 groups allowed the formulation of several recommendations for further research into: new methods for capturing text input on touch screens; the inclusion of complex structures; multi-user environments and how users make the shift from single-user applications; and how best to navigate large screen real estate in a touch-enabled, co-present multi-user setting.
Abstract:
In Australia, while "appropriate provision for sleep and rest" in early education and care settings is legislated, there is no research base to define appropriate practice. This study provided the first comprehensive documentation of sleep practices in early education and care and assessed their impacts on child health and well-being. The evidence supports the development of practice guidelines to manage the complex individual and organisational factors associated with provisions for sleep and rest. The thesis contributes to a significant international debate in sleep science regarding the benefits of promoting day-sleep during a period characterized by a decline in the biological propensity to nap.
Abstract:
This paper explores the concept that individual dancers leave traces in a choreographer’s body of work and, similarly, that dancers carry forward residue of embodied choreographies into other working processes. The presentation is grounded in a study of the multiple iterations of a programme of solo works commissioned in 2008 from choreographers John Jasperse, Jodi Melnick, Liz Roche and Rosemary Butcher and danced by the author. This includes an exploration of John Jasperse’s development of themes from his solo into the pieces PURE (2008) and Truth, Revised Histories, Wishful Thinking and Flat Out Lies (2009); an adaptation of the solo Business of the Bloom by Jodi Melnick in 2008; and a further adaptation of Business of the Bloom by this author in 2012. It will map some of the developments that occurred, through a number of further performances over five years, in the solo Shared Material on Dying by Liz Roche, and the working process of the (uncompleted) solo Episodes of Flight by Rosemary Butcher. The purpose is to reflect back on authorship in dance, an art form in which lineages of influence can often be clearly observed. Normally, once a choreographic work is created and performed, it is archived through video recording, notation and/or reviews. The dancer is no longer called upon to represent the dance piece within the archive, and thus her/his lived presence and experiential perspective disappears. The author will draw on the different traces still inhabiting her body as pathways towards understanding how choreographic movement circulates beyond the moment of performance. This will include an interrogation of the ownership of choreographic movement: once it becomes integrated into the body of the dancer, who owns the dance? Furthermore, certain dancers, through their individual physical characteristics and moving identities, can deeply influence the formation of choreographic signatures, a proposition that challenges the sole authorship role of the choreographer in dance production. The paper will be delivered in a presentation format that bleeds into movement demonstrations, alongside video footage of the works and auto-ethnographic accounts of dancing experience. A further source of knowledge will be drawn from extracts of interviews with other dancers, including Sara Rudner, Rebecca Hilton and Catherine Bennett.
Abstract:
We identified, mapped, and characterized a widespread area (>1,020 km²) of patterned ground in the Saginaw Lowlands of Michigan, a wet, flat plain composed of waterlain tills, lacustrine deposits, or both. The polygonal patterned ground is interpreted as a possible relict permafrost feature, formed in the Late Wisconsin when this area was proximal to the Laurentide ice sheet. Cold-air drainage off the ice sheet might have pooled in the Saginaw Lowlands, which sloped toward the ice margin, possibly creating widespread but short-lived permafrost on this glacial lake plain. The majority of the polygons occur between the Glacial Lake Warren strandline (~14.8 cal. ka) and the shoreline of Glacial Lake Elkton (~14.3 cal. ka), providing a relative age bracket for the patterned ground. Most of the polygons formed in dense, wet, silt loam soils on flat-lying sites and take the form of reticulate nets with polygon long axes of 150 to 160 m and short axes of 60 to 90 m. Interpolygon swales, often shown as dark curvilinears on aerial photographs, are typically slightly lower than are the polygon centers they bound. Some portions of these interpolygon swales are infilled with gravel-free, sandy loam sediments. The subtle morphology and sedimentological characteristics of the patterned ground in the Saginaw Lowlands suggest that thermokarst erosion, rather than ice-wedge replacement, was the dominant geomorphic process associated with the degradation of the Late-Wisconsin permafrost in the study area and, therefore, was primarily responsible for the soil patterns seen there today.
Abstract:
A precise representation of the spatial distribution of hydrophobicity, hydrophilicity and charges on the molecular surface of proteins is critical for understanding the interaction with small molecules and larger systems. The representation of hydrophobicity is rarely done at atom level, as this property is generally assigned to residues. A new methodology for the derivation of atomic hydrophobicity from any amino acid-based hydrophobicity scale was used to derive 8 sets of atomic hydrophobicities, one of which was used to generate the molecular surfaces for 35 proteins with convex structures, 5 of which, i.e., lysozyme, ribonuclease, hemoglobin, albumin and IgG, have been analyzed in more detail. Sets of the molecular surfaces of the model proteins have been constructed using spherical probes with increasingly large radii, from 1.4 to 20 Å, followed by the quantification of (i) the surface hydrophobicity; (ii) the respective molecular surface areas, i.e., total, hydrophilic and hydrophobic area; and (iii) their relative densities, i.e., divided by the total molecular area, or specific densities, i.e., divided by the property-specific area. Compared with the amino acid-based formalism, the atom-level description reveals molecular surfaces which (i) present approximately two times more hydrophilic area, with (ii) less extended but between 2 and 5 times more intense hydrophilic patches, and (iii) 3 to 20 times more extended hydrophobic areas. The hydrophobic areas are also approximately 2 times more hydrophobicity-intense. This more pronounced, "leopard skin"-like design of the protein molecular surface has been confirmed by comparing the results for a restricted set of homologous proteins, i.e., hemoglobins diverging by only one residue (Trp37). These results suggest that the representation of hydrophobicity on protein molecular surfaces at atom-level resolution, coupled with probing of the molecular surface at different geometric resolutions, can capture processes that are otherwise obscured to the amino acid-based formalism.
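The area quantification in steps (ii) and (iii) amounts to summing per-atom surface contributions according to the sign of their atomic hydrophobicity and normalising by the total surface area. The sketch below illustrates this for a single probe radius; the atom records, hydrophobicity values and the zero threshold are assumptions for demonstration, not the authors' scale or data.

```python
# Sketch of the area quantification described above: given per-atom hydrophobicity
# values and per-atom contributions to the molecular surface area (for one probe
# radius), sum the hydrophobic and hydrophilic areas and compute relative densities
# (area divided by total molecular area). The records below are illustrative only.
from dataclasses import dataclass

@dataclass
class SurfaceAtom:
    name: str
    hydrophobicity: float  # atomic hydrophobicity from a chosen scale
    area: float            # surface area contribution in square angstroms

atoms = [
    SurfaceAtom("CB", +0.6, 12.3),   # hypothetical hydrophobic atom
    SurfaceAtom("OD1", -0.8, 9.1),   # hypothetical hydrophilic atom
    SurfaceAtom("CG", +0.2, 4.7),
]

total_area = sum(a.area for a in atoms)
hydrophobic_area = sum(a.area for a in atoms if a.hydrophobicity > 0)
hydrophilic_area = total_area - hydrophobic_area

print(f"total area       : {total_area:.1f} A^2")
print(f"hydrophobic area : {hydrophobic_area:.1f} A^2 "
      f"(relative density {hydrophobic_area / total_area:.2f})")
print(f"hydrophilic area : {hydrophilic_area:.1f} A^2 "
      f"(relative density {hydrophilic_area / total_area:.2f})")
```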
Abstract:
Diagnosis of articular cartilage pathology in the early disease stages using current clinical diagnostic imaging modalities is challenging, particularly because there is often no visible change in the tissue surface and matrix content, such as proteoglycans (PG). In this study, we propose the use of near infrared (NIR) spectroscopy to spatially map PG content in articular cartilage. The relationship between NIR spectra and reference data (PG content) obtained from histology of normal and artificially induced PG-depleted cartilage samples was investigated using principal component (PC) and partial least squares (PLS) regression analyses. A significant correlation was obtained between the two datasets (R² = 91.40%, p < 0.0001). The resulting model was used to predict PG content from spectra acquired from a whole joint sample, which was then employed to spatially map this component of cartilage across the intact sample. We conclude that NIR spectroscopy is a feasible tool for evaluating cartilage constituents and mapping their distribution across the mammalian joint.
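The core calibration step, relating NIR spectra to reference PG content with partial least squares regression, can be sketched as follows. The synthetic spectra, the number of latent components and the cross-validation scheme are illustrative assumptions rather than the study's actual calibration.

```python
# Minimal partial least squares (PLS) regression sketch: relate spectra (X) to a
# reference property (y), then predict the property for new spectra.
# The synthetic data and the number of PLS components are illustrative only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 40, 200
X = rng.normal(size=(n_samples, n_wavelengths))               # stand-in NIR spectra
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n_samples)   # stand-in PG content

pls = PLSRegression(n_components=5)
r2_scores = cross_val_score(pls, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2_scores.mean():.2f}")

pls.fit(X, y)
new_spectrum = rng.normal(size=(1, n_wavelengths))
print("predicted PG content:", pls.predict(new_spectrum).ravel())
```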