248 results for Kähler-Einstein Metrics
Abstract:
The purpose of this article is to show the applicability and benefits of design of experiments techniques as an optimization tool for discrete simulation models. Simulated systems are computational representations of real-life systems whose state evolves as discrete events occur over time. This study uses a production system designed under the JIT (Just in Time) business philosophy, which seeks to achieve organizational excellence through waste reduction in all operational aspects. The most typical tool of JIT systems is KANBAN production control, which aims to synchronize demand with the flow of materials, minimize work in process, and define production metrics. Using experimental design techniques for stochastic optimization, the impact of the operational factors on the efficiency of the KANBAN/CONWIP simulation model is analyzed. The results show the effectiveness of integrating experimental design techniques with discrete simulation models in the calculation of the operational parameters. Furthermore, the reliability of the methodologies found was improved with a new statistical consideration.
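As a rough illustration of the approach this abstract describes (not the authors' actual model), the following Python sketch runs a two-level full-factorial experiment with replications against a placeholder stochastic simulation and estimates main effects on throughput. The factor names, their levels, and the simulate_line() function are assumptions made purely for illustration.

```python
# Illustrative sketch only: 2^3 full-factorial design with replications run
# against a placeholder discrete-event simulation; the factors and the
# response model inside simulate_line() are hypothetical, not the paper's.
import itertools
import random

def simulate_line(kanban_cards, container_size, conwip_limit, seed):
    """Stand-in for one replication of a KANBAN/CONWIP simulation run."""
    rng = random.Random(seed)
    base = 100 + 4 * kanban_cards + 2 * container_size - 1.5 * conwip_limit
    return base + rng.gauss(0, 3)  # throughput with stochastic noise

levels = {"kanban_cards": (2, 6), "container_size": (5, 10), "conwip_limit": (8, 16)}
replications = 5

runs = []
for combo in itertools.product(*levels.values()):
    for r in range(replications):
        runs.append((combo, simulate_line(*combo, seed=1000 * r + hash(combo) % 1000)))

# Main effect of each factor: mean response at its high level minus at its low level.
for i, name in enumerate(levels):
    hi = [y for c, y in runs if c[i] == levels[name][1]]
    lo = [y for c, y in runs if c[i] == levels[name][0]]
    print(f"Main effect of {name}: {sum(hi)/len(hi) - sum(lo)/len(lo):.2f}")
```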
Abstract:
This paper proposes new metrics and a performance-assessment framework for vision-based weed and fruit detection and classification algorithms. In order to compare algorithms, and make a decision on which one to use for a particular application, it is necessary to take into account that the performance obtained in a series of tests is subject to uncertainty. Such characterisation of uncertainty seems not to be captured by the performance metrics currently reported in the literature. Therefore, we pose the problem as a general problem of scientific inference, which arises out of incomplete information, and propose as a metric of performance the (posterior) predictive probabilities that the algorithms will provide a correct outcome for target and background detection. We detail the framework through which these predictive probabilities can be obtained, which is Bayesian in nature. As an illustrative example, we apply the framework to the assessment of performance of four algorithms that could potentially be used in the detection of capsicums (peppers).
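A minimal sketch of the Bayesian idea outlined above, assuming a simple Beta-Binomial model (which may differ from the paper's exact formulation) and hypothetical test counts. Note how two detectors with the same empirical success rate receive different posterior predictive values once sample size, and hence uncertainty, is taken into account.

```python
# Minimal sketch, assuming a Beta-Binomial model (an assumption, not
# necessarily the paper's formulation). The counts below are hypothetical.

def posterior_predictive_correct(successes, trials, alpha=1.0, beta=1.0):
    """Posterior predictive probability that the next detection is correct,
    under a Beta(alpha, beta) prior on the algorithm's success rate."""
    return (alpha + successes) / (alpha + beta + trials)

# Two hypothetical detectors, both with 90% empirical accuracy:
print(posterior_predictive_correct(successes=90, trials=100))  # ~0.892
print(posterior_predictive_correct(successes=9, trials=10))    # ~0.833 (fewer trials, more uncertainty)
```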
Abstract:
The aim of this research was to develop a set of reliable, valid preparedness metrics, built around a comprehensive framework for assessing hospital preparedness. This research used a combination of qualitative and quantitative methods, which included interviews and a Delphi study as well as a survey of hospitals in the Sichuan Province of China. The resultant framework is constructed around the stages of disaster management and includes nine key elements. Factor Analysis identified four contributing factors. The comparison of hospitals' preparedness using these four factors revealed that tertiary-grade, teaching and general hospitals performed better than secondary-grade, non-teaching and non-general hospitals.
Abstract:
This report summarises the findings of a case study on Queensland’s New Generation Rollingstock (NGR) Project carried out as part of SBEnrc Project 2.34 Driving Whole-of-life Efficiencies through BIM and Procurement. This case study is one of three exemplar projects studied in order to leverage academic research in defining indicators for measuring tangible and intangible benefits of Building Information Modelling (BIM) across a project’s life-cycle in infrastructure and buildings. The NGR is an AUD 4.4 billion project carried out under an Availability Payment Public-Private Partnership (PPP) between the Queensland Government and the Bombardier-led QTECTIC consortium comprising Bombardier Transportation, John Laing, ITOCHU Corporation and Aberdeen Infrastructure Investments. BIM has been deployed on the project from the conceptual stages to drive both design and the currently ongoing construction at the Wulkuraka Project Site. This case study sourced information from a series of semi-structured interviews covering a cross-section of key stakeholders on the project. The present research identified 25 benefits gained from implementing BIM processes and tools. Some of the most prominent benefits were those leading to improved outcomes and higher customer satisfaction, such as improved communications, data and information management, and coordination. There were also a number of expected benefits for future phases, such as:
• Improved decision making through the use of BIM for managing assets
• Improved models through BIM maturity
• Better utilisation of BIM for procurement on similar future projects
• New capacity to specify the content of BIM models within contracts
There were also three benefits that were expected to have been achieved but were not realised on the NGR project. These were higher construction information quality levels, better alignment in design teams as well as project teams, and capability improvements in measuring the impact of BIM on construction safety. This report includes individual profiles describing each benefit as well as the tools and processes that enabled them. Four key BIM metrics were found to be currently in use, and six more were identified as potential metrics for the future. This case study also provides insights into challenges associated with implementing BIM on a project of the size and complexity of the NGR. Procurement aspects and lessons learned for managers are also highlighted, including a list of recommendations for developing a framework to assess the benefits of BIM across the project life-cycle.
Abstract:
This paper presents the results of an investigation into contextual differences in the development and delivery of enterprise education in higher education globally. Using information gathered from an online survey distributed to enterprise educators, distinct differences in the provision of enterprise education are identified, as are differences of opinion among enterprise educators. The findings demonstrate that although enterprise education is highly diversified in terms of presentation, content and style, there are clear commonalities with regard to expected student outcomes. The respondents reported low levels of business start-up activity among students during enterprise education and/or within one year of graduation. Over 75% of the educators surveyed had personal start-up experience, and there was limited reliance on academic literature, with a preference for referencing broader stakeholder perspectives. With regard to the practical implications of this research, the international metric of enterprise education appears to be a broad set of enterprising skills that equip and enable students to recognize and exploit opportunities in order to navigate future unknowns. The commonly employed metric of business start-up appears less valid in light of this investigation.
Abstract:
Thickness measurements derived from optical coherence tomography (OCT) images of the eye are a fundamental clinical and research metric, since they provide valuable information regarding the eye’s anatomical and physiological characteristics, and can assist in the diagnosis and monitoring of numerous ocular conditions. Despite the importance of these measurements, limited attention has been given to the methods used to estimate thickness in OCT images of the eye. Most current studies employing OCT use an axial thickness metric, but there is evidence that axial thickness measures may be biased by tilt and curvature of the image. In this paper, standard axial thickness calculations are compared with a variety of alternative metrics for estimating tissue thickness. These methods were tested on a data set of wide-field chorio-retinal OCT scans (field of view (FOV) 60° × 25°) to examine their performance across a wide region of interest and to demonstrate the potential effect of curvature of the posterior segment of the eye on the thickness estimates. Similarly, the effect of image tilt was systematically examined with the same range of proposed metrics. The results demonstrate that image tilt and curvature of the posterior segment can bias axial tissue thickness calculations, and that alternative metrics, which are not subject to these effects, should be considered. This study demonstrates the need to consider alternative methods to calculate tissue thickness in order to avoid measurement error due to image tilt and curvature.
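To illustrate why axial measurements can be biased by tilt, the sketch below contrasts axial thickness with one generic alternative, thickness measured along the local normal to the upper tissue boundary. This is an assumption-laden toy example on a synthetic tilted layer, not the specific metrics evaluated in the paper.

```python
# Illustrative sketch: axial thickness vs. thickness along the local boundary
# normal. The boundary arrays and pixel spacing are synthetic placeholders.
import numpy as np

def axial_thickness(upper, lower):
    """Thickness along image A-scans (columns): simple vertical difference."""
    return lower - upper

def normal_thickness(upper, lower, dx=1.0):
    """Thickness measured along the local normal to the upper boundary,
    approximated by projecting the axial distance with cos(theta)."""
    slope = np.gradient(upper, dx)            # local tilt of the boundary
    cos_theta = 1.0 / np.sqrt(1.0 + slope**2)
    return (lower - upper) * cos_theta

# Synthetic tilted layer with a true (perpendicular) thickness of 10 px.
x = np.arange(200, dtype=float)
upper = 0.3 * x                                    # tilted upper boundary
lower = upper + 10.0 / np.cos(np.arctan(0.3))      # lower boundary 10 px away along the normal
print(axial_thickness(upper, lower).mean())        # inflated by tilt (~10.44)
print(normal_thickness(upper, lower).mean())       # ~10.0
```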
Abstract:
The brain's functional network exhibits many features facilitating functional specialization, integration, and robustness to attack. Using graph theory to characterize brain networks, studies demonstrate their small-world, modular, and "rich-club" properties, with deviations reported in many common neuropathological conditions. Here we estimate the heritability of five widely used graph theoretical metrics (mean clustering coefficient (γ), modularity (Q), rich-club coefficient (ϕnorm), global efficiency (λ), small-worldness (σ)) over a range of connection densities (k=5-25%) in a large cohort of twins (N=592, 84 MZ and 89 DZ twin pairs, 246 single twins, age 23 ± 2.5). We also considered the effects of global signal regression (GSR). We found that the graph metrics were moderately influenced by genetic factors h² (γ=47-59%, Q=38-59%, ϕnorm=0-29%, λ=52-64%, σ=51-59%) at lower connection densities (≤15%), and when global signal regression was implemented, heritability estimates decreased substantially h² (γ=0-26%, Q=0-28%, ϕnorm=0%, λ=23-30%, σ=0-27%). Distinct network features were phenotypically correlated (|r|=0.15-0.81), and γ, Q, and λ were found to be influenced by overlapping genetic factors. Our findings suggest that these metrics may be potential endophenotypes for psychiatric disease and suitable for genetic association studies, but that genetic effects must be interpreted with respect to methodological choices.
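As a rough sketch of how such graph metrics are typically computed from a proportionally thresholded connectivity matrix, the example below uses networkx on random stand-in data. Note that the abstract's γ and σ are normalized against random reference networks; that normalization step is omitted here for brevity, so the quantities shown are the raw metrics only.

```python
# Illustrative sketch: graph metrics on a proportionally thresholded
# connectivity matrix. The random matrix stands in for real fMRI data.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 50
fc = np.abs(rng.standard_normal((n, n)))   # stand-in functional connectivity
fc = (fc + fc.T) / 2
np.fill_diagonal(fc, 0)

def threshold_graph(fc, density):
    """Keep the strongest `density` fraction of connections (proportional threshold)."""
    triu = fc[np.triu_indices_from(fc, k=1)]
    cutoff = np.quantile(triu, 1 - density)
    adj = (fc >= cutoff).astype(int)
    np.fill_diagonal(adj, 0)
    return nx.from_numpy_array(adj)

G = threshold_graph(fc, density=0.15)       # k = 15% connection density
clustering = nx.average_clustering(G)       # raw clustering (γ is normalized in the abstract)
communities = nx.algorithms.community.greedy_modularity_communities(G)
Q = nx.algorithms.community.modularity(G, communities)
efficiency = nx.global_efficiency(G)        # λ in the abstract's notation
print(clustering, Q, efficiency)
```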
Abstract:
Twitter’s hashtag functionality is now used for a very wide variety of purposes, from covering crises and other breaking news events through gathering an instant community around shared media texts (such as sporting events and TV broadcasts) to signalling emotive states from amusement to despair. These divergent uses of the hashtag are increasingly recognised in the literature, with attention paid especially to the ability for hashtags to facilitate the creation of ad hoc or hashtag publics. A more comprehensive understanding of these different uses of hashtags has yet to be developed, however. Previous research has explored the potential for a systematic analysis of the quantitative metrics that could be generated from processing a series of hashtag datasets. Such research found, for example, that crisis-related hashtags exhibited a significantly larger incidence of retweets and tweets containing URLs than hashtags relating to televised events, and on this basis hypothesised that the information-seeking and -sharing behaviours of Twitter users in such different contexts were substantially divergent. This article updates that study and its methodology by examining the communicative metrics of a considerably larger and more diverse number of hashtag datasets, compiled over the past five years. This provides an opportunity both to confirm earlier findings and to explore whether hashtag use practices may have shifted subsequently as Twitter’s userbase has developed further; it also enables the identification of further hashtag types beyond the “crisis” and “mainstream media event” types outlined to date. The article also explores the presence of such patterns beyond recognised hashtags, by incorporating an analysis of a number of keyword-based datasets. This large-scale, comparative approach contributes towards the establishment of a more comprehensive typology of hashtags and their publics, and the metrics it describes can also be used to classify new hashtags emerging in the future. In turn, this may enable researchers to develop systems for automatically distinguishing newly trending topics into a number of event types, which may be useful, for example, for the automatic detection of acute crises and other breaking news events.
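A schematic example of the kind of communicative metrics discussed above, computing per-dataset proportions of retweets and URL-carrying tweets. The tweet record structure and the sample values are invented for illustration and do not reflect the article's actual datasets or the Twitter API schema.

```python
# Illustrative sketch: simple communicative metrics per hashtag dataset.
# Tweet fields and sample data are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    is_retweet: bool
    has_url: bool

def hashtag_metrics(tweets):
    """Proportion of retweets and of tweets containing URLs in a dataset."""
    n = len(tweets)
    return {
        "pct_retweets": sum(t.is_retweet for t in tweets) / n,
        "pct_with_urls": sum(t.has_url for t in tweets) / n,
    }

crisis = [Tweet("...", True, True), Tweet("...", False, True), Tweet("...", True, False)]
tv_event = [Tweet("...", False, False), Tweet("...", False, False), Tweet("...", True, False)]
print("crisis:", hashtag_metrics(crisis))      # higher retweet/URL shares expected
print("tv event:", hashtag_metrics(tv_event))
```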