Abstract:
National flag carriers are struggling for survival, not only for classical reasons such as rising fuel costs, taxes, and natural disasters, but largely because of their inability to adapt quickly to their competitive environment – the emergence of budget and Persian Gulf airlines. In this research, we investigate how airlines can transform their business models via technological and strategic capabilities to become profitable and sustainable passenger experience companies. To formulate recommendations, we analyze customer sentiment on social media to understand what people are saying about the airlines.
Abstract:
We examined the year-to-year variation in the association between high temperatures and elderly mortality (age ≥ 75 years) in 83 US cities between 1987 and 2000. We used a Poisson regression model and decomposed the mortality risk for high temperatures into a “main effect” due to high temperatures, using a lagged non-linear function, and an “added effect” due to consecutive high temperature days. We pooled yearly effects at both the regional and national levels. The high temperature effects (both main and added) on elderly mortality varied greatly from year to year. In every city there was at least one year in which higher temperatures were associated with lower mortality. Years with relatively high heat-related mortality were often followed by years with relatively low mortality. These year-to-year changes have important consequences for heat-warning systems and for predictions of heat-related mortality due to climate change.
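The two-part decomposition can be illustrated with a small sketch. The code below (hypothetical threshold and variable names, not taken from the study) builds the two covariates such a Poisson regression would use: a lagged temperature term for the "main effect" and an indicator of consecutive hot days for the "added effect".

```python
# Sketch: constructing "main effect" and "added effect" covariates for a
# Poisson mortality regression (illustrative only, not the study's code).

def build_covariates(temps, threshold=30.0, lag=1):
    """For each day, return (lagged temperature, consecutive-hot-day indicator)."""
    rows = []
    for t in range(len(temps)):
        lagged = temps[max(t - lag, 0)]  # main effect: lagged temperature
        added = 1 if (t > 0 and temps[t] > threshold
                      and temps[t - 1] > threshold) else 0  # added effect
        rows.append((lagged, added))
    return rows

temps = [28.0, 31.5, 33.0, 29.0, 32.0]
X = build_covariates(temps)
# Day 2 is hot (33.0 > 30) and follows a hot day (31.5 > 30),
# so its added-effect indicator is 1.
```

In the full model these columns would enter a lagged non-linear (e.g. spline) Poisson regression alongside seasonal and trend terms.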
Abstract:
Process modeling grammars are used to create models of business processes. In this paper, we discuss how different routing symbol designs affect an individual's ability to comprehend process models. We conduct an experiment with 154 students to ascertain which visual design principles influence process model comprehension. Our findings suggest that design principles related to perceptual discriminability and pop-out improve comprehension accuracy. Furthermore, semantic transparency and aesthetic design of symbols lower the perceived difficulty of comprehension. Our results inform important principles for the notational design of process modeling grammars and the effective use of process modeling in practice.
Abstract:
In this article, we report on the findings of an exploratory study into the experience of undergraduate students as they learn new mathematical models. Qualitative and quantitative data based around the students’ approaches to learning new mathematical models were collected. The data revealed that students actively adopt three approaches to understanding a new mathematical model: gathering information for the task of understanding the model, practising with and using the model, and finding interrelationships between elements of the model. We found that the students appreciate mathematical models that have a real-world application and that this appreciation can be used to engage students in higher-level learning approaches.
Abstract:
Optimal Asset Maintenance decisions are imperative for efficient asset management. Decision Support Systems are often used to help asset managers make maintenance decisions, but high quality decision support must be based on sound decision-making principles. For long-lived assets, a successful Asset Maintenance decision-making process must effectively handle multiple time scales. For example, high-level strategic plans are normally made for periods of years, while daily operational decisions may need to be made within a space of mere minutes. When making strategic decisions, one usually has the luxury of time to explore alternatives, whereas routine operational decisions must often be made with no time for contemplation. In this paper, we present an innovative, flexible decision-making process model which distinguishes meta-level decision making, i.e., deciding how to make decisions, from the information gathering and analysis steps required to make the decisions themselves. The new model can accommodate various decision types. Three industrial case studies are given to demonstrate its applicability.
Abstract:
As business process management technology matures, organisations acquire more and more business process models. The management of the resulting collections of process models poses real challenges. One of these challenges concerns model retrieval where support should be provided for the formulation and efficient execution of business process model queries. As queries based on only structural information cannot deal with all querying requirements in practice, there should be support for queries that require knowledge of process model semantics. In this paper we formally define a process model query language that is based on semantic relationships between tasks in process models and is independent of any particular process modelling notation.
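As a rough illustration of a notation-independent semantic query (an illustrative simplification, not the paper's actual query language), direct ordering relations between tasks can be stored as pairs and an "eventually precedes" query answered via their transitive closure:

```python
# Sketch: answering a semantic "eventually precedes" query over process model
# tasks, independent of the modelling notation (hypothetical task names).

def transitive_closure(edges):
    """edges: set of direct (a, b) 'precedes' relations between tasks."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Direct ordering extracted from some process model:
precedes = {("receive order", "check stock"),
            ("check stock", "ship goods")}
semantics = transitive_closure(precedes)

def query_precedes(a, b):
    return (a, b) in semantics
```

A query engine over a real model collection would precompute such relations per model and index them for efficient retrieval.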
Abstract:
AIMS: To test a model that delineates advanced practice nursing from the practice profile of other nursing roles and titles. BACKGROUND: There is extensive literature on advanced practice reporting the importance of this level of nursing to contemporary health services and patient outcomes. The literature also reports confusion and ambiguity associated with advanced practice nursing. Several countries have regulation and delineation for the nurse practitioner, but there is less clarity in the definition and service focus of other advanced practice nursing roles. DESIGN: A statewide survey. METHODS: Using the modified Strong Model of Advanced Practice Role Delineation tool, a survey was conducted in 2009 with a random sample of registered nurses/midwives from government facilities in Queensland, Australia. Analysis of variance compared total and subscale scores across groups according to grade. Linear, stepwise multiple regression analysis examined factors influencing advanced practice nursing activities across all domains. RESULTS: There were important differences according to grade in mean scores for total activities in all domains of advanced practice nursing. Nurses working in advanced practice roles (excluding nurse practitioners) performed more activities across most advanced practice domains. Regression analysis indicated that working in a clinical advanced practice nursing role and holding a higher level of education were strong predictors of advanced practice activities overall. CONCLUSION: Essential and appropriate use of advanced practice nurses requires clarity in defining roles and practice levels. This research delineated nursing work according to grade and level of practice, further validating the tool for the Queensland context and providing operational information for assigning innovative nursing services.
Abstract:
The rapid increase in the deployment of CCTV systems has led to a greater demand for algorithms that are able to process incoming video feeds. These algorithms are designed to extract information of interest for human operators. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task, where the system is trained on normal data and is required to detect events which do not fit the learned `normal' model. Many researchers have tried various sets of features to train different learning models to detect abnormal behaviour in video footage. In this work we propose using a Semi-2D Hidden Markov Model (HMM) to model the normal activities of people. Outliers of the model with insufficient likelihood are identified as abnormal activities. Our Semi-2D HMM is designed to model both the temporal and spatial causalities of crowd behaviour by assuming that the current state of the Hidden Markov Model depends not only on the previous state in the temporal direction, but also on the previous states of the adjacent spatial locations. Two different HMMs are trained to model the vertical and horizontal spatial causal information, respectively. Location features, flow features, and optical flow textures are used as the features for the model. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
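The spatio-temporal dependence can be sketched in miniature. The snippet below (made-up states and probabilities, not the paper's trained model) scores a state sequence with transitions conditioned on both the previous temporal state and an adjacent spatial state, and flags low-likelihood sequences as abnormal:

```python
import math

# Sketch: transitions condition on (previous temporal state, spatial neighbour
# state), a simplified stand-in for the Semi-2D HMM idea. All states and
# probabilities are invented for illustration.

TRANS = {                      # P(current | previous_time, spatial_neighbour)
    ("walk", "walk"): {"walk": 0.9, "run": 0.1},
    ("walk", "run"):  {"walk": 0.6, "run": 0.4},
    ("run", "run"):   {"walk": 0.2, "run": 0.8},
    ("run", "walk"):  {"walk": 0.5, "run": 0.5},
}
FLOOR = 1e-6                   # floor probability for unseen transitions

def log_likelihood(states, neighbours):
    """Score a temporal state sequence given the adjacent cell's states."""
    ll = 0.0
    for t in range(1, len(states)):
        probs = TRANS.get((states[t - 1], neighbours[t]), {})
        ll += math.log(probs.get(states[t], FLOOR))
    return ll

def is_abnormal(states, neighbours, threshold=-10.0):
    """Outliers with insufficient likelihood are flagged as abnormal."""
    return log_likelihood(states, neighbours) < threshold

normal = ["walk", "walk", "walk"]
odd = ["walk", "run", "walk"]
```

In the actual model, separate HMMs would carry the vertical and horizontal spatial causalities, and the features would be location, flow, and optical flow textures rather than symbolic states.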
Abstract:
Self-efficacy has two cognitive components, efficacy expectations and outcome expectations, and their influence on behavior change is synergistic. Efficacy expectation is affected by four main sources of information provided by direct and indirect experiences: performance accomplishments, vicarious experience, verbal persuasion, and self-appraisal. How to measure self-efficacy and develop interventions is an important current issue. This article analyzes the relationships between the variables of the self-efficacy model and explains the implementation of self-efficacy-enhancing interventions and the instruments used to test the model. By working through the theory and its feasibility in clinical practice, professional medical care personnel are expected first to familiarize themselves with the self-efficacy model and its concepts, and then to apply it flexibly in professional fields such as clinical practice, chronic disease care, and health promotion.
Abstract:
What are the information practices of teen content creators? In the United States, over two-thirds of teens have participated in creating and sharing content in online communities that are developed for the purpose of allowing users to be producers of content. This study investigates how teens participating in digital participatory communities find and use information, as well as how they experience that information. From this investigation emerged a model of their information practices while creating and sharing content such as film-making, visual art work, storytelling, music, programming, and website design in digital participatory communities. The research uses grounded theory methodology in a social constructionist framework to investigate the research problem: what are the information practices of teen content creators? Data were gathered through semi-structured interviews and observation of teens’ digital communities. Analysis occurred concurrently with data collection, and the principle of constant comparison was applied in analysis. As findings were constructed from the data, additional data were collected until a substantive theory was constructed and no new information emerged from data collection. The theory constructed from the data describes five information practices of teen content creators: learning community, negotiating aesthetic, negotiating control, negotiating capacity, and representing knowledge. Describing the five information practices requires three descriptive components: the community of practice, the experiences of information, and the information actions. The experiences of information include information as participation, inspiration, collaboration, process, and artifact. Information actions include activities in the categories of gathering, thinking, and creating.
The experiences of information and the information actions intersect in the information practices, which are situated within a specific community of practice, such as a digital participatory community. Finally, the information practices interact and build upon one another, and this is represented in a graphic model with an accompanying explanation.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime, and safety hazards. Predicting survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative to traditional reliability analysis is to model condition indicators, operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model. The literature indicates that a number of covariate-based hazard models have been developed, all based on the theory of the Proportional Hazards Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully incorporate three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) into a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics: they are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few.
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators are caused by the environment in which an asset operates and have not been explicitly identified by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nought in EHM, condition indicators are always present, because these indicators are observed and measured for as long as an asset remains operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. According to the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, due to sparse failure event data, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, which is a distribution-free model, has been developed. The development of EHM into two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, regarding the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
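As a rough sketch of the semi-parametric form described above (illustrative parameters, not the estimated ones), a Weibull baseline hazard can be updated by a condition indicator and scaled by an exponential function of operating environment covariates, with reliability obtained by numerically integrating the cumulative hazard:

```python
import math

# Sketch of the semi-parametric idea: Weibull baseline hazard, reformed by a
# condition indicator, times an exponential operating-environment covariate
# function. All parameter values are made up for illustration.

def hazard(t, condition, env, beta=2.0, eta=100.0, alpha=0.05, gamma=0.3):
    baseline = (beta / eta) * (t / eta) ** (beta - 1)  # Weibull baseline
    baseline *= math.exp(alpha * condition)            # condition updates the baseline
    return baseline * math.exp(gamma * env)            # environment accelerates/decelerates

def reliability(t, condition, env, steps=1000):
    """R(t) = exp(-cumulative hazard over [0, t]), midpoint rule."""
    dt = t / steps
    cum = sum(hazard((i + 0.5) * dt, condition, env) * dt for i in range(steps))
    return math.exp(-cum)
```

With these illustrative parameters, a degraded asset in a harsher environment (larger `condition`, larger `env`) yields a lower reliability at the same age, which is the qualitative behaviour the model is designed to capture.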
Abstract:
Entity-oriented search has become an essential component of modern search engines. It focuses on retrieving a list of entities, or information about specific entities, instead of documents. In this paper, we study the problem of finding entity-related information, referred to as attribute-value pairs, which plays a significant role in searching for target entities. We propose a novel decomposition framework combining reduced relations and a discriminative model, the Conditional Random Field (CRF), for automatically finding entity-related attribute-value pairs in free text documents. This decomposition framework allows us to locate potential text fragments and identify the hidden semantics, in the form of attribute-value pairs, for user queries. Empirical analysis shows that the decomposition framework outperforms pattern-based approaches due to its capability for effective integration of syntactic and semantic features.
Abstract:
Chatrooms, for example Internet Relay Chat, are generally multi-user, multi-channel, and multi-server chat systems which run over the Internet and provide a protocol for real-time text-based conferencing between users all over the world. While a well-trained human observer is able to understand who is chatting with whom, there are no efficient and accurate automated tools to determine the groups of users conversing with each other. A precursor to analysing evolving cyber-social phenomena is to first determine what the conversations are and which groups of chatters are involved in each conversation. We consider this problem in this paper. We propose an algorithm to discover all groups of users that are engaged in conversation. Our algorithm is based on a statistical model of a chatroom that is founded on our experience with real chatrooms. Our approach does not require any semantic analysis of the conversations; rather, it is based purely on the statistical information contained in the sequence of posts. We improve accuracy by applying graph algorithms to clean the statistical information. We present experimental results which indicate that one can automatically determine the conversing groups in a chatroom purely on the basis of statistical analysis.
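A minimal sketch of grouping chatters from post statistics alone (an illustrative simplification, not the paper's algorithm): count how often two users post near each other in the stream, keep pairs above a threshold, and take the connected components of the resulting graph:

```python
from collections import defaultdict

# Sketch: conversation groups from the post sequence only, with no semantic
# analysis. Window size and threshold are invented for illustration.

def conversation_groups(posts, window=3, min_count=2):
    """posts: list of user names in posting order."""
    counts = defaultdict(int)
    for i in range(len(posts)):
        for j in range(i + 1, min(i + window, len(posts))):
            if posts[i] != posts[j]:
                counts[frozenset((posts[i], posts[j]))] += 1
    # Keep pairs seen often enough, then take connected components.
    adj = defaultdict(set)
    for pair, c in counts.items():
        if c >= min_count:
            a, b = tuple(pair)
            adj[a].add(b)
            adj[b].add(a)
    seen, groups = set(), []
    for user in adj:
        if user in seen:
            continue
        stack, comp = [user], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        groups.append(comp)
    return groups

posts = ["ann", "bob", "ann", "bob", "carol", "dave", "carol", "dave"]
```

On this toy stream the alternating posters separate into two conversing groups, {ann, bob} and {carol, dave}; the graph-cleaning step in the paper plays roughly the role of the threshold here.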
Abstract:
Educators are faced with many challenging questions in designing an effective curriculum. What prerequisite knowledge do students have before commencing a new subject? At what level of mastery? What is the spread of capabilities between bare-passing students vs. the top performing group? How does the intended learning specification compare to student performance at the end of a subject? In this paper we present a conceptual model that helps in answering some of these questions. It has the following main capabilities: capturing the learning specification in terms of syllabus topics and outcomes; capturing mastery levels to model progression; capturing the minimal vs. aspirational learning design; capturing confidence and reliability metrics for each of these mappings; and finally, comparing and reflecting on the learning specification against actual student performance. We present a web-based implementation of the model, and validate it by mapping the final exams from four programming subjects against the ACM/IEEE CS2013 topics and outcomes, using Bloom's Taxonomy as the mastery scale. We then import the itemised exam grades from 632 students across the four subjects and compare the demonstrated student performance against the expected learning for each of these. Key contributions of this work are the validated conceptual model for capturing and comparing expected learning vs. demonstrated performance, and a web-based implementation of this model, which is made freely available online as a community resource.
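The comparison of intended versus demonstrated learning can be sketched with a toy data structure (hypothetical topics and levels, not the paper's data): map each topic to an intended mastery level on a Bloom-like scale, derive the highest level demonstrated from itemised exam results, and report the gaps:

```python
# Sketch: intended learning specification vs. demonstrated performance.
# Topics, levels, and items are invented for illustration; a Bloom-like
# 1-6 mastery scale is assumed.

INTENDED = {"recursion": 4, "arrays": 3}   # topic -> intended mastery level

def demonstrated_levels(items):
    """items: (topic, level, passed) tuples; highest level passed per topic."""
    best = {}
    for topic, level, passed in items:
        if passed:
            best[topic] = max(best.get(topic, 0), level)
    return best

def gaps(intended, demonstrated):
    """Topics where demonstrated mastery falls short of the intention."""
    return {t: lvl - demonstrated.get(t, 0)
            for t, lvl in intended.items()
            if demonstrated.get(t, 0) < lvl}

items = [("recursion", 2, True), ("recursion", 4, False),
         ("arrays", 3, True)]
```

Here the cohort demonstrates recursion only at level 2 against an intended level 4, so the model surfaces a two-level gap for that topic while arrays meets its intention.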
Abstract:
In recent years, several models have been proposed for fault section estimation and state identification of unobserved protective relays (FSE-SIUPR) under conditions of incomplete state information from protective relays. In these models, the temporal alarm information from a faulted power system is not well explored, although it is very helpful in compensating for the incomplete state information of protective relays, quickly achieving definite fault diagnosis results, and evaluating the operating status of protective relays and circuit breakers in complicated fault scenarios. To solve this problem, an integrated optimization model for FSE-SIUPR, which takes full advantage of the temporal characteristics of alarm messages, is developed in the framework of the well-established temporal constraint network. With this model, the fault evolution procedure can be explained and some states of unobserved protective relays identified. The model is then solved by means of Tabu search (TS) and finally verified against fault scenarios in a practical power system.