905 results for task model
Abstract:
Background: It is often believed that ensuring the ongoing completion of competency documents and life-long learning in nursing practice guarantees quality patient care. This is probably true in most cases, where it provides reassurance that the nursing team is maintaining a safe "generalised" level of practice. However, competency does not always guarantee quality performance. A number of studies have reported differences between what practitioners know and what they actually do, despite their being deemed competent. Aim: The aim of this study was to assess whether our current competency documentation is fit for purpose and to ascertain whether performance assessment needs to be a key component in determining competence. Method: Fifteen nurses within a General ICU who had been on the unit for fewer than 4 years agreed to participate in this project. Using participant observation and assessing performance against key indicators of the Benner Novice to Expert [5] model, the participants were supported and assessed over the course of a 'normal' nursing shift. Results: The results were surprising, both positively and negatively. First, the nurses felt more empowered in their clinical decision-making skills; second, the assessment identified individual learning needs and milestones in educational development. Some key challenges were also identified: five nurses overestimated their level of competence; practice was still very much focused on task and skill acquisition; and, surprisingly, some nurses still felt dominated by the other health professionals within the unit. Conclusion: We found that the capacity and capabilities of our nursing workforce need continual, ongoing support, especially if we want to move our staff from capable task-doers to competent performers. Using the key novice-to-expert indicators identified a way forward for how we assess performance and competence in practice, particularly where promotion to higher grades is based on existing documentation.
Abstract:
The point of departure of this dissertation was the practical safety problem of unanticipated, unfamiliar events and unexpected changes in the environment: the demanding situations that operators must handle in complex socio-technical systems. The aim of this thesis was to increase understanding of demanding situations and of the resources for coping with them by presenting a new construct, a conceptual model called Expert Identity (ExId), as a way to open up new solutions to the problem of demanding situations, and by testing the model in empirical studies on operator work. The premises of the Core-Task Analysis (CTA) framework were adopted as a starting point: core-task-oriented working practices promote system efficiency (including safety, productivity and well-being targets) and should be supported. The negative effects of stress were summarised, and possible countermeasures related to the operators' personal resources, such as experience, expertise, sense of control, and conceptions of work and self, were considered. ExId was proposed as a way to bring emotional-energetic depth into work analysis and to supplement CTA-based practical methods in order to discover development challenges and to contribute to the development of complex socio-technical systems. The potential of ExId to promote understanding of operator work was demonstrated in the context of six empirical studies on operator work, each of which had its own practical objectives within its correspondingly broad focus. The concluding research questions were: 1) Are the assumptions made in ExId on the basis of the different theories and previous studies supported by the empirical findings? 2) Does the ExId construct promote understanding of operator work in empirical studies? 3) What are the strengths and weaknesses of the ExId construct? The layers and the assumptions of the development of expert identity appeared to be supported by the evidence. The new conceptual model worked as a part of an analysis of different kinds of data, as a part of different methods used for different purposes, and in different work contexts. The results showed that the operators had problems in taking care of the core task, resulting from the discrepancy between demands and resources (either personal or external). Changes in the work, the difficulties in reaching the real content of work in the organisation, and the limits of the practical means of support had complicated the problem and limited the possibilities for development actions within the case organisations. Personal resources seemed to be sensitive to the changes; adaptation was taking place, but not deeply or quickly enough. Furthermore, the results showed several characteristics of the studied contexts that complicated the operators' possibilities to grow into or with the demands and to develop practices, expertise and an expert identity matching the core task: discontinuity of the work demands; discrepancies between the conceptions of work held in other parts of the organisation, the visions, and the reality faced by the operators; and an emphasis on individual efforts and situational solutions. The potential of ExId to open up new paths to solving the problem of demanding situations, and its ability to enable studies on practices in the field, were considered in the discussion. The results were interpreted as promising enough to encourage further studies on ExId. This dissertation aims in particular to contribute to supporting workers in recognising changing demands and their possibilities for growing with them, with the goal of supporting human performance in complex socio-technical systems, both in designing those systems and in solving existing problems.
Abstract:
The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become a common practice in dairy husbandry, and in the year 2006 about 4,000 farms worldwide used over 6,000 milking robots. There is a worldwide movement with the objective of fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and the growth of labour costs. As the level of automation increases, the time that the cattle keeper uses for monitoring animals often decreases. This has created a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily. Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems, especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals. These costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods used for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it requires experience to be conducted properly, it is labour-intensive as an on-farm method and the results are subjective. A four-balance system for measuring the leg load distribution of dairy cows during milking, in order to detect lameness, was developed and set up at the University of Helsinki research farm Suitia. The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated. In order to develop an expert system to automatically detect lameness cases, a model was needed. A probabilistic neural network (PNN) classifier was chosen for the task. The data was divided into two parts: 5,074 measurements from 37 cows were used to train the model, and the model was evaluated for its ability to detect lameness in a validation dataset of 4,868 measurements from 36 cows. The model was able to classify 96% of the measurements correctly as sound or lame cows, and 100% of the lameness cases in the validation data were identified. The proportion of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and in a real-time lameness monitoring system.
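As a rough illustration of the classifier type named above, the following is a minimal Parzen-window PNN sketch in Python; the feature names (leg-load asymmetry, kick activity), the toy data and the smoothing parameter are hypothetical and are not taken from the thesis.

```python
# Minimal sketch of a probabilistic neural network (PNN), i.e. a Parzen-window
# classifier with one Gaussian kernel per training example. Features and data
# below are hypothetical, not the thesis's measurements.
import numpy as np

class PNN:
    def __init__(self, sigma=0.2):
        self.sigma = sigma  # kernel smoothing parameter

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.X_, self.y_ = np.asarray(X, float), np.asarray(y)
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        scores = []
        for c in self.classes_:
            Xc = self.X_[self.y_ == c]
            # squared distances from each query point to every training point of class c
            d2 = ((X[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=2)
            # class score: mean Gaussian kernel response (Parzen density estimate)
            scores.append(np.exp(-d2 / (2 * self.sigma ** 2)).mean(axis=1))
        return self.classes_[np.argmax(np.stack(scores, axis=1), axis=1)]

# Hypothetical features: leg-load asymmetry and (scaled) kick activity per milking.
X_train = np.array([[0.05, 0.0], [0.10, 1.0], [0.45, 3.0], [0.50, 4.0]])
y_train = np.array(["sound", "sound", "lame", "lame"])
model = PNN().fit(X_train, y_train)
print(model.predict(np.array([[0.08, 0.5], [0.48, 3.5]])))  # -> ['sound' 'lame']
```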
Abstract:
This research studied distributed computing of all-to-all comparison problems with big data sets. The thesis formalised the problem and developed a high-performance, scalable computing framework with a programming model, data distribution strategies and task scheduling policies to solve it. The study considered storage usage, data locality and load balancing for performance improvement in solving the problem. The research outcomes can be applied in bioinformatics, biometrics, data mining and other domains in which all-to-all comparisons are a typical computing pattern.
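For readers unfamiliar with the pattern, the toy sketch below shows the computational shape of an all-to-all comparison: every pair of items forms a task that can be distributed to workers. The comparison function and items are placeholders, and the data distribution, storage and locality concerns the thesis addresses are not modelled here.

```python
# Toy all-to-all comparison: n*(n-1)/2 pairwise tasks farmed out to worker processes.
from itertools import combinations
from multiprocessing import Pool

def compare(pair):
    a, b = pair
    # Placeholder similarity (character-set overlap) standing in for a real
    # comparison such as sequence alignment or biometric matching.
    return (a, b, len(set(a) & set(b)))

if __name__ == "__main__":
    items = ["ACGTAC", "ACGGTA", "TTGACA", "GGCATT"]
    pairs = list(combinations(items, 2))
    with Pool(processes=4) as pool:
        results = pool.map(compare, pairs, chunksize=2)
    for a, b, score in results:
        print(a, b, score)
```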
Abstract:
Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades, due to their geomorphological importance as the reference surface for gravitation-driven material flow as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. The investigation of this phenomenon is known as error propagation analysis, which has a direct influence on decision-making processes based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented in a 5-50 m grid and used at application scales of 1:10,000-1:50,000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, performing analytical and simulation-based error propagation analysis, and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, the use of a global characterisation of DEM error is a gross generalisation of reality, due to the small extent of the areas in which the assumption of stationarity is not violated. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error will increase the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this opinion is now challenged, because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. Significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
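The sketch below illustrates, on a synthetic surface and with assumed grid, error and kernel parameters rather than the thesis's data, the simulation-based approach described above: spatially autocorrelated DEM error realisations are generated by process convolution, i.e. smoothing white noise with a Gaussian kernel, and propagated through a slope derivative.

```python
# Illustrative simulation-based error propagation with process convolution;
# grid size, error standard deviation and correlation range are assumed values.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
n, cell = 200, 10.0            # 200 x 200 grid, 10 m cell size (assumed)
x, y = np.meshgrid(np.arange(n) * cell, np.arange(n) * cell)
dem = 50 * np.sin(x / 400) + 30 * np.cos(y / 300)   # synthetic reference surface

def slope(z, cell):
    """Slope (degrees) from central-difference gradients."""
    dzdy, dzdx = np.gradient(z, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

sigma_z, corr_range = 1.0, 3   # error std-dev (m) and correlation range (cells), assumed
slopes = []
for _ in range(100):           # Monte Carlo realisations
    white = rng.standard_normal((n, n))
    err = gaussian_filter(white, corr_range)   # process convolution step
    err *= sigma_z / err.std()                 # rescale to the target error std-dev
    slopes.append(slope(dem + err, cell))

slope_sd = np.std(slopes, axis=0)   # per-cell uncertainty of the slope derivative
print(float(slope_sd.mean()))
```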
Abstract:
Advances in analysis techniques have led to a rapid accumulation of biological data in databases. Such data are often in the form of sequences of observations, examples including DNA sequences and the amino acid sequences of proteins. The scale and quality of the data promise answers to various biologically relevant questions in more detail than has been possible before. For example, one may wish to identify areas in an amino acid sequence that are important for the function of the corresponding protein, or investigate how characteristics at the level of the DNA sequence affect the adaptation of a bacterial species to its environment. Many of the interesting questions are intimately associated with understanding the evolutionary relationships among the items under consideration. The aim of this work is to develop novel statistical models and computational techniques to meet the challenge of deriving meaning from the increasing amounts of data. Our main concern is modeling the evolutionary relationships based on the observed molecular data. We operate within a Bayesian statistical framework, which allows a probabilistic quantification of the uncertainties related to a particular solution. As the basis of our modeling approach we utilize a partition model, which describes the structure of the data by appropriately dividing the data items into clusters of related items. Generalizations and modifications of the partition model are developed and applied to various problems. Large-scale data sets also present a computational challenge. The models used to describe the data must be realistic enough to capture the essential features of the current modeling task but, at the same time, simple enough to make it possible to carry out the inference in practice. The partition model fulfills these two requirements. Problem-specific features can be taken into account by modifying the prior probability distributions of the model parameters. The computational efficiency stems from the ability to integrate out the parameters of the partition model analytically, which enables the use of efficient stochastic search algorithms.
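As a small illustration of why analytic integration matters, the sketch below scores alternative partitions of toy DNA sequences with a Dirichlet-multinomial marginal likelihood in which the cluster parameters are integrated out in closed form; the column-independence model, the alpha value and the data are assumptions made for illustration, not the models developed in the thesis.

```python
# Toy partition scoring: parameters are integrated out analytically, so a partition
# can be scored directly and explored by stochastic search. Data and prior are assumed.
import numpy as np
from scipy.special import gammaln

def log_marginal(counts, alpha=1.0):
    """Log marginal likelihood of a cluster's symbol counts under a symmetric
    Dirichlet(alpha) prior, with the multinomial parameters integrated out."""
    counts = np.asarray(counts, float)
    k = counts.size
    return (gammaln(k * alpha) - gammaln(counts.sum() + k * alpha)
            + np.sum(gammaln(counts + alpha)) - k * gammaln(alpha))

def score_partition(seqs, labels, alphabet="ACGT"):
    """Sum, over clusters and sequence positions, of the analytic marginal
    likelihood of the observed symbol counts."""
    total = 0.0
    for c in set(labels):
        members = [s for s, l in zip(seqs, labels) if l == c]
        for pos in range(len(seqs[0])):
            counts = [sum(s[pos] == a for s in members) for a in alphabet]
            total += log_marginal(counts)
    return total

# Toy DNA sequences and two candidate partitions of them into clusters.
seqs = ["ACGT", "ACGA", "TTGT", "TTGA"]
print(score_partition(seqs, [0, 0, 1, 1]))   # groups similar sequences together
print(score_partition(seqs, [0, 1, 0, 1]))   # mixes them; scores lower
```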
Abstract:
The higher education sector is under ongoing pressure to demonstrate the quality and efficacy of educational provision, including graduate outcomes. Preparing students as far as possible for the world of professional work has become one of the central tasks of contemporary universities. This challenging task continues to receive significant attention from policy makers and scholars, in the broader contexts of widespread labour market uncertainty and the massification of the higher education system (Tomlinson, 2012). In contrast to the previous era of the university, in which ongoing professional employment was virtually guaranteed to university-qualified individuals, contemporary graduates must now be proactive and flexible. They must adapt to a job market that may not accept them immediately and has continually shifting requirements (Clarke, 2008). As the saying goes, rather than seeking security in employment, graduates must now "seek security in employability". However, as I will argue in this chapter, the current curricular and pedagogic approaches universities adopt, and indeed the core structural characteristics of university-based education, militate against the development of the capabilities that graduates require now and into the future.
Abstract:
Extensible Markup Language (XML) has emerged as a medium for interoperability over the Internet. As the number of documents published in the form of XML increases, there is a need for selective dissemination of XML documents based on user interests. In the proposed technique, a combination of Adaptive Genetic Algorithms and a multi-class Support Vector Machine (SVM) is used to learn a user model. Based on feedback from the users, the system automatically adapts to the user's preferences and interests. The user model and a similarity metric are used for selective dissemination of a continuous stream of XML documents. Experimental evaluations performed over a wide range of XML documents indicate that the proposed approach significantly improves the performance of the selective dissemination task with respect to accuracy and efficiency.
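A hedged sketch of the SVM user-model component only (the adaptive genetic algorithm part described above is omitted): the documents, interest categories and threshold logic below are invented for illustration, using scikit-learn's TF-IDF vectoriser and a linear multi-class SVM.

```python
# Sketch of an SVM-based user model for selective dissemination; data are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Textual content of XML documents the user has given feedback on, with the
# interest category inferred from that feedback.
train_docs = [
    "<doc><topic>cricket</topic> match score innings</doc>",
    "<doc><topic>stocks</topic> market shares trading</doc>",
    "<doc><topic>python</topic> programming code library</doc>",
]
train_labels = ["sports", "finance", "technology"]

user_model = make_pipeline(TfidfVectorizer(), LinearSVC())
user_model.fit(train_docs, train_labels)

# Selective dissemination of an incoming document from the stream.
incoming = ["<doc>quarterly earnings and stock market report</doc>"]
pred = user_model.predict(incoming)[0]
score = user_model.decision_function(incoming)[0].max()
# A real system would forward the document only if `score` cleared a
# user-specific threshold (the threshold choice is an assumption).
print(pred, round(float(score), 3))
```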
Abstract:
Automated synthesis of mechanical designs is an important step towards the development of an intelligent CAD system. Research into methods for supporting conceptual design using automated synthesis has attracted much attention in the past decades. The research work presented here is based on the processes of synthesizing multiple-state mechanical devices carried out individually by ten engineering designers. The designers are asked to think aloud while carrying out the synthesis. The ten design synthesis processes are video recorded, and the records are transcribed and coded to identify the activities occurring in the synthesis processes, as well as the inputs to and outputs from those activities. A mathematical representation for specifying multi-state design tasks is proposed. Further, a descriptive model capturing all ten synthesis processes is developed and presented in this paper. This model will be used to identify the outstanding issues to be resolved before a system for supporting the design synthesis of multiple-state mechanical devices, capable of creating a comprehensive variety of solution alternatives, can be developed.
Abstract:
The primary objective of the paper is to make use of a statistical digital human model to better understand the nature of the reach probability of points in the task space. The concept of a task-dependent boundary manikin is introduced to geometrically characterize the extreme individuals in a given population who would accomplish the task. For a given point of interest and task, the map of the acceptable variation in anthropometric parameters is superimposed on the distribution of the same parameters in the given population to identify the extreme individuals. To illustrate the concept, the task space mapping is done for the reach probability of human arms. Unlike the boundary manikins, which are completely defined by the population, the dimensions of these manikins vary with the task, say, a point to be reached, as in the present case. Hence they are referred to here as task-dependent boundary manikins. Simulations with these manikins would help designers visualize how differently the extreme individuals would perform the task. Reach probability at the points of a 3D grid in the operational space is computed; for objects overlaid on this grid, approximate probabilities are derived from the grid in order to render them with colors indicating the reach probability. The method may also help in providing a rational basis for the selection of personnel for a given task.
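A hedged Monte Carlo sketch of the reach-probability idea (not the paper's statistical digital human model): arm-segment lengths are sampled from an assumed population distribution, and the reach probability at each grid point is the fraction of sampled individuals whose straight-arm reach from an assumed shoulder position covers that point; the extreme individuals among those who can reach a given target correspond to the task-dependent boundary manikins. All statistics, positions and the grid are illustrative assumptions.

```python
# Monte Carlo sketch of reach-probability mapping; population statistics, shoulder
# location, grid and the straight-arm reach model are all assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_people = 100_000
upper_arm = rng.normal(0.33, 0.02, n_people)      # m, assumed population statistics
forearm_hand = rng.normal(0.44, 0.03, n_people)
reach = np.sort(upper_arm + forearm_hand)         # simplistic total functional reach

shoulder = np.array([0.0, 0.0, 1.4])              # assumed shoulder position (m)

# 3D grid of points in the operational space.
xs = ys = np.linspace(-1.0, 1.0, 21)
zs = np.linspace(0.8, 2.0, 13)
gx, gy, gz = np.meshgrid(xs, ys, zs, indexing="ij")
dist = np.sqrt((gx - shoulder[0])**2 + (gy - shoulder[1])**2 + (gz - shoulder[2])**2)

# Reach probability per grid point: fraction of the population whose reach covers it.
idx = np.searchsorted(reach, dist.ravel(), side="left")
reach_prob = (1.0 - idx / n_people).reshape(dist.shape)

# "Task-dependent boundary manikin" idea for one target point: among the individuals
# who can accomplish the task (reach the point), look at the extreme ones.
target = np.array([0.5, 0.3, 1.2])
d = float(np.linalg.norm(target - shoulder))
capable = reach >= d
print("reach probability at target:", capable.mean())
print("shortest arm length that still reaches it:", reach[capable].min())
```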
Abstract:
This research is designed to develop a new technique for site characterization in a three-dimensional domain. Site characterization is a fundamental task in geotechnical engineering practice, as well as a very challenging process, with the ultimate goal of estimating soil properties, based on limited tests, at any half-space subsurface point in a site. In this research, the sandy site at Texas A&M University's National Geotechnical Experimentation Site is selected as an example to develop the new technique for site characterization, which is based on Artificial Neural Network (ANN) technology. In this study, a sequential approach is used to demonstrate the applicability of ANNs to site characterization. To verify its robustness, the proposed new technique is compared with other commonly used approaches for site characterization. In addition, an artificial site is created, wherein soil property values at any half-space point are assumed, and thus the predicted values can be compared directly with their corresponding actual values as a means of validation. Since the three-dimensional model has the capability of estimating the soil property at any location in a site, it could have many potential applications, especially in cases where the soil properties within a zone are of interest rather than those at a single point. Examples of soil properties of zonal interest include soil type classification and liquefaction potential evaluation. In this regard, the present study also addresses this type of application based on a site located in Taiwan that experienced liquefaction during the 1999 Chi-Chi earthquake.
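A minimal sketch of the underlying idea, with synthetic data standing in for real soundings: an ANN maps a half-space location (x, y, depth) to a soil property and can then be queried at unsampled points. The coordinates, the property trend and the network size are assumptions, not the study's configuration.

```python
# Toy ANN-based site characterization: predict a soil property at any (x, y, depth).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic test data: property increases with depth and varies gently in plan.
xyz = rng.uniform([0, 0, 1], [50, 50, 15], size=(200, 3))   # x, y, depth (m)
prop = 2.0 + 0.8 * xyz[:, 2] + 0.02 * xyz[:, 0] + rng.normal(0, 0.3, 200)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
)
model.fit(xyz, prop)

# Estimate the soil property at an arbitrary half-space point not in the data.
query = np.array([[25.0, 30.0, 8.0]])
print("estimated property at query point:", float(model.predict(query)[0]))
```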
Abstract:
Designing and optimizing high-performance microprocessors is an increasingly difficult task due to the size and complexity of the processor design space, the high cost of detailed simulation and the several constraints that a processor design must satisfy. In this paper, we propose the use of empirical non-linear modeling techniques to assist processor architects in making design decisions and resolving complex trade-offs. We propose a procedure for building accurate non-linear models that consists of the following steps: (i) selection of a small set of representative design points spread across the processor design space using Latin hypercube sampling, (ii) obtaining performance measures at the selected design points using detailed simulation, (iii) building non-linear models for performance using the function approximation capabilities of radial basis function networks, and (iv) validating the models using an independently and randomly generated set of design points. We evaluate our model building procedure by constructing non-linear performance models for programs from the SPEC CPU2000 benchmark suite with a microarchitectural design space that consists of 9 key parameters. Our results show that the models, built using a relatively small number of simulations, achieve high prediction accuracy (only 2.8% error in CPI estimates on average) across a large processor design space. Our models can potentially replace detailed simulation for common tasks such as the analysis of key microarchitectural trends or searches for optimal processor design points.
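The four steps listed above can be mimicked end to end with off-the-shelf tools, as in the sketch below; here a toy analytic function stands in for detailed simulation, and the sample sizes, kernel and parameter count are illustrative assumptions rather than the paper's setup.

```python
# Sketch of the four-step procedure: (i) Latin hypercube sample, (ii) "simulate",
# (iii) fit a radial-basis-function model, (iv) validate on random design points.
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
n_params = 9                                   # key microarchitectural parameters

def simulate_cpi(x):
    """Stand-in for detailed simulation: a smooth non-linear response on [0, 1]^9."""
    return 1.0 + np.sin(3 * x[:, 0]) * x[:, 1] + 0.5 * x[:, 2] ** 2 + 0.1 * x[:, 3:].sum(axis=1)

# (i) small representative sample of the design space via Latin hypercube sampling
train_x = qmc.LatinHypercube(d=n_params, seed=0).random(n=120)
# (ii) "performance measures" at the sampled design points
train_cpi = simulate_cpi(train_x)
# (iii) non-linear model via radial basis functions
model = RBFInterpolator(train_x, train_cpi, kernel="thin_plate_spline")
# (iv) validation on independently generated random design points
test_x = rng.random((500, n_params))
pred = model(test_x)
true = simulate_cpi(test_x)
print("mean absolute % error:", 100 * np.mean(np.abs(pred - true) / true))
```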
Abstract:
The problem of semantic interoperability arises when integrating applications in different task domains across the product life cycle. A new shape-function-relationship (SFR) framework is proposed as a taxonomy, based on which an ontology is developed. An ontology based on the SFR framework, which captures explicit definitions of terminology and knowledge relationships in terms of shape, function and relationship descriptors, offers an attractive approach to solving the semantic interoperability issue. Since all instances of terms are based on a single taxonomy with a formal classification, mapping of terms requires only a simple check on the attributes used in the classification. As a preliminary study, the framework is used to develop an ontology of terms used in the aero-engine domain, and this ontology is used to resolve the semantic interoperability problem in the integration of design and maintenance. Since the framework allows a single term to have multiple classifications, handling context-dependent usage of terms becomes possible. Automating the classification of terms and establishing the completeness of the classification scheme are currently being addressed.
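To make the mapping idea concrete, here is a toy illustration in which two domain vocabularies classify their terms under one shared shape/function/relationship taxonomy, and cross-domain mapping reduces to a simple attribute check; the terms and descriptors are invented, not the paper's aero-engine ontology.

```python
# Toy SFR-style term mapping across task domains; terms and descriptors are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class SFR:
    shape: str
    function: str
    relationship: str

# Both the design and the maintenance vocabularies classify their terms under the
# same underlying taxonomy of shape / function / relationship descriptors.
design_terms = {"annulus": SFR("ring", "guide_flow", "surrounds_shaft")}
maintenance_terms = {
    "casing ring": SFR("ring", "guide_flow", "surrounds_shaft"),
    "seal": SFR("ring", "prevent_leakage", "contacts_shaft"),
}

def map_term(term, source, target):
    """Map a term across domains by matching its SFR classification attributes."""
    sfr = source[term]
    return [t for t, other in target.items() if other == sfr]

print(map_term("annulus", design_terms, maintenance_terms))   # -> ['casing ring']
```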
Abstract:
Multi-task learning solves multiple related learning problems simultaneously by sharing some common structure, for improved generalization performance on each task. We propose a novel approach to multi-task learning that captures task similarity through a shared basis vector set. The variability across tasks is captured through task-specific basis vector sets. We use a sparse support vector machine (SVM) algorithm to select the basis vector sets for the tasks. The approach results in a sparse model in which prediction is done using very few examples. The effectiveness of our approach is demonstrated through experiments on synthetic and real multi-task datasets.
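The prediction structure described above can be illustrated with a toy sketch: each task's score is a kernel expansion over a shared basis vector set plus its own task-specific set. Here the basis vectors are chosen at random and the coefficients fit by regularised least squares, a deliberate simplification of the paper's sparse-SVM selection, and all data are synthetic.

```python
# Toy shared-basis / task-specific-basis prediction; a simplified stand-in for the
# paper's sparse-SVM approach (random basis vectors, least-squares coefficients).
import numpy as np

def rbf_features(X, B, gamma=1.0):
    d2 = ((X[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)

# Two related binary tasks on 2-D inputs (synthetic).
X1 = rng.normal(size=(60, 2))
y1 = np.sign(X1[:, 0] + 0.2 * X1[:, 1])
X2 = rng.normal(size=(60, 2))
y2 = np.sign(X2[:, 0] - 0.2 * X2[:, 1])

shared_basis = rng.normal(size=(5, 2))               # shared across all tasks
task_basis = {1: rng.normal(size=(3, 2)), 2: rng.normal(size=(3, 2))}

def fit_task(X, y, B_shared, B_task, lam=1e-2):
    """Fit coefficients over the [shared | task-specific] basis columns."""
    Phi = np.hstack([rbf_features(X, B_shared), rbf_features(X, B_task)])
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

def predict_task(X, w, B_shared, B_task):
    Phi = np.hstack([rbf_features(X, B_shared), rbf_features(X, B_task)])
    return np.sign(Phi @ w)

for t, (X, y) in {1: (X1, y1), 2: (X2, y2)}.items():
    w = fit_task(X, y, shared_basis, task_basis[t])
    acc = (predict_task(X, w, shared_basis, task_basis[t]) == y).mean()
    print(f"task {t} training accuracy: {acc:.2f}")
```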