927 results for person-centred systems
Abstract:
This report evaluates the performance of long-term care (LTC) systems in Europe, with a special emphasis on four countries that were selected in Work Package 1 of the ANCIEN project as representative of different LTC systems: Germany, the Netherlands, Spain and Poland. Based on a performance framework, we use the following four core criteria for the evaluation: the quality of life of LTC users, the quality of care, equity of LTC systems and the total burden of LTC (consisting of the financial burden and the burden of informal caregiving). The quality of life is analysed by studying the experience of LTC users in 13 European countries, using data from the Survey of Health, Ageing and Retirement in Europe (SHARE). Older persons with limitations living at home have the highest probability of receiving help (formal or informal) in Germany and the lowest in Poland. Given that help is available, the sufficiency of the help is best ensured in Switzerland, Italy and the Netherlands. The indirectly observed properties of the LTC system are most favourable in France. An older person who considers all three aspects important might be best off living in Belgium or Switzerland. The horizontal and vertical equity of LTC systems are analysed for the four representative countries. The Dutch system scores highest on overall equity, followed by the German system. The Spanish and Polish systems are both less equitable than the Dutch and German systems. To show how ageing may affect the financial burden of LTC, projections until 2060 are given for LTC expenditures for the four representative countries. Under the base scenario, for all four countries the proportions of GDP spent on public and private LTC are projected to more than double between 2010 and 2060, and even treble in some cases. The projections also highlight the large differences in LTC expenditures between the four countries. The Netherlands spends by far the most on LTC. Furthermore, the report presents information for a number of European countries on quality of care, the burden of informal caregiving and other aspects of performance. The LTC systems for the four representative countries are evaluated using the four core criteria. The Dutch system has the highest scores on all four dimensions except the total burden of care, where it has the second-best score after Poland. The German system has somewhat lower scores than the Dutch on all four dimensions. The relatively large role for informal care lowers the equity of the German system. The Polish system excels in having a low total burden of care, but it scores lowest on quality of care and equity. The Spanish system has few extreme scores. Policy implications are discussed in the last chapter of this report and in the Policy Brief based on this report.
Abstract:
Automatic ontology building is a vital issue in many fields where ontologies are currently built manually. This paper presents a user-centred methodology for ontology construction based on the use of Machine Learning and Natural Language Processing. In our approach, the user selects a corpus of texts and sketches a preliminary ontology (or selects an existing one) for a domain, with a preliminary vocabulary associated with the elements in the ontology (lexicalisations). Examples of sentences involving such lexicalisations (e.g. of the ISA relation) in the corpus are automatically retrieved by the system. Retrieved examples are validated by the user and used by an adaptive Information Extraction system to generate patterns that discover other lexicalisations of the same objects in the ontology, possibly identifying new concepts or relations. New instances are added to the existing ontology or used to tune it. This process is repeated until a satisfactory ontology is obtained. The methodology largely automates the ontology construction process, and the output is an ontology with an associated trained learner to be used for further ontology modifications.
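As a rough illustration of the iterative loop described in this abstract, the following minimal Python sketch retrieves sentences containing known lexicalisations, has a (stubbed) user validate them, induces simple extraction patterns, and folds the extracted instances back into the ontology. The toy corpus, the regex-based "pattern learner" and the auto-accepting user stand-in are assumptions for illustration, not the system described in the paper.

# Minimal, self-contained sketch of the user-centred ontology-learning loop.
import re

corpus = [
    "A dog is a mammal.",
    "A sparrow is a bird.",
    "Dogs often chase cats.",
]

# Preliminary ontology sketch: known lexicalisations of the ISA relation.
isa_lexicalisations = ["is a"]
ontology = set()            # extracted (concept, superconcept) pairs

def user_validates(sentence):
    # Stand-in for interactive validation by the user.
    return True

for round_number in range(3):
    # 1. Retrieve corpus sentences containing a known lexicalisation.
    candidates = [s for s in corpus
                  if any(lex in s for lex in isa_lexicalisations)]
    validated = [s for s in candidates if user_validates(s)]
    if not validated:
        break                # user rejects everything: stop refining

    # 2. Induce simple extraction patterns from the lexicalisations seen in
    #    the validated examples (the toy "adaptive IE" step).
    patterns = [re.compile(r"A (\w+) " + re.escape(lex) + r" (\w+)")
                for lex in isa_lexicalisations
                if any(lex in s for s in validated)]

    # 3. Apply the patterns to the corpus; new instances extend the ontology.
    new_instances = {m.groups() for p in patterns
                     for s in corpus for m in p.finditer(s)}
    if new_instances <= ontology:
        break                # nothing new: a satisfactory ontology was reached
    ontology |= new_instances

print(ontology)              # {('dog', 'mammal'), ('sparrow', 'bird')}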
Abstract:
This thesis describes research into business user involvement in the information systems application building process. The main interest of this research is in establishing and testing techniques to quantify the relationships between identified success factors and the outcome effectiveness of 'business user development' (BUD). The availability of a mechanism to measure the levels of the success factors, and quantifiably relate them to outcome effectiveness, is important in that it provides an organisation with the capability to predict and monitor effects on BUD outcome effectiveness. This is particularly important in an era where BUD levels have risen dramatically, the benefits of user-centred information systems development are recognised as significant, and awareness of the risks of uncontrolled BUD activity is becoming more widespread. This research targets the measurement and prediction of BUD success factors and implementation effectiveness for particular business users. A questionnaire instrument and analysis technique have been developed and tested which constitute a tool for predicting and monitoring BUD outcome effectiveness, based on the BUDES (Business User Development Effectiveness and Scope) research model, which is introduced and described in this thesis. The questionnaire instrument is designed for completion by 'business users' - the target community being more explicitly defined as 'people who primarily have a business role within an organisation'. The instrument, named BUD ESP (Business User Development Effectiveness and Scope Predictor), can readily be used with survey participants, and has been shown to give meaningful and representative results.
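The quantitative link the thesis describes - measuring success-factor levels from questionnaire responses and relating them to predicted outcome effectiveness - might be sketched roughly as follows. The factor names, item scores and weights are hypothetical placeholders and do not reproduce the BUDES model or the BUD ESP instrument.

# Illustrative sketch only: aggregating questionnaire items into success-factor
# levels and relating them to outcome effectiveness with a simple linear model.
import numpy as np

# Hypothetical per-factor item scores (1-5 Likert scale) for one business user.
responses = {
    "user_it_experience":     [4, 5, 3],
    "task_complexity":        [2, 3, 2],
    "organisational_support": [4, 4, 5],
}

# Step 1: aggregate items into a level for each (hypothetical) success factor.
factor_levels = {f: np.mean(items) for f, items in responses.items()}

# Step 2: relate factor levels to predicted outcome effectiveness.
# Weights would normally be estimated from survey data (e.g. by regression);
# the values below are placeholders.
weights = {"user_it_experience": 0.5,
           "task_complexity": -0.3,
           "organisational_support": 0.4}
intercept = 1.0

predicted = intercept + sum(weights[f] * level
                            for f, level in factor_levels.items())
print(f"Predicted BUD outcome effectiveness: {predicted:.2f}")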
Abstract:
The current study aimed to exploit the electrostatic associative interaction between carrageenan and gelatin to optimise a formulation of lyophilised orally disintegrating tablets (ODTs) suitable for multiparticulate delivery. A central composite face centred (CCF) design was applied to study the influence of formulation variables (gelatin, carrageenan and alanine concentrations) on the crucial responses of the formulation (disintegration time, hardness, viscosity and pH). The disintegration time and viscosity were controlled by the associative interaction between gelatin and carrageenan upon hydration which forms a strong complex that increases the viscosity of the stock solution and forms tablet with higher resistant to disintegration in aqueous medium. Therefore, the levels of carrageenan, gelatin and their interaction in the formulation were the significant factors. In terms of hardness, increasing gelatin and alanine concentration was the most effective way to improve tablet hardness. Accordingly, optimum concentrations of these excipients were needed to find the best balance that fulfilled all formulation requirements. The revised model showed high degree of predictability and optimisation reliability and therefore was successful in developing an ODT formulation with optimised properties that were able deliver enteric coated multiparticulates of omeprazole without compromising their functionality.
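The experimental layout described here can be sketched in a few lines of Python. The coded levels are the standard factorial, face-centred axial and centre runs of a CCF design for three factors; the response values below are synthetic placeholders, not the study's data.

# Sketch of a face-centred central composite design (CCF) for three coded
# factors (gelatin, carrageenan, alanine) plus a quadratic response-surface
# fit, using numpy only. The response is simulated for illustration.
import itertools
import numpy as np

factorial = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 runs
axial = np.array([r for i in range(3)
                  for r in (np.eye(3)[i], -np.eye(3)[i])])         # 6 face-centred runs
center = np.zeros((3, 3))                                          # 3 centre runs
X = np.vstack([factorial, axial, center])

# Full quadratic model terms: intercept, main effects, interactions, squares.
def quad_terms(x):
    g, c, a = x
    return [1, g, c, a, g*c, g*a, c*a, g*g, c*c, a*a]

M = np.array([quad_terms(x) for x in X])

# Synthetic disintegration-time response (seconds), illustration only.
y = 20 + 6*X[:, 0] + 9*X[:, 1] + 4*X[:, 0]*X[:, 1] + np.random.normal(0, 0.5, len(X))

coef, *_ = np.linalg.lstsq(M, y, rcond=None)
print(dict(zip(["b0", "g", "c", "a", "g*c", "g*a", "c*a", "g^2", "c^2", "a^2"],
               coef.round(2))))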
Abstract:
The present scarcity of operational knowledge-based systems (KBS) has been attributed, in part, to an inadequate consideration shown to user interface design during development. From a human factors perspective the problem has stemmed from an overall lack of user-centred design principles. Consequently the integration of human factors principles and techniques is seen as a necessary and important precursor to ensuring the implementation of KBS which are useful to, and usable by, the end-users for whom they are intended. Focussing upon KBS work taking place within commercial and industrial environments, this research set out to assess both the extent to which human factors support was presently being utilised within development, and the future path for human factors integration. The assessment consisted of interviews conducted with a number of commercial and industrial organisations involved in KBS development, and a set of three detailed case studies of individual KBS projects. Two of the studies were carried out within a collaborative Alvey project, involving the Interdisciplinary Higher Degrees Scheme (IHD) at the University of Aston in Birmingham, BIS Applied Systems Ltd (BIS), and the British Steel Corporation. This project, which had provided the initial basis and funding for the research, was concerned with the application of KBS to the design of commercial data processing (DP) systems. The third study stemmed from involvement on a KBS project being carried out by the Technology Division of the Trustees Saving Bank Group plc. The preliminary research highlighted poor human factors integration. In particular, there was a lack of early consideration of end-user requirements definition and user-centred evaluation. Instead, concentration was given to the construction of the knowledge base and prototype evaluation with the expert(s). In response to this identified problem, a set of methods was developed that was aimed at encouraging developers to consider user interface requirements early on in a project. These methods were then applied in the two further projects, and their uptake within the overall development process was monitored. Experience from the two studies demonstrated that early consideration of user interface requirements was both feasible, and instructive for guiding future development work. In particular, it was shown that a user interface prototype could be used as a basis for capturing requirements at the functional (task) level, and at the interface dialogue level. Extrapolating from this experience, a KBS life-cycle model is proposed which incorporates user interface design (and within that, user evaluation) as a largely parallel, rather than subsequent, activity to knowledge base construction. Further to this, there is a discussion of several key elements which can be seen as inhibiting the integration of human factors within KBS development. These elements stem from characteristics of present KBS development practice; from constraints within the commercial and industrial development environments; and from the state of existing human factors support.
Abstract:
The paper considers basic concepts for constructing many-valued intellectual systems that are adequate to fundamental problems of human activity and use hybrid tools with many-valued coding. Many-valued intellectual systems are created that are two-valued in their elements but simulate neural processes of spatial summation, differing in their level of activity, in the inertial and threshold properties of neuron membranes, and in the modulation of the frequency of the transmitted messages. All of the enumerated properties and functions are in fact essential and are not only discrete in time but also many-valued.
Abstract:
The “Nash program” initiated by Nash (Econometrica 21:128–140, 1953) is a research agenda aiming at representing every axiomatically determined cooperative solution to a game as a Nash outcome of a reasonable noncooperative bargaining game. The L-Nash solution, first defined by Forgó (Interactive Decisions. Lecture Notes in Economics and Mathematical Systems, vol 229. Springer, Berlin, pp 1–15, 1983), is obtained as the limiting point of the Nash bargaining solution when the disagreement point goes to negative infinity in a fixed direction. In Forgó and Szidarovszky (Eur J Oper Res 147:108–116, 2003), the L-Nash solution was related to the solution of multicriteria decision making, and two different axiomatizations of the L-Nash solution were also given in this context. In this paper, finite bounds are established for the penalty of disagreement in certain special two-person bargaining problems, making it possible to apply all the implementation models designed for Nash bargaining problems with a finite disagreement point to obtain the L-Nash solution as well. For another set of problems where this method does not work, a version of Rubinstein’s alternating offer game (Econometrica 50:97–109, 1982) is shown to asymptotically implement the L-Nash solution. If penalty is internalized as a decision variable of one of the players, then a modification of Howard’s game (J Econ Theory 56:142–159, 1992) also implements the L-Nash solution.
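The limiting construction that defines the L-Nash solution can be illustrated numerically. The sketch below assumes a toy bargaining set with Pareto frontier u2 = 1 - u1^2 on 0 <= u1 <= 1 and a disagreement point receding along a fixed direction; it is not an implementation from the cited papers.

# Numerical illustration of the limit defining the L-Nash solution: the Nash
# bargaining solution as the disagreement point d = -t*w goes to negative
# infinity in a fixed direction w. The feasible set is an assumed toy example.
from scipy.optimize import minimize_scalar

def nash_solution(d, frontier=lambda u1: 1.0 - u1**2):
    """Maximise the Nash product (u1 - d1)(u2 - d2) along the Pareto frontier."""
    objective = lambda u1: -(u1 - d[0]) * (frontier(u1) - d[1])
    res = minimize_scalar(objective, bounds=(0.0, 1.0), method="bounded")
    return res.x, frontier(res.x)

w = (1.0, 1.0)                      # direction in which disagreement recedes
for t in [1, 10, 100, 1000, 10000]:
    d = (-t * w[0], -t * w[1])
    print(t, nash_solution(d))
# The printed points converge to the L-Nash solution for direction w
# (approximately (0.5, 0.75) for this frontier and direction).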
Abstract:
Police often use facial composites during their investigations, yet research suggests that facial composites are generally not effective. The present research included two experiments on facial composites. The first experiment was designed to test the usefulness of the encoding specificity principle for determining when facial composites will be effective. Instructions were used to encourage holistic or featural cues at encoding. The method used to construct facial composites was manipulated to encourage holistic or featural cues at retrieval. The encoding specificity principle suggests that an interaction effect should occur. If the same cues are used at encoding and retrieval, better composites should be constructed than when the cues are not the same. However, neither the expected interaction nor the main effects for encoding and retrieval were significant. The second study was conducted to assess the effectiveness of composites generated by two different facial composite construction systems, E-Fit and Mac-A-Mug Pro. These systems differ in that the E-Fit system uses more sophisticated methods of composite construction and may construct better quality facial composites. A comparison of E-Fit and Mac-A-Mug Pro composites demonstrated that E-Fit composites were of better quality than Mac-A-Mug Pro composites. However, neither E-Fit nor Mac-A-Mug Pro composites were useful for identifying the target person from a photograph lineup. Further, lineup performance was at floor level such that both E-Fit and Mac-A-Mug Pro composites were no more useful than a verbal description. Possible limitations of the studies are discussed, as well as suggestions for future research.
Abstract:
The ability for the citizens of a nation to determine their own representation has long been regarded as one of the most critical objectives of any electoral system. Without having the assurance of equality in representation, the fundamental nature and operation of the political system is severely undermined. Given the centuries of institutional reforms and population changes in the American system, Congressional Redistricting stands as an institution whereby this promise of effective representation can either be fulfilled or denied. The broad set of processes that encapsulate Congressional Redistricting have been discussed, experimented with, and modified to achieve clear objectives and have long been understood to be important. Questions remain about how the dynamics which link all of these processes operate and what impact the realities of Congressional Redistricting hold for representation in the American system. This dissertation examines three aspects of how Congressional Redistricting in the United States operates in accordance with the principle of “One Person, One Vote.” By utilizing data and data analysis techniques of Geographic Information Systems (GIS), this dissertation seeks to address how Congressional Redistricting impacts the principle of one person, one vote from the standpoint of legislator accountability, redistricting institutions, and the promise of effective minority representation.
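One common way to quantify adherence to "One Person, One Vote" is the deviation of each district's population from the ideal (equal) district population. The short sketch below uses hypothetical district populations and is illustrative only; it is not the dissertation's GIS analysis.

# Illustrative sketch: population deviation from the ideal district size,
# a standard malapportionment measure. The populations are made up.
populations = {"District 1": 710_000, "District 2": 698_500,
               "District 3": 703_200, "District 4": 695_300}

ideal = sum(populations.values()) / len(populations)
deviations = {d: (p - ideal) / ideal * 100 for d, p in populations.items()}

for district, dev in deviations.items():
    print(f"{district}: {dev:+.2f}% from ideal")

# Overall range (max minus min deviation), often reported in redistricting cases.
print(f"Total deviation range: "
      f"{max(deviations.values()) - min(deviations.values()):.2f}%")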
Abstract:
Economic policy-making has long been more integrated than social policy-making, in part because the statistics and much of the analysis that support economic policy are based on a common conceptual framework – the system of national accounts. People interested in economic analysis and economic policy share a common language of communication, one that includes both concepts and numbers. This paper examines early attempts to develop a system of social statistics that would mirror the system of national accounts, particularly the work on the development of social accounts that took place mainly in the 60s and 70s. It explores the reasons why these early initiatives failed but argues that the preconditions now exist to develop a new conceptual framework to support integrated social statistics – and hence a more coherent, effective social policy. Optimism is warranted for two reasons. First, we can make use of the radical transformation that has taken place in information technology, both in processing data and in providing wide access to the knowledge that can flow from the data. Second, the conditions exist to begin to shift away from the straitjacket of government-centric social statistics, with its implicit assumption that governments must be the primary actors in finding solutions to social problems. By supporting the decision-making of all the players (particularly individual citizens) who affect social trends and outcomes, we can start to move beyond the sterile, ideological discussions that have dominated much social discourse in the past and begin to build social systems and structures that evolve, almost automatically, based on empirical evidence of ‘what works best for whom’. The paper describes a Canadian approach to developing a framework, or common language, to support the evolution of an integrated, citizen-centric system of social statistics and social analysis. This language supports the traditional social policy that we have today; nothing is lost. However, it also supports a quite different social policy world, one where individual citizens and families (not governments) are seen as the central players – a more empirically-driven world that we have referred to as the ‘enabling society’.
Abstract:
Person re-identification involves recognizing a person across non-overlapping camera views, with different pose, illumination, and camera characteristics. We propose to tackle this problem by training a deep convolutional network to represent a person’s appearance as a low-dimensional feature vector that is invariant to common appearance variations encountered in the re-identification problem. Specifically, a Siamese-network architecture is used to train a feature extraction network using pairs of similar and dissimilar images. We show that the use of a novel multi-task learning objective is crucial for regularizing the network parameters in order to prevent over-fitting due to the small size of the training dataset. We complement the verification task, which is at the heart of re-identification, by training the network to jointly perform verification, identification, and to recognise attributes related to the clothing and pose of the person in each image. Additionally, we show that our proposed approach performs well even in the challenging cross-dataset scenario, which may better reflect real-world expected performance.
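The multi-task objective described in this abstract - joint verification, identification and attribute recognition on top of a shared feature extractor - can be sketched as follows in PyTorch. The tiny backbone, loss weighting, attribute count and input sizes are assumptions for illustration, not the paper's architecture or settings.

# Sketch of a Siamese feature extractor trained with a combined verification
# (contrastive), identification (cross-entropy) and attribute (BCE) loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReIDNet(nn.Module):
    def __init__(self, feat_dim=128, num_ids=100, num_attrs=10):
        super().__init__()
        self.backbone = nn.Sequential(              # tiny stand-in CNN
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim))
        self.id_head = nn.Linear(feat_dim, num_ids)      # identification task
        self.attr_head = nn.Linear(feat_dim, num_attrs)  # attribute task

    def forward(self, x):
        f = self.backbone(x)
        return f, self.id_head(f), self.attr_head(f)

def multitask_loss(model, xa, xb, same, ids_a, attrs_a, margin=1.0):
    fa, logits_a, attr_a = model(xa)
    fb, _, _ = model(xb)
    # Verification: contrastive loss on the image pair (same / different person).
    dist = F.pairwise_distance(fa, fb)
    verif = (same * dist.pow(2) +
             (1 - same) * F.relu(margin - dist).pow(2)).mean()
    ident = F.cross_entropy(logits_a, ids_a)                     # identification
    attrs = F.binary_cross_entropy_with_logits(attr_a, attrs_a)  # attributes
    return verif + ident + attrs

# Toy forward/backward pass on random data, just to show the wiring.
model = ReIDNet()
xa, xb = torch.randn(8, 3, 128, 64), torch.randn(8, 3, 128, 64)
same = torch.randint(0, 2, (8,)).float()
ids_a = torch.randint(0, 100, (8,))
attrs_a = torch.randint(0, 2, (8, 10)).float()
loss = multitask_loss(model, xa, xb, same, ids_a, attrs_a)
loss.backward()
print(float(loss))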
Abstract:
Two ideas taken from Bayesian optimization and classifier systems are presented for personnel scheduling, based on choosing a suitable scheduling rule from a set for each person's assignment. Unlike our previous work using genetic algorithms, whose learning is implicit, the learning in both approaches is explicit, i.e. we are able to identify building blocks directly. To achieve this, the Bayesian optimization algorithm builds a Bayesian network of the joint probability distribution of the rules used to construct solutions, while the adapted classifier system assigns each rule a strength value that is constantly updated according to its usefulness in the current situation. Computational results from 52 real data instances of nurse scheduling demonstrate the success of both approaches. It is also suggested that the learning mechanism in the proposed approaches might be suitable for other scheduling problems.
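The strength-updating idea attributed to the adapted classifier system can be sketched as a simple rule-selection loop: each rule carries a strength, a rule is chosen per assignment in proportion to the strengths, and the chosen rule's strength is updated according to how useful it proved. The rules, the stand-in assignment quality and the update constant below are illustrative assumptions, not the nurse-scheduling system evaluated in the paper.

# Toy sketch of strength-based scheduling-rule selection and updating.
import random

rules = ["least_loaded_first", "most_constrained_first", "cheapest_shift_first"]
strength = {r: 1.0 for r in rules}
LEARNING_RATE = 0.2

def choose_rule():
    # Roulette-wheel selection proportional to current strengths.
    total = sum(strength.values())
    pick, acc = random.uniform(0, total), 0.0
    for r in rules:
        acc += strength[r]
        if pick <= acc:
            return r
    return rules[-1]

def assignment_quality(rule):
    # Stand-in for evaluating the assignment the rule produces (hypothetical biases).
    bias = {"least_loaded_first": 0.7, "most_constrained_first": 0.9,
            "cheapest_shift_first": 0.5}
    return random.random() < bias[rule]

for person in range(200):                 # one rule chosen per assignment
    rule = choose_rule()
    reward = 1.0 if assignment_quality(rule) else 0.0
    # Strength update: move towards the observed usefulness of the rule.
    strength[rule] += LEARNING_RATE * (reward - strength[rule])

print({r: round(s, 2) for r, s in strength.items()})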