722 results for systematic methods


Relevance:

70.00%

Publisher:

Abstract:

With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods for optimising either area or timing, while power optimisation often relies on heuristics specific to a particular design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: How can a design flow be built which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into it. The proposed design flow is used as a platform for analysing novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: Is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular, we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or delay. Finally, the third question which this thesis attempts to answer is: Is there a systematic approach for multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under zero-delay and non-zero-delay models. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for global optimisation that locates a good approximation to the global optimum of a given function in a large search space. We used SA to decide probabilistically between moving from one optimised solution to another, such that dynamic power is optimised under given delay constraints and delay is optimised under given power constraints. A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is a search algorithm used for traversing or searching a weighted tree or graph. We used UCS to search within the AIG network for a specific AIG node order in which to apply the reordering rules. After the reordering rules are applied, the AIG network is mapped to a netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to a netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made against the best synthesis scripts in ABC. A reduction of 23% in power and 15% in delay is achieved with minimal overhead, compared to the best known ABC results. Our approach has also been applied to a number of processors with combinational and sequential components, and significant savings are achieved.
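The abstract describes the annealing step only at a high level. As a rough illustration, not the thesis's or ABC's actual code, a generic simulated-annealing loop that trades switching power against a delay budget over candidate AIG node orderings might look like the following Python sketch; the callbacks `estimate_power`, `estimate_delay` and `random_neighbour`, and the penalty weight, are hypothetical placeholders.

```python
import math
import random

def anneal(initial_order, estimate_power, estimate_delay, random_neighbour,
           delay_budget, t0=1.0, t_min=1e-3, cooling=0.95, moves_per_temp=50):
    """Generic simulated-annealing loop: minimise switching power subject to
    a delay budget by perturbing an AIG node ordering.  Illustrative only."""
    def cost(order):
        power = estimate_power(order)
        delay = estimate_delay(order)
        # Penalise orderings that exceed the delay budget instead of rejecting
        # them outright, so the search can pass through infeasible regions.
        penalty = max(0.0, delay - delay_budget)
        return power + 10.0 * penalty

    current, current_cost = initial_order, cost(initial_order)
    best, best_cost = current, current_cost
    t = t0
    while t > t_min:
        for _ in range(moves_per_temp):
            candidate = random_neighbour(current)   # e.g. swap two AIG nodes
            candidate_cost = cost(candidate)
            delta = candidate_cost - current_cost
            # Always accept improvements; accept worse moves with probability
            # exp(-delta / t), which shrinks as the temperature t decreases.
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, candidate_cost
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= cooling
    return best, best_cost
```

Swapping the roles of the power and delay estimates in the cost function gives the complementary delay-under-power-constraint search mentioned in the abstract.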

Relevance:

70.00%

Publisher:

Abstract:

Postprint

Relevance:

60.00%

Publisher:

Abstract:

While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally accepted definition of equilibrium nor an adequate explanation of how a system with underlying microscopically Hamiltonian (reversible) dynamics settles into a fixed distribution.

Motivated by these physical theories, and perhaps by their inconsistencies, in this thesis we use dynamical systems theory to investigate how even the very simplest of systems, with no physical constraints, are characterized by bounds that limit the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device or from those of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems with non-zero lower bounds on uncertainty, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. Finally, we revisit the murky question of how macroscopic dissipation arises from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.
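For reference, the standard quantum uncertainty relation alluded to, and the textbook fluctuation-dissipation relation for a classical Langevin system, are as follows; these are the usual textbook statements, not the thesis's own classical bound.

```latex
% Heisenberg uncertainty relation between position and momentum
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}

% Fluctuation-dissipation relation for the Langevin model
% m\ddot{x} + m\gamma\dot{x} = \xi(t) in thermal equilibrium at temperature T:
\langle \xi(t)\,\xi(t') \rangle \;=\; 2\, m\gamma\, k_B T\, \delta(t - t')
```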

Relevance:

60.00%

Publisher:

Abstract:

Dynamism and uncertainty are real challenges for present-day manufacturing enterprises (MEs). Reasons include an increasing demand for customisation, reduced time to market, shortened product life cycles and globalisation. MEs can reduce competitive pressure by becoming reconfigurable and change-capable. However, modern manufacturing philosophies, including agile and lean, must complement the application of reconfigurable manufacturing paradigms. Choosing and applying the best philosophies and techniques is very difficult, as most MEs deploy complex and unique configurations of processes and resource systems and seek economies of scope and scale in respect of changing and distinctive product flows. It follows that systematic methods of achieving model-driven reconfiguration and interoperation of component-based manufacturing systems are required to design, engineer and change future MEs. This thesis, titled 'Enhanced Integrated Modelling Approach to Reconfiguring Manufacturing Enterprises', introduces the development and prototyping of a model-driven environment for the design, engineering, optimisation and control of the reconfiguration of MEs, with an embedded capability to handle various types of change. The thesis describes a novel systematic approach, the enhanced integrated modelling approach (EIMA), in which coherent sets of integrated models are created that facilitate the engineering of MEs, especially their production planning and control (PPC) systems. The developed environment supports the engineering of common types of strategic, tactical and operational processes found in many MEs. The EIMA is centred on the ISO-standardised CIMOSA process modelling approach. Early study led to the development of simulation models, during which various CIMOSA shortcomings were observed, especially in its support for aspects of ME dynamism. This raised the need to structure and create semantically enriched models, hence forming an enhanced integrated modelling environment. The thesis also presents three industrial case examples: (1) Ford Motor Company; (2) Bradgate Furniture Manufacturing Company; and (3) ACM Bearings Company. In order to understand the system prior to realisation of any PPC strategy, multiple process segments of the target organisation need to be modelled. Coherent multi-perspective case study models are presented that have facilitated process re-engineering and associated resource system configuration. Such models can support PPC decision-making processes in support of the reconfiguration of MEs. During these case studies, the capabilities of a number of software tools were exploited, such as Arena®, Simul8®, Plant Simulation®, MS Visio® and MS Excel®. The case study results demonstrated the effectiveness of the concepts related to the EIMA. The research has resulted in new contributions to knowledge, in terms of new understandings, concepts and methods, in the following ways: (1) a structured, model-driven, integrated approach to the design, optimisation and control of the future reconfiguration of MEs, where the EIMA is an enriched and generic process modelling approach capable of representing both static and dynamic aspects of an ME; (2) example application cases showing benefits in terms of reduced lead time, cost and resource load, and improved responsiveness of processes and resource systems, with a special focus on PPC; (3) identification and industrial application of a new key performance indicator (KPI), known as P3C, whose measurement and monitoring can help enhance the reconfigurability and responsiveness of MEs; and (4) an enriched modelling concept framework (E-MUNE) to capture the requirements of static and dynamic aspects of MEs, where the conceptual framework can be extended and modified according to requirements. The thesis concludes by outlining key areas for future research into integrated modelling approaches, and into interoperation and updating mechanisms for partial models, in support of the reconfiguration of MEs.

Relevance:

60.00%

Publisher:

Abstract:

Background: TORCH (Towards a Revolution in COPD Health) is an international, multicentre, randomised, placebo-controlled clinical trial of inhaled fluticasone propionate/salmeterol combination treatment and its monotherapy components for the maintenance treatment of moderately to severely impaired patients with chronic obstructive pulmonary disease (COPD). The primary outcome is all-cause mortality. Cause-specific mortality and deaths related to COPD are additional outcome measures, but systematic methods for the ascertainment of these outcomes have not previously been described. Methods: A Clinical Endpoint Committee (CEC) was tasked with categorising the cause of death and the relationship of deaths to COPD in a systematic, unbiased and independent manner. The key elements of the operation of the committee were the use of predefined principles of operation and definitions of cause of death and COPD-relatedness; the independent review of cases by all members, with development of a consensus opinion; and a substantial infrastructure to collect medical information. Results: 911 deaths were reviewed and consensus was reached in all cases. Cause-specific mortality was: cardiovascular 27%, respiratory 35%, cancer 21%, other 10% and unknown 8%. 40% of deaths were definitely or probably related to COPD. Adjudications were identical in 83% of blindly re-adjudicated cases (κ = 0.80). COPD-relatedness was reproduced 84% of the time (κ = 0.73). The CEC adjudication was equivalent to the primary cause of death recorded by the site investigator in 52% of cases. Conclusion: A CEC can provide standardised, reliable and informative adjudication of COPD mortality, providing information that frequently differs from that collected from assessments by site investigators.
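The agreement statistics quoted (raw agreement of 83% and 84% alongside coefficients of 0.80 and 0.73) are consistent with Cohen's kappa, which corrects raw agreement for agreement expected by chance:

```latex
\kappa = \frac{p_o - p_e}{1 - p_e}
```

Here p_o is the observed proportion of agreement and p_e the proportion expected by chance. Assuming these are indeed Cohen's kappa values, p_o = 0.83 with κ = 0.80 implies p_e = 0.15, since (0.83 − 0.15)/(1 − 0.15) = 0.80.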

Relevance:

60.00%

Publisher:

Abstract:

Introduction: The large volume of information in the medical field creates management problems, and systematic methods for filing and retrieval are required. When the information belongs to the context of the clinical record, these methods must integrate controlled biomedical terminologies as well as the desirable characteristics regarding structure, content and clinical outcomes. The objective of this article is to test the applicability and retrieval capacity of a multidimensional system developed for the classification and management of health information. Methods: From the questions received over six years (Medicines Information Service, Pharmaceutical Services, Coimbra University Hospitals), 300 questions concerning clinical information were selected by a computerised random method. They were characterised, and applicability was evaluated by the amount classified and by the need to modify the system, which consists of several independent dimensions encompassing concepts that are sometimes hierarchical. Retrieval was tested by searching for information within one dimension or across dimensions. Results: All questions were classified: 53% are clinical cases, concentrated in genitourinary diseases; metabolic, nutritional and endocrine diseases; neoplasms; infections; and diseases of the nervous system. In 81% the object is a drug, mainly anti-infective and antineoplastic agents. The therapeutics and safety areas were the most requested, focusing mainly on the subjects of use, adverse reactions, drug identification and pharmaceutical technology. Regarding applicability, it was necessary to add some concepts and modify some hierarchical groups, which neither changed the basic structure nor conflicted with the desirable characteristics. The limitations were related to the external classification systems that were integrated. A search in the subject dimension for the concept drug administration retrieved 19 questions. Crossing two dimensions, anti-infectives (external) and teratogenicity (subject), retrieved three questions. In both examples, information is retrieved from any level of the hierarchy, from the most general to the most specific, and even from external dimensions. Conclusions: The use of the system on this sample demonstrated its applicability to the classification and filing of clinical information, its retrieval capacity and its flexibility, accommodating changes without interfering with the desirable characteristics. This tool allows the retrieval of patient-oriented evidence that matters.
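As an illustration of the retrieval behaviour described above (searching within one dimension, or crossing an external dimension with the subject dimension), a minimal sketch of a per-dimension inverted index might look like the following; the class, dimension and concept names are hypothetical, and the concept hierarchy is omitted for brevity.

```python
from collections import defaultdict

class MultiDimensionalIndex:
    """Toy index: each dimension maps a concept to the set of question ids
    classified under it.  Illustrative sketch only, not the actual system."""

    def __init__(self):
        self._index = defaultdict(lambda: defaultdict(set))

    def classify(self, question_id, dimension, concept):
        self._index[dimension][concept].add(question_id)

    def search(self, dimension, concept):
        """Retrieve all questions classified under one concept of one dimension."""
        return set(self._index[dimension][concept])

    def cross(self, dim_a, concept_a, dim_b, concept_b):
        """Cross two dimensions: questions classified under both concepts."""
        return self.search(dim_a, concept_a) & self.search(dim_b, concept_b)


# Usage with hypothetical data:
idx = MultiDimensionalIndex()
idx.classify("Q001", "subject", "teratogenicity")
idx.classify("Q001", "external", "anti-infectives")
idx.classify("Q002", "subject", "drug administration")
print(idx.cross("external", "anti-infectives", "subject", "teratogenicity"))  # {'Q001'}
```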

Relevance:

60.00%

Publisher:

Abstract:

More and more evaluations of international projects in developing countries are being carried out to inform decisions on the basis of evidence. The use of evaluation results has been called into question, and to remedy this, participatory evaluations that include non-evaluator stakeholders at certain stages of the evaluative process have been proposed. Among these, practical participatory evaluations aim primarily to improve the use of the evaluation process and its results. These practical participatory evaluations are said to be hindered by negative individual attitudes, or individual resistance to change, and fostered by positive individual attitudes, or propensity. This thesis sets out to study managers' individual propensity towards practical participatory evaluations of interventions (EPP), the elements influencing this propensity, and to characterise individuals' levels of propensity towards EPP. First, a literature review proposed a multidimensional definition of propensity towards EPP as a favourable attitude towards the practice of EPP that manifests itself, at each stage of an evaluation, in affective and cognitive components. The dimensions identified theoretically were learning, group work, use of systematic methods and use of critical thinking. These dimensions served as the framework for the empirical part of the thesis. A multiple case study with the managers of a health institution in Haiti was conducted to contextualise propensity and identify the influencing elements. Data were collected through semi-structured interviews and documentary sources. Analysis of the data concerning learning revealed a predominance of learning by doing and learning by observation. Group work is anchored in the practice of both administrative and clinical managers. Systematic methods are reflected mainly in the consultation of several actors with an interest in the immediate problem to be solved, rather than in methodological tooling; their use generally takes the form of a broad consultation of opinions to resolve a situation, or of attempts to validate the information received. Critical thinking is triggered under stimulation, when the individual, professional, corporate or organisational image is affected, or when suggestions are judged constructive. In addition to contextualising four components of individual propensity towards EPP, the managers positioned themselves relative to their colleagues' propensity on the basis of reactivity, being more or less reactive with respect to the components of individual propensity. The propensity studied empirically thus gave rise to two axes: a formalisation axis and a reactivity axis. The formalisation axis captures the contextualisation of the four components of individual propensity towards EPP, that is, the form in which the components are expressed. The reactivity axis captures the level of activity deployed in each component of individual propensity, from reactive to more proactive. In addition, profiles of individuals with different levels of propensity towards EPP were developed, and influences favourable and unfavourable to the level of propensity towards EPP were identified. The originality of this thesis lies in positioning itself within a recent current of thought that takes a positive view of resistance to change and to evaluation, and in having theoretically defined and empirically applied the multidimensional concept of individual propensity towards EPP. The profiles of levels of individual propensity towards EPP, together with the associated favourable and unfavourable influencing elements, can serve as a diagnostic tool for the types of evaluation that are feasible, help tailor the implementation of evaluations to the people involved, allow changes in propensity levels to be monitored during an EPP, and serve as a source of information for adjusting participatory evaluation plans.

Relevance:

60.00%

Publisher:

Abstract:

Defining requirements for the software systems that will support a business is not an easy task, given the pace of change in business processes. Requirements elicitation has been carried out empirically, without the support of systematic methods that ensure development based on the real objectives of the business. Software engineering lacks methods that make the business modelling and requirements elicitation stages more orderly and methodical. This article presents a software development methodology resulting from the incorporation of activities proposed for business modelling and requirements elicitation, based on a business modelling architecture. These activities make software development more systematic and better aligned with the organisation's objectives, and can be incorporated into any development methodology based on the UP (Unified Process).

Relevance:

60.00%

Publisher:

Abstract:

Nineteen strains of Acidithiobacillus ferrooxidans and Acidithiobacillus thiooxidans, including 12 strains isolated from coal, copper, gold and uranium mines in Brazil, strains isolated from similar sources in other countries and the type strains of the two species, were characterized together with the type strain of A. caldus by using a combination of molecular systematic methods, namely ribotyping, BOX- and ERIC-PCR and DNA-DNA hybridization assays. Data derived from the molecular fingerprinting analyses showed that the tested strains encompassed a high degree of genetic variability. Two of the Brazilian A. ferrooxidans organisms (strains SSP and PCE), isolated from acid coal mine waste and uranium mine effluent, respectively, and A. thiooxidans strain DAMS, isolated from uranium mine effluent, were the most genetically divergent organisms. The DNA-DNA hybridization data did not support the allocation of Acidithiobacillus strain SSP to the A. ferrooxidans genomic species, as it shared just over 40% DNA relatedness with the type strain of the species. Acidithiobacillus strain SSP was also not clearly related to A. ferrooxidans in the 16S rDNA tree.

Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Information Science - FFC

Relevance:

60.00%

Publisher:

Abstract:

Over the last decade European democracies have faced a challenge from the rising force of new populist movements. The emergence of the financial and sovereign debt crisis in Europe created fertile soil for the strengthening of long-established, and the development of new, populist parties in several EU member states. José Manuel Barroso, President of the European Commission, emphasised his increased unease concerning these developments when speaking at the annual Brussels Think Tank Forum on 22 April 2013: “I am deeply concerned about the divisions that we see emerging: political extremes and populism tearing apart the political support and the social fabric that we need to deal with the crisis; […]” (Barroso 2013). Indeed, European elites seem increasingly worried by these developments, which are perceived as an impending stress test of the Union and of the project of European integration as a whole (Hartleb 2013). Sure enough, the results of the 2014 European Parliament elections revealed strong support for populist political parties in many EU member countries. To understand the success of populist parties in Europe it is crucial first to shed light on the nature of populist party communication itself. Significant communicative differences may explain the varying success of populist parties between and within countries, while a purely demand-side approach (i.e. a focus on the preferences of the electorate) often fails to do so (Mudde 2010). The aim of this study is therefore to analyse which types of populist communication style emerged during the 2014 EP election campaign, and under which conditions populist communication styles are selected by political parties. So far, the empirical measurement of populism has received only scarce attention (Rooduijn & Pauwels 2011). Moreover, most existing empirical investigations of populism are single case studies (Albertazzi & McDonnell 2008), and scholars have not yet developed systematic methods to measure populism in a comparative way (Rooduijn & Pauwels 2011). This is a consequence of the lack of conceptual clarity that accompanies populism (Taggart 2000; Barr 2009; Canovan 1999), owing to its contextual sensitivity. Hence, populism in Europe should be analysed in a way that clarifies the concept of populism and, moreover, takes into account that the Europeanization of politics influences the type of populist party communication; this is what the present study sets out to do.

Relevance:

60.00%

Publisher:

Abstract:

Using current software engineering technology, the robustness required for safety-critical software cannot be assured. However, different approaches are possible which can help to assure software robustness to some extent. To achieve highly reliable software, methods should be adopted which avoid introducing faults (fault avoidance); testing should then be carried out to identify any faults which persist (error removal); and finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). The verification of correctness in the system design specification and the performance analysis of the model are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error-prone; by reducing human involvement in the tedious aspects of modelling and analysing the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language in which interprocess interaction takes place through communication. This may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems. Where Occam is used as a design language, state-space methods such as Petri nets can be used in analysis and simulation to determine the dynamic behaviour of the software and to identify structures which may be prone to deadlock, so that they may be eliminated from the design before the program is ever run. This design software tool consists of two parts. The first takes an input program and translates it into a mathematical model (a Petri net), which is used for modelling and analysis of the concurrent software. The second part is the Petri-net simulator, which takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies 'deadlock potential', which the user can explore further. Finally, the software tool has been applied to a number of Occam programs. Two examples are presented to show how the tool works in the early design phase for fault prevention, before the program is ever run.
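As a rough sketch of the reachability-based deadlock check described above, and not the tool's actual implementation (which operates on models translated from Occam), a minimal Petri-net reachability search in Python could look like this; the example net, modelling two processes in a circular wait over two resources, is hypothetical.

```python
from collections import deque

def enabled(marking, transitions):
    """Transitions whose input places all hold enough tokens."""
    return [t for t, (pre, post) in transitions.items()
            if all(marking[p] >= n for p, n in pre.items())]

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def deadlock_markings(initial_marking, transitions, limit=10000):
    """Breadth-first reachability search that collects markings with no
    enabled transition.  Desired terminal states also appear in the result;
    as in the thesis, distinguishing them from true 'deadlock potential'
    is left to the user.  Illustrative sketch only."""
    seen = {tuple(sorted(initial_marking.items()))}
    queue = deque([initial_marking])
    deadlocks = []
    while queue and len(seen) < limit:
        m = queue.popleft()
        ts = enabled(m, transitions)
        if not ts:
            deadlocks.append(m)
            continue
        for t in ts:
            pre, post = transitions[t]
            m2 = fire(m, pre, post)
            key = tuple(sorted(m2.items()))
            if key not in seen:
                seen.add(key)
                queue.append(m2)
    return deadlocks

# Hypothetical net: each process grabs one resource and waits for the other's,
# so a circular-wait (deadlock) marking is reachable.
transitions = {
    "p1_take_r1": ({"p1_idle": 1, "r1": 1}, {"p1_has_r1": 1}),
    "p1_take_r2": ({"p1_has_r1": 1, "r2": 1}, {"p1_done": 1, "r1": 1, "r2": 1}),
    "p2_take_r2": ({"p2_idle": 1, "r2": 1}, {"p2_has_r2": 1}),
    "p2_take_r1": ({"p2_has_r2": 1, "r1": 1}, {"p2_done": 1, "r1": 1, "r2": 1}),
}
initial = {"p1_idle": 1, "p2_idle": 1, "r1": 1, "r2": 1,
           "p1_has_r1": 0, "p2_has_r2": 0, "p1_done": 0, "p2_done": 0}
print(deadlock_markings(initial, transitions))
```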

Relevance:

60.00%

Publisher:

Abstract:

“Availability” is the term used in asset-intensive industries, such as petrochemicals and hydrocarbon processing, to describe the readiness of equipment, systems or plants to perform their designed functions. It is a measure of a facility’s capability to meet targeted production in a safe working environment. Availability is also vital because it encompasses reliability and maintainability, allowing engineers to manage and operate facilities by focusing on a single performance indicator. These benefits make availability a demanding and highly desirable area of interest and research for both industry and academia. In this dissertation, new models, approaches and algorithms are explored to estimate and manage the availability of complex hydrocarbon processing systems. The risk of equipment failure and its effect on availability is vital in the hydrocarbon industry, and is also explored in this research. The importance of availability has encouraged companies to invest in this domain, committing effort and resources to developing novel techniques for system availability enhancement. Most of the work in this area has focused on individual equipment rather than on facility- or system-level availability assessment and management. This research focuses on developing new systematic methods to estimate system availability. The main focus areas of this research are availability estimation and management through physical asset management, risk-based availability estimation strategies, availability and safety using a failure assessment framework, and availability enhancement using early equipment fault detection and maintenance scheduling optimisation.
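The abstract does not give the estimation models themselves. For context, the basic steady-state availability relation commonly used in reliability engineering is:

```latex
A = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}}
```

where MTBF is the mean time between failures and MTTR the mean time to repair; the system-level, risk-based methods described in the dissertation go beyond this single-equipment view.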

Relevance:

60.00%

Publisher:

Abstract:

Introduction: Family members, including children, are all affected by a family member’s mental illness. Although mental health services are increasingly encouraged to engage in family-focused practice, this is not a well-understood concept or practice in mental health care. Methods: An integrative review using systematic methods was conducted on the international literature, with the aim of identifying concepts and practices of family-focused practice in child and youth and adult mental health services. Results: Findings from 40 peer-reviewed articles identified a range of understandings and applications of family-focused practice, including who comprises the ‘family’, whether the focus is the family of origin or the family of procreation or choice, and whether the context of practice is child and youth or adult. ‘Family’, as defined by its members, forms the foundation for practice that aims to provide a whole-of-family approach to care. Six core practices comprise a family-focused approach to care: assessment; psychoeducation; family care planning and goal-setting; liaison between families and services; instrumental, emotional and social support; and a coordinated system of care between families and services. Conclusion: By incorporating the key principles and core family-focused practices into their care delivery, clinicians can facilitate a whole-of-family approach to care and strengthen family members’ wellbeing and resilience, and their individual and collective health outcomes.

Relevance:

60.00%

Publisher:

Abstract:

Dissertation (Master's)—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2016.