716 results for best-practice guidelines
Abstract:
In many respects, Australian boards more closely approach normative best practice guidelines for corporate governance than boards in other Western countries. Do Australian firms then demonstrate a board demographic-organisational performance link that has not been found in other economies? We examine the relationships between board demographics and corporate performance in 348 of Australia's largest publicly listed companies and describe the attributes of these firms and their boards. We find that, after controlling for firm size, board size is positively correlated with firm value. We also find a positive relationship between the proportion of inside directors and the market-based measure of firm performance. We discuss the implications of these findings and compare our findings to prevailing research in the US and the UK.
Abstract:
NanoImpactNet (NIN) is a multidisciplinary European Commission funded network on the environmental, health and safety (EHS) impact of nanomaterials. The 24 founding scientific institutes are leading European research groups active in the fields of nanosafety, nanorisk assessment and nanotoxicology. This 4-year project is the new focal point for information exchange within the research community. Contact with other stakeholders is vital and their needs are being surveyed. NIN is communicating with hundreds of stakeholders: businesses; internet platforms; industry associations; regulators; policy makers; national ministries; international agencies; standard-setting bodies and NGOs concerned with labour rights, EHS or animal welfare. To improve this communication, internet research, a questionnaire distributed via partners, and targeted phone calls were used to identify stakeholders' interests and needs. The knowledge gaps and needs for further data mentioned by representatives of all stakeholder groups in the targeted phone calls concerned: potential toxic and safety hazards of nanomaterials throughout their lifecycles; the fate and persistence of nanoparticles in humans, animals and the environment; the risks associated with nanoparticle exposure; participation in the preparation of nomenclature, standards, methodologies, protocols and benchmarks; the development of best practice guidelines; voluntary schemes on responsibility; and databases of materials, research topics and themes. The findings show that stakeholders and NIN researchers share very similar knowledge needs, and that open communication and free movement of knowledge will benefit both researchers and industry. Consequently, NIN will encourage stakeholders to become active members. These survey findings will be used to improve NIN's communication tools and to further build on interdisciplinary relationships towards a healthy future with nanotechnology.
Abstract:
Best Practice Guidelines
Abstract:
NanoImpactNet (NIN) is a multidisciplinary European Commission funded network on the environmental, health and safety (EHS) impact of nanomaterials. The 24 founding scientific institutes are leading European research groups active in the fields of nanosafety, nanorisk assessment and nanotoxicology. This 4-year project is the new focal point for information exchange within the research community. Contact with other stakeholders is vital and their needs are being surveyed. NIN is communicating with hundreds of stakeholders: businesses; internet platforms; industry associations; regulators; policy makers; national ministries; international agencies; standard-setting bodies and NGOs concerned with labour rights, EHS or animal welfare. To improve this communication, internet research, a questionnaire distributed via partners, and targeted phone calls were used to identify stakeholders' interests and needs. The knowledge gaps and needs for further data mentioned by representatives of all stakeholder groups in the targeted phone calls concerned:
• the potential toxic and safety hazards of nanomaterials throughout their lifecycles;
• the fate and persistence of nanoparticles in humans, animals and the environment;
• the risks associated with nanoparticle exposure;
• greater participation in the preparation of nomenclature, standards, methodologies, protocols and benchmarks;
• the development of best practice guidelines;
• voluntary schemes on responsibility;
• databases of materials, research topics and themes, but also of expertise.
These findings suggested that stakeholders and NIN researchers share very similar knowledge needs, and that open communication and free movement of knowledge will benefit both researchers and industry. Subsequently, NIN organised a workshop focused on building a sustainable multi-stakeholder dialogue, in which specific questions were put to different stakeholder groups to encourage discussion and open communication.
1. What information do stakeholders need from researchers, and why? The discussions of this question confirmed the needs identified in the targeted phone calls.
2. How should information be communicated? While it was agreed that reporting should be enhanced, commercial confidentiality and economic competition were identified as major obstacles. It was recognised that expertise in commercial law and economics was needed for a well-informed treatment of this communication issue.
3. Can engineered nanomaterials be used safely? The idea that nanomaterials are probably safe because some of them have been produced 'for a long time' was questioned, since many materials in common use have proved to be unsafe. The question of safety is also one of public confidence, and new legislation such as REACH could help with this issue. Hazards do not materialise if exposure can be avoided or at least significantly reduced, so information is needed on what can be regarded as acceptable levels of exposure. Finally, it was noted that there is no such thing as a perfectly safe material, only boundaries, and at present we do not know where these boundaries lie. The matter of labelling products containing nanomaterials was raised, as in the public mind safety and labelling are connected. This may need to be addressed, since nanomaterials in food, drink and food packaging may be the first safety issue to attract public and media attention, which could have an impact on nanotechnology as a whole.
4. Do we need more or different regulation? Any decision-making process should accommodate the changing level of uncertainty. To address the uncertainties, adaptations of frameworks such as REACH may be indicated for nanomaterials. Regulation is often needed even where voluntary measures are welcome, because it mitigates the effects of competition between industries; data cannot, for example, be collected on a voluntary basis.
NIN will continue with an active stakeholder dialogue to further build on interdisciplinary relationships towards a healthy future with nanotechnology.
Abstract:
OBJECTIVE: To evaluate the effectiveness of a complex intervention implementing best practice guidelines recommending clinicians screen and counsel young people across multiple psychosocial risk factors, on clinicians' detection of health risks and patients' risk taking behaviour, compared to a didactic seminar on young people's health. DESIGN: Pragmatic cluster randomised trial where volunteer general practices were stratified by postcode advantage or disadvantage score and billing type (private, free national health, community health centre), then randomised into either intervention or comparison arms using a computer generated random sequence. Three months post-intervention, patients were recruited from all practices post-consultation for a Computer Assisted Telephone Interview and followed up three and 12 months later. Researchers recruiting, consenting and interviewing patients and patients themselves were masked to allocation status; clinicians were not. SETTING: General practices in metropolitan and rural Victoria, Australia. PARTICIPANTS: General practices with at least one interested clinician (general practitioner or nurse) and their 14-24 year old patients. INTERVENTION: This complex intervention was designed using evidence based practice in learning and change in clinician behaviour and general practice systems, and included best practice approaches to motivating change in adolescent risk taking behaviours. The intervention involved training clinicians (nine hours) in health risk screening, use of a screening tool and motivational interviewing; training all practice staff (receptionists and clinicians) in engaging youth; provision of feedback to clinicians of patients' risk data; and two practice visits to support new screening and referral resources. Comparison clinicians received one didactic educational seminar (three hours) on engaging youth and health risk screening. 
OUTCOME MEASURES: Primary outcomes were patient report of (1) clinician detection of at least one of six health risk behaviours (tobacco, alcohol and illicit drug use, risks for sexually transmitted infection (STI), unplanned pregnancy, and road risks); and (2) change in one or more of the six health risk behaviours, at three months or at 12 months. Secondary outcomes were likelihood of future visits, trust in the clinician after exit interview, clinician detection of emotional distress and fear and abuse in relationships, and emotional distress at three and 12 months. Patient acceptability of the screening tool was also described for the intervention arm. Analyses were adjusted for practice location and billing type, patients' sex, age, and recruitment method, and past health risks, where appropriate. An intention to treat analysis approach was used, which included multilevel multiple imputation for missing outcome data. RESULTS: 42 practices were randomly allocated to intervention or comparison arms. Two intervention practices withdrew post allocation, prior to training, leaving 19 intervention (53 clinicians, 377 patients) and 21 comparison (79 clinicians, 524 patients) practices. 69% of patients in both intervention (260) and comparison (360) arms completed the 12 month follow-up. Intervention clinicians discussed more health risks per patient (59.7%) than comparison clinicians (52.7%) and thus were more likely to detect a higher proportion of young people with at least one of the six health risk behaviours (38.4% vs 26.7%, risk difference [RD] 11.6%, confidence interval [CI] 2.93% to 20.3%; adjusted odds ratio [OR] 1.7, CI 1.1 to 2.5). Patients reported less illicit drug use (RD -6.0, CI -11 to -1.2; OR 0.52, CI 0.28 to 0.96), and less risk for STI (RD -5.4, CI -11 to 0.2; OR 0.66, CI 0.46 to 0.96) at three months in the intervention relative to the comparison arm, and for unplanned pregnancy at 12 months (RD -4.4, CI -8.7 to -0.1; OR 0.40, CI 0.20 to 0.80).
No differences were detected between arms on other health risks. There were no differences on secondary outcomes, apart from a greater detection of abuse (OR 13.8, CI 1.71 to 111). There were no reports of harmful events and intervention arm youth had high acceptance of the screening tool. CONCLUSIONS: A complex intervention, compared to a simple educational seminar for practices, improved detection of health risk behaviours in young people. Impact on health outcomes was inconclusive. Technology enabling more efficient, systematic health-risk screening may allow providers to target counselling toward higher risk individuals. Further trials require more power to confirm health benefits. TRIAL REGISTRATION: ISRCTN.com ISRCTN16059206.
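As a reminder of how these effect measures relate, the unadjusted versions of the headline detection result can be recomputed from the reported proportions. This is an illustrative sketch, not part of the study; the paper's own RD and OR were covariate-adjusted, so they differ slightly from these raw values.

```python
def risk_difference(p1, p2):
    """Absolute difference between two proportions given as percentages."""
    return p1 - p2

def odds_ratio(p1, p2):
    """Ratio of the odds of two proportions given as fractions."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Detection of >=1 health-risk behaviour: 38.4% (intervention) vs 26.7% (comparison)
rd = risk_difference(38.4, 26.7)   # in percentage points
or_ = odds_ratio(0.384, 0.267)

print(f"RD = {rd:.1f} percentage points, OR = {or_:.2f}")
```

The raw RD comes out at 11.7 percentage points and the OR at about 1.7, consistent with the adjusted figures reported above.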
Abstract:
The aim of this study was to evaluate, through a process of co-construction with the people concerned, in a primary-care context, the application of nursing interventions inspired by the best-practice guide for bereavement care (Guide des meilleures pratiques de soins pour les endeuillés, GMPSE) with a couple who had experienced a perinatal loss within the previous six months. A case-study research design based on Guba and Lincoln's (1989) fourth-generation evaluation approach was used. A nurse experienced in working with bereaved families drew on the guidance of the GMPSE to intervene with a couple over five therapeutic encounters, four of which were preceded by an interview with the people concerned. These interviews allowed the participants to identify together the most and least useful interventions. The encounters and interviews were recorded and transcribed verbatim for qualitative analysis. The results of these analyses highlight the relevance of the GMPSE-inspired interventions and the specific contribution of nursing practice with the target population. It appears that raising awareness among decision makers and clinicians of the issues facing bereaved people is necessary to support implementation of the Guide in care settings. Finally, better appropriation of the GMPSE is recommended in nursing education, research and practice alike.
Abstract:
Cystic fibrosis is the most frequent autosomal recessive disease in Caucasians. The incidence of the disease in Colombia is not known, but research by the Universidad del Rosario group indicates that it could be relatively high. Objective: To determine the incidence of cystic fibrosis in a sample of newborns from the city of Bogotá. Methods: 8,297 umbilical cord blood samples were analysed and three newborn screening protocols were compared: TIR/TIR, TIR/DNA and TIR/DNA/TIR (TIR being the Spanish abbreviation for immunoreactive trypsinogen, IRT). Results: This study shows an incidence of 1 in 8,297 affected in the analysed sample. Conclusions: Given the relatively high incidence demonstrated in Bogotá, the implementation of newborn screening for cystic fibrosis in Colombia is justified.
Abstract:
BACKGROUND: The isolation of free fetal cells or fetal DNA in maternal blood opens a window of non-invasive diagnostic possibilities for monogenic and chromosomal pathologies, as well as allowing identification of fetal sex and Rh status. Multiple studies currently evaluate the efficacy of these methods, showing results that are cost-effective and lower risk than the gold standard. This paper describes the evidence found on non-invasive prenatal diagnosis following a systematic review of the literature. OBJECTIVES: The objective of this study was to gather the evidence meeting the search criteria on non-invasive fetal diagnosis using free fetal cells in maternal blood, in order to determine its diagnostic utility. METHODS: A systematic literature review was carried out to determine whether non-invasive prenatal diagnosis using free fetal cells in maternal blood is effective as a diagnostic method. RESULTS: 5,893 articles met the search criteria; 67 met the inclusion criteria: 49.3% (33/67) were cross-sectional studies, 38.8% (26/67) cohort studies and 11.9% (8/67) case-control studies. Sensitivity, specificity and test-type results were obtained. CONCLUSION: This systematic review shows that non-invasive prenatal diagnosis is a feasible, reproducible and sensitive technique for fetal diagnosis, avoiding the risk of invasive diagnosis.
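The diagnostic-utility measures extracted in the review, sensitivity and specificity, come directly from a 2x2 confusion matrix against the gold standard. A generic sketch follows; the counts are made up for illustration and are not data from the review:

```python
def sensitivity(tp, fn):
    """True positive rate: affected cases correctly detected by the test."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: unaffected cases correctly ruled out by the test."""
    return tn / (tn + fp)

# Hypothetical counts for a non-invasive test vs the invasive gold standard
tp, fn, tn, fp = 47, 3, 920, 30
print(f"sensitivity = {sensitivity(tp, fn):.2f}, "
      f"specificity = {specificity(tn, fp):.2f}")
```

With these illustrative counts the test detects 94% of affected pregnancies (47/50) and correctly rules out 97% of unaffected ones (920/950).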
Abstract:
CONCLUSION: Bone conduction implants are useful in patients with conductive and mixed hearing loss for whom conventional surgery or hearing aids are no longer an option. They may also be used in patients affected by single-sided deafness. OBJECTIVES: To establish a consensus on the quality standards required for centers willing to create a bone conduction implant program. METHOD: To ensure a consistently high level of service and to provide patients with the best possible solution, the members of the HEARRING network have established a set of quality standards for bone conduction implants. These standards constitute a realistic minimum attainable by all implant clinics and should be employed alongside current best practice guidelines. RESULTS: Fifteen items are thoroughly analyzed. They include team structure, accommodation and clinical facilities, selection criteria, evaluation process, complete preoperative and surgical information, postoperative fitting and assessment, follow-up, device failure, clinical management, transfer of care and patient complaints.
Abstract:
Carbon (C) and nitrogen (N) process-based models are important tools for estimating and reporting greenhouse gas emissions and changes in soil C stocks. There is a need for continuous evaluation, development and adaptation of these models to improve scientific understanding, national inventories and assessment of mitigation options across the world. To date, much of the information needed to describe the different processes in ecosystem models, such as transpiration, photosynthesis, plant growth and maintenance, above- and below-ground carbon dynamics, decomposition and nitrogen mineralization, remains inaccessible to the wider community, being stored within model computer source code or held internally by modelling teams. Here we describe the Global Research Alliance Modelling Platform (GRAMP), a web-based modelling platform to link researchers with appropriate datasets, models and training material. It will provide access to model source code and an interactive platform for researchers to form a consensus on existing methods and to synthesize new ideas, which will help to advance progress in this area. The platform will eventually support a variety of models, but to trial the platform and test the architecture and functionality, it was piloted with variants of the DNDC model. The intention is to form a worldwide collaborative network (a virtual laboratory) via an interactive website with access to: models and best practice guidelines; appropriate datasets for testing, calibrating and evaluating models; on-line tutorials; and links to modelling and data-provider research groups and their associated publications. A graphical user interface has been designed to view the model development tree and to access all of the above functions.
Abstract:
Objective: To assess and explain deviations from recommended practice in National Institute for Clinical Excellence (NICE) guidelines in relation to fetal heart monitoring. Design: Qualitative study. Setting: Large teaching hospital in the UK. Sample: Sixty-six hours of observation of 25 labours and interviews with 20 midwives of varying grades. Methods: Structured observations of labour and semi-structured interviews with midwives. Interviews were undertaken using a prompt guide, audiotaped, and transcribed verbatim. Analysis was based on the constant comparative method, assisted by QSR N5 software. Main outcome measures: Deviations from recommended practice in relation to fetal monitoring and insights into why these occur. Results: All babies involved in the study were safely delivered, but 243 deviations from recommended practice in relation to NICE guidelines on fetal monitoring were identified, the majority (80%) of them relating to documentation. Other deviations from recommended practice included the indications for use of electronic fetal heart monitoring and the conduct of fetal heart monitoring. There was evidence of difficulties with the availability and maintenance of equipment, and of some deficits in staff knowledge and skill. Midwives reported differing orientations towards fetal monitoring, which were likely to have an impact on practice. The initiation, management, and interpretation of fetal heart monitoring are complex and distributed across time, space, and professional boundaries, and practices in relation to fetal heart monitoring need to be understood within an organisational and social context. Conclusion: Some deviations from best practice guidelines may be rectified through straightforward interventions, including improved systems for managing equipment and training. Other deviations from recommended practice need to be understood as the outcomes of complex processes that are likely to defy easy resolution. © RCOG 2006.
Abstract:
Aims: To survey eye care practitioners from around the world regarding their current practice for anterior eye health recording, to inform guidelines on best practice. Methods: The on-line survey examined the reported use of: word descriptions, sketching, grading scales or photographs; paper or computerised record cards, and whether these were guided by proforma headings; grading scale choice, signs graded, level of precision, and regional grading; and how much time eye care practitioners spent on average on anterior eye health recording. Results: Eight hundred and nine eye care practitioners from across the world completed the survey. Word descriptions (p < 0.001), sketches (p = 0.002) and grading scales (p < 0.001) were used more for recording the anterior eye health of contact lens patients than of other patients, but photography was used similarly (p = 0.132). Of the respondents, 84.5% used a grading scale and 13.5% used two, with the original Efron (51.6%) and CCLRU/Brien-Holden-Vision-Institute (48.5%) scales being the most popular. The median number of features graded was 11 (range 1-23), with frequency ranging from 91.6% (bulbar hyperaemia) to 19.6% (endothelial blebs); most practitioners graded to the nearest unit (47.4%) and just 14.7% to one decimal place. The average time taken to record anterior eye health was reported to be 6.8 ± 5.7 min, with a maximum available time of 14.0 ± 11.0 min. Conclusions: Developed practice and research evidence allow best practice guidelines for anterior eye health recording to be recommended.
It is recommended to: record which grading scale is used; always grade to one decimal place; record what you see live rather than based on how you intend to manage a condition; and, at every visit, grade bulbar and limbal hyperaemia, limbal neovascularisation, conjunctival papillary redness and roughness (in white light to assess colouration, with fluorescein instilled to aid visualisation of papillae/follicles), blepharitis and meibomian gland dysfunction, and sketch staining (both corneal and conjunctival). Record other anterior eye features only if they are remarkable, but indicate which key tissues have been examined.
Abstract:
Background Physical activity in children with intellectual disabilities is a neglected area of study, which is most apparent in relation to physical activity measurement research. Although objective measures, specifically accelerometers, are widely used in research involving children with intellectual disabilities, existing research is based on measurement methods and data interpretation techniques generalised from typically developing children. However, due to physiological and biomechanical differences between these populations, questions have been raised in the existing literature on the validity of generalising data interpretation techniques from typically developing children to children with intellectual disabilities. Therefore, there is a need to conduct population-specific measurement research for children with intellectual disabilities and develop valid methods to interpret accelerometer data, which will increase our understanding of physical activity in this population. Methods Study 1: A systematic review was initially conducted to increase the knowledge base on how accelerometers were used within existing physical activity research involving children with intellectual disabilities and to identify important areas for future research. A systematic search strategy was used to identify relevant articles which used accelerometry-based monitors to quantify activity levels in ambulatory children with intellectual disabilities. Based on best practice guidelines, a novel form was developed to extract data based on 17 research components of accelerometer use. Accelerometer use in relation to best practice guidelines was calculated using percentage scores on a study-by-study and component-by-component basis. Study 2: To investigate the effect of data interpretation methods on the estimation of physical activity intensity in children with intellectual disabilities, a secondary data analysis was conducted. 
Nine existing sets of child-specific ActiGraph intensity cut points were applied to accelerometer data collected from 10 children with intellectual disabilities during an activity session. Four one-way repeated measures ANOVAs were used to examine differences in estimated time spent in sedentary, moderate, vigorous, and moderate to vigorous intensity activity. Post-hoc pairwise comparisons with Bonferroni adjustments were additionally used to identify where significant differences occurred. Study 3: The feasibility of a laboratory-based calibration protocol developed for typically developing children was investigated in children with intellectual disabilities. Specifically, the feasibility of activities, measurements, and recruitment was investigated. Five children with intellectual disabilities and five typically developing children participated in 14 treadmill-based and free-living activities. In addition, resting energy expenditure was measured and a treadmill-based graded exercise test was used to assess cardiorespiratory fitness. Breath-by-breath respiratory gas exchange and accelerometry were continually measured during all activities. Feasibility was assessed using observations, activity completion rates, and respiratory data. Study 4: Thirty-six children with intellectual disabilities participated in a semi-structured school-based physical activity session to calibrate accelerometry for the estimation of physical activity intensity. Participants wore a hip-mounted ActiGraph wGT3X+ accelerometer, with direct observation (SOFIT) used as the criterion measure. Receiver operating characteristic curve analyses were conducted to determine the optimal accelerometer cut points for sedentary, moderate, and vigorous intensity physical activity.
Study 5: To cross-validate the calibrated cut points and compare classification accuracy with existing cut points developed in typically developing children, a sub-sample of 14 children with intellectual disabilities who participated in the school-based sessions, as described in Study 4, were included in this study. To examine validity, classification agreement between the criterion measure of SOFIT and each set of cut points was investigated using sensitivity, specificity, total agreement, and Cohen's kappa scores. Results: Study 1: Ten full-text articles were included in this review. The percentage of review criteria met ranged from 12% to 47%. Various methods of accelerometer use were reported, with most use decisions not based on population-specific research. A lack of measurement research, specifically the calibration/validation of accelerometers for children with intellectual disabilities, is limiting the ability of researchers to make appropriate and valid accelerometer use decisions. Study 2: The choice of cut points had significant and clinically meaningful effects on the estimation of physical activity intensity and sedentary behaviour. For the 71-minute session, estimated time spent in each intensity ranged between cut points from: sedentary = 9.50 (± 4.97) to 31.90 (± 6.77) minutes; moderate = 8.10 (± 4.07) to 40.40 (± 5.74) minutes; vigorous = 0.00 (± 0.00) to 17.40 (± 6.54) minutes; and moderate to vigorous = 8.80 (± 4.64) to 46.50 (± 6.02) minutes. Study 3: All typically developing participants and one participant with intellectual disabilities completed the protocol. No participant met the maximal criteria for the graded exercise test or attained a steady state during the resting measurements. Limitations were identified with the usability of the respiratory gas exchange equipment and the validity of the measurements. The school-based recruitment strategy was not effective, with a participation rate of 6%.
Therefore, a laboratory-based calibration protocol was not feasible for children with intellectual disabilities. Study 4: The optimal vertical axis cut points (cpm) were ≤ 507 (sedentary), 1008-2300 (moderate), and ≥ 2301 (vigorous). Sensitivity scores ranged from 81% to 88%, specificity from 81% to 85%, and AUC from .87 to .94. The optimal vector magnitude cut points (cpm) were ≤ 1863 (sedentary), ≥ 2610 (moderate) and ≥ 4215 (vigorous). Sensitivity scores ranged from 80% to 86%, specificity from 77% to 82%, and AUC from .86 to .92. Therefore, the vertical axis cut points provide a higher level of accuracy in comparison to the vector magnitude cut points. Study 5: Substantial to excellent classification agreement was found for the calibrated cut points. The calibrated sedentary cut point (κ = .66) provided classification agreement comparable with existing cut points (κ = .55 to .67). However, the existing moderate and vigorous cut points demonstrated low sensitivity (0.33% to 33.33% and 1.33% to 53.00%, respectively) and disproportionately high specificity (75.44% to 98.12% and 94.61% to 100.00%, respectively), indicating that cut points developed in typically developing children are too high to accurately classify physical activity intensity in children with intellectual disabilities. Conclusions: The studies reported in this thesis are the first to calibrate and validate accelerometry for the estimation of physical activity intensity in children with intellectual disabilities. In comparison with typically developing children, children with intellectual disabilities require lower cut points for the classification of moderate and vigorous intensity activity. Therefore, generalising existing cut points to children with intellectual disabilities will underestimate physical activity and introduce systematic measurement error, which could be a contributing factor to the low levels of physical activity reported for children with intellectual disabilities in previous research.
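The vertical-axis cut points from Study 4 amount to a simple threshold classifier over counts per minute (cpm). A minimal sketch follows; note that the "light" label for the 508-1007 cpm gap is an assumption, as the abstract only names the sedentary, moderate and vigorous bands, and the example epoch values are invented:

```python
def classify_intensity(cpm: int) -> str:
    """Classify ActiGraph vertical-axis counts per minute using the Study 4
    cut points: <=507 sedentary, 1008-2300 moderate, >=2301 vigorous.
    The 508-1007 cpm band is labelled 'light' here by assumption."""
    if cpm <= 507:
        return "sedentary"
    if cpm <= 1007:
        return "light"
    if cpm <= 2300:
        return "moderate"
    return "vigorous"

# Minute epochs can then be summed into time spent per intensity band
epochs = [120, 900, 1500, 2600, 310]   # hypothetical cpm values, one per minute
minutes = {}
for cpm in epochs:
    label = classify_intensity(cpm)
    minutes[label] = minutes.get(label, 0) + 1
print(minutes)  # {'sedentary': 2, 'light': 1, 'moderate': 1, 'vigorous': 1}
```

Applying a different set of cut points to the same `epochs` list changes only the thresholds in `classify_intensity`, which is exactly why Study 2 found that the choice of cut points materially alters estimated time per intensity.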
Abstract:
When designing systems that are complex, dynamic and stochastic in nature, simulation is generally recognised as one of the best design support technologies, and a valuable aid in the strategic and tactical decision making process. A simulation model consists of a set of rules that define how a system changes over time, given its current state. Unlike analytical models, a simulation model is not solved but is run, and the changes of system states can be observed at any point in time. This provides an insight into system dynamics rather than just predicting the output of a system based on specific inputs. Simulation is not a decision making tool but a decision support tool, allowing better informed decisions to be made. Due to the complexity of the real world, a simulation model can only be an approximation of the target system. The essence of the art of simulation modelling is abstraction and simplification. Only those characteristics that are important for the study and analysis of the target system should be included in the simulation model. The purpose of simulation is either to better understand the operation of a target system, or to make predictions about a target system's performance. It can be viewed as an artificial white-room which allows one to gain insight, but also to test new theories and practices, without disrupting the daily routine of the focal organisation. What one can expect to gain from a simulation study is well summarised by FIRMA (2000): if the theory that has been framed about the target system holds, and if this theory has been adequately translated into a computer model, then some of the following questions can be answered:
· Which kind of behaviour can be expected under arbitrarily given parameter combinations and initial conditions?
· Which kind of behaviour will a given target system display in the future?
· Which state will the target system reach in the future?
The required accuracy of the simulation model very much depends on the type of question one is trying to answer. In order to respond to the first question, the simulation model needs to be an explanatory model, which requires less data accuracy. In comparison, the simulation model required to answer the latter two questions has to be predictive in nature and therefore needs highly accurate input data to achieve credible outputs. These predictions involve showing trends, rather than giving precise and absolute predictions of the target system's performance. The numerical results of a simulation experiment on their own are most often not very useful and need to be rigorously analysed with statistical methods. These results then need to be considered in the context of the real system and interpreted in a qualitative way to make meaningful recommendations or compile best practice guidelines. One needs a good working knowledge of the behaviour of the real system to be able to fully exploit the understanding gained from simulation experiments. The goal of this chapter is to introduce the newcomer to what we think is a valuable asset to the toolset of analysts and decision makers. We give a summary of information we have gathered from the literature and of the first-hand experience we have gained during the last five years, while obtaining a better understanding of this exciting technology. We hope that this will help you to avoid some pitfalls that we have unwittingly encountered. Section 2 is an introduction to the different types of simulation used in Operational Research and Management Science, with a clear focus on agent-based simulation. In Section 3 we outline the theoretical background of multi-agent systems and their elements to prepare you for Section 4, where we discuss how to develop a multi-agent simulation model. Section 5 outlines a simple example of a multi-agent system.
Section 6 provides a collection of resources for further studies and finally in Section 7 we will conclude the chapter with a short summary.
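The chapter's definition of a simulation model, a set of rules that maps the current state to the next state over time, can be made concrete with a minimal discrete-time sketch. The queue model below is entirely hypothetical (it is not from the chapter); it simply shows the run-and-observe pattern, including a seeded random source so that stochastic runs stay reproducible:

```python
import random

def step(state, rng):
    """One rule set: a single-server queue with stochastic arrivals."""
    arrivals = rng.choice([0, 1, 2])   # customers arriving this tick
    served = min(state["queue"], 1)    # at most one customer served per tick
    state["queue"] += arrivals - served
    state["served_total"] += served
    return state

def run(ticks, seed=42):
    """Run the model rather than solving it, observing state at every tick."""
    rng = random.Random(seed)          # seeded for reproducible experiments
    state = {"queue": 0, "served_total": 0}
    history = []
    for _ in range(ticks):
        state = step(state, rng)
        history.append(state["queue"]) # system state observable at any point
    return state, history

final, history = run(100)
print(final["served_total"], max(history))
```

Unlike an analytical queueing formula, nothing here is solved in closed form: the behaviour under a given parameter combination and initial condition is simply observed from `history`, which is the first of the three FIRMA-style questions above.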
Abstract:
While catch-and-release (C&R) is a well-known practice in several European freshwater recreational fisheries, studies on the magnitude and impact of this practice in European marine recreational fisheries are limited. To provide an overview of the practice and magnitude of C&R among marine recreational anglers in Europe, the existing knowledge of C&R and its potential associated release mortality was collected and summarized. The present study revealed that in several European countries over half of the total recreational catch is released by marine anglers. High release proportions of > 60% were found for Atlantic cod (Gadus morhua), European sea bass (Dicentrarchus labrax), pollack (Pollachius pollachius), and sea trout (Salmo trutta) in at least one of the studied European countries. In the case of the German recreational Baltic Sea cod fishery, release proportions varied considerably between years, presumably tracking a strong year class of undersized fish. Reasons for release varied between countries and species, and included legal restrictions (e.g. minimum landing sizes and daily bag limits) and voluntary C&R. Considering the magnitude of C&R practice among European marine recreational anglers, post-release mortalities of released fish may need to be accounted for in estimated fishing mortalities. However, as the survival rates of European marine species are mostly unknown, there is a need to conduct post-release survival studies and to identify factors affecting post-release survival. Such studies could also assist in developing species-specific best-practice guidelines to minimize the impacts of C&R on released marine fish in Europe.