877 results for Decision analysis
Abstract:
The purpose of this study was to analyze the evolution of Florida state-level policy efforts and to assess the responding educational policy development and implementation at the local school district level. The focus of this study was the secondary language arts curriculum in Miami-Dade County Public Schools. Data were collected using document analysis as a means of making meaning out of the language sets proffered by agencies at each level. A matrix was created based on Klein's levels of curriculum decision-making and Functional Process Theory categories of policy formation. The matrix allowed the researcher to code and classify specific information in terms of accountability/high-stakes testing; authority; outside influences; and operational/structural organization. Federal policy documents provided a background and impetus for much of what originated at the State level. The State then produced policy directives, which the District accepted and translated into specific policy directives and guidelines for practice. No evidence was found indicating the involvement of any other agencies in the development, transmission, or implementation of the State-initiated policies. After analyzing the evolutionary process, it became clear that state policy directives were never challenged or discussed; rather, they were accepted as standards to be met, and school districts complied. Policy implementation is shown to be a top-down phenomenon. No evidence was found of a dialogue between state and local systems; rather, the state, as the source of authority, issued specifically worded policy directives and the district complied. Finally, this study recognizes that outside influences play an important role in shaping education reform policy in the state of Florida. The federal government, through NCLB and other initiatives, created a climate that led almost naturally to the creation of the Florida A+ Plan. Similarly, the business community, always interested in the production of competent workers, continued to support efforts at raising the minimum skill level of Florida high school graduates. Suggestions are made for future research, including the examination of local school sites in order to assess the overall nature of the school experience rather than relying upon performance indicators mandated by state policy.
Abstract:
In human society, people encounter various deontic conflicts every day. Deontic decisions are those that involve moral, ethical, and normative aspects. Here, the concern is with deontic conflicts: decisions where all the alternatives lead to the violation of some norm. People think critically about these kinds of decisions, but just 'what' they think about is not always clear. People use certain estimating factors/criteria to balance the tradeoffs when they encounter deontic conflicts, yet it is unclear what subjective factors people use to make a deontic decision. An elicitation approach called the Open Factor Conjoint System is proposed: an online elicitation methodology that combines two well-known research methodologies, repertory grid and conjoint analysis, extended into a web-based application. It seeks to elicit from people additional relevant (subjective) factors that affect deontic decisions. The relative importance and utility values are used to develop a decision model that predicts people's decisions. Fundamentally, this methodology was developed to be applicable to a wide range of elicitation applications with minimal experimenter bias. Compared with traditional methods, this online survey method reduces the limitations of time and space in data collection, and the methodology can be applied in many fields. Two possible applications are addressed: robotic vehicles and the choice of medical treatment. In addition, because it is online and globally accessible, the method can be applied across many disciplines in cross-cultural research.
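As a rough illustration of how elicited part-worth utilities and importance weights could feed such a predictive decision model, here is a minimal Python sketch of an additive conjoint scoring rule; the factors, levels, and numbers are hypothetical, not taken from the study.

```python
# Additive conjoint decision model: each alternative is scored as the sum of
# part-worth utilities weighted by the relative importance of its factors.
# All factor names and values below are hypothetical.

part_worths = {
    "harm_to_bystanders": {"none": 1.0, "minor": 0.4, "severe": 0.0},
    "rule_violation":     {"none": 1.0, "bend": 0.5, "break": 0.0},
}
importance = {"harm_to_bystanders": 0.7, "rule_violation": 0.3}

def utility(alternative: dict) -> float:
    """Weighted additive utility of one alternative."""
    return sum(importance[f] * part_worths[f][level]
               for f, level in alternative.items())

options = {
    "swerve":   {"harm_to_bystanders": "minor", "rule_violation": "break"},
    "continue": {"harm_to_bystanders": "severe", "rule_violation": "none"},
}
# Predict the choice as the alternative with the highest estimated utility.
predicted = max(options, key=lambda name: utility(options[name]))
print(predicted, {name: round(utility(alt), 2) for name, alt in options.items()})
```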
Abstract:
Subtitle D of the Resource Conservation and Recovery Act (RCRA) requires a post-closure period of 30 years for non-hazardous wastes in landfills. Post-closure care (PCC) activities under Subtitle D include leachate collection and treatment, groundwater monitoring, inspection and maintenance of the final cover, and monitoring to ensure that landfill gas does not migrate off site or into on-site buildings. The decision to reduce PCC duration requires exploring a performance-based methodology for Florida landfills: PCC should be based on whether the landfill is a threat to human health or the environment. Historically, no risk-based procedure has been available to establish an early end to PCC. Landfill stability depends on a number of factors, including variables that relate to operations both before and after the closure of a landfill cell. Therefore, PCC decisions should be based on location-specific factors, operational factors, design factors, post-closure performance, end use, and risk analysis. Determining the appropriate PCC period for Florida's landfills requires in-depth case studies focusing on the analysis of performance data from closed landfills in Florida. Based on data availability, Davie Landfill was identified as the case study site for a case-by-case analysis of landfill stability. The performance-based PCC decision system developed by Geosyntec Consultants was used to assess site conditions and project PCC needs. The available data for leachate and gas quantity and quality, groundwater quality, and cap conditions were evaluated. The quality and quantity data for leachate and gas were analyzed to project the levels of pollutants in leachate and groundwater with reference to the maximum contaminant level (MCL), and the projected gas quantity was estimated. A set of contaminants (including metals and organics) detected in groundwater was identified for health risk assessment; these contaminants were selected based on their detection frequency and levels in leachate and groundwater, and on their historical and projected trends. During the evaluations, a range of discrepancies and problems related to data collection and documentation were encountered, and possible solutions were proposed. Based on the results of the PCC performance evaluation integrated with risk assessment, future PCC monitoring needs and sustainable waste management options were identified. According to these results, landfill gas monitoring can be terminated, while leachate and groundwater monitoring for parameters above the MCL and surveying of cap integrity should be continued. The parameters that cause longer monitoring periods can be eliminated in future sustainable landfills. In conclusion, the 30-year PCC period can be reduced for some landfill components based on their potential impacts on human health and the environment (HH&E).
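To illustrate the kind of projection involved in comparing leachate pollutant levels against an MCL, the following minimal sketch assumes a first-order decline in concentration; the constants are hypothetical, not values from the Davie Landfill data.

```python
import math

# First-order decay projection: C(t) = C0 * exp(-k * t).
C0 = 0.12    # current leachate concentration, mg/L (hypothetical)
k = 0.08     # fitted decay rate, 1/yr (hypothetical)
MCL = 0.01   # maximum contaminant level, mg/L

# Years until the projected concentration falls below the MCL,
# i.e., a data-driven candidate end point for monitoring this parameter.
t_end = math.log(C0 / MCL) / k
print(f"projected monitoring horizon ~ {t_end:.1f} years")
```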
Abstract:
This dissertation establishes a novel data-driven method to identify language network activation patterns in pediatric epilepsy through the use of Principal Component Analysis (PCA) on functional magnetic resonance imaging (fMRI). A total of 122 subjects' data sets from five different hospitals were included in the study through a web-based repository site designed here at FIU. Research was conducted to evaluate different classification and clustering techniques for identifying hidden activation patterns and their associations with meaningful clinical variables. The results were assessed through agreement analysis with the conventional methods of lateralization index (LI) and visual rating. What is unique in this approach is the new mechanism designed for projecting language network patterns in the PCA-based decisional space. Synthetic activation maps were randomly generated from real data sets to establish nonlinear decision functions (NDF), which are then used to classify any new fMRI activation map as typical or atypical. The best nonlinear classifier was obtained in a 4-D space with a complexity (nonlinearity) degree of 7. Based on the significant association of language dominance and intensities with the top eigenvectors of the PCA decisional space, a new algorithm was deployed to delineate primary cluster members without intensity normalization. In this case, three distinct activation patterns (groups) were identified (averaged kappa with rating 0.65, with LI 0.76) and were characterized by the regions of: 1) the left inferior frontal gyrus (IFG) and left superior temporal gyrus (STG), considered typical for the language task; 2) the IFG, left mesial frontal lobe, and right cerebellum, representing a variant left-dominant pattern with higher activation; and 3) the right homologues of the first pattern in Broca's and Wernicke's language areas. Interestingly, group 2 was found to reflect a language compensation mechanism different from reorganization; its high-intensity activation suggests a possible remote effect on the right hemisphere from a focus affecting traditionally left-lateralized functions. In retrospect, this data-driven method provides new insights into mechanisms of brain compensation/reorganization and neural plasticity in pediatric epilepsy.
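A minimal sketch of the classification pipeline described above, assuming scikit-learn and synthetic data: activation maps are projected into a 4-D PCA space and separated with a degree-7 polynomial decision function. This is an illustrative stand-in, not the study's actual implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical data: rows are vectorized fMRI activation maps,
# labels are 0 = typical, 1 = atypical language pattern.
rng = np.random.default_rng(0)
X = rng.normal(size=(122, 5000))
y = rng.integers(0, 2, size=122)

# 4-D PCA decisional space with a degree-7 polynomial decision function,
# echoing the dimensionality and nonlinearity degree reported above.
clf = make_pipeline(PCA(n_components=4),
                    SVC(kernel="poly", degree=7))
clf.fit(X, y)
print(clf.predict(X[:5]))  # classify new maps as typical (0) or atypical (1)
```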
Abstract:
Accounting students become practitioners facing ethical decision-making challenges that can be subject to various interpretations; hence, the profession is concerned with the appropriateness of their decisions. The moral development of these students has implications for a profession under legal challenges, negative publicity, and government scrutiny. Accounting students' moral development has been studied by examining their responses to moral questions in Rest's Defining Issues Test (DIT), their professional attitudes on Hall's Professionalism Scale Dimensions, and their ethical orientation-based professional commitment and ethical sensitivity. This study extended research in accounting ethics and moral development by examining students in a college where an ethics course is a requirement for graduation. Knowledge of differences in the moral development of accounting students may alert practitioners and educators to potential problems resulting from a lack of ethical understanding as measured by moral development levels. If student moral development levels differ by major, and accounting majors have lower levels than other students, the conclusion may be that this difference is a causative factor in the alleged acts of malfeasance in the profession that may result in malpractice suits. The current study compared 205 accounting, business, and nonbusiness students from a private university. In addition to academic major and completion of an ethics course, the other independent variable was academic level. Gender and age were tested as control variables, and Rest's DIT score was the dependent variable. The primary analysis was a 2×3×3 ANOVA with post hoc tests for results with a significant p-value of less than 0.05. The results of this study reveal that students who take an ethics course appear to have a higher level of moral development (p=0.013), as measured by the DIT, than students at the same academic level who have not taken an ethics course. In addition, a statistically significant difference (p=0.034) exists between freshmen who took an ethics class and juniors who did not. For every analysis except one, the lower class year with an ethics class had a higher level of moral development than the higher class year without one. These results appear to show that ethics education in particular has a greater effect on the level of moral development than education in general. Findings from the gender-specific analyses appear to show that males and females respond differently to taking an ethics class. Male students do not appear to increase their moral development level after taking an ethics course (p=0.693), but male levels of moral development differ significantly (p=0.003) by major. Female levels of moral development appear to increase after taking an ethics course (p=0.002) but do not differ by major (p=0.097). These findings indicate that accounting students should be required to take a class in ethics as part of their college curriculum: students with an ethics class have a significantly higher level of moral development. The challenges facing the profession at the current time indicate that public confidence in the reports of client corporations has eroded, and one way to restore this confidence could be to require ethics training for future accountants.
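For readers wanting to reproduce a design of this shape, the following is a minimal sketch of a 2×3×3 factorial ANOVA in Python with statsmodels; the file name and column names are hypothetical, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical columns: dit_score (dependent variable), ethics_course
# (2 levels), major (3 levels: accounting/business/nonbusiness),
# level (3 academic levels).
df = pd.read_csv("dit_scores.csv")  # hypothetical data file

# Full factorial model with all two- and three-way interactions.
model = ols("dit_score ~ C(ethics_course) * C(major) * C(level)",
            data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # 2x3x3 ANOVA table
```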
Abstract:
The successful performance of a hydrological model is usually challenged by the quality of the sensitivity analysis, calibration, and uncertainty analysis carried out in the modeling exercise and the subsequent simulation results. This is especially important under changing climatic conditions, where additional uncertainties associated with climate models and downscaling processes increase the complexity of the hydrological modeling system. In response to these challenges, and to improve the performance of hydrological models under changing climatic conditions, this research proposed five new methods for supporting hydrological modeling. First, a design-of-experiment-aided sensitivity analysis and parameterization (DOE-SAP) method was proposed to investigate the significant parameters and provide more reliable sensitivity analysis for improving parameterization during hydrological modeling. In the case study, improved calibration results were achieved, along with an advanced sensitivity analysis of significant parameters and their interactions. Second, a comprehensive uncertainty evaluation scheme was developed to evaluate three uncertainty analysis methods: sequential uncertainty fitting version 2 (SUFI-2), generalized likelihood uncertainty estimation (GLUE), and parameter solution (ParaSol). The results showed that SUFI-2 performed better than the other two methods based on calibration and uncertainty analysis results, and the proposed evaluation scheme demonstrated that it is capable of selecting the most suitable uncertainty method for a given case study. Third, a novel sequential multi-criteria-based calibration and uncertainty analysis (SMC-CUA) method was proposed to improve the efficiency of calibration and uncertainty analysis and to control the phenomenon of equifinality. The results showed that SMC-CUA provided better uncertainty analysis results with higher computational efficiency than the SUFI-2 and GLUE methods, and controlled parameter uncertainty and the equifinality effect without sacrificing simulation performance. Fourth, an innovative response-based statistical evaluation method (RESEM) was proposed for estimating uncertainty propagation effects and providing long-term predictions of hydrological responses under changing climatic conditions. Using RESEM, the uncertainty propagated from statistical downscaling to hydrological modeling can be evaluated. Fifth, an integrated simulation-based evaluation system for uncertainty propagation analysis (ISES-UPA) was proposed for investigating the effects and contributions of different uncertainty components to the total uncertainty propagated from statistical downscaling. Using ISES-UPA, the uncertainty from statistical downscaling, the uncertainty from hydrological modeling, and the total uncertainty from the two sources can be compared and quantified. The feasibility of all the methods was tested using hypothetical and real-world case studies. The proposed methods can also be integrated into a hydrological modeling system to better support hydrological studies under changing climatic conditions, and the results from such an integrated system can serve as scientific references for decision makers seeking to reduce the potential risk of damage caused by extreme events in long-term water resource management and planning.
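As a concrete illustration of one of the evaluated methods, here is a minimal GLUE sketch under a toy stand-in model; the model, priors, likelihood measure, and behavioral threshold are all hypothetical choices for illustration only.

```python
import numpy as np

def run_model(theta, forcing):
    """Stand-in for a real hydrological model: returns a simulated flow series."""
    return theta[0] * forcing + theta[1]

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, used here as the informal likelihood measure."""
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(1)
forcing = rng.gamma(2.0, 1.0, size=200)              # hypothetical rainfall
obs = 0.6 * forcing + 0.2 + rng.normal(0, 0.1, 200)  # hypothetical observed flows

# 1. Monte Carlo sampling of parameter sets from uniform priors.
thetas = rng.uniform([0.0, 0.0], [1.0, 1.0], size=(5000, 2))
sims = np.array([run_model(t, forcing) for t in thetas])
likelihoods = np.array([nse(s, obs) for s in sims])

# 2. Retain 'behavioral' parameter sets above a likelihood threshold;
#    all retained sets are treated as plausible (hence equifinality).
behavioral = likelihoods > 0.7

# 3. Percentile bounds across behavioral simulations form the uncertainty band.
lower, upper = np.percentile(sims[behavioral], [5, 95], axis=0)
print(behavioral.sum(), "behavioral sets; mean band width:",
      round((upper - lower).mean(), 3))
```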
Abstract:
The exploration and development of oil and gas reserves located in harsh offshore environments are characterized by high risk. Some of these reserves would be uneconomical if produced using conventional drilling technology, due to increased drilling problems and prolonged non-productive time. The search for new ways to reduce drilling cost and minimize risk has led to the development of Managed Pressure Drilling techniques, which address the drawbacks of conventional overbalanced and underbalanced drilling. As managed pressure drilling techniques evolve, many questions related to safety and operating pressure regimes remain unanswered, and quantitative risk assessment techniques are often used to answer them. Quantitative risk assessment is conducted for the various stages of drilling operations: drilling ahead, tripping, casing, and cementing. A diagnostic model for analyzing the rotating control device, the main component of managed pressure drilling techniques, is also studied. The Noisy-OR logic concept is explored to capture the unique relationship between casing and cementing operations in leading to well integrity failure, and to model the critical components of the constant bottom-hole pressure variant of managed pressure drilling during tripping operations. Relevant safety functions and inherent safety principles are utilized to improve well integrity operations. A loss function modelling approach enabling dynamic consequence analysis is adopted to study blowout risk for real-time decision making. The aggregation of the blowout loss categories, comprising production, asset, human health, environmental response, and reputation losses, leads to risk estimation using a dynamically determined probability of occurrence. Lastly, the various sub-models developed for the stages/sub-operations of drilling and the consequence modelling approach are integrated for a holistic risk analysis of drilling operations.
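A minimal sketch of the Noisy-OR gate used to combine failure causes: the probability that at least one active parent cause leads to well integrity failure. The link probabilities are hypothetical, not values from the study.

```python
# Noisy-OR gate: P(failure | parents) = 1 - prod_i (1 - p_i)^x_i, where p_i
# is the probability that active parent i alone causes failure and x_i
# indicates whether parent i is present.

def noisy_or(p, x):
    """Probability of failure given link probabilities p and parent states x."""
    prob_no_failure = 1.0
    for p_i, x_i in zip(p, x):
        if x_i:
            prob_no_failure *= (1.0 - p_i)
    return 1.0 - prob_no_failure

# Hypothetical link probabilities for two well-integrity failure causes.
p = [0.15, 0.30]            # casing defect, poor cement bond
print(noisy_or(p, [1, 1]))  # both causes active -> 0.405
print(noisy_or(p, [1, 0]))  # casing defect alone -> 0.15
```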
Abstract:
Funding: Verity Watson acknowledges financial support from the Chief Scientist Office of the Scottish Government Health and Social Care Directorates. The funders had no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the article for publication. Acknowledgements: We thank Marjon van der Pol, Mandy Ryan and Rainer Schulz for helpful comments and suggestions throughout the project. We also thank Karen Gerard and Tim Bolt for comparing the results of our systematic review with a similar systematic review they were conducting at the same time. We would like to thank Douglas Olley for excellent research assistance.
Abstract:
The importance of non-destructive techniques (NDT) in structural health monitoring programmes has become critically apparent in recent times. The quality of the measured data, often affected by various environmental conditions, can be a guiding factor for the usefulness and prediction efficiency of the various detection and monitoring methods used. Often, preprocessing the acquired data in relation to the affecting environmental parameters can improve information quality and lead to a significantly more efficient and correct prediction process. The improvement bears directly on the final decision-making policy for a structure or a network of structures, and is compatible with general probabilistic frameworks for such assessment and decision-making programmes. This paper considers a preprocessing technique employed in an image-analysis-based structural health monitoring methodology to identify submarine pitting corrosion in the presence of variable luminosity, contrast, and noise affecting image quality. Case-dependent adjustment of the gray-level threshold of the various images is observed to bring about a significant improvement in damage detection compared with an automatically computed gray-level threshold, enabling the best possible information to be obtained from an existing image. The corresponding improvements are assessed qualitatively in the present study.
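A minimal sketch of the thresholding comparison, assuming scikit-image and a synthetic stand-in image: an automatically computed Otsu threshold versus a case-dependent manual adjustment; the correction factor is hypothetical.

```python
import numpy as np
from skimage import filters

# Synthetic stand-in for a grayscale underwater image of a pitted surface:
# bright background, a darker pit, noise, and a luminosity gradient.
rng = np.random.default_rng(5)
img = 0.8 * np.ones((200, 200)) + rng.normal(0, 0.05, (200, 200))
img[60:80, 60:80] = 0.3             # a dark corrosion pit
img += np.linspace(-0.1, 0.1, 200)  # variable luminosity across the image

# Automatically computed gray-level threshold (Otsu's method).
t_auto = filters.threshold_otsu(img)
mask_auto = img < t_auto            # dark pit pixels fall below the threshold

# Case-dependent manual adjustment to compensate for lighting/contrast/noise.
t_adj = t_auto * 0.9                # hypothetical per-image correction factor
mask_adj = img < t_adj

print(f"auto threshold {t_auto:.3f}: {mask_auto.mean():.1%} of pixels flagged")
print(f"adjusted       {t_adj:.3f}: {mask_adj.mean():.1%} of pixels flagged")
```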
Abstract:
The organisational decision making environment is complex, and decision makers must deal with uncertainty and ambiguity on a continuous basis. Managing and handling decision problems and implementing a solution require an understanding of the complexity of the decision domain, to the point where the problem and its complexity, as well as the requirements for supporting decision makers, can be described. Research in the Decision Support Systems domain has been extensive over the last thirty years, with an emphasis on the development of further technology and better applications on the one hand and, on the other, a social approach focusing on understanding what decision making is about and how developers and users should interact. This research project takes a combined approach that endeavours to understand the thinking behind managers' decision making, as well as their informational and decisional guidance and decision support requirements. It utilises a cognitive framework, developed in 1985 by Humphreys and Berkeley, that juxtaposes the mental processes and ideas of decision problem definition and problem solution, developed in tandem through cognitive refinement of the problem based on the analysis and judgement of the decision maker. The framework separates what is essentially a continuous process into five distinct levels of abstraction of managers' thinking and suggests a structure for the underlying cognitive activities. Alter (2004) argues that decision support provides a richer basis than decision support systems, in both practice and research. The literature on decision support, especially regarding modern high-profile systems such as Business Intelligence and Business Analytics, can give the impression that all 'smart' organisations utilise decision support and data analytics capabilities for all of their key decision making activities. However, this empirical investigation indicates a very different reality.
Abstract:
Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as military, fishing, transportation, and offshore energy have historically been post hoc; i.e., the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, the U.S. Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance, respectively, from scientific surveys. These models are then applied within novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting offshore wind energy development, and routing ships to minimize the risk of striking whales. Both decision frameworks present the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.
For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights of OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify the months that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
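A minimal sketch of the site-scoring idea, with synthetic data: per-site conservation sensitivity as a weighted sum of species density maps, set against a crude profitability index; all values, weights, and the shortlist rule are hypothetical illustrations, not the dissertation's method.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sites, n_species = 200, 5
density = rng.random((n_sites, n_species))  # bird density per site and species
weight = rng.random(n_species)              # collision/displacement sensitivity

sensitivity = density @ weight              # per-site conservation risk score
wind_speed = rng.uniform(7, 10, n_sites)    # m/s at 90 m hub height
grid_dist = rng.uniform(5, 80, n_sites)     # km to transmission grid
profit = wind_speed - 0.02 * grid_dist      # hypothetical profitability index

# Crude shortlist of sites that trade low risk against high profit;
# a real tradeoff plot would show the full risk-vs-profit frontier.
shortlist = np.argsort(sensitivity - profit)[:10]
print(shortlist)
```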
Routing ships to avoid whale strikes (chapter 5) can be viewed as a similar tradeoff, but it is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e., ports and entrance locations to study areas. Varying a multiplier on the cost surface enables the calculation of multiple routes with different costs to cetacean conservation versus cost to the transportation industry, measured as distance. As in the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
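A minimal sketch of the routing idea, assuming scikit-image: a least-cost path over a resistance surface whose conservation component is scaled by a multiplier to trace the distance-versus-risk tradeoff; the cost surface and locations are synthetic, not the dissertation's data.

```python
import numpy as np
from skimage.graph import route_through_array

rng = np.random.default_rng(2)
whale_density = rng.random((100, 100))  # hypothetical cetacean cost surface

start, end = (0, 0), (99, 99)           # port and study-area entrance (cells)

# Sweep the conservation multiplier to trace the tradeoff curve: higher
# multipliers buy lower conservation cost at the price of longer routes.
for multiplier in (0.0, 1.0, 5.0):
    cost_surface = 1.0 + multiplier * whale_density  # 1.0 = base distance cost
    path, total_cost = route_through_array(cost_surface, start, end,
                                           fully_connected=True)
    print(f"multiplier {multiplier}: {len(path)} cells, cost {total_cost:.1f}")
```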
Essential inputs to these decision frameworks are the species distributions. The two preceding chapters comprise species distribution models for the two case study areas, the U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per Marine Mammal Protection Act requirements in the U.S., the necessary parameters, especially distance and angle of observation, are less readily available across publicly mined datasets.
To predict cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operating Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
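A minimal sketch of the thresholding step, assuming scikit-learn and synthetic predictions: the optimal cut-point on the ROC curve is taken where Youden's J statistic (tpr - fpr) is maximized, which jointly minimizes the false positive and false negative error rates.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical inputs: y_true = observed presence (0/1) per grid cell,
# y_prob = model-predicted probability of occurrence.
rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, 1000)
y_prob = np.clip(0.5 * y_true + rng.normal(0.3, 0.2, 1000), 0, 1)

fpr, tpr, thresholds = roc_curve(y_true, y_prob)
best = thresholds[np.argmax(tpr - fpr)]  # Youden's J: maximize tpr - fpr
presence_map = y_prob >= best            # binary presence/absence map
print(f"optimal threshold {best:.2f}")
```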
To predict cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic line-transect marine mammal surveys conducted by Raincoast Conservation Foundation over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007). Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven useful where fewer observations are available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis, and Steller sea lions and harbour seals are further differentiated by 'hauled out' and 'in water'. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for the Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and the increased oil spill and ocean noise issues associated with growth in container ship and oil tanker traffic in British Columbia's continental shelf waters.
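As a pointer to the arithmetic behind CDS, here is a minimal sketch of the stratum-level line-transect density estimator; the survey values are hypothetical.

```python
# Conventional Distance Sampling (CDS) point estimate for one stratum:
# D = n / (2 * w * L * p), where n = sightings, w = truncation distance,
# L = total transect length, p = average detection probability within w.
n, w, L, p = 42, 0.5, 300.0, 0.62  # hypothetical survey values (km units)
density = n / (2 * w * L * p)      # animals per km^2
print(f"{density:.3f} animals/km^2")
```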
Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite-derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance that can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent, allowing conservation interests, industry, and stakeholders to game scenarios toward optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management, and dynamic ocean management.
Abstract:
BACKGROUND: Less than 1% of severely obese US adults undergo bariatric surgery annually. It is critical to understand the factors that contribute to its utilization. OBJECTIVES: To understand how primary care physicians (PCPs) make decisions regarding severe obesity treatment and bariatric surgery referral. SETTING: Focus groups with PCPs practicing in small, medium, and large cities in Wisconsin. METHODS: PCPs were asked to discuss prioritization of treatment for a severely obese patient with multiple co-morbidities and considerations regarding bariatric surgery referral. Focus group sessions were analyzed by using a directed approach to content analysis. A taxonomy of consensus codes was developed. Code summaries were created and representative quotes identified. RESULTS: Sixteen PCPs participated in 3 focus groups. Four treatment prioritization approaches were identified: (1) treat the disease that is easiest to address; (2) treat the disease that is perceived as the most dangerous; (3) let the patient set the agenda; and (4) address obesity first because it is the common denominator underlying other co-morbid conditions. Only the latter approach placed emphasis on obesity treatment. Five factors made PCPs hesitate to refer patients for bariatric surgery: (1) wanting to "do no harm"; (2) questioning the long-term effectiveness of bariatric surgery; (3) limited knowledge about bariatric surgery; (4) not wanting to recommend bariatric surgery too early; and (5) not knowing if insurance would cover bariatric surgery. CONCLUSION: Decision making by PCPs for severely obese patients seems to underprioritize obesity treatment and overestimate bariatric surgery risks. This could be addressed with PCP education and improvements in communication between PCPs and bariatric surgeons.
Abstract:
BACKGROUND: The American College of Cardiology guidelines recommend 3 months of anticoagulation after replacement of the aortic valve with a bioprosthesis. However, there remains great variability in current clinical practice and conflicting results from clinical studies. To assist clinical decision making, we pooled the existing evidence to assess whether anticoagulation in the setting of a new bioprosthesis was associated with improved outcomes or greater risk of bleeding. METHODS AND RESULTS: We searched the PubMed database from its inception until April 2015 to identify original studies (observational studies or clinical trials) that assessed anticoagulation with warfarin in comparison with either aspirin or no antiplatelet or anticoagulant therapy. We included studies if their outcomes included thromboembolism or stroke/transient ischemic attacks and bleeding events. Quality assessment was performed in accordance with the Newcastle-Ottawa Scale, and random-effects analysis was used to pool the data from the available studies. I² testing was done to assess the heterogeneity of the included studies. After screening 170 articles, a total of 13 studies (cases=6431; controls=18210) were included in the final analyses. The use of warfarin was associated with a significantly increased risk of overall bleeding (odds ratio, 1.96; 95% confidence interval, 1.25-3.08; P<0.0001) and of bleeding at 3 months (odds ratio, 1.92; 95% confidence interval, 1.10-3.34; P<0.0001) compared with aspirin or placebo. With regard to the composite primary outcome (risk of venous thromboembolism, stroke, or transient ischemic attack) at 3 months, no significant difference was seen with warfarin (odds ratio, 1.13; 95% confidence interval, 0.82-1.56; P=0.67). Moreover, anticoagulation was not shown to improve outcomes beyond 3 months (odds ratio, 1.12; 95% confidence interval, 0.80-1.58; P=0.79). CONCLUSIONS: Contrary to the current guidelines, this meta-analysis of previous studies suggests that anticoagulation in the setting of an aortic bioprosthesis significantly increases bleeding risk without a favorable effect on thromboembolic events. Larger, randomized controlled studies should be performed to further guide this clinical practice.
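For readers curious about the pooling arithmetic, here is a minimal sketch of DerSimonian-Laird random-effects pooling of log odds ratios; the per-study values are hypothetical, not the paper's data.

```python
import numpy as np

# Hypothetical per-study odds ratios and within-study variances of log-OR.
y = np.log(np.array([1.8, 2.3, 1.4, 2.0]))  # study log odds ratios
v = np.array([0.10, 0.15, 0.08, 0.12])      # within-study variances

# DerSimonian-Laird estimate of between-study variance tau^2.
w_fixed = 1 / v
y_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
Q = np.sum(w_fixed * (y - y_fixed) ** 2)            # Cochran's Q heterogeneity
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)

# Random-effects pooled odds ratio with a 95% confidence interval.
w = 1 / (v + tau2)
pooled = np.sum(w * y) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(f"OR {np.exp(pooled):.2f} (95% CI {np.exp(pooled - 1.96 * se):.2f}"
      f"-{np.exp(pooled + 1.96 * se):.2f})")
```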
Abstract:
Economic policy-making has long been more integrated than social policy-making, in part because the statistics and much of the analysis that support economic policy are based on a common conceptual framework: the system of national accounts. People interested in economic analysis and economic policy share a common language of communication, one that includes both concepts and numbers. This paper examines early attempts to develop a system of social statistics that would mirror the system of national accounts, particularly the work on the development of social accounts that took place mainly in the 1960s and 1970s. It explores the reasons why these early initiatives failed but argues that the preconditions now exist to develop a new conceptual framework to support integrated social statistics, and hence a more coherent, effective social policy. Optimism is warranted for two reasons. First, we can make use of the radical transformation that has taken place in information technology, both in processing data and in providing wide access to the knowledge that can flow from the data. Second, the conditions exist to begin shifting away from the straitjacket of government-centric social statistics, with its implicit assumption that governments must be the primary actors in finding solutions to social problems. By supporting the decision-making of all the players (particularly individual citizens) who affect social trends and outcomes, we can start to move beyond the sterile, ideological discussions that have dominated much social discourse in the past and begin to build social systems and structures that evolve, almost automatically, based on empirical evidence of 'what works best for whom'. The paper describes a Canadian approach to developing a framework, or common language, to support the evolution of an integrated, citizen-centric system of social statistics and social analysis. This language supports the traditional social policy that we have today; nothing is lost. However, it also supports a quite different social policy world, one where individual citizens and families (not governments) are seen as the central players: a more empirically driven world that we have referred to as the 'enabling society'.
Abstract:
This qualitative study explores the barriers and dilemmas faced by beginning and novice mentors in post-compulsory education in the southeast of England. It analyses critical incidents (Tripp, 2012) taken from the everyday practice of mentors who were supporting new teachers and lecturers, categorises the different types of critical incidents that mentors encountered, and describes the strategies and rationales mentors used to support mentees and (indirectly) their learners and colleagues. The study also explores ways in which mentors' own values, beliefs, and life experiences affected their mentoring practice. Methodology: As part of a specialist master's-level professional development module, 21 mentors wrote about two critical incidents (Tripp, 2012) taken from their own professional experience, aiming to demonstrate their support for their mentees' range of complex needs. These critical incidents were written up as short case studies, which justified the rationale for the mentors' interventions and demonstrated their own professional development in mentoring. Critical incidents were used as units of analysis and categorised thematically by topic, sector, and mentoring strategies used. Findings: The research demonstrated the complex nature of decision-making and the potential for professional learning within a mentoring dyad. The study of these critical incidents found that mentors most frequently cited the controversial nature of teaching observations, the mentor's role in mediating professional relationships, the importance of inculcating professional dispositions in education, and the need to support new teachers in using effective behaviour management strategies. This study contributes to our understanding of the central importance of mentoring for professional growth within teacher education. It identifies common dilemmas that novice mentors face in post-compulsory education, justifies the rationale for their interventions and mentoring strategies, and helps to identify ways in which mentors' professional development needs can be met. It demonstrates that mentoring is complex, non-linear, and mediated by mentors' motivation and values.