775 results for Payment-in-kind program
Resumo:
The ex ante quantification of impacts is compulsory when establishing a Rural Development Program (RDP) in the European Union. The purpose of this paper is therefore to learn how to perform it better. To that end, all of the European 2007-2013 RDPs (a total of 88) and all of their corresponding available ex ante evaluations were analyzed. Results show that fewer than 50% of all RDPs quantify all the impact indicators, and that the most widely used methodology that allows the quantification of all impact indicators is Input-Output analysis. Two main difficulties are cited for not accomplishing the impact quantification: the heterogeneity of actors and factors involved in the program impacts, and the lack of the needed information. These difficulties should be addressed by using new methods that allow approaching the complexity of the programs and by implementing better planning that facilitates gathering the needed information.
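Input-Output analysis, the methodology the survey finds most able to quantify all impact indicators, boils down to multiplying a final-demand shock by the Leontief inverse. A minimal sketch of that calculation follows; the matrix and demand values are illustrative, not taken from any RDP:

```python
import numpy as np

# Hypothetical 3-sector technical-coefficients matrix A:
# A[i, j] = input from sector i needed per unit of output of sector j.
A = np.array([
    [0.10, 0.05, 0.02],
    [0.20, 0.10, 0.05],
    [0.05, 0.15, 0.10],
])

# Hypothetical final-demand injection by the programme (EUR million per sector).
delta_demand = np.array([10.0, 5.0, 2.0])

# Total (direct + indirect) output impact: delta_x = (I - A)^(-1) @ delta_d.
leontief_inverse = np.linalg.inv(np.eye(3) - A)
delta_output = leontief_inverse @ delta_demand

print(delta_output.round(2))
```

Because the Leontief inverse captures inter-sector linkages, the total output impact exceeds the initial injection; the ratio between the two is the programme's output multiplier.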
Application of the agency theory for the analysis of performance-based mechanisms in road management
Resumo:
The WCTR is a congress of recognized international prestige in the field of transport research, and although the published proceedings are in digital format and have no ISSN or ISBN, we consider it important enough to be taken into account in the indicators. This paper develops a model based on agency theory to analyze road management systems (under the different contract forms available today) that employ a mechanism of performance indicators to establish the payment of the agent. The base assumptions are asymmetric information between the principal (Public Authorities) and the agent (contractor), and the risk aversion of the latter. It is assumed that the principal may only measure the agent's performance indirectly, by means of certain performance indicators that may be verified by the authorities. In this model there is presumed to be a relation between the efforts made by the agent and the performance level measured by the corresponding indicators, though it is also considered that there may be dispersion between both variables, which gives rise to a certain degree of randomness in the contract. An analysis of the optimal contract has been made on the basis of this model, in accordance with a series of parameters that characterize the economic environment and the particular conditions of road infrastructure. As a result of the analysis, it is concluded that an optimal contract should generally combine a fixed component and a payment in accordance with the performance level obtained. The higher the risk aversion of the agent and the greater the marginal cost of public funds, the lower the impact of this performance-based payment. By way of conclusion, the system of performance indicators should be as broad as possible but should not overweight those indicators that involve greater randomness in their results.
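The qualitative conclusion above (a fixed fee plus a performance payment whose weight falls with the agent's risk aversion and the noise of the indicators) mirrors the standard linear-contract benchmark from agency theory. The sketch below is that textbook formula, not the paper's exact model; all parameter names and values are ours:

```python
def optimal_incentive_weight(risk_aversion: float,
                             indicator_variance: float,
                             effort_cost_curvature: float) -> float:
    """Holmstrom-Milgrom linear-contract benchmark: the agent is paid
    fixed_fee + beta * indicator, and the optimal performance weight is
    beta = 1 / (1 + r * sigma^2 * c), falling as risk aversion (r) or
    indicator noise (sigma^2) grows."""
    r, sigma2, c = risk_aversion, indicator_variance, effort_cost_curvature
    return 1.0 / (1.0 + r * sigma2 * c)

# A noisier performance indicator gets a smaller performance-based share,
# shifting the contract toward the fixed component.
low_noise = optimal_incentive_weight(2.0, indicator_variance=0.1, effort_cost_curvature=1.0)
high_noise = optimal_incentive_weight(2.0, indicator_variance=1.0, effort_cost_curvature=1.0)
print(low_noise, high_noise)
```

This matches the paper's closing recommendation: indicators that carry more randomness in their results should receive less weight in the payment.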
Resumo:
The project presented here surveys the technologies used in object detection and recognition, especially for leaves and chromosomes. The document contains the typical parts of a scientific paper: an Abstract, an Introduction, sections covering the investigation area, future work, conclusions, and the references used in its elaboration. The Abstract describes what is covered in this paper, namely the technologies employed in pattern detection and recognition for leaves and chromosomes, and the existing work on cataloguing these objects. The introduction explains the meanings of detection and recognition. This is necessary because many papers confuse these terms, especially the ones dealing with chromosomes. Detecting an object means keeping the parts of the image that are useful and discarding the useless parts; in short, detection amounts to recognizing the object's borders. Recognition, in turn, is the process by which the computer or machine says what kind of object it is handling. Next comes a compilation of the most widely used technologies in object detection in general. There are two main groups in this category: those based on image derivatives and those based on ASIFT points. The methods based on image derivatives have in common that the image is processed by convolving it with a previously created matrix. This is done to detect borders in the image, which are changes in pixel intensity. Within these technologies there are two groups: gradient-based methods, which search for maxima and minima of pixel intensity since they only use the first derivative, and Laplacian-based methods, which search for zeros of pixel intensity since they use the second derivative.
Depending on the level of detail wanted in the final result, one option or the other will be chosen: with gradient-based methods the computer consumes fewer resources and less time because there are fewer operations, but the quality is worse; with Laplacian-based methods more time and resources are needed because they require more operations, but the result has much better quality. After explaining the derivative-based methods, the different algorithms available for both groups are reviewed. The other big group of technologies for object recognition is the one based on ASIFT points, which rely on six image parameters and compare one image with another taking those parameters into account. The disadvantage of these methods, for our future purposes, is that they are only valid for one single object: if we are going to recognize two different leaves, even if they belong to the same species, we will not be able to recognize them with this method. It is still important to mention these technologies, since we are discussing recognition methods in general. At the end of the chapter there is a comparison of the pros and cons of all the technologies employed, first comparing them separately and then all together, based on our purposes. The next chapter, on recognition techniques, is not very extensive because, even though there are general steps for object recognition, every single object to be recognized has its own method, since they are all different; that is why no general method can be specified in that chapter. We then move on to leaf detection techniques on computers, using the technique explained above based on image derivatives. The next step is to turn the leaf into several parameters; depending on the document consulted, there will be more or fewer parameters.
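The two derivative-based families described above can be illustrated with small kernels. The kernel values below are standard textbook choices (Sobel for the gradient, the 4-neighbour Laplacian for the second derivative), not taken from the project:

```python
import numpy as np

def filter2d(image, kernel):
    """Naive sliding-window filter (cross-correlation), 'valid' region only."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Gradient-based (first derivative): Sobel kernels; edges appear as
# maxima of the gradient magnitude.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

# Laplacian-based (second derivative): edges appear as zero-crossings.
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

# Toy image: dark left half, bright right half -> one vertical edge.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

gx = filter2d(image, sobel_x)
gy = filter2d(image, sobel_y)
gradient_magnitude = np.hypot(gx, gy)            # peaks at the edge columns
laplacian_response = filter2d(image, laplacian)  # changes sign at the edge
```

Edge pixels show up as the maxima of `gradient_magnitude` and as sign changes (zero-crossings) in `laplacian_response`, which is exactly the distinction between the two method families drawn above.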
Some papers recommend dividing the leaf into 3 main features (shape, dent and vein) and, by doing mathematical operations with them, up to 16 secondary features can be obtained. The other proposition divides the leaf into 5 main features (diameter, physiological length, physiological width, area and perimeter) and from those extracts 12 secondary features. This second alternative is the most used, so it is the one taken as reference. Moving on to leaf recognition, we rely on a paper that provides source code which, after the user clicks on both leaf ends, automatically tells to which species the leaf being recognized belongs. To do so, it only requires a database. In the tests reported by that document, the authors claim 90.312% accuracy over 320 total tests (32 plants in the database and 10 tests per species). The next chapter deals with chromosome detection, where the metaphase plate, in which the chromosomes are disorganized, must be turned into the karyotype plate, which is the usual view of the 23 chromosome pairs ordered by number. There are two types of technique for this step: the skeletonization process and sweeping angles. The skeletonization process consists of suppressing the inside pixels of the chromosome to keep only the silhouette. This method is really similar to the ones based on image derivatives, but the difference is that it does not detect the borders but the interior of the chromosome. The second technique consists of sweeping angles from the beginning of the chromosome and, taking into account that a single chromosome cannot bend by more than a certain angle, it detects the various regions of the chromosome. Once the karyotype plate is defined, we continue with chromosome recognition. For this, there is a technique based on the banding pattern (grey-scale bands) that makes each chromosome unique. The program detects the longitudinal axis of the chromosome and reconstructs the band profiles.
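The five "main features" in the second scheme can be sketched from a binary leaf mask. Note two simplifications of ours, not the papers' method: physiological length and width are approximated here by the bounding box (the papers derive them from the two user-clicked leaf ends), and the diameter is found by brute force:

```python
import numpy as np
from itertools import combinations

def leaf_features(mask: np.ndarray) -> dict:
    """Sketch of the five 'main features' computed from a binary leaf
    mask (1 = leaf pixel). Illustration only."""
    ys, xs = np.nonzero(mask)
    area = float(mask.sum())                 # pixel count inside the leaf
    # Perimeter: leaf pixels with at least one 4-connected background neighbour.
    padded = np.pad(mask, 1)
    boundary = mask & ((padded[:-2, 1:-1] == 0) | (padded[2:, 1:-1] == 0) |
                       (padded[1:-1, :-2] == 0) | (padded[1:-1, 2:] == 0))
    perimeter = float(boundary.sum())
    # Diameter: longest distance between any two leaf pixels (brute force).
    pts = np.column_stack([ys, xs])
    diameter = max(np.hypot(*(p - q)) for p, q in combinations(pts, 2))
    length = float(ys.max() - ys.min() + 1)  # bounding-box proxy
    width = float(xs.max() - xs.min() + 1)   # bounding-box proxy
    return {"diameter": diameter, "length": length, "width": width,
            "area": area, "perimeter": perimeter}

# Toy 'leaf': a filled 4x6 rectangle inside an 8x10 image.
mask = np.zeros((8, 10), dtype=int)
mask[2:6, 2:8] = 1
print(leaf_features(mask))
```

The 12 secondary features in the scheme are then ratios and combinations of these five (for example, area over the length-width product), which is what makes this parameterization convenient for matching against a database.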
Then the computer is able to recognize the chromosome. Concerning future work, we generally have two independent sets of techniques that do not unite detection and recognition, so our main focus would be to prepare a program that gathers both. On the leaf side we have seen that detection and recognition are linked, as both share the option of dividing the leaf into 5 main features. The work to be done is to create an algorithm linking both methods, since in the program that recognizes leaves both leaf ends have to be clicked, so it is not an automatic algorithm. On the chromosome side, an algorithm should be created that searches for the beginning of the chromosome and then starts to sweep angles, later passing the parameters to the program that searches for the band profiles. Finally, the summary explains why this type of investigation is needed: with global warming, many species (animals and plants) are beginning to go extinct. That is why a big database gathering all possible species is needed. To recognize an animal species, it is enough to have its 23 chromosome pairs, while to recognize a plant there are several ways, of which the easiest to input into a computer is to scan a leaf of the plant.
Resumo:
Funding This work was supported by grants from the French Ministry of Research (PhD fellowship to CR), the University of Aberdeen (stipend to CR), the CNRS (PICS grant to BD), the L’Oréal Foundation-UNESCO “For Women in Science” program (fellowship to CR), the Région Rhône-Alpes (student mobility grant CMIRA Explora’doc to CR), the Rectors’ Conference of the Swiss Universities (mobility grant to CR), the Fédération de Recherche 41 BioEnvironnement et Santé (training grant to CR), and the Journal of Experimental Biology (travel grant to CR).
Resumo:
Applied colorimetry is an important module in the program of the elective subject "Colour Science: industrial applications". This course is taught in the Optics and Optometry Degree, and it has been used as a test bed for the application of new teaching and assessment techniques consistent with the new European Higher Education Area. In particular, the main objective was to reduce attendance at lessons and encourage the individual and collective work of students. The reason for this approach is the idea that students are able to work at their own learning pace. Within this dynamic, we propose online lab practice based on Excel templates that our research group has developed ad hoc for different aspects of colorimetry, such as conversion between colour spaces, calculation of perceptual descriptors (hue, saturation, lightness), calculation of colour differences, colour matching of dyes, etc. The practice presented in this paper is focused on the learning of colour differences. The session is based on a specific Excel template to compute colour differences and to plot graphs of these differences as defined by different formulas in the CIELAB colour space: CIE ΔE and CIE ΔE94. This template is embedded in a website that directs the student's work in a proper and organized way. The aim was to unify all the student work in one website, so that the student is able to learn autonomously and sequentially, at his or her own pace. To achieve this purpose, all the tools, links and documents are collected for each proposed activity, to meet guided specific objectives. In the context of educational innovation, this type of website is normally called a WebQuest. The design of a WebQuest is established according to the criteria of usability and simplicity. There are great advantages of using WebQuests over the toolbox "Campus Virtual" available at the University of Alicante.
The Campus Virtual is an unfriendly environment for this specific purpose, as the activities are organized in different sectors depending on whether the activity is a discussion, an exercise, a self-assessment or a download of materials. With this separation, it is harder for the student to follow an organized sequence. Our WebQuest, however, provides a more intuitive graphical environment and, besides, all the tasks and the resources needed to complete them are grouped and organized in a linear sequence. In this way, guided student learning is optimized. Furthermore, with this simplification, the student focuses on learning and does not waste resources. Finally, this tool has a wide set of potential applications: online courses on applied colorimetry for postgraduate students, OpenCourseWare, etc.
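The two colour-difference formulas the Excel template covers can be written in a few lines. The CIELAB sample coordinates below are made up for illustration:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE 1976 colour difference: Euclidean distance in CIELAB."""
    return math.dist(lab1, lab2)

def delta_e_94(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    """CIE 1994 colour difference (graphic-arts weights kL=kC=kH=1).
    lab1 is treated as the reference colour."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    # dH^2 can dip slightly below zero from rounding; clamp at zero.
    dH2 = max(da * da + db * db - dC * dC, 0.0)
    SL, SC, SH = 1.0, 1.0 + 0.045 * C1, 1.0 + 0.015 * C1
    return math.sqrt((dL / (kL * SL)) ** 2 +
                     (dC / (kC * SC)) ** 2 +
                     dH2 / (kH * SH) ** 2)

sample = (52.0, 42.2, 20.1)    # hypothetical L*, a*, b* values
standard = (50.0, 40.0, 22.0)
print(delta_e_76(sample, standard), delta_e_94(sample, standard))
```

Because ΔE94 divides the chroma and hue terms by weights greater than one, it never exceeds ΔE76 for the same pair of colours (with unit k-factors), which is one of the comparisons the template lets students see graphically.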
Resumo:
Bibliography: leaf 128.
Resumo:
Cover title.
Resumo:
Contract number 99-7-247-36-07.
Resumo:
Thesis (Master's)--University of Washington, 2016-06
Resumo:
There is limited information available about the literacy skills of adults with intellectual disabilities. In this project, information was collected about the contexts, current practices, and clients' literacy abilities in two community-based disability service programs. Individual assessments were undertaken to collect details of the current literacy levels of adults with intellectual disabilities in day program settings. These assessments focused on receptive language; reading at the letter, word and sentence level; writing vocabulary and connected text; and literacy preferences. Audits were also conducted on the provision of opportunities for clients accessing these services to engage with literacy, including environmental print. Structured day program activities were observed to gather information about current literacy teaching and learning. Implications of the research findings and suggestions for the provision of literacy education in these settings are discussed.
Resumo:
Objective: To adapt the Family Wellbeing empowerment program, which was initially designed to support adults to take greater control and responsibility for their decisions and lives, to the needs of Indigenous school children living in remote communities. Method: At the request of two schools in remote Indigenous communities in far north Queensland, a pilot personal development and empowerment program based on the adult Family Wellbeing principles was developed, conducted and evaluated in the schools. The main aims of the program were to build personal identity and to encourage students to recognise their future potential and be more aware of their place in the community and wider society. Results: Participation in the program resulted in significant social and emotional growth for the students. Outcomes described by participating students and teachers included increased analytical and reflective skills, greater ability to think for oneself and set goals, less teasing and bullying in the school environment, and an enhanced sense of identity, friendship and 'social relatedness'. Conclusion: This pilot implementation of the Family Wellbeing Program adapted for schools demonstrated the program's potential to enhance Indigenous young people's personal growth and development. Challenges remain in increasing parental/family involvement and ensuring the program's sustainability and transferability. The team has been working with relevant stakeholders to further develop and package the school-based Family Wellbeing program for Education Queensland's New Basics curriculum framework.
Resumo:
Objective: To assess the impact of structured diabetes care in a rural general practice. Design and setting: A cohort study of structured diabetes care (care plans, multidisciplinary involvement and regular patient recall) in a large general practice in a medium-sized Australian rural town. Medical care followed each doctor's usual practice. Participants: The first 404 consecutive patients with type 2 diabetes who consented to take part in the program were evaluated 24 months after enrolment in July 2002 to December 2003. Main outcome measures: Change in cardiovascular disease risk factors (waist circumference, body mass index, serum lipid levels, blood pressure); change in indicators of risks associated with poorly controlled diabetes (glycated haemoglobin [HbA1c] concentration, foot lesions, clinically significant hypoglycaemia); change in 5-year cardiovascular disease risk. Results: Women had a lower 5-year risk of a cardiovascular event at enrolment than men. Structured care was associated with statistically significant reductions in mean cardiovascular disease risk factors (waist circumference, -2.6 cm; blood pressure [systolic, -3 mmHg; diastolic, -7 mmHg]; and serum lipid levels [total cholesterol, -0.5 mmol/L; HDL cholesterol, 0.02 mmol/L; LDL cholesterol, -0.4 mmol/L; triglycerides, -0.3 mmol/L]) and improvements in indicators of diabetic control (proportion with severe hypoglycaemic events, -2.2%; proportion with foot lesions, -14%). The greatest improvements in risk factors occurred in patients with the highest calculated cardiovascular risk. There was a statistically significant increase in the proportion of patients with ideal blood pressure (systolic,
Resumo:
The aim of this thesis is to critically examine drug prevention as a field of problematizations – how drug prevention becomes established as a political technology within this field, how it connects to certain modes of governance, how and under which conditions it constitutes its problematic, the questions it asks, its implications in terms of political participation and representation, the various bodies of knowledge through which it constitutes the reality upon which it acts, and the limits it places on ways of being, questioning, and talking in the world. The main analyses have been conducted in four separate but interrelated articles. Each article addresses a specific dimension of drug prevention in order to get a grasp of how this field is organized. Article 1 examines the shift that has occurred in the Swedish context during the period 1981–2011 in how drugs have been problematized, what knowledge has grounded the specific modes of problematization, and which modes of governance this has enabled. In Article 2, the currently dominant scientific discipline in the field of drug prevention – prevention science – is critically examined in terms of how it constructs the "drug problem" and the underlying assumptions it carries in regard to reality and political governance. Article 3 addresses the issue of communities' democratic participation in drug prevention efforts by analyzing the theoretical foundations of the Communities That Care prevention program. The article seeks to uncover how notions of community empowerment and democratic participation are constructed, and how the "community" is established as a political entity in the program. The fourth and final article critically examines the Swedish Social and Emotional Training (SET) program and the political implications of the relationship the program establishes between the subject and emotions.
The argument is made that, within the field of drug prevention, questions of political values and priorities are decoupled from the political field in a problematic way, posing a significant problem for the possibility of engaging in democratic deliberation. Within this field of problematizations it becomes impossible to mobilize a politics against social injustice, poverty and inequality. At the same time, the scientific grounding of this mode of governing the drug "problem" acts to naturalize a specific – highly political – way of engaging with drugs.
Resumo:
Previous studies into student volunteering have shown how formally organized volunteering activities have social, economic and practical benefits for student volunteers and the recipients of their volunteerism (Egerton, 2002; Vernon & Foster, 2002); moreover, student volunteering provides the means by which undergraduates are able to acquire and hone transferable skills sought by employers after graduation (Eldridge & Wilson, 2003; Norris et al., 2006). Within the UK Higher Education Sector, a popular mechanism for accessing volunteering is through formally organized student mentoring programmes, whereby more 'senior' students volunteer to mentor less experienced undergraduates through a particular phase of their academic careers, including the transition from school or college to university. The value of student mentoring as a pedagogical tool within Higher Education is reflected in the literature (see, for example, Bargh & Schul, 1980; Hartman, 1990; Woodd, 1997). However, from a volunteering perspective, one of the key issues relates to the generally accepted conceptualisation of volunteering as a formally organized activity that is un-coerced and for which there is no payment (Davis Smith, 1992, 1998; Sheard, 1995). Although the majority of the student mentoring programmes discussed in the paper are unpaid and voluntary in nature, in a small number of institutions some of the mentoring programmes offered to students provide a minimum wage for mentors. From an ethical perspective, such payments may cause difficulties when considering potential mentors' motivations and reasons for participating in the programme. Additionally, institutions usually have only one or two paid mentoring programmes running alongside several voluntary ones, sometimes resulting in an over-subscription for places as paid mentors, to the detriment of unpaid programmes.
Furthermore, from an institutional perspective, student mentoring presents a set of particular ethical problems reflecting issues around 'matching' mentors and mentees in terms of gender, race, ethnicity and religion. This is found to be the case in some 'targeted' mentoring programmes, whereby a particular demographic group of students is offered access to mentoring in an attempt to improve their chances of academic success. This paper provides a comparative analysis of the experiences and perceptions of mentors and mentees participating in a wide range of different mentoring programmes. It also analyzes the institutional challenges and benefits associated with managing large-scale student volunteering programmes. In doing so, the paper adds to the third-sector literature by critiquing the distinctive issues surrounding student volunteering and by discussing, in depth, the management of large groups of student volunteers. From a public policy perspective, the economic, educational, vocational and social outcomes of student volunteering make this an important subject meriting investigation. Little is known about the mentoring experiences of student volunteers with regard to the 'added value' of participating in campus-based volunteering activities. Furthermore, in light of the current economic downturn, by drawing attention to the contribution that student volunteering makes in equipping undergraduates with transferable 'employability'-related skills and competencies (Andrews & Higson, 2008), this paper makes an important contribution to current educational and political debates. In addition to providing the opportunity for students to acquire key transferable skills, the findings suggest that mentoring encourages students to volunteer in other areas of university and community life. The paper concludes by arguing that student mentoring provides a valuable learning experience for student volunteer mentors and for the student and pupil mentees with whom they are placed.
Resumo:
The Multicultural Communication Bridge Program, an ongoing project at the Broward Correctional Institution, utilizes creative movement, writing, and drawing as treatment modalities with long-term incarcerated women. This type of programming is new in the prison system; thus, literature and research supporting its outcomes with this population are lacking. A qualitative study was therefore conducted to determine the efficacy of the program. Nine inmates who had been involved in the program for at least one year were interviewed to gather information about their personal experiences resulting from their participation. Common themes noted include an increase in trust, the expression of emotions, an increase in self-esteem, and an improvement in interactions with others. These attributes are believed to help these women achieve a successful community reintegration upon their release from prison.