876 results for project planning and controlling
Abstract:
Participation first appeared in development discourses in the 1970s, as a generic call for the involvement of the poor in development initiatives. Over the last three decades, the initial view of participation as a project method for poverty reduction has evolved into a coherent and articulated theoretical elaboration, in which participation figures among the standard instruments of good governance promotion: participation has acquired the status of a “new orthodoxy”. Nevertheless, the implementation of participatory approaches in development projects has in most cases proved rather disappointing, since the transformative potential of ‘participation in development’ depends on a series of factors in which every project can differ from others: the ultimate aim of the approach promoted, its forms and contents and, last but not least, the socio-political context in which the participatory initiative is embedded. In Egypt, the signing of a project agreement between the Arab Republic of Egypt and the Federal Republic of Germany in 1998 inaugurated a Participatory Urban Management Programme (PUMP), to be implemented in Greater Cairo by the German Technical Cooperation (Deutsche Gesellschaft für Technische Zusammenarbeit, GTZ) with the Ministry of Planning (now Ministry of Local Development) and the Governorates of Giza and Cairo as the main counterparts. Now, ten years after the beginning of the PUMP/PDP and close to its end (December 2010), it is possible to draw some conclusions about the scope, significance and effects of the participatory approach adopted by GTZ and appropriated by the Egyptian counterparts in dealing with informal areas and, more generally, with urban development. Our analysis follows three sets of questions: the first regards the way ‘participation’ has been interpreted and concretised by PUMP and PDP; the second concerns the emancipating potential of the ‘participatory approach’ and its ability to ‘empower’ the ‘marginalised’; the third focuses, on the one hand, on the efficacy of the GTZ strategy in improving service delivery in informal areas (especially in terms of planning and policies) and, on the other, on the potential of the GTZ development intervention to trigger an incremental process of ‘democratisation’ from below.
Abstract:
Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community remains very active in trying to answer them. As a consequence, a huge number of papers continue to appear and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first arises when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general purpose techniques. The second arises when mixed integer programming is used to address a structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special purpose techniques. This thesis tries to give some insight into both of the above-mentioned situations. The first part of the work focuses on general purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature on disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers has drawn attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress; it presents a possible way of generating two-row cuts, arising from lattice-free triangles, from the simplex tableau, together with some preliminary computational results.
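For reference, the general problem handled by such solvers can be written in the standard textbook form below (a generic statement, not a formulation taken from the thesis); dropping the integrality requirements yields the LP relaxation on which branch-and-cut methods rely.

```latex
\begin{align*}
\min_{x}\ & c^{\top}x \\
\text{s.t.}\ & Ax \le b, \\
& x_j \in \mathbb{Z} && \text{for } j \in I, \\
& x_j \in \mathbb{R} && \text{for } j \notin I.
\end{align*}
```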
The second part of the thesis focuses instead on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in the attempt to find a new improved solution), in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general purpose MIP solver; a sketch of this paradigm is given below. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one which has proved extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (in particular, the usage of general purpose cutting planes) can be used to improve on the branch-and-cut methods proposed in the literature.
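As an illustration of the destroy-and-repair paradigm, here is a minimal self-contained sketch on a TSP-like tour, with cheapest insertion standing in for the MIP-based neighborhood exploration used in the thesis (all names and parameter values are illustrative, not the thesis implementation):

```python
import random

def tour_cost(tour, dist):
    # Total length of a closed tour under a distance matrix.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def cheapest_insert(tour, node, dist):
    # Reinsert one node at its cheapest position (placeholder for the
    # repair step, which in the thesis is a MIP solved by a solver).
    best_pos = min(range(len(tour) + 1),
                   key=lambda i: tour_cost(tour[:i] + [node] + tour[i:], dist))
    return tour[:best_pos] + [node] + tour[best_pos:]

def destroy_and_repair(tour, dist, iters=500, frac=0.2):
    best = tour[:]
    for _ in range(iters):
        # Destroy: randomly remove a fraction of the nodes.
        removed = random.sample(best, max(1, int(frac * len(best))))
        cand = [v for v in best if v not in removed]
        # Repair: reinsert the removed nodes.
        for v in removed:
            cand = cheapest_insert(cand, v, dist)
        if tour_cost(cand, dist) < tour_cost(best, dist):
            best = cand
    return best

# Example usage with a random Euclidean instance:
# n = 8; pts = [(random.random(), random.random()) for _ in range(n)]
# dist = [[((ax-bx)**2 + (ay-by)**2) ** 0.5 for bx, by in pts] for ax, ay in pts]
# print(destroy_and_repair(list(range(n)), dist))
```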
Abstract:
Atmospheric aerosol particles directly impact air quality and participate in controlling the climate system. Organic aerosol (OA) generally accounts for a large fraction (10–90%) of the global submicron (PM1) particulate mass. Chemometric methods for source identification are used in many disciplines, but methods relying on the analysis of NMR datasets are rarely used in the atmospheric sciences. This thesis provides an original application of NMR-based chemometric methods to atmospheric OA source apportionment. The method was tested on chemical composition databases obtained from samples collected in different environments in Europe, hence exploring the impact of a great diversity of natural and anthropogenic sources. We focused on sources of water-soluble OA (WSOA), for which NMR analysis provides substantial advantages over alternative methods. Different factor analysis techniques were applied independently to NMR datasets from nine field campaigns of the EUCAARI project and allowed the identification of recurrent source contributions to WSOA in the European background troposphere: 1) marine SOA; 2) aliphatic amines from ground sources (agricultural activities, etc.); 3) biomass burning POA; 4) biogenic SOA from terpene oxidation; 5) “aged” SOA, including humic-like substances (HULIS); 6) other factors possibly including contributions from primary biological aerosol particles and products of cooking activities. Biomass burning POA accounted for more than 50% of WSOC in the winter months. Aged SOA associated with HULIS was predominant (>75%) during spring and summer, suggesting that secondary sources and transboundary transport become more important in those seasons. The comprehensive aerosol measurements carried out, involving several foreign research groups, provided the opportunity to compare source apportionment results obtained by NMR analysis with those provided by the more widespread Aerodyne aerosol mass spectrometer (AMS) techniques, whose OA categorization schemes are becoming a standard for atmospheric chemists. The results emerging from this thesis partly confirm the AMS classification and partly challenge it.
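Factor-analysis-based source apportionment of this kind can be illustrated with non-negative matrix factorization, one common chemometric choice (a generic sketch on synthetic data, not the specific algorithm or dataset used in the thesis):

```python
import numpy as np
from sklearn.decomposition import NMF

# X: samples x NMR-spectral-bins matrix of non-negative intensities
# (synthetic data here; in practice each row is one aerosol sample).
rng = np.random.default_rng(0)
X = rng.random((60, 300))

# Decompose X ~ G @ F: G gives factor contributions per sample,
# F gives factor spectral profiles to be interpreted as sources.
model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)   # (60, 5) source contributions
F = model.components_        # (5, 300) source profiles

# Relative contribution of each factor to each sample's signal.
contrib = G / G.sum(axis=1, keepdims=True)
```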
Abstract:
Open clusters (OCs) are gravitationally bound systems of a few tens to hundreds of stars. In our Galaxy, the Milky Way, we know about 3000 open clusters, of very different ages ranging from a few million years to about 9 Gyr. OCs are mainly located in the Galactic thin disc, with distances from the Galactic centre in the range 4-22 kpc and a scale height above the disc of about 200 pc. Their chemical properties trace those of the environment in which they formed, with metallicities in the range -0.5<[Fe/H]<+0.5 dex. Through photometry and spectroscopy it is relatively easy to study the properties of OCs and estimate their age, distance, and chemistry. For these reasons they are considered primary tracers of the chemical properties and chemical evolution of the Galactic disc. The main subject of this thesis is the comprehensive study of several OCs. The research embraces two different projects: the Bologna Open Cluster Chemical Evolution project (BOCCE) and the Gaia-ESO Survey. The first is a long-term programme aiming to study the chemical evolution of the Milky Way disc by means of a homogeneous sample of OCs. The latter is a large public spectroscopic survey, conducted with the high-resolution spectrograph FLAMES@VLT and targeting about 10^5 stars in different parts of the Galaxy and 10^4 stars in about 100 OCs. The common ground between the two projects is the study of the properties of OCs as tracers of the disc's characteristics. The impressive scientific output of the Gaia-ESO Survey and the unique homogeneous framework of the BOCCE project can offer, especially once combined, a much more accurate description of the properties of OCs. In turn, this will provide fundamental constraints for the interpretation of the properties of the Galactic disc.
Abstract:
Landslide hazard and risk are growing as a consequence of climate change and demographic pressure. Land-use planning represents a powerful tool to manage this socio-economic problem and build sustainable, landslide-resilient communities. Landslide inventory maps are a cornerstone of land-use planning and, consequently, their quality assessment is a burning issue. This work aimed to define the quality parameters of a landslide inventory and to assess its spatial and temporal accuracy with regard to its possible applications to land-use planning. To this end, I proceeded according to a two-step approach. An overall assessment of the accuracy of the geographic positioning of the data was performed on four case study sites located in the Italian Northern Apennines. The quantification of the overall spatial and temporal accuracy, instead, focused on the Dorgola Valley (Province of Reggio Emilia). The assessment of spatial accuracy involved a comparison between remotely sensed and field survey data, as well as an innovative fuzzy-like analysis of a multi-temporal landslide inventory map. Conversely, long- and short-term landslide temporal persistence was appraised over a period of 60 years with the aid of 18 remotely sensed image sets. These results were finally compared with the current Territorial Plan for Provincial Coordination (PTCP) of the Province of Reggio Emilia. The outcome of this work suggests that geomorphologically detected and mapped landslides are a significant approximation of a more complex reality. In order to convey this intrinsic uncertainty to end-users, a new form of cartographic representation is needed; a fuzzy raster landslide map, as sketched below, may be an option. With regard to land-use planning, landslide inventory maps, if appropriately updated, were confirmed to be essential decision-support tools. This research, however, showed that their spatial and temporal uncertainty discourages any direct use as zoning maps, especially when zoning itself is associated with statutory or advisory regulations.
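One possible reading of such a fuzzy raster is that each cell carries a membership value in [0, 1] reflecting how consistently it was mapped as unstable across the multi-temporal inventories (a hypothetical illustration only, not the procedure developed in the thesis):

```python
import numpy as np

# Stack of binary landslide masks from successive inventories
# (synthetic 4 x 5 x 5 example; 1 = cell mapped as landslide).
inventories = np.random.default_rng(1).integers(0, 2, size=(4, 5, 5))

# Fuzzy membership: fraction of inventories in which each cell was
# mapped as landslide, giving values in [0, 1] instead of a crisp
# in/out boundary.
membership = inventories.mean(axis=0)

# A crisp zoning map can still be recovered with an alpha-cut.
crisp = membership >= 0.5
```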
Abstract:
In many areas of industrial manufacturing, such as the automotive industry, digital mock-ups are used so that the development of complex machines can be supported by computer systems as effectively as possible. Motion planning algorithms play an important role here in guaranteeing that these digital prototypes can be assembled without collisions. Over the last decades, sampling-based methods have proved particularly successful in this setting. They generate a large number of random placements for the object to be installed or removed and use a collision detection mechanism to check the validity of each placement. Collision detection therefore plays an essential role in the design of efficient motion planning algorithms. A difficulty for this class of planners are so-called "narrow passages", which occur wherever the freedom of movement of the objects to be planned is strongly restricted. In such regions it can be hard to find a sufficient number of collision-free samples, and more sophisticated techniques may then be needed to achieve good performance.

This work is divided into two parts. In the first part we investigate parallel collision detection algorithms. Since we aim at an application in sampling-based motion planners, we consider a setting in which the same two objects are tested for collision in a large number of different placements. We implement and compare several methods that use bounding volume hierarchies (BVHs) and hierarchical grids as acceleration structures. All the described methods were parallelised on multiple CPU cores. In addition, we compare different CUDA kernels for performing BVH-based collision tests on the GPU. Besides different distributions of the work among the parallel GPU threads, we investigate the effect of different memory access patterns on the performance of the resulting algorithms. We further present a number of approximate collision tests based on the described methods; when a lower accuracy of the tests is tolerable, a further performance improvement can be achieved.

In the second part of the work we describe a parallel, sampling-based motion planner that we designed for highly complex problems with multiple "narrow passages". The method works in two phases. The basic idea is to conceptually allow small errors in the first planning phase in order to increase planning efficiency, and then to repair the resulting path in a second phase. The planner used in phase I is based on so-called Expansive Space Trees. In addition, we equipped the planner with a push-out operation that makes it possible to resolve small collisions and thus increase efficiency in regions with restricted freedom of movement. Optionally, our implementation allows the use of approximate collision tests; this further reduces the accuracy of the first planning phase but also yields a further performance gain.

The motion paths resulting from phase I may then not be completely collision-free. To repair these paths, we designed a novel planning algorithm that plans a new, collision-free motion path locally, restricted to a small neighbourhood around the existing path. We tested the described algorithm on a class of new, difficult metal puzzles, some of which exhibit multiple "narrow passages". To our knowledge, no collection of comparably complex benchmarks is publicly available, and we found no description of comparably complex benchmarks in the motion planning literature.
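The core workload of the first part, testing the same pair of objects in many sampled placements, is embarrassingly parallel and can be sketched as follows (a simplified Python sketch; check_collision is a dummy placeholder for the BVH- or grid-based tests, which in the thesis run on CPU cores and CUDA):

```python
from concurrent.futures import ProcessPoolExecutor
import math
import random

def random_pose():
    # A pose: translation (x, y, z) plus a rotation angle about the z axis.
    return tuple(random.uniform(-1, 1) for _ in range(3)) + (random.uniform(0, 2 * math.pi),)

def check_collision(pose):
    # Placeholder for the collision test between the two fixed objects;
    # here a dummy predicate for illustration.
    x, y, z, _ = pose
    return x * x + y * y + z * z < 0.1

def collision_free_samples(n_samples=10_000, workers=8):
    poses = [random_pose() for _ in range(n_samples)]
    # Each pose test is independent, so the batch parallelises trivially.
    # (On platforms that spawn processes, call this under
    #  `if __name__ == "__main__":`.)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        hits = pool.map(check_collision, poses, chunksize=256)
    return [p for p, hit in zip(poses, hits) if not hit]
```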
Abstract:
Understanding and controlling the mechanism of the diffusion of small molecules, macromolecules and nanoparticles in heterogeneous environments is of paramount fundamental and technological importance. The aim of this thesis is to show how, by studying tracer diffusion in complex systems, one can obtain information about the tracer itself and about the system in which the tracer is diffusing. In the first part of my thesis I will introduce Fluorescence Correlation Spectroscopy (FCS), which is a powerful tool to investigate the diffusion of fluorescent species in various environments. By exploiting the main advantage of FCS, namely its very small probing volume (<1 µm³), I was able to track the kinetics of phase separation in polymer blends at late stages by following molecular tracer diffusion in individual domains of the heterogeneous structure of the blend. The phase separation process at intermediate stages was monitored with laser scanning confocal microscopy (LSCM) in real time, providing images of droplet coalescence and growth. In a further project described in my thesis I will show that, even when the length scale of the heterogeneities becomes smaller than the FCS probing volume, one can still obtain important microscopic information by studying small tracer diffusion. To do so, I will introduce a system of star-shaped polymer solutions and demonstrate that the mobility of small molecular tracers at the microscopic level is hardly affected by the transition of the polymer system to a “glassy” macroscopic state. In the last part of the thesis I will introduce and describe a new stimuli-responsive system which I have developed, combining two levels of nanoporosity. The system is based on poly-N-isopropylacrylamide (PNIPAM) and silica inverse opals (iOpals), and allows controlling the diffusion of tracer molecules.
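For context, FCS extracts diffusion coefficients by fitting the measured intensity autocorrelation. For a single freely diffusing species in a 3D Gaussian focal volume, the standard model (a textbook form, not a fit specific to this thesis) is:

```latex
G(\tau) = \frac{1}{\langle N \rangle}
          \left(1 + \frac{\tau}{\tau_D}\right)^{-1}
          \left(1 + \frac{\tau}{S^{2}\,\tau_D}\right)^{-1/2},
\qquad \tau_D = \frac{w_{xy}^{2}}{4D},
```

where ⟨N⟩ is the mean number of fluorescent molecules in the focal volume, S = w_z / w_xy is the structure parameter of the detection volume, and D is the diffusion coefficient.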
Abstract:
Pollinating insects form a key component of European biodiversity, and provide a vital ecosystem service to crops and wild plants. There is growing evidence of declines in both wild and domesticated pollinators, and parallel declines in plants relying upon them. The STEP project (Status and Trends of European Pollinators, 2010-2015, www.step-project.net) is documenting critical elements in the nature and extent of these declines, examining key functional traits associated with pollination deficits, and developing a Red List for some European pollinator groups. Together these activities are laying the groundwork for future pollinator monitoring programmes. STEP is also assessing the relative importance of potential drivers of pollinator declines, including climate change, habitat loss and fragmentation, agrochemicals, pathogens, alien species, light pollution, and their interactions. We are measuring the ecological and economic impacts of declining pollinator services and floral resources, including effects on wild plant populations, crop production and human nutrition. STEP is reviewing existing and potential mitigation options, and providing novel tests of their effectiveness across Europe. Our work is building upon existing and newly developed datasets and models, complemented by spatially-replicated campaigns of field research to fill gaps in current knowledge. Findings are being integrated into a policy-relevant framework to create evidence-based decision support tools. STEP is establishing communication links to a wide range of stakeholders across Europe and beyond, including policy makers, beekeepers, farmers, academics and the general public. Taken together, the STEP research programme aims to improve our understanding of the nature, causes, consequences and potential mitigation of declines in pollination services at local, national, continental and global scales.
Abstract:
The Medicare Catastrophic Coverage Act (MCCA) would have mandated federal assistance for Medicare beneficiaries who have high annual prescription medication costs. High national expenditures for such drugs have encouraged the development of private and state insurance programs to help with these costs. Ten state pharmaceutical assistance programs (SPAPs), designed to help certain elderly, low-income, or disabled people, exist for those ineligible for Medicaid or unable to purchase coverage privately. Coordination of state and federal benefits was a consideration for established programs, and programs being planned needed to determine the feasibility of integrating federal assistance. The enactment and subsequent repeal of the Act affected both planning and policy implications for these SPAPs. All U.S. states and territories were surveyed before the bill's repeal to collect data on the effects of the MCCA on those with prescription drug programs and those without. The repeal of the federal program places pressure on the non-program states to proceed, perhaps more cautiously, to initiate programs for their own residents, given increasing out-of-pocket and insurance costs and the absence of a federal program.
Abstract:
Metals price risk management is a key issue in metal markets because of the uncertainty of commodity price fluctuations, exchange rate and interest rate changes, and the huge price risk borne by both metals producers and consumers. It is therefore a concern for all participants in metal markets, including producers, consumers, merchants, banks, investment funds, speculators and traders. Managing price risk provides stable income for both producers and consumers, and so increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have existed in some form for centuries, their growth has accelerated rapidly during the last 20 years, and they are now widely used by financial institutions, corporations, professional investors and individuals. This project focuses on the over-the-counter (OTC) market and its products, such as exotic options, particularly Asian options. The first part of the project describes basic derivatives and risk management strategies; it also discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in derivative markets. The second part considers the valuation of commodity derivatives. In this part, the options pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare theoretical option values with observed market values. Predicting future trends in copper prices is essential to managing market price risk successfully; the third part is therefore a discussion of econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part shows how LME copper prices can be explained by a simultaneous equation structural model (two-stage least squares regression) connecting supply and demand variables. The estimated simultaneous model for the copper industry is:

```latex
\begin{cases}
Q_t^{D} = e^{-5.0485}\, P_{t-1}^{-0.1868}\, GDP_t^{1.7151}\, e^{0.0158\, IP_t} \\
Q_t^{S} = e^{-3.0785}\, P_{t-1}^{0.5960}\, T_t^{0.1408}\, P_{OIL,t}^{-0.1559}\, USDI_t^{1.2432}\, LIBOR_{t-6}^{-0.0561} \\
Q_t^{D} = Q_t^{S}
\end{cases}
```

which yields the reduced-form price equation

```latex
P_{t-1}^{CU} = e^{-2.5165}\, GDP_t^{2.1910}\, e^{0.0202\, IP_t}\, T_t^{-0.1799}\, P_{OIL,t}^{0.1991}\, USDI_t^{-1.5881}\, LIBOR_{t-6}^{0.0717},
```

where Q_t^D and Q_t^S are world demand for and supply of copper at time t, respectively. P_{t-1} is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, representing aggregate economic activity; industrial production is also relevant here, so global industrial production growth, denoted IP_t, is included in the model.
T_t is the time variable, a useful proxy for technological change. The price of oil at time t, denoted P_{OIL,t}, serves as a proxy for the cost of energy in producing copper. USDI_t is the U.S. dollar index at time t, an important variable for explaining copper supply and copper prices. Finally, LIBOR_{t-6} is the six-month-lagged one-year London Interbank Offered Rate. Although the model is applicable to other base metals' industries, omitted exogenous variables, such as the price of substitutes or a combined variable related to substitute prices, have not been considered in this study. Based on this econometric model, and using a Monte Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 will exceed a specific option strike price are estimated. The final part evaluates risk management strategies, including options strategies, metal swaps and simple options, in relation to the simulation results. The basic options strategies, such as bull spreads, bear spreads and butterfly spreads, created using both call and put options in 2006 and 2007, are evaluated. Each risk management strategy in 2006 and 2007 is then analysed on the basis of the available data and the price prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
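The kind of Monte Carlo exercise described above, estimating the probability that a monthly average price exceeds a strike, can be sketched as follows (a generic sketch under a geometric Brownian motion assumption with made-up parameter values, not the econometric model or calibration used in the project):

```python
import numpy as np

def prob_avg_price_above_strike(s0, strike, mu, sigma, months=12,
                                n_paths=100_000, seed=0):
    """Probability that the average monthly price over the horizon
    exceeds `strike`, under geometric Brownian motion (illustrative
    dynamics only; the project instead simulates its structural model).
    """
    rng = np.random.default_rng(seed)
    dt = 1.0 / 12.0
    # Simulate log-price increments for each month and path.
    z = rng.standard_normal((n_paths, months))
    log_paths = np.log(s0) + np.cumsum(
        (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    avg_prices = np.exp(log_paths).mean(axis=1)
    return (avg_prices > strike).mean()

# Example: copper at 6000 USD/t, strike 6500, 5% drift, 25% volatility.
print(prob_avg_price_above_strike(6000.0, 6500.0, 0.05, 0.25))
```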
Abstract:
Indoor air pollution from the combustion of solid fuels is the fifth leading contributor to disease burden in low-income countries. This, and the potential to reduce environmental impacts, has resulted in an emphasis on the use of improved stoves. However, many efforts have failed to meet expectations, and effective coverage remains limited. A disconnect exists between technologies, delivery methods, and long-term adoption. The purpose of this research is to develop a framework to increase the long-term success of improved stove projects. The framework integrates sustainability factors into the project life-cycle. It is represented as a matrix and checklist which encourages the consideration of social, economic, and environmental issues in projects. A case study was conducted in which an improved stove project in Honduras was evaluated using the framework. The results indicated areas of strength and weakness in project execution and highlighted potential improvements for future projects. The framework is also useful as a guide during project planning.
Abstract:
In the Dominican Republic, economic growth in the past twenty years has not yielded sufficient improvement in access to drinking water services, especially in rural areas, where 1.5 million people do not have access to an improved water source (WHO, 2006). Worldwide, strategic development planning in the rural water sector has focused on participatory processes and the use of demand filters to ensure that service levels match community commitment to post-project operation and maintenance. However, studies have concluded that an alarmingly high percentage of drinking water systems (20-50%) do not provide service at the design levels and/or fail altogether (up to 90%): BNWP (2009), Annis (2006), and Reents (2003). The World Bank, USAID, NGOs, and private consultants have invested significant resources in an effort to determine which components make up an “enabling environment” for sustainable community management of rural water systems (RWS). Research has identified an array of critical factors, internal and external to the community, that affect the long-term sustainability of water services, and different frameworks have been proposed to better understand the linkages between individual factors and sustainability of service. This research proposes a Sustainability Analysis Tool to evaluate the sustainability of RWS, adapted from previous relevant work in the field to reflect the realities of the Dominican Republic. It can be used as a diagnostic tool by government entities and development organizations to characterize the needs of specific communities and identify weaknesses in existing training regimes or support mechanisms. The framework uses eight indicators in three categories (Organization/Management, Financial Administration, and Technical Service). Nineteen independent variables are measured, resulting in a score of sustainability likely (SL), sustainability possible (SP), or sustainability unlikely (SU) for each of the eight indicators. Thresholds are based upon benchmarks from the Dominican Republic and around the world, primary data collected during the research, and the author's 32 months of field experience. A final sustainability score is calculated using weighting factors for each indicator, derived from Lockwood (2003), as sketched below. The framework was tested on a statistically representative, geographically stratified random sample of 61 water systems built in the Dominican Republic by initiatives of the National Institute of Potable Water (INAPA) and the Peace Corps. The results concluded that 23% of the sampled systems are likely to be sustainable in the long term, 59% are possibly sustainable, and for 18% it is unlikely that the community will be able to overcome any significant challenge. Communities scored as unlikely to be sustainable perform poorly in participation, financial durability, and governance, while the highest scores were for system function and repair service. The Sustainability Analysis Tool results are corroborated by INAPA and Peace Corps reports, evaluations, and database information, as well as field observations and primary data collected during the surveys. Future research will analyze the nature and magnitude of the relationships between key factors and the sustainability score defined by the tool. Factors include gender participation, legal status of water committees, plumber/operator remuneration, demand responsiveness, post-construction support methodologies, and project design criteria.
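The scoring logic described above, indicator ratings rolled up through weighting factors into a final score, can be sketched as follows (the indicator names, equal weights, and thresholds here are hypothetical placeholders, not the tool's actual values or the Lockwood-derived weights):

```python
# Hypothetical indicator ratings: 2 = likely (SL), 1 = possible (SP),
# 0 = unlikely (SU), as produced by thresholding the measured variables.
ratings = {
    "participation": 1,
    "governance": 2,
    "financial_durability": 0,
    "tariff_collection": 1,
    "system_function": 2,
    "repair_service": 2,
    "water_quality": 1,
    "watershed_protection": 1,
}

# Hypothetical equal weights (summing to 1), standing in for the
# weighting factors derived from Lockwood (2003).
weights = {k: 1 / len(ratings) for k in ratings}

# Weighted average rating, normalised to [0, 1] by the maximum rating 2.
score = sum(weights[k] * ratings[k] for k in ratings) / 2
category = ("sustainability unlikely (SU)" if score < 0.4
            else "sustainability possible (SP)" if score < 0.7
            else "sustainability likely (SL)")
print(f"score = {score:.2f} -> {category}")
```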
Abstract:
Undergraduate education has a historical tradition of preparing students to meet the problem-solving challenges they will encounter in work, civic, and personal contexts. This thesis research was conducted to study the role of rhetoric in engineering problem solving and decision making and to pose pedagogical strategies for preparing undergraduate students for workplace problem solving. Exploratory interviews with engineering managers as well as the heuristic analyses of engineering A3 project planning reports suggest that Aristotelian rhetorical principles are critical to the engineer's success: Engineers must ascertain the rhetorical situation surrounding engineering problems; apply and adapt invention heuristics to conduct inquiry; draw from their investigation to find innovative solutions; and influence decision making by navigating workplace decision-making systems and audiences using rhetorically constructed discourse. To prepare undergraduates for workplace problem solving, university educators are challenged to help undergraduates understand the exigence and realize the kairotic potential inherent in rhetorical problem solving. This thesis offers pedagogical strategies that focus on mentoring learning communities in problem-posing experiences that are situated in many disciplinary, work, and civic contexts. Undergraduates build a flexible rhetorical technê for problem solving as they navigate the nuances of relevant problem-solving systems through the lens of rhetorical practice.