768 results for reliability-cost evaluation
Abstract:
Background
Increasing physical activity in the workplace can provide employee physical and mental health benefits, and employer economic benefits through reduced absenteeism and increased productivity. The workplace is an opportune setting to encourage habitual activity. However, there is limited evidence on effective behaviour change interventions that lead to maintained physical activity. This study aims to address this gap and help build the necessary evidence base for effective, and cost-effective, workplace interventions.
Methods/design
This cluster randomised controlled trial will recruit 776 office-based employees from public sector organisations in Belfast and Lisburn city centres, Northern Ireland. Participants will be randomly allocated by cluster to either the Intervention Group or Control Group (waiting list control). The 6-month intervention consists of rewards (retail vouchers, based on similar principles to high street loyalty cards), feedback and other evidence-based behaviour change techniques. Sensors situated in the vicinity of participating workplaces will promote and monitor minutes of physical activity undertaken by participants. Both groups will complete all outcome measures. The primary outcome is steps per day recorded using a pedometer (Yamax Digiwalker CW-701) for 7 consecutive days at baseline, 6, 12 and 18 months. Secondary outcomes include health, mental wellbeing, quality of life, work absenteeism and presenteeism, and use of healthcare resources. Process measures will assess intervention “dose”, website usage, and intervention fidelity. An economic evaluation will be conducted from the National Health Service, employer and retailer perspectives using both a cost-utility and cost-effectiveness framework. The inclusion of a discrete choice experiment will further generate values for a cost-benefit analysis. Participant focus groups will explore for whom the intervention worked and why, and interviews with retailers will elucidate their views on the sustainability of a public health focused loyalty card scheme.
Discussion
The study is designed to maximise the potential for roll-out in similar settings, by engaging the public sector and business community in designing and delivering the intervention. We have developed a sustainable business model using a ‘points’ based loyalty platform, whereby local businesses ‘sponsor’ the incentive (retail vouchers) in return for increased footfall to their business.
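A hedged illustration of the cluster design described in the Methods: under cluster randomisation, a sample-size target is inflated by the design effect DE = 1 + (m - 1) * ICC. The cluster size, intracluster correlation and individually randomised sample size below are assumed for illustration only; the protocol above does not report them.

```python
# Illustrative only: how a cluster design inflates a sample-size target.
# The cluster size (m), intracluster correlation (icc) and the individually
# randomised sample size are assumed values, not figures from the protocol.

def design_effect(m, icc):
    """Design effect for cluster randomisation: DE = 1 + (m - 1) * ICC."""
    return 1.0 + (m - 1) * icc

n_individual = 600   # hypothetical sample size under individual randomisation
m = 20               # assumed average cluster (workplace) size
icc = 0.02           # assumed intracluster correlation for steps/day

de = design_effect(m, icc)
print(f"Design effect: {de:.2f}")
print(f"Inflated sample size: {n_individual * de:.0f} participants")
```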
Abstract:
Background: Sepsis can lead to multiple organ failure and death. Timely and appropriate treatment can reduce in-hospital mortality and morbidity. Objectives: To determine the clinical effectiveness and cost-effectiveness of three tests [LightCycler SeptiFast Test MGRADE® (Roche Diagnostics, Risch-Rotkreuz, Switzerland); SepsiTest™ (Molzym Molecular Diagnostics, Bremen, Germany); and the IRIDICA BAC BSI assay (Abbott Diagnostics, Lake Forest, IL, USA)] for the rapid identification of bloodstream bacteria and fungi in patients with suspected sepsis compared with standard practice (blood culture with or without matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry). Data sources: Thirteen electronic databases (including MEDLINE, EMBASE and The Cochrane Library) were searched from January 2006 to May 2015 and supplemented by hand-searching relevant articles. Review methods: A systematic review and meta-analysis of effectiveness studies were conducted. A review of published economic analyses was undertaken and a de novo health economic model was constructed. A decision tree was used to estimate the costs and quality-adjusted life-years (QALYs) associated with each test; all other parameters were estimated from published sources. The model was populated with evidence from the systematic review or individual studies, if this was considered more appropriate (base case 1). In a secondary analysis, estimates (based on experience and opinion) from seven clinicians regarding the benefits of earlier test results were sought (base case 2). An NHS and Personal Social Services perspective was taken, and costs and benefits were discounted at 3.5% per annum. Scenario analyses were used to assess uncertainty. Results: For the review of diagnostic test accuracy, 62 studies of varying methodological quality were included. A meta-analysis of 54 studies comparing SeptiFast with blood culture found that SeptiFast had an estimated summary specificity of 0.86 [95% credible interval (CrI) 0.84 to 0.89] and sensitivity of 0.65 (95% CrI 0.60 to 0.71). Four studies comparing SepsiTest with blood culture found that SepsiTest had an estimated summary specificity of 0.86 (95% CrI 0.78 to 0.92) and sensitivity of 0.48 (95% CrI 0.21 to 0.74), and four studies comparing IRIDICA with blood culture found that IRIDICA had an estimated summary specificity of 0.84 (95% CrI 0.71 to 0.92) and sensitivity of 0.81 (95% CrI 0.69 to 0.90). Owing to the deficiencies in study quality for all interventions, diagnostic accuracy data should be treated with caution. No randomised clinical trial evidence was identified that indicated that any of the tests significantly improved key patient outcomes, such as mortality or duration in an intensive care unit or hospital. Base case 1 estimated that none of the three tests provided a benefit to patients compared with standard practice and thus all tests were dominated. In contrast, in base case 2 it was estimated that all cost per QALY gained values were below £20,000; the IRIDICA BAC BSI assay had the highest estimated incremental net benefit, but results from base case 2 should be treated with caution as these are not evidence based. Limitations: Robust data to accurately assess the clinical effectiveness and cost-effectiveness of the interventions are currently unavailable. Conclusions: The clinical effectiveness and cost-effectiveness of the interventions cannot be reliably determined with the current evidence base.
Appropriate studies, which allow information from the tests to be implemented in clinical practice, are required.
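A minimal sketch of the cost-utility arithmetic referred to above: incremental cost per QALY gained (ICER) and incremental net monetary benefit at a willingness-to-pay threshold. Only the £20,000-per-QALY threshold comes from the abstract; the per-patient costs and QALYs below are invented placeholders, not model outputs.

```python
# Minimal sketch of the cost-utility arithmetic: incremental cost per QALY
# gained (ICER) and incremental net monetary benefit (INB) at a willingness-
# to-pay threshold. All costs and QALYs are invented placeholders; only the
# £20,000-per-QALY threshold is taken from the abstract.

def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost-effectiveness ratio (cost per QALY gained)."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

def incremental_net_benefit(cost_new, cost_std, qaly_new, qaly_std, wtp=20_000):
    """INB = WTP * incremental QALYs - incremental cost (positive favours the test)."""
    return wtp * (qaly_new - qaly_std) - (cost_new - cost_std)

# Hypothetical per-patient estimates: rapid test pathway vs. blood culture alone.
cost_test, qaly_test = 12_500.0, 5.20
cost_std, qaly_std = 12_200.0, 5.18

print(f"ICER: £{icer(cost_test, cost_std, qaly_test, qaly_std):,.0f} per QALY gained")
print(f"INB at £20,000/QALY: £{incremental_net_benefit(cost_test, cost_std, qaly_test, qaly_std):,.0f}")
```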
Abstract:
The goal of this research was to evaluate the needs of the intercity common carrier bus service in Iowa. Within the framework of the overall goal, the objectives were to: (1) Examine the detailed operating cost and revenue data of the intercity carriers in Iowa; (2) Develop a model or models to estimate demand in cities and corridors served by the bus industry; (3) Develop a cost function model for estimating a carrier's operating costs; (4) Establish the criteria to be used in assessing the need for changes in bus service; (5) Outline the procedures for estimating route operating costs and revenues and develop a matrix of community and social factors to be considered in evaluation; and (6) Present a case study to demonstrate the methodology. The results of the research are presented in the following chapters: (1) Introduction; (2) Intercity Bus Research and Development; (3) Operating Characteristics of Intercity Carriers in Iowa; (4) Commuter Carriers; (5) Passenger and Revenue Forecasting Models; (6) Operating Cost Relationships; (7) Social and General Welfare Aspects of Intercity Bus Service; (8) Case Study Analysis; and (9) Additional Service Considerations and Recommendations.
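A minimal sketch, under assumed coefficients, of the kind of route operating-cost function that objective (3) describes; none of the unit costs below are estimated from the Iowa carrier data.

```python
# Sketch of a linear route operating-cost function of the kind objective (3)
# calls for. The unit costs are invented placeholders, not coefficients
# estimated from the Iowa carrier data.

def route_operating_cost(bus_miles, bus_hours, peak_buses,
                         cost_per_mile=1.10, cost_per_hour=22.0,
                         cost_per_peak_bus=55.0):
    """Estimate route operating cost from mileage, hours and vehicles assigned."""
    return (cost_per_mile * bus_miles
            + cost_per_hour * bus_hours
            + cost_per_peak_bus * peak_buses)

# Example: a 120-mile round trip taking 3 hours with one bus assigned.
print(f"Estimated route cost: ${route_operating_cost(120, 3, 1):,.2f}")
```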
Abstract:
The objective of the evaluation of the weather forecasting services used by the Iowa Department of Transportation is to ascertain the accuracy of the forecasts given to maintenance personnel, and to determine whether the forecasts are useful in the decision-making process and have the potential to improve the level of service. The Iowa Department of Transportation has estimated the average cost of fighting a winter storm to be about $60,000 to $70,000 per hour. This final report describes the collection of weather data and information associated with the weather forecasting services provided to the Iowa Department of Transportation and its maintenance activities, and assesses their impact on winter maintenance decision-making.
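A brief sketch of two standard forecast-verification measures that such an accuracy evaluation might use, computed from a 2x2 contingency table of forecast versus observed storm events; the event counts are invented for illustration.

```python
# Sketch of two simple forecast-verification measures such an accuracy
# evaluation might use, from a 2x2 table of forecast vs. observed storm
# events. The counts below are invented for illustration.

hits = 42               # storm forecast and storm occurred
misses = 8              # storm occurred but was not forecast
false_alarms = 15       # storm forecast but did not occur

hit_rate = hits / (hits + misses)                         # fraction of storms that were forecast
false_alarm_ratio = false_alarms / (hits + false_alarms)  # fraction of forecasts that did not verify

print(f"Hit rate: {hit_rate:.2f}")
print(f"False-alarm ratio: {false_alarm_ratio:.2f}")
```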
Abstract:
Surface flow types (SFT) are advocated as ecologically relevant hydraulic units, often mapped visually from the bankside to characterise rapidly the physical habitat of rivers. SFT mapping is simple, non-invasive and cost-efficient. However, it is also qualitative, subjective and plagued by difficulties in recording accurately the spatial extent of SFT units. Quantitative validation of the underlying physical habitat parameters is often lacking, and does not consistently differentiate between SFTs. Here, we investigate explicitly the accuracy, reliability and statistical separability of traditionally mapped SFTs as indicators of physical habitat, using independent, hydraulic and topographic data collected during three surveys of a c. 50m reach of the River Arrow, Warwickshire, England. We also explore the potential of a novel remote sensing approach, comprising a small unmanned aerial system (sUAS) and Structure-from-Motion photogrammetry (SfM), as an alternative method of physical habitat characterisation. Our key findings indicate that SFT mapping accuracy is highly variable, with overall mapping accuracy not exceeding 74%. Results from analysis of similarity (ANOSIM) tests found that strong differences did not exist between all SFT pairs. This leads us to question the suitability of SFTs for characterising physical habitat for river science and management applications. In contrast, the sUAS-SfM approach provided high resolution, spatially continuous, spatially explicit, quantitative measurements of water depth and point cloud roughness at the microscale (spatial scales ≤1m). Such data are acquired rapidly, inexpensively, and provide new opportunities for examining the heterogeneity of physical habitat over a range of spatial and temporal scales. Whilst continued refinement of the sUAS-SfM approach is required, we propose that this method offers an opportunity to move away from broad, mesoscale classifications of physical habitat (spatial scales 10-100m), and towards continuous, quantitative measurements of the continuum of hydraulic and geomorphic conditions which actually exists at the microscale.
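For context, the "overall mapping accuracy" quoted above is the proportion of validation observations whose mapped surface flow type matches the independently observed one, i.e. the trace of a confusion matrix divided by its total. A small sketch with an invented three-class matrix:

```python
import numpy as np

# Sketch of how an overall mapping accuracy (as quoted above) is computed from
# a confusion matrix of mapped vs. independently observed surface flow types.
# The three-class matrix below is invented for illustration.

# Rows: mapped SFT; columns: observed SFT (e.g. rippled, unbroken wave, smooth).
confusion = np.array([
    [30,  6,  4],
    [ 5, 25, 10],
    [ 3,  7, 20],
])

overall_accuracy = np.trace(confusion) / confusion.sum()
print(f"Overall mapping accuracy: {overall_accuracy:.1%}")
```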
Abstract:
Background: Primary total knee replacement is a common operation that is performed to provide pain relief and restore functional ability. Inpatient physiotherapy is routinely provided after surgery to enhance recovery prior to hospital discharge. However, international variation exists in the provision of outpatient physiotherapy after hospital discharge. While evidence indicates that outpatient physiotherapy can improve short-term function, the longer term benefits are unknown. The aim of this randomised controlled trial is to evaluate the long-term clinical effectiveness and cost-effectiveness of a 6-week group-based outpatient physiotherapy intervention following knee replacement. Methods/design: Two hundred and fifty-six patients waiting for knee replacement because of osteoarthritis will be recruited from two orthopaedic centres. Participants randomised to the usual-care group (n = 128) will be given a booklet about exercise and referred for physiotherapy if deemed appropriate by the clinical care team. The intervention group (n = 128) will receive the same usual care and additionally be invited to attend a group-based outpatient physiotherapy class starting 6 weeks after surgery. The 1-hour class will be run on a weekly basis over 6 weeks and will involve task-orientated and individualised exercises. The primary outcome will be the Lower Extremity Functional Scale at 12 months post-operative. Secondary outcomes include: quality of life, knee pain and function, depression, anxiety and satisfaction. Data collection will be by questionnaire prior to surgery and 3, 6 and 12 months after surgery and will include a resource-use questionnaire to enable a trial-based economic evaluation. Trial participation and satisfaction with the classes will be evaluated through structured telephone interviews. The primary statistical and economic analyses will be conducted on an intention-to-treat basis with and without imputation of missing data. The primary economic result will estimate the incremental cost per quality-adjusted life year gained from this intervention from a National Health Service (NHS) and personal social services perspective. Discussion: This research aims to benefit patients and the NHS by providing evidence on the long-term effectiveness and cost-effectiveness of outpatient physiotherapy after knee replacement. If the intervention is found to be effective and cost-effective, implementation into clinical practice could lead to improvement in patients’ outcomes and improved health care resource efficiency.
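A minimal sketch of the trial-based QALY calculation that underlies a cost-per-QALY result of the kind described above: utility scores collected at baseline and at 3, 6 and 12 months are combined by the area-under-the-curve (trapezium) method. The utility values below are assumed, not trial data.

```python
import numpy as np

# Sketch of a trial-based QALY calculation: utility scores at baseline, 3, 6
# and 12 months combined by the area-under-the-curve (trapezium) method.
# The utility values below are assumed, not trial data.

years = np.array([0, 3, 6, 12]) / 12.0
utilities = np.array([0.55, 0.68, 0.74, 0.78])   # e.g. assumed EQ-5D index scores

# Trapezium rule: average utility over each interval times its length in years.
qalys = np.sum((utilities[1:] + utilities[:-1]) / 2.0 * np.diff(years))
print(f"QALYs accrued over 12 months: {qalys:.3f}")
```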
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-07
Abstract:
This work presents the description and investigation of the evaluation of vetiver (Chrysopogon zizanioides) and elephant grass (Pennisetum purpureum) in the design of constructed wetlands for the treatment of domestic wastewater, vegetation being one of the main components of these non-conventional treatment systems. Many "natural systems" are being considered for wastewater treatment and water pollution control because of their high environmental reliability and low construction and maintenance costs; constructed wetlands are one such case. Interest in natural systems is based on the conservation of the resources associated with these systems, as opposed to conventional wastewater treatment processes, which are intensive in their use of energy and chemicals. Constructed wetlands are an attractive treatment alternative owing to their high pollutant removal efficiency, low installation and maintenance cost, and high environmental reliability. A constructed wetland generally consists of a support medium, usually sand or gravel, together with vegetation and microorganisms or biofilm, which carry out the different biochemical processes that remove pollutants from the influent. The general objective of this work was to evaluate the removal efficiency of organic matter, solids, nitrogen and total phosphorus for two plant species, vetiver (Chrysopogon zizanioides) and elephant grass (Pennisetum purpureum), in the design of constructed wetlands for the treatment of domestic wastewater. The constructed wetlands, or pilot systems, are located at the Universidad de Medellín and receive a synthetic water preparation that resembles the characteristics of domestic wastewater. This work evaluates the percentage removal of the organic load of wastewater in a constructed-wetland treatment system with two plant species. The system was designed with four modules installed side by side. The first contains no plant species, only the substrate medium, and constitutes the blank (-); the second was planted with vetiver (Chrysopogon zizanioides); the third pilot system with elephant grass (Pennisetum purpureum); and the fourth with Japanese papyrus (Cyperus alternifolius), which constitutes the positive control (+). The experimental modules were cleaned, cut and adapted according to the initial planting arrangement and the space required for their layout. Each pilot system was filled with a support medium of gravel (5 to 10 cm) and sand (15 to 20 cm), the substrate being evaluated and characterised by its nominal diameter; the species were then planted in each system over a 3x3 area, and each wetland was conditioned for two weeks under Hoagland and Arnon solution and a moisture regime. The following parameters were analysed in the synthetic water: pH, total solids, total suspended solids, total dissolved solids, chemical oxygen demand (COD), biochemical oxygen demand (BOD5), total Kjeldahl nitrogen (TKN) and total phosphorus (TP). Plant growth was also determined from the increase in biomass and root porosity, and TKN and TP were likewise determined in the plants.
The results showed that the system is an option with low operating and maintenance costs for removing organic load and nutrients from domestic wastewater. In particular, plants that grow under aquic and ustic moisture regimes tend to establish and adapt better in the pilot constructed wetlands; this is the case of elephant grass (Pennisetum purpureum), which presented the highest removal rates of pollutants and nutrients from the influent, followed by Japanese papyrus (Cyperus alternifolius) and vetiver (Chrysopogon zizanioides). The pollutants with the highest removal were, in order, solids, followed by biochemical oxygen demand (BOD5), chemical oxygen demand (COD), total Kjeldahl nitrogen (TKN) and total phosphorus (TP). The last two show low removal rates, owing to the nature of the pollutant itself, to the organisms that carry out removal and uptake, and to the chosen retention time, which influences the pollutant removal rate; removal was lowest for phosphorus, but remained within the range expected for these non-conventional treatment systems.
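A minimal sketch of the removal-efficiency calculation behind the rates discussed above, efficiency = 100 * (Cin - Cout) / Cin; the influent and effluent concentrations are invented, not measurements from the pilot systems.

```python
# Sketch of the removal-efficiency calculation behind the rates discussed
# above: efficiency = 100 * (influent - effluent) / influent. The
# concentrations below are invented, not measurements from the pilot systems.

def removal_efficiency(c_in, c_out):
    """Percentage removal of a pollutant between influent and effluent."""
    return 100.0 * (c_in - c_out) / c_in

# Hypothetical influent/effluent concentrations (mg/L) for one module.
samples = {"BOD5": (220.0, 35.0), "COD": (400.0, 90.0), "TP": (8.0, 5.5)}
for pollutant, (c_in, c_out) in samples.items():
    print(f"{pollutant}: {removal_efficiency(c_in, c_out):.1f}% removal")
```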
Abstract:
Cognitive deficits are present in patients with cancer. Cognitive tests such as the Montreal Cognitive Assessment have proved to be poorly specific, unable to detect mild deficits, and not linear. To overcome these limitations we developed a simple, brief cognitive questionnaire adapted to the cognitive dimensions affected in patients with cancer, the FaCE ("The Fast Cognitive Evaluation"), using Rasch modelling (RM). RM is a probabilistic mathematical method that establishes the conditions under which a tool can be considered a measurement scale, and it is sample-independent. If the results fit the model, the measurement scale is linear with equal intervals. Responses depend on the subjects' ability and the items' difficulty. The item map makes it possible to select the items best suited to assessing each cognitive aspect and to reduce their number to a minimum. The unidimensionality analysis assesses whether the tool measures a dimension other than the intended one. The results of analyses conducted on 165 patients show that the FaCE distinguishes patients' abilities with excellent reliability and sufficiently distinct levels (person-reliability index = 0.86; person-separation index = 2.51). The population size and the number of items are sufficient for the items to have a reliable and precise hierarchy (item reliability = 0.99; item-separation index = 8.75). The item map shows a good spread of items and a linear score with no ceiling effect. Finally, unidimensionality is respected and the mean completion time is about 6 minutes. By definition, RM ensures the linearity and continuity of the measurement scale. We succeeded in developing a brief, simple, quick questionnaire adapted to the cognitive deficits of patients with cancer. The FaCE could also serve as a reference measure for future research in this field.
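For reference, the dichotomous Rasch model that underlies this kind of analysis expresses the probability of a correct response as a logistic function of the difference between person ability and item difficulty, both on the same logit scale. A small sketch with assumed values (not FaCE parameters):

```python
import math

# Sketch of the dichotomous Rasch model: P(correct) = exp(theta - b) /
# (1 + exp(theta - b)), where theta is person ability and b is item
# difficulty, both in logits. The values below are assumed, not FaCE estimates.

def rasch_probability(theta, b):
    """Probability of a correct response given ability theta and difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

for theta in (-1.0, 0.0, 1.5):            # assumed person abilities (logits)
    p = rasch_probability(theta, b=0.5)   # assumed item difficulty (logits)
    print(f"ability {theta:+.1f} logits -> P(correct) = {p:.2f}")
```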
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, which provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
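A toy sketch of the progressive-sampling idea behind NOW!: the same aggregate is evaluated over progressively larger samples so that early, approximate answers arrive at a fraction of the full-scan cost. This illustrates the concept only; it is not the system's actual API or progress semantics.

```python
import random

# Toy illustration of progressive sampling: the same aggregate is evaluated
# over progressively larger samples, so approximate answers arrive early at a
# fraction of the full-scan cost. Not the NOW! system's actual API.

random.seed(7)
data = [random.gauss(100.0, 15.0) for _ in range(100_000)]  # stand-in for a large table

def progressive_mean(rows, fractions=(0.001, 0.01, 0.1, 1.0)):
    """Yield (sample fraction, estimated mean) for increasing sample sizes."""
    for f in fractions:
        n = max(1, int(len(rows) * f))
        sample = random.sample(rows, n)
        yield f, sum(sample) / n

for fraction, estimate in progressive_mean(data):
    print(f"sample fraction {fraction:>6.1%}: estimated mean = {estimate:.2f}")
```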
Abstract:
Groundnut cake (GNC) meal is an important source of dietary protein for domestic animals with a cost advantage over the conventional animal protein sources used in aquaculture feed production. It would be useful to evaluate the effects of GNC processing methods on the density and nutritional values of processed GNC meals. The use of processed GNC meals in the diets of Clarias gariepinus fingerlings was evaluated. Seven iso-proteic and iso-caloric diets were formulated, replacing fish meal with roasted and boiled GNC meals, each at three inclusion levels of 30%, 35%, and 40%. Diet I is 100% fishmeal, Diet II is 30% roasted GNC meal, Diet III is 35% roasted GNC meal, Diet IV is 40% roasted GNC meal, Diet V is 30% boiled GNC meal, Diet VI is 35% boiled GNC meal and Diet VII is 40% boiled GNC meal. Results showed that the crude protein content of GNC meals was 40.5% and 40.8% in boiled and roasted GNC meals respectively; the lower protein content for processed GNC meals might be due to heat denaturation of the seed protein, with boiled GNC meal being more adversely affected. The mean weight gain of fingerlings fed roasted GNC meals ranged from 5.29 to 5.64, while for boiled GNC meals it ranged from 4.60 to 5.22. Generally, fish performed better when fed diets containing roasted GNC meals than boiled GNC meals, and compared favorably with fish fed the fish meal-based diet. Body mass increase, total feed intake, protein efficiency ratio and specific growth rate of C. gariepinus fingerlings in all diets showed no significant differences, suggesting that processed GNC meals could partially replace fish meal in diets for C. gariepinus fingerlings without adverse consequences. This study showed that processed GNC meals could partially replace fish meal up to 30% without significantly influencing fingerling growth and health. It is recommended that the use of fish meal as the main basal ingredient for fingerlings could be discontinued, since GNC meal was a cheaper alternative and could replace fish meal up to 35% without any significant adverse effects on fingerling performance. KEYWORDS: Clarias gariepinus, Fingerlings, Groundnut cake meal, Nutrient utilization, Performance.
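A minimal sketch of two of the performance indices named above, specific growth rate (SGR) and protein efficiency ratio (PER); the fingerling weights, feed intake and dietary protein fraction below are invented, not data from this trial.

```python
import math

# Sketch of two of the performance indices named above: specific growth rate
# (SGR, % per day) and protein efficiency ratio (PER, gain per unit protein
# fed). The weights, feed intake and protein fraction are invented values.

def specific_growth_rate(w_initial, w_final, days):
    """SGR (% per day) = 100 * (ln(Wf) - ln(Wi)) / days."""
    return 100.0 * (math.log(w_final) - math.log(w_initial)) / days

def protein_efficiency_ratio(weight_gain, feed_intake, protein_fraction):
    """PER = wet weight gain / protein intake."""
    return weight_gain / (feed_intake * protein_fraction)

# Hypothetical fingerling figures for a 56-day feeding trial.
print(f"SGR: {specific_growth_rate(2.5, 7.9, 56):.2f} %/day")
print(f"PER: {protein_efficiency_ratio(5.4, 9.0, 0.40):.2f}")
```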
Abstract:
The recently developed network-wide real-time signal control strategy TUC has been implemented in three traffic networks with quite different traffic and control infrastructure characteristics: Chania, Greece (23 junctions); Southampton, UK (53 junctions); and Munich, Germany (25 junctions), where it has been compared to the respective resident real-time signal control strategies TASS, SCOOT and BALANCE. After a short outline of TUC, the paper describes the three application networks; the application, demonstration and evaluation conditions; as well as the comparative evaluation results. The main conclusion drawn from this high-effort inter-European undertaking is that TUC is an easy-to-implement, inter-operable, low-cost real-time signal control strategy whose performance, after very limited fine-tuning, proved to be better than, or at least similar to, that achieved by long-standing strategies that were in most cases very well fine-tuned over the years in the specific networks.
Abstract:
The PhD project addresses the potential of using concentrating solar power (CSP) plants as a viable alternative energy producing system in Libya. Exergetic, energetic, economic and environmental analyses are carried out for a particular type of CSP plant. Although the study addresses a particular configuration, a 50 MW parabolic trough CSP plant, it is sufficiently general to be applied to other configurations. The novelty of the study, in addition to modeling and analyzing the selected configuration, lies in the use of a state-of-the-art exergetic analysis combined with Life Cycle Assessment (LCA). The modeling and simulation of the plant are carried out in Chapter Three and are organised into two parts, namely the power cycle and the solar field. The computer model developed for the analysis of the plant is based on algebraic equations describing the power cycle and the solar field. The model was solved using the Engineering Equation Solver (EES) software and is designed to define the properties at each state point of the plant and then, sequentially, to determine energy, efficiency and irreversibility for each component. The developed model has the potential to be used in the preliminary design of CSPs and, in particular, for the configuration of the solar field based on existing commercial plants. Moreover, it has the ability to analyze the energetic, economic and environmental feasibility of using CSPs in different regions of the world, which is illustrated for the Libyan region in this study. The overall feasibility scenario is completed through an hourly analysis on an annual basis in Chapter Four. This analysis allows the comparison of different systems and, eventually, a particular selection, and it includes both the economic and energetic components using the “greenius” software. The analysis also examined the impact of project financing and incentives on the cost of energy. The main technological finding of this analysis is higher performance and lower levelized cost of electricity (LCE) for Libya as compared to Southern Europe (Spain). Therefore, Libya has the potential to become attractive for the establishment of CSPs in its territory and, in this way, to facilitate the target of several European initiatives that aim to import electricity generated from renewable sources from North African and Middle Eastern countries. The analysis also presents a brief review of the current cost of energy and the potential for reducing the cost of parabolic trough CSP plants. Exergetic and environmental life cycle assessment analyses are conducted for the selected plant in Chapter Five; the objectives are 1) to assess the environmental impact and cost, in terms of exergy, of the life cycle of the plant; 2) to find out the points of weakness in terms of irreversibility of the process; and 3) to verify whether solar power plants can reduce environmental impact and the cost of electricity generation by comparing them with fossil fuel plants, in particular a Natural Gas Combined Cycle (NGCC) plant and an oil thermal power plant. The analysis also includes a thermoeconomic analysis using the specific exergy costing (SPECO) method to evaluate the cost caused by exergy destruction. The main technological findings are that the most important contribution to the impact lies with the solar field, which reports a value of 79%, and that the materials with the highest impact are steel (47%), molten salt (25%) and synthetic oil (21%).
The “Human Health” damage category presents the highest impact (69%), followed by the “Resource” damage category (24%). In addition, the highest exergy demand is linked to steel (47%), and there is a considerable exergetic demand related to molten salt and synthetic oil, with values of 25% and 19%, respectively. Finally, in the comparison with fossil fuel power plants (NGCC and oil), the CSP plant presents the lowest environmental impact, while the worst environmental performance is reported for the oil power plant, followed by the NGCC plant. The solar field presents the largest cost rate, while the boiler has the highest cost rate among the power cycle components. The thermal storage allows CSP plants to overcome solar irradiation transients, to respond to electricity demand independently of weather conditions, and to extend electricity production beyond the availability of daylight. Numerical analysis of the thermal transient response of a thermocline storage tank is carried out for the charging phase. The system of equations describing the numerical model is solved using time-implicit and space-backward finite differences, encoded within the Matlab environment. The analysis yielded the following findings: the predictions agree well with the experiments for the time evolution of the thermocline region, particularly for regions away from the top inlet. The deviations observed in the region near the inlet are most likely due to the high level of turbulence resulting from the localized mixing there; a simple analytical model taking this increased turbulence level into consideration was developed and leads to some improvement of the predictions; this approach requires practically no additional computational effort and relates the effective thermal diffusivity to the mean effective velocity of the fluid at each particular height of the system. Altogether, the study indicates that the selected parabolic trough CSP plant has the edge over competing technologies for locations where DNI is high and land usage is not an issue, such as the shoreline of Libya.
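A minimal sketch of a simple levelised cost of electricity calculation of the kind used when comparing CSP sites: discounted lifetime costs divided by discounted lifetime generation. The capital cost, O&M cost, annual yield, discount rate and lifetime below are invented placeholders, not values from the thesis model.

```python
# Sketch of a simple levelised cost of electricity (LCE/LCOE) calculation:
# discounted lifetime costs divided by discounted lifetime generation. The
# capital cost, O&M cost, annual yield, discount rate and lifetime are
# invented placeholders, not values from the thesis model.

def lcoe(capital_cost, annual_om_cost, annual_energy_mwh, discount_rate, lifetime_years):
    """Levelised cost of electricity in cost units per MWh."""
    disc_costs = capital_cost
    disc_energy = 0.0
    for year in range(1, lifetime_years + 1):
        factor = (1.0 + discount_rate) ** -year
        disc_costs += annual_om_cost * factor
        disc_energy += annual_energy_mwh * factor
    return disc_costs / disc_energy

# Hypothetical 50 MW parabolic trough plant.
print(f"LCOE: {lcoe(250e6, 6e6, 130_000, 0.07, 25):.0f} $/MWh")
```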