767 results for cost utility analysis


Relevance: 40.00%

Abstract:

We provide the first exploration of thallium (Tl) abundances and stable isotope compositions as potential tracers during arc lava genesis. We present a case study of lavas from the Central Island Province (CIP) of the Mariana arc, supplemented by representative sedimentary and altered oceanic crust (AOC) inputs from ODP Leg 129 Hole 801 outboard of the Mariana trench. Given the large Tl concentration contrast between the mantle and subduction inputs, coupled with previously published distinctive Tl isotope signatures of sediment and AOC, the Tl isotope system has great potential to distinguish different inputs to arc lavas. Furthermore, CIP lavas have well-established inter-island variability, providing excellent context for the examination of Tl as a new stable isotope tracer. In contrast to previous work (Nielsen et al., 2006b), we do not observe Tl enrichment or light epsilon 205Tl (where epsilon 205Tl is the deviation in parts per 10,000 of a sample 205Tl/203Tl ratio from the NIST SRM 997 Tl standard) in the Jurassic-aged altered mafic ocean crust subducting outboard of the Marianas (epsilon 205Tl = -4.4 to 0). The lack of a distinctive epsilon 205Tl signature may be related to secular changes in ocean chemistry. Sediments representative of the major lithologies from ODP Leg 129 Hole 801 show 1-2 orders of magnitude of Tl enrichment compared to the CIP lavas, but do not record heavy signatures (epsilon 205Tl = -3.0 to +0.4), as previously found in similar sediment types (epsilon 205Tl > +2.5; Rehkämper et al., 2004). We find a restricted range of epsilon 205Tl = -1.8 to -0.4 in CIP lavas, which overlaps with MORB. One lava from Guguan falls outside this range with epsilon 205Tl = +1.2. Coupled Cs, Tl and Pb systematics of Guguan lavas suggest that this heavy Tl isotope composition may be due to preferential degassing of isotopically light Tl. In general, the low Tl concentrations and limited isotopic range in the CIP lavas are likely due to the unexpectedly narrow range of epsilon 205Tl found in Mariana subduction inputs, coupled with volcaniclastic, rather than pelagic, sediment being the dominant source of Tl. Much work remains to better understand the controls on Tl processing through a subduction zone. For example, Tl could be retained in residual phengite, offering the potential to explore Cs/Tl ratios as a slab thermometer. However, data for Tl partitioning in phengite (and other micas) are required before developing this application further. Establishing a database of Tl concentrations and stable isotopes in subduction zone lavas with different thermal parameters and sedimentary inputs is required for the future use of Tl as a subduction zone tracer.
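For reference, the epsilon notation spelled out in prose in the abstract corresponds to the following definition (rendered here in LaTeX for clarity):

\varepsilon^{205}\mathrm{Tl} = \left[ \frac{\left(^{205}\mathrm{Tl}/^{203}\mathrm{Tl}\right)_{\mathrm{sample}}}{\left(^{205}\mathrm{Tl}/^{203}\mathrm{Tl}\right)_{\mathrm{NIST\ SRM\ 997}}} - 1 \right] \times 10^{4}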

Relevance: 40.00%

Abstract:

Technological and environmental problems related to ore processing are a serious limitation for the sustainable development of mineral resources, particularly for countries and companies rich in ores but with little access to sophisticated technology, e.g. in Latin America. Digital image analysis (DIA) can provide a simple, inexpensive and broadly applicable methodology to assess these problems, but the methodology has to be carefully defined in order to produce reproducible and relevant information.

Relevance: 40.00%

Abstract:

This work proposes a method for determining the values of the individual tolerances of the parts that form an assembled set, starting from tolerance values specified for the final assembly, while optimizing the total manufacturing cost of the individual parts using cost-tolerance functions based on the manufacturing process of each part. To this end, the main prior work on tolerance allocation is reviewed, and a working model is proposed, based on cost optimization through the application of the method of Lagrange multipliers to various cost-tolerance curves.
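As an illustration of the Lagrange-multiplier approach described above, the following Python sketch solves the allocation in closed form for one common reciprocal cost-tolerance model, c_i(t_i) = a_i + b_i / t_i, under a worst-case stack-up constraint. The model choice and the coefficient values are illustrative assumptions, not the specific curves used in this work.

import math

def allocate_tolerances(b, assembly_tolerance):
    # Minimize sum_i (a_i + b_i / t_i) subject to sum_i t_i = assembly_tolerance.
    # Setting the derivative of the Lagrangian to zero gives t_i = sqrt(b_i / lambda);
    # the constraint then fixes lambda, so each t_i is proportional to sqrt(b_i).
    root_b = [math.sqrt(bi) for bi in b]
    scale = assembly_tolerance / sum(root_b)
    return [scale * rb for rb in root_b]

# Illustrative coefficients: a smaller b_i means the part is cheaper to hold tight,
# so it receives a tighter share of the assembly tolerance.
print(allocate_tolerances(b=[0.04, 0.09, 0.01], assembly_tolerance=0.30))

For cost-tolerance models without a closed-form stationary point, the same Lagrangian condition can be solved numerically instead.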

Relevance: 40.00%

Abstract:

The research in this thesis is related to static cost and termination analysis. Cost analysis aims at estimating the amount of resources that a given program consumes during execution, while termination analysis aims at proving that the execution of a given program will eventually terminate. These analyses are strongly related; indeed, cost analysis techniques rely heavily on techniques developed for termination analysis. Precision, scalability, and applicability are essential in static analysis in general. Precision is related to the quality of the inferred results, scalability to the size of programs that can be analyzed, and applicability to the class of programs that can be handled by the analysis (independently of precision and scalability issues). This thesis addresses these aspects in the context of cost and termination analysis, from both practical and theoretical perspectives. For cost analysis, we concentrate on the problem of solving cost relations (a form of recurrence relations) into closed-form upper and lower bounds, which is the heart of most modern cost analyzers and also where most of the precision and applicability limitations can be found. We develop tools, and their underlying theoretical foundations, for solving cost relations that overcome the limitations of existing approaches, and we demonstrate their superiority in both precision and applicability. A unique feature of our techniques is the ability to smoothly handle both lower and upper bounds, by reversing the corresponding notions in the underlying theory. For termination analysis, we study the hardness of the problem of deciding termination for a specific form of simple loops that arise in the context of cost analysis. This study gives a better understanding of the (theoretical) limits of scalability and applicability for both termination and cost analysis.
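As a minimal, illustrative example (not taken from the thesis or from any particular cost analyzer), the Python snippet below states a toy cost relation in the style of a recurrence and checks a closed-form upper bound for it:

def cost_relation(n: int) -> int:
    # A toy cost relation: C(0) = 0, C(n) = C(n-1) + n for n > 0,
    # e.g. the cost of a loop whose body performs n, n-1, ..., 1 steps.
    return 0 if n <= 0 else cost_relation(n - 1) + n

def closed_form_upper_bound(n: int) -> float:
    # The exact closed form n*(n+1)/2 also serves as a tight upper bound.
    return n * (n + 1) / 2

for n in range(20):
    assert cost_relation(n) <= closed_form_upper_bound(n)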

Relevance: 40.00%

Abstract:

A shopping centre is a long-term investment in which greenfield development decisions are often taken based on risk analysis of construction costs, location, competition, the market and an expected DCF. Furthermore, integration between building design, project planning, operational costs and investment analysis is not fully considered by the investor at the decision-making stage. The absence of such information tends to produce negative impacts on the future running costs and annual maintenance of the building, especially on energy demand and other occupancy expenses paid by the tenants to the landlord. From the investor's point of view, this blind spot in strategy development will possibly decrease the profit margin, as changes in occupancy expenses have a direct outcome on the profit margin. In order to reduce some of the larger operating cost components, such as energy use and other utilities, as well as the associated CO2 emissions, quite a few income properties worldwide carry some type of environmental label such as BREEAM or LEED. The drawback identified in this labelling is that the investment required to obtain an ecolabel is usually high, and the investor finds no direct evidence that it increases market value. However, research on certified commercial properties (especially offices) shows better performance in terms of occupancy rate and rental cost (Warren-Myers, 2012). Additionally, Sayce (2013) notes that certification only provides a quick reference point, i.e. the lack of a certificate does not indicate that a building is not sustainable or efficient. Based on the issues described above, this research compares important components of the development stages, such as investment costs and concept/strategy development, as well as current investor income and property value. The subjects of this analysis are a shopping centre designed with passive cooling/bioclimatic strategies evaluated at the decision-making stage, a certified regional shopping centre, and a non-certified standard regional shopping centre. Moreover, the proposal intends to provide decision makers with tools for linking green design features to the investment analysis in order to optimize the decision-making process when looking into cost savings and design quality.
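For orientation, the expected DCF mentioned above is a discounted-cash-flow calculation; the hypothetical Python sketch below shows how occupancy expenses that erode yearly net income feed directly into the net present value of the investment. All figures and the 8% discount rate are invented for illustration and are not taken from the study.

def npv(cash_flows, discount_rate):
    # Net present value of yearly cash flows; index 0 is the initial outlay.
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical greenfield case: construction outlay now, then net operating income
# (rents minus occupancy expenses borne by the landlord) over the following years.
base_case = [-50_000_000, 4_000_000, 4_200_000, 4_400_000, 4_600_000]
higher_energy_costs = [-50_000_000, 3_600_000, 3_800_000, 4_000_000, 4_200_000]

print(npv(base_case, 0.08))
print(npv(higher_energy_costs, 0.08))  # the same building, a lower NPV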

Relevance: 40.00%

Abstract:

Objective: To determine how small differences in the efficacy and cost of two antibiotic regimens to eradicate Helicobacter pylori can affect the overall cost effectiveness of H pylori eradication in duodenal ulcer disease.
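As a hypothetical illustration of the sensitivity being studied (the figures below are invented and not taken from the paper), a simple expected cost-per-eradication calculation shows how small shifts in regimen price and efficacy move the ratio:

def cost_per_eradication(regimen_cost, eradication_rate):
    # Expected drug cost per successful eradication, ignoring retreatment
    # and the downstream ulcer-related costs a full model would include.
    return regimen_cost / eradication_rate

print(cost_per_eradication(40.0, 0.90))  # ~44.4 per eradication
print(cost_per_eradication(55.0, 0.92))  # ~59.8 per eradication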

Relevance: 40.00%

Abstract:

Objective: To critically review the statistical methods used for health economic evaluations in randomised controlled trials where an estimate of cost is available for each patient in the study.
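One of the methods such reviews typically cover is the non-parametric bootstrap for the difference in mean cost between trial arms, since per-patient cost data are usually heavily right-skewed. The Python sketch below is a generic illustration under that assumption, not the specific analysis examined in the paper.

import numpy as np

def bootstrap_ci_mean_cost_diff(costs_a, costs_b, n_boot=10_000, alpha=0.05, seed=0):
    # Percentile bootstrap confidence interval for mean(costs_a) - mean(costs_b).
    rng = np.random.default_rng(seed)
    costs_a, costs_b = np.asarray(costs_a, float), np.asarray(costs_b, float)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        a = rng.choice(costs_a, size=costs_a.size, replace=True)
        b = rng.choice(costs_b, size=costs_b.size, replace=True)
        diffs[i] = a.mean() - b.mean()
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])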

Relevance: 40.00%

Abstract:

The Adult Attachment Projective Picture System (AAP) is the first performance-based measure of adult attachment to be developed. The purpose of the measure is to provide a clinical understanding of an adult client's attachment status and associated coping mechanisms. The AAP is a relatively new measure that has yet to be examined from a utility perspective. In the current study, seven psychologists completed a structured survey in order to identify their perspectives on the AAP and its utility as a clinical instrument. A phenomenological qualitative analysis of the data was conducted to derive themes about the AAP and its clinical utility. Analyses aimed to answer the following: What clinical considerations do clinicians focus on when deciding to use this measure? What are common factors among clinicians who use the measure as well as those who do not? What aspects of the measure are user-friendly and what aspects are difficult? General themes that emerged include (a) the clinical information provided by the AAP is viewed by those who use it as unique and beneficial; (b) time commitment and cost for the clinician are common considerations when clinicians are deciding whether or not to use the AAP or when pursuing training; (c) the AAP provides an increased understanding of one's relational capacities and defenses; and (d) the coding system and transcription process are difficult aspects of the AAP and influence how and/or when it is used. In addition to these themes, multiple respondents discussed potential changes to the AAP that would increase their future use of the instrument. Finally, the implications of these results are discussed.

Relevance: 40.00%

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for volumetric reconstruction of tomography data, robotics to reconstruct surfaces or scenes using range sensor information, industrial systems for quality control of manufactured objects, or even biology to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature whose goal is to improve performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbor search. Despite decreasing its complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with a lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm in the literature. In that analysis, the behavior of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method in order to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation is expected to affect the overall performance of the method significantly and positively. As a result, a performance improvement has been achieved by applying those reduced-cost metrics, whose quality in terms of convergence and error has been experimentally analyzed and validated as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
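For orientation, a single ICP iteration can be sketched as below in Python (using NumPy and SciPy). The point of the sketch is to show where the point-to-point distance metric enters, namely only in the closest-neighbor search, which is what makes swapping in cheaper metrics attractive. This is a generic textbook formulation, not the specific variants evaluated in the thesis.

import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target, p=2):
    # One ICP iteration: match each source point to its closest target point
    # under a Minkowski metric (p=1 Manhattan, p=2 Euclidean, np.inf Chebyshev),
    # then compute the best-fit rigid transform via SVD (Kabsch algorithm).
    _, idx = cKDTree(target).query(source, p=p)
    matched = target[idx]
    mu_s, mu_m = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # avoid an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return source @ R.T + t, R, t   # updated source points, rotation, translation

Only the query call depends on the metric; the transform estimation step is unchanged, which is why the choice of a reduced-cost metric mainly affects the matching phase analyzed in this work.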