794 results for cost analysis
Abstract:
Given the increasing cost of designing and building new highway pavements, reliability analysis has become vital to ensure that a given pavement performs as expected in the field. Recognizing the importance of failure analysis to safety, reliability, performance, and economy, back analysis has been employed in various engineering applications to evaluate the inherent uncertainties of the design and analysis. The probabilistic back analysis method formulated on Bayes' theorem and solved using the Markov chain Monte Carlo simulation method with a Metropolis-Hastings algorithm has proved to be highly efficient in addressing this issue. It is also quite flexible and is applicable to any type of prior information. In this paper, this method has been used to back-analyze the parameters that influence the pavement life and to consider the uncertainty of the mechanistic-empirical pavement design model. The load-induced pavement structural responses (e.g., stresses, strains, and deflections) used to predict the pavement life are estimated using the response surface methodology model developed from the results of linear elastic analysis. The failure criteria adopted for the analysis were based on the factor of safety (FOS), and the study was carried out for different sample sizes and jumping distributions to estimate the most robust posterior statistics. From the posterior statistics of the case considered, it was observed that after approximately 150 million standard axle load repetitions, the mean values of the pavement properties decrease as expected, with a significant decrease in the values of the elastic moduli of the expected layers. An analysis of the posterior statistics indicated that the parameters that contributed significantly to pavement failure were the moduli of the base and surface layers, which is consistent with the findings from other studies. After the back analysis, the mean value of the base modulus shows a significant decrease of 15.8% and that of the surface layer modulus a decrease of 3.12%. The usefulness of the back analysis methodology is further highlighted by estimating the design parameters for specified values of the factor of safety. The analysis revealed that for the pavement section considered, a reliability of 89% and 94% can be achieved by adopting FOS values of 1.5 and 2, respectively. The methodology proposed can therefore be effectively used to identify the parameters that are critical to pavement failure in the design of pavements for specified levels of reliability. DOI: 10.1061/(ASCE)TE.1943-5436.0000455. (C) 2013 American Society of Civil Engineers.
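The back analysis described above rests on a standard random-walk Metropolis-Hastings sampler. The fragment below is a minimal, generic sketch of such a sampler for a single positive parameter, with an assumed lognormal-style prior and an assumed Gaussian likelihood standing in for the paper's response-surface pavement model; none of the numbers are taken from the study.

```python
# Minimal random-walk Metropolis-Hastings sketch for one generic positive parameter
# (illustrative only; prior, likelihood and all numbers are assumptions, not the paper's model).
import numpy as np

rng = np.random.default_rng(1)

def log_prior(theta):
    # assumed lognormal-style prior on a positive parameter (e.g. a layer modulus)
    if theta <= 0:
        return -np.inf
    return -0.5 * ((np.log(theta) - np.log(3000.0)) / 0.3) ** 2

def log_likelihood(theta, observed=2500.0, sigma=300.0):
    # assumed Gaussian measurement model around a hypothetical back-calculated value
    return -0.5 * ((theta - observed) / sigma) ** 2

def log_posterior(theta):
    return log_prior(theta) + log_likelihood(theta)

def metropolis_hastings(n_samples=20000, step=150.0, theta0=3000.0):
    samples = np.empty(n_samples)
    theta, logp = theta0, log_posterior(theta0)
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal()      # symmetric jumping distribution
        logp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < logp_prop - logp:     # Metropolis-Hastings acceptance test
            theta, logp = prop, logp_prop
        samples[i] = theta
    return samples

posterior = metropolis_hastings()
print("posterior mean:", posterior[5000:].mean())        # discard burn-in samples
```

The `step` argument plays the role of the jumping distribution's scale, which, together with the sample size, is the kind of quantity tuned in the study to obtain robust posterior statistics.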
Abstract:
The optimal tradeoff between average service cost rate and average delay is addressed for an M/M/1 queueing model with queue-length-dependent service rates chosen from a finite set. We provide an asymptotic characterization of the minimum average delay when the average service cost rate is a small positive quantity V more than the minimum average service cost rate required for stability. We show that, depending on the value of the arrival rate, the assumed service cost rate function, and the possible values of the service rates, the minimum average delay either (a) increases only to a finite value, (b) increases without bound as log(1/V), or (c) increases without bound as 1/V, as V ↓ 0. We apply the analysis to a flow-level resource allocation model for a wireless downlink. We also investigate the asymptotic tradeoff for a sequence of policies which are obtained from an approximate fluid model for the M/M/1 queue.
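As an illustration of the cost-delay tradeoff being characterized, the sketch below evaluates a simple threshold policy for a birth-death queue whose service rate depends on the queue length, using the closed-form stationary distribution and Little's law. The arrival rate, the two service rates, their costs and the zero-cost-when-idle convention are assumptions for illustration; the sketch does not reproduce the paper's asymptotic analysis.

```python
# Delay vs. service-cost tradeoff for a birth-death queue with queue-length-dependent
# service rates under a threshold policy (all parameters assumed for illustration).
import numpy as np

lam = 0.8                            # arrival rate (assumed)
rates = {"low": 0.9, "high": 1.5}    # finite set of service rates (assumed)
cost = {"low": 1.0, "high": 2.5}     # service cost rate for each service rate (assumed)
N_MAX = 2000                         # truncation level for the stationary distribution

def policy(n, threshold):
    """Threshold policy: serve at the high rate only when the queue exceeds `threshold`."""
    return "high" if n > threshold else "low"

def evaluate(threshold):
    # unnormalized stationary probabilities: pi_n ∝ prod_{k=1..n} lam / mu(k)
    pi = np.empty(N_MAX + 1)
    pi[0] = 1.0
    for n in range(1, N_MAX + 1):
        pi[n] = pi[n - 1] * lam / rates[policy(n, threshold)]
    pi /= pi.sum()
    mean_queue = np.dot(np.arange(N_MAX + 1), pi)
    mean_delay = mean_queue / lam                        # Little's law
    # assume no service cost is incurred while the queue is empty
    mean_cost = sum(pi[n] * cost[policy(n, threshold)] for n in range(1, N_MAX + 1))
    return mean_cost, mean_delay

for thr in (0, 2, 5, 10):
    c, d = evaluate(thr)
    print(f"threshold={thr:2d}  avg service cost rate={c:.4f}  avg delay={d:.3f}")
```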
Abstract:
Grid-connected inverters require a third-order LCL filter to meet standards such as IEEE Std. 519-1992 while being compact and cost-effective. The LCL filter introduces a resonance, which needs to be damped through active or passive methods. Passive damping schemes have lower control complexity and are more reliable. This study explores the split-capacitor resistive-inductive (SC-RL) passive damping scheme. The SC-RL damped LCL filter is modelled using a state-space approach. Using this model, the power loss and damping are analysed. Based on the analysis, the SC-RL scheme is shown to have lower losses than other, simpler passive damping methods, which makes it suitable for high-power applications. A method for component selection that minimises the power loss in the damping resistors while keeping the system well damped is proposed. The design selection takes into account the influence of the switching frequency, the resonance frequency and the choice of inductance and capacitance values of the filter on the damping component selection. The use of normalised parameters makes it suitable for a wide range of design applications. Analytical results show the losses and quality factor to be in the range of 0.05-0.1% and 2.0-2.5, respectively, which are validated experimentally.
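For orientation, the sketch below computes the frequency response of an LCL filter with a split-capacitor R-L damping branch directly from its branch impedances and compares the damped resonance peak location with the undamped LCL resonance frequency sqrt((L1+L2)/(L1*L2*Cf))/(2*pi). The component values and the 50/50 capacitor split are assumed for illustration, not the study's normalised design values.

```python
# Frequency response of an LCL filter with a split-capacitor R-L (SC-RL) damping branch.
# All component values and the split ratio are assumed for illustration.
import numpy as np

L1, L2 = 2e-3, 1e-3          # inverter- and grid-side inductances (assumed)
Cf = 10e-6                   # total filter capacitance (assumed)
split = 0.5                  # fraction of Cf placed in the damped branch (assumed)
Rd, Ld = 5.0, 0.5e-3         # damping resistance and inductance (assumed)

C1 = (1 - split) * Cf        # plain capacitor branch
C2 = split * Cf              # capacitor in series with the R-L damping branch

f = np.logspace(2, 5, 4000)  # 100 Hz .. 100 kHz
s = 1j * 2 * np.pi * f

Z1, Z2 = s * L1, s * L2
Z_plain = 1 / (s * C1)
Z_damped = Rd + s * Ld + 1 / (s * C2)
Zsh = Z_plain * Z_damped / (Z_plain + Z_damped)      # shunt branch of the SC-RL scheme

# transfer function from inverter voltage to grid-side current (grid treated as a short)
H = Zsh / (Z1 * Z2 + Zsh * (Z1 + Z2))

f_res = 1 / (2 * np.pi) * np.sqrt((L1 + L2) / (L1 * L2 * Cf))   # undamped LCL resonance
print(f"undamped resonance ~ {f_res:.0f} Hz, "
      f"peak of |i2/vi| with SC-RL damping at ~ {f[np.argmax(np.abs(H))]:.0f} Hz")
```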
Abstract:
In the search for newer distributed phases that can be used in Ni-composite coatings, inexpensive and naturally available pumice has been identified as a potential candidate material. The composition of the pumice mineral as determined by Rietveld analysis shows the presence of corundum, quartz, mullite, moganite and coesite phases. Pumice stone is crushed, ball-milled, dried and dispersed in a nickel sulfamate bath, and Ni-pumice coatings are electrodeposited at different current densities and magnetic agitation speeds. Pumice particles are uniformly incorporated in the nickel matrix and Ni-pumice composite coatings with microhardness as high as 540 HK are obtained at the lowest applied current density. In the electrodeposited Ni-pumice coatings, the grain size of Ni increases with the applied current density. The overall intensity of texture development is slightly stronger for the Ni-pumice composite coating compared to a plain Ni coating, and the texture evolution is possibly not the strongest deciding factor for the enhanced properties of Ni-pumice coatings. The wear and oxidation resistances of the Ni-pumice coating are commensurate with those of a Ni-SiC coating electrodeposited under similar conditions. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
Developing countries constantly face the challenge of reliably matching electricity supply to increasing consumer demand. The traditional policy decisions of increasing supply and reducing demand centrally, by building new power plants and/or load shedding, have been insufficient. Locally installed microgrids along with consumer demand response can be suitable decentralized options to augment the centralized grid-based systems and plug the demand-supply gap. The objectives of this paper are to: (1) develop a framework to identify the appropriate decentralized energy options for demand-supply matching within a community, and (2) determine which of these options can suitably plug the existing demand-supply gap at varying levels of grid unavailability. A scenario analysis framework is developed to identify and assess the impact of different decentralized energy options at a community level and is demonstrated for a typical urban residential community, Vijayanagar, Bangalore, India. A combination of an LPG-based CHP microgrid and proactive demand response by the community is found to be the appropriate option, enabling the Vijayanagar community to meet its energy needs 24/7 in a reliable, cost-effective manner. The paper concludes with an enumeration of the barriers and feasible strategies for the implementation of community microgrids in India based on stakeholder inputs. (C) 2014 Elsevier Ltd. All rights reserved.
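A toy version of the demand-supply gap calculation that such a scenario analysis relies on is sketched below: it compares the unserved energy of a community under an assumed evening outage for grid-only supply, a CHP microgrid, and a microgrid combined with proactive demand response. The load profile, outage window, microgrid capacity and demand-response share are invented for illustration and are not the Vijayanagar data.

```python
# Toy scenario comparison of unserved energy under decentralized supply options.
# All inputs below are assumed for illustration.
import numpy as np

hours = np.arange(24)
demand_kw = np.array([220, 210, 200, 200, 210, 240, 300, 340, 330, 310, 300, 300,
                      300, 290, 290, 300, 330, 380, 420, 430, 400, 350, 300, 250], float)

def unmet_energy(grid_up, microgrid_kw=0.0, dr_fraction=0.0):
    """Unserved energy (kWh/day) for an hourly grid-availability mask, a dispatchable
    microgrid capacity and a proactive demand-response share applied during outages."""
    effective_demand = demand_kw * (1 - dr_fraction * (~grid_up))
    supply = np.where(grid_up, effective_demand, np.minimum(effective_demand, microgrid_kw))
    return float(np.sum(effective_demand - supply))

outage = np.zeros(24, dtype=bool)
outage[18:22] = True          # assumed 4-hour evening outage
grid_up = ~outage

for name, mg, dr in [("grid only", 0, 0.0),
                     ("CHP microgrid", 350, 0.0),
                     ("CHP microgrid + demand response", 350, 0.15)]:
    print(f"{name:35s} unserved energy = {unmet_energy(grid_up, mg, dr):6.1f} kWh/day")
```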
Abstract:
Response analysis of a linear structure with uncertainties in both structural parameters and external excitation is considered here. When such an analysis is carried out using the spectral stochastic finite element method (SSFEM), often the computational cost tends to be prohibitive due to the rapid growth of the number of spectral bases with the number of random variables and the order of expansion. For instance, if the excitation contains a random frequency, or if it is a general random process, then a good approximation of these excitations using polynomial chaos expansion (PCE) involves a large number of terms, which leads to very high cost. To address this issue of high computational cost, a hybrid method is proposed in this work. In this method, first the random eigenvalue problem is solved using the weak formulation of SSFEM, which involves solving a system of deterministic nonlinear algebraic equations to estimate the PCE coefficients of the random eigenvalues and eigenvectors. Then the response is estimated using a Monte Carlo (MC) simulation, where the modal bases are sampled from the PCE of the random eigenvectors estimated in the previous step, followed by a numerical time integration. It is observed through numerical studies that this proposed method successfully reduces the computational burden compared with either a pure SSFEM or a pure MC simulation, and is more accurate than a perturbation method. The computational gain improves as the problem size in terms of degrees of freedom grows. It also improves as the time span of interest reduces.
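The two-stage idea can be illustrated on a toy problem: the sketch below builds a non-intrusive Hermite polynomial chaos expansion of the eigenvalues of a 2-DOF system with one Gaussian stiffness parameter (a stand-in for the paper's weak-form SSFEM eigen-solution), then samples those eigenvalues in a Monte Carlo loop. The system, the parameter values and the use of quadrature projection instead of the weak formulation are assumptions made to keep the example self-contained.

```python
# Two-stage sketch: (1) PCE of random eigenvalues via Gauss-Hermite projection,
# (2) Monte Carlo sampling of the eigen-quantities from the PCE.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

# toy 2-DOF spring-mass system whose stiffness depends on one standard Gaussian variable xi
m1 = m2 = 1.0                        # masses (assumed)
k_mean, k_cov = 1000.0, 0.1          # mean stiffness and coefficient of variation (assumed)

def eigvals_for(xi):
    """Eigenvalues of M^-1 K for one realization of the random stiffness."""
    k = k_mean * (1.0 + k_cov * xi)
    K = np.array([[2.0 * k, -k], [-k, k]])
    M = np.diag([m1, m2])
    return np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)

# Stage 1: non-intrusive PCE, lambda_i(xi) ~ sum_j c[j, i] He_j(xi),
# with coefficients from Gauss-Hermite projection: c[j, i] = E[lambda_i He_j] / j!
order, nq = 4, 12
nodes, weights = hermegauss(nq)
pdf_weights = weights / weights.sum()                      # normalize to the standard normal measure
lam_at_nodes = np.array([eigvals_for(x) for x in nodes])   # shape (nq, 2)

coeffs = np.zeros((order + 1, 2))
for j in range(order + 1):
    He_j = hermeval(nodes, np.eye(order + 1)[j])           # He_j at the quadrature nodes
    coeffs[j] = (pdf_weights[:, None] * He_j[:, None] * lam_at_nodes).sum(axis=0) / factorial(j)

# Stage 2: Monte Carlo sampling of the eigenvalues from their PCE (only the natural
# frequencies are reported here; a time integration over sampled modes would follow).
rng = np.random.default_rng(0)
xi_samples = rng.standard_normal(5000)
lam_mc = np.stack([hermeval(xi_samples, coeffs[:, i]) for i in range(2)], axis=1)
freq_mc = np.sqrt(lam_mc) / (2.0 * np.pi)
print("mean natural frequencies (Hz):", freq_mc.mean(axis=0))
```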
Abstract:
Thin films of Cu2SnS3 (CTS) were deposited by a facile solution-processed sol-gel route followed by low-temperature annealing. The Cu-Sn-thiourea complex formation was analysed using a Fourier Transform Infrared (FTIR) spectrophotometer. The various phase transformations and the deposition temperature range for the initial precursor solution were determined using Thermogravimetric Analysis (TGA) and Differential Scanning Calorimetry (DSC). X-Ray Diffraction (XRD) studies revealed the tetragonal phase formation of the annealed CTS films. Raman spectroscopy studies further confirmed the tetragonal phase formation and the absence of any deleterious secondary phases. The morphological investigations and compositional analysis of the films were carried out using Scanning Electron Microscopy (SEM) and Energy Dispersive Spectroscopy (EDS), respectively. Atomic Force Microscopy (AFM) was used to estimate the surface roughness, found to be 1.3 nm. The absorption coefficient was found to be 10^4 cm^-1 and the bandgap 1.3 eV, which qualifies CTS as a potential candidate for photovoltaic applications. The refractive index, extinction coefficient and relative permittivity of the film were measured by spectroscopic ellipsometry. Hall effect measurements indicated the p-type nature of the films, with a hole concentration of 2 x 10^18 cm^-3, an electrical conductivity of 9 S/cm and a hole mobility of 29 cm^2/(V s). The properties of CTS deduced from the current study present it as a potential absorber layer material for thin film solar cells. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
Based on an analysis of Varian’s textbook on Microeconomics, which I take to be representative of the standard view, I argue that Varian provides two contrary notions of profit, namely, profit as surplus over cost and profit as cost. Varian starts by defining profit as the surplus of revenues over cost and, thus, as the part of the value of commodities that is not any cost; however, he provides a second definition of profit as a cost, namely, as the opportunity cost of capital. I also argue that the definition of competitive profit as the opportunity cost of capital involves a self-contradictory notion of opportunity cost.
Abstract:
This paper estimates a new measure of liquidity costs in a market driven by orders. It represents the cost of simultaneously buying and selling a given amount of shares, and it is given by a single measure of ex-ante liquidity that aggregates all available information in the limit order book for a given number of shares. The cost of liquidity is an increasing function relating bid-ask spreads to the amounts available for trading. This measure completely characterizes the cost of liquidity of any given asset. It does not suffer from the usual ambiguities related to either the bid-ask spread or depth when they are considered separately. On the contrary, with a single measure, we are able to capture all dimensions of liquidity costs on an ex-ante basis.
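The sketch below illustrates the basic object behind such a measure: the ex-ante round-trip cost of buying and then selling q shares, obtained by walking the ask and bid queues of a limit order book snapshot. The snapshot is invented, and the simple VWAP difference used here is only a stand-in for the paper's aggregated estimator.

```python
# Quantity-dependent round-trip liquidity cost from a limit order book snapshot
# (snapshot and cost definition are illustrative assumptions).
from typing import List, Tuple

Level = Tuple[float, int]   # (price, shares available at that price)

def side_vwap(levels: List[Level], q: int) -> float:
    """Volume-weighted average execution price for q shares walked through one side of the book."""
    remaining, paid = q, 0.0
    for price, size in levels:
        take = min(remaining, size)
        paid += take * price
        remaining -= take
        if remaining == 0:
            return paid / q
    raise ValueError("order book too thin for the requested quantity")

def liquidity_cost(bids: List[Level], asks: List[Level], q: int) -> float:
    """Quantity-dependent bid-ask spread: buy-side VWAP minus sell-side VWAP for q shares."""
    return side_vwap(asks, q) - side_vwap(bids, q)

asks = [(100.10, 500), (100.15, 800), (100.25, 1500)]   # ascending prices
bids = [(100.00, 400), (99.95, 900), (99.80, 2000)]     # descending prices

for q in (100, 1000, 2000):
    print(f"q={q:5d}  round-trip liquidity cost per share = {liquidity_cost(bids, asks, q):.4f}")
```

As the abstract describes, the cost grows with the traded amount, giving a single measure that combines spread and depth.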
Abstract:
Contributed to: "Measuring the Changes": 13th FIG International Symposium on Deformation Measurements and Analysis; 4th IAG Symposium on Geodesy for Geotechnical and Structural Engineering (Lisbon, Portugal, May 12-15, 2008).
Abstract:
The European Commission Report on Competition in Professional Services found that recommended prices by professional bodies have a significant negative effect on competition since they may facilitate the coordination of prices between service providers and/or mislead consumers about reasonable price levels. Professional associations argue, first, that a fee schedule may help their members to properly calculate the cost of services avoiding excessive charges and reducing consumers’ searching costs and, second, that recommended prices are very useful for cost appraisal if a litigant is condemned to pay the legal expenses of the opposing party. Thus, recommended fee schedules could be justified to some extent if they represented the cost of providing the services. We test this hypothesis using cross‐section data on a subset of recommended prices by 52 Spanish bar associations and cost data on their territorial jurisdictions. Our empirical results indicate that prices recommended by bar associations are unrelated to the cost of legal services and therefore we conclude that recommended prices have merely an anticompetitive effect.
Abstract:
Introduction: The National Oceanic and Atmospheric Administration’s Biogeography Branch has conducted surveys of reef fish in the Caribbean since 1999. Surveys were initially undertaken to identify essential fish habitat, but later were used to characterize and monitor reef fish populations and benthic communities over time. The Branch’s goals are to develop knowledge and products on the distribution and ecology of living marine resources and provide resource managers, scientists and the public with an improved ecosystem basis for making decisions. The Biogeography Branch monitors reef fishes and benthic communities in three study areas: (1) St. John, USVI, (2) Buck Island, St. Croix, USVI, and (3) La Parguera, Puerto Rico. In addition, the Branch has characterized the reef fish and benthic communities in the Flower Garden Banks National Marine Sanctuary, Gray’s Reef National Marine Sanctuary and around the island of Vieques, Puerto Rico. Reef fish data are collected using a stratified random sampling design and stringent measurement protocols. Over time, the sampling design has changed in order to meet different management objectives (i.e. identification of essential fish habitat vs. monitoring), but the designs have always remained: • Probabilistic – to allow inferences to a larger targeted population, • Objective – to satisfy management objectives, and • Stratified – to reduce sampling costs and obtain population estimates for strata. There are two aspects of the sampling design which are now under consideration and are the focus of this report: first, the application of a sample frame, identified as a set of points or grid elements from which a sample is selected; and second, the application of subsampling in a two-stage sampling design. To evaluate these considerations, the pros and cons of implementing a sampling frame and subsampling are discussed. Particular attention is paid to the impacts of each design on accuracy (bias), feasibility and sampling cost (precision). Further, this report presents an analysis of data to determine the optimal number of subsamples to collect if subsampling were used. (PDF contains 19 pages)
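For the subsampling question raised in the report, the classical two-stage sampling result gives the cost-optimal number of secondary samples per primary unit as m* = sqrt((c1/c2) * (s_w^2/s_b^2)), where c1 and c2 are the per-primary-unit and per-subsample costs and s_b^2, s_w^2 are the between- and within-unit variance components (Cochran's formula; the report's own optimality analysis may differ). The sketch below evaluates it for assumed cost and variance values.

```python
# Cost-optimal number of subsamples per primary unit in two-stage sampling (Cochran).
# The cost and variance components below are assumed values for illustration.
import math

c1 = 400.0          # cost of reaching/setting up a primary unit (e.g. a site), assumed
c2 = 50.0           # cost of one subsample within a primary unit, assumed
s2_between = 4.0    # between-primary-unit variance component, assumed
s2_within = 9.0     # within-primary-unit variance component, assumed

m_opt = math.sqrt((c1 / c2) * (s2_within / s2_between))
print(f"optimal subsamples per primary unit: m* = {m_opt:.2f} (round to {round(m_opt)})")
```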
Abstract:
The mapping and geospatial analysis of benthic environments are multidisciplinary tasks that have become more accessible in recent years because of advances in technology and cost reductions in survey systems. The complex relationships that exist among physical, biological, and chemical seafloor components require advanced, integrated analysis techniques to enable scientists and others to visualize patterns and, in so doing, allow inferences to be made about benthic processes. Effective mapping, analysis, and visualization of marine habitats are particularly important because the subtidal seafloor environment is not readily viewed directly by eye. Research in benthic environments relies heavily, therefore, on remote sensing techniques to collect effective data. Because many benthic scientists are not mapping professionals, they may not adequately consider the links between data collection, data analysis, and data visualization. Projects often start with clear goals, but may be hampered by the technical details and skills required for maintaining data quality through the entire process from collection through analysis and presentation. The lack of technical understanding of the entire data handling process can represent a significant impediment to success. While many benthic mapping efforts have detailed their methodology as it relates to the overall scientific goals of a project, only a few published papers and reports focus on the analysis and visualization components (Paton et al. 1997, Weihe et al. 1999, Basu and Saxena 1999, Bruce et al. 1997). In particular, the benthic mapping literature often briefly describes data collection and analysis methods, but fails to provide sufficiently detailed explanation of particular analysis techniques or display methodologies so that others can employ them. In general, such techniques are in large part guided by the data acquisition methods, which can include both aerial and water-based remote sensing methods to map the seafloor without physical disturbance, as well as physical sampling methodologies (e.g., grab or core sampling). The terms benthic mapping and benthic habitat mapping are often used synonymously to describe seafloor mapping conducted for the purpose of benthic habitat identification. There is a subtle yet important difference, however, between general benthic mapping and benthic habitat mapping. The distinction is important because it dictates the sequential analysis and visualization techniques that are employed following data collection. In this paper general seafloor mapping for identification of regional geologic features and morphology is defined as benthic mapping. Benthic habitat mapping incorporates the regional scale geologic information but also includes higher resolution surveys and analysis of biological communities to identify the biological habitats. In addition, this paper adopts the definition of habitats established by Kostylev et al. (2001) as a “spatially defined area where the physical, chemical, and biological environment is distinctly different from the surrounding environment.” (PDF contains 31 pages)
Abstract:
Seventy percent of the world's catch of fish and fishery products is consumed as food. Fish and shellfish products represent 15.6 percent of animal protein supply and 5.6 percent of total protein supply on a worldwide basis. Developing countries account for almost 50 percent of global fish exports. Seafood-borne disease or illness outbreaks affect consumers both physically and financially, and create regulatory problems for both importing and exporting countries. Seafood safety as a commodity cannot be purchased in the marketplace and government intervenes to regulate the safety and quality of seafood. Theoretical issues and data limitations create problems in estimating what consumers will pay for seafood safety and quality. The costs and benefits of seafood safety must be considered at all levels, including the fishers, fish farmers, input suppliers to fishing, processing and trade, seafood processors, seafood distributors, consumers and government. Hazard Analysis Critical Control Point (HACCP) programmes are being implemented on a worldwide basis for seafood. Studies have been completed to estimate the cost of HACCP in various shrimp, fish and shellfish plants in the United States, and are underway for some seafood plants in the United Kingdom, Canada and Africa. Major developments within the last two decades have created a set of complex trading situations for seafood. Current events indicate that seafood safety and quality can be used as non-tariff barriers to free trade. Research priorities necessary to estimate the economic value and impacts of achieving safer seafood are outlined at the consumer, seafood production and processing, trade and government levels. An extensive list of references on the economics of seafood safety and quality is presented. (PDF contains 56 pages; captured from html.)
Abstract:
EXECUTIVE SUMMARY: The Coastal Change Analysis Program (C-CAP) is developing a nationally standardized database on landcover and habitat change in the coastal regions of the United States. C-CAP is part of the Estuarine Habitat Program (EHP) of NOAA's Coastal Ocean Program (COP). C-CAP inventories coastal submersed habitats, wetland habitats, and adjacent uplands and monitors changes in these habitats on a one- to five-year cycle. This type of information and frequency of detection are required to improve scientific understanding of the linkages of coastal and submersed wetland habitats with adjacent uplands and with the distribution, abundance, and health of living marine resources. The monitoring cycle will vary according to the rate and magnitude of change in each geographic region. Satellite imagery (primarily Landsat Thematic Mapper), aerial photography, and field data are interpreted, classified, analyzed, and integrated with other digital data in a geographic information system (GIS). The resulting landcover change databases are disseminated in digital form for use by anyone wishing to conduct geographic analysis in the completed regions. C-CAP spatial information on coastal change will be input to EHP conceptual and predictive models to support coastal resource policy planning and analysis. C-CAP products will include 1) spatially registered digital databases and images, 2) tabular summaries by state, county, and hydrologic unit, and 3) documentation. Aggregations to larger areas (representing habitats, wildlife refuges, or management districts) will be provided on a case-by-case basis. Ongoing C-CAP research will continue to explore techniques for remote determination of biomass, productivity, and functional status of wetlands and will evaluate new technologies (e.g. remote sensor systems, global positioning systems, image processing algorithms) as they become available. Selected hardcopy land-cover change maps will be produced at local (1:24,000) to regional scales (1:500,000) for distribution. Digital land-cover change data will be provided to users for the cost of reproduction. Much of the guidance contained in this document was developed through a series of professional workshops and interagency meetings that focused on a) coastal wetlands and uplands; b) coastal submersed habitat including aquatic beds; c) user needs; d) regional issues; e) classification schemes; f) change detection techniques; and g) data quality. Invited participants included technical and regional experts and representatives of key State and Federal organizations. Coastal habitat managers and researchers were given an opportunity for review and comment. This document summarizes C-CAP protocols and procedures that are to be used by scientists throughout the United States to develop consistent and reliable coastal change information for input to the C-CAP nationwide database. It also provides useful guidelines for contributors working on related projects. It is considered a working document subject to periodic review and revision. (PDF file contains 104 pages.)