960 results for Competing risks, Estimation of predator mortality, Overdispersion, Stochastic modeling
Estimation of productivity in Korean electric power plants:a semiparametric smooth coefficient model
Abstract:
This paper analyzes the impact of load factor, facility and generator types on the productivity of Korean electric power plants. In order to capture important differences in the effect of load policy on power output, we use a semiparametric smooth coefficient (SPSC) model that allows us to model heterogeneous performance across power plants and over time by allowing the underlying technologies to be heterogeneous. The SPSC model accommodates both continuous and discrete covariates. Various specification tests are conducted to assess the performance of the SPSC model. Using a unique generator-level panel dataset spanning the period 1995-2006, we find that the impact of load factor, generator and facility types on power generation varies substantially in magnitude and significance across plant characteristics. The results have strong implications for generation policy in Korea, as outlined in this study.
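The core of a smooth coefficient model is that the regression coefficients are unknown functions of a covariate, estimated by kernel-weighted least squares. A minimal sketch of that idea on synthetic data (the variable names and numbers are illustrative, not the paper's specification):

```python
import numpy as np

def spsc_fit(y, X, z, z0, h):
    """Kernel-weighted least squares estimate of the smooth
    coefficient vector beta(z0) in y = X @ beta(z) + eps."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)  # Gaussian kernel weights
    Xw = X * w[:, None]                     # X'W
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

# Simulated data: the coefficient on x varies smoothly with z
rng = np.random.default_rng(0)
n = 2000
z = rng.uniform(0, 1, n)
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta1 = 1.0 + 2.0 * z                       # true smooth coefficient
y = 0.5 + beta1 * x + 0.1 * rng.normal(size=n)

b = spsc_fit(y, X, z, z0=0.5, h=0.1)
# b[1] estimates beta1 at z = 0.5, i.e. about 2.0
```

Repeating the fit over a grid of `z0` values traces out the whole coefficient curve; this is the sense in which the technology is allowed to differ across plants and over time.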
Abstract:
A computer code system for the simulation and estimation of branching processes is proposed. Using the system, samples for several models, with and without migration, are generated. Using these samples, we compare properties of various estimators.
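As a sketch of what such a simulation-and-estimation system does in its simplest case (no migration), the following simulates a Galton-Watson branching process and applies the classical Harris estimator of the offspring mean; the Poisson offspring law and all parameter values are illustrative assumptions:

```python
import numpy as np

def simulate_gw(m, n_gens, z0, rng):
    """Simulate a Galton-Watson branching process with Poisson(m)
    offspring; return the generation sizes Z_0, ..., Z_n."""
    z = [z0]
    for _ in range(n_gens):
        z.append(int(rng.poisson(m, size=z[-1]).sum()) if z[-1] > 0 else 0)
    return np.array(z)

def harris_estimate(z):
    """Harris estimator of the offspring mean: total children
    divided by total parents over the observed generations."""
    return z[1:].sum() / z[:-1].sum()

rng = np.random.default_rng(1)
z = simulate_gw(m=1.2, n_gens=20, z0=50, rng=rng)
m_hat = harris_estimate(z)   # should be close to the true mean 1.2
```

Generating many such sample paths and comparing `m_hat` against alternative estimators is exactly the kind of comparison the abstract describes.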
Abstract:
2010 Mathematics Subject Classification: 62J99.
Abstract:
Technology changes rapidly over the years, continuously providing more computing options and making life easier for economic, intra-relational, and other transactions. However, the introduction of new technology "pushes" old Information and Communication Technology (ICT) products out of use. E-waste is defined as the quantity of ICT products no longer in use; it is a bivariate function of the quantities sold and the probability that a specific quantity of computers will be regarded as obsolete. In this paper, an e-waste generation model is presented and applied to the following regions: Western and Eastern Europe, Asia/Pacific, Japan/Australia/New Zealand, and North and South America. Furthermore, cumulative computer sales were retrieved for selected countries in these regions in order to compute obsolete computer quantities. To provide robust forecasts, a selection of forecasting models, namely (i) Bass, (ii) Gompertz, (iii) Logistic, (iv) Trend, (v) Level, (vi) AutoRegressive Moving Average (ARMA), and (vii) Exponential Smoothing, was applied, selecting for each country the model with the smallest in-sample error indices (Mean Absolute Error and Mean Square Error). Because new technology does not diffuse across all regions of the world at the same speed, owing to different socio-economic factors, the lifespan distribution, which gives the probability that a certain quantity of computers will be considered obsolete, is not adequately modeled in the literature. The forecast horizon is 2014-2030, and the results show a very sharp increase in the USA and the United Kingdom, due to decreasing computer lifespans and increasing sales.
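The fit-and-select step described above can be sketched for one of the listed models. Here a logistic diffusion curve is fitted to hypothetical cumulative sales and scored by Mean Absolute Error; the data are synthetic and the parameter values are assumptions, not the paper's estimates:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, M, k, t0):
    """Logistic diffusion curve: cumulative adoptions at time t,
    saturating at market size M."""
    return M / (1.0 + np.exp(-k * (t - t0)))

def mae(actual, fitted):
    """Mean Absolute Error, one of the in-sample selection indices."""
    return np.mean(np.abs(actual - fitted))

# Hypothetical cumulative computer sales (millions) over 15 years
t = np.arange(15, dtype=float)
rng = np.random.default_rng(2)
sales = logistic(t, M=100.0, k=0.6, t0=7.0) + rng.normal(0, 1.0, size=t.size)

params, _ = curve_fit(logistic, t, sales, p0=[sales.max(), 0.5, t.mean()])
fit_mae = mae(sales, logistic(t, *params))
```

In the paper's procedure, each candidate model (Bass, Gompertz, logistic, ...) would be fitted this way and the one with the smallest MAE/MSE retained for that country.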
Abstract:
Annual average daily traffic (AADT) is important information for many transportation planning, design, operation, and maintenance activities, as well as for the allocation of highway funds. Many studies have attempted AADT estimation using the factor approach, regression analysis, time series, and artificial neural networks. However, these methods are unable to account for the spatially variable influence of independent variables on the dependent variable, even though it is well known that spatial context is important to many transportation problems, including AADT estimation.
In this study, applications of geographically weighted regression (GWR) methods to estimating AADT were investigated. The GWR-based methods consider the correlations among the variables over space and the spatial non-stationarity of the variables. A GWR model allows different relationships between the dependent and independent variables to exist at different points in space. In other words, model parameters vary from location to location, and the locally linear regression parameters at a point are affected more by observations near that point than by observations farther away.
The study area was Broward County, Florida, which lies on the Atlantic coast between Palm Beach and Miami-Dade counties. A total of 67 variables were considered as potential AADT predictors, and six (lanes, speed, regional accessibility, direct access, density of roadway length, and density of seasonal households) were selected to develop the models.
To investigate the predictive power of the various AADT predictors over space, statistics including the local r-square, local parameter estimates, and local errors were examined and mapped. The local variations in the relationships among parameters were investigated, measured, and mapped to assess the usefulness of the GWR methods.
The results indicated that the GWR models were able to better explain the variation in the data and to predict AADT with smaller errors than ordinary linear regression models for the same dataset. Additionally, GWR was able to model the spatial non-stationarity in the data, i.e., the spatially varying relationship between AADT and its predictors, which cannot be modeled with ordinary linear regression.
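The mechanics of GWR, i.e. a separate weighted least squares fit at each location with weights that decay with distance, can be sketched on synthetic data. The drifting slope, bandwidth, and coordinates below are illustrative assumptions, not the Broward County model:

```python
import numpy as np

def gwr_coefficients(y, X, coords, target, bandwidth):
    """Geographically weighted regression at one target location:
    least squares with Gaussian weights decaying with distance
    from the target point."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

# Simulated data: the slope on x drifts from west to east
rng = np.random.default_rng(3)
n = 3000
coords = rng.uniform(0, 10, size=(n, 2))
x = rng.normal(size=n)
slope = 1.0 + 0.2 * coords[:, 0]           # spatially varying coefficient
y = 2.0 + slope * x + 0.1 * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

b_west = gwr_coefficients(y, X, coords, np.array([1.0, 5.0]), bandwidth=1.0)
b_east = gwr_coefficients(y, X, coords, np.array([9.0, 5.0]), bandwidth=1.0)
# b_west[1] and b_east[1] recover the locally different slopes
```

Mapping the local coefficients and local r-square over a grid of target points is exactly what the study does to visualize spatial non-stationarity; an ordinary regression would return a single compromise slope for the whole county.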
Abstract:
The applications of micro-end-milling operations have increased recently. A Micro-End-Milling Operation Guide and Research Tool (MOGART) package has been developed for the study and monitoring of micro-end-milling operations. It includes an analytical cutting force model, neural-network-based data mapping and forecasting processes, and genetic-algorithm-based optimization routines. MOGART uses neural networks to estimate tool machinability and forecast tool wear from experimental cutting force data, and genetic algorithms with the analytical model to monitor tool wear, breakage, run-out, and cutting conditions from the cutting force profiles.
The performance of MOGART has been tested on data from over 800 experimental cases, and very good agreement has been observed between the theoretical and experimental results. The MOGART package has been applied to the micro-end-milling operation study of the Engineering Prototype Center of the Radio Technology Division of Motorola Inc.
Abstract:
Tall buildings are wind-sensitive structures and can experience large wind-induced effects. Aerodynamic boundary layer wind tunnel testing has been the most commonly used method for estimating wind effects on tall buildings, with design wind effects estimated through analytical processing of the data obtained from the aerodynamic wind tunnel tests. Even though it is widely agreed that the data obtained from wind tunnel testing are fairly reliable, the post-test analytical procedures are argued to carry considerable uncertainties. This research work assessed in detail the uncertainties arising at different stages of the post-test analytical procedures and suggests improved techniques for reducing them. The results showed that traditionally used simplifying approximations, particularly in the frequency-domain approach, can cause significant uncertainties in estimating aerodynamic wind-induced responses. Based on the identified shortcomings, a more accurate dual aerodynamic data analysis framework, working in both the frequency and time domains, was developed. This comprehensive framework allows the modal, resultant, and peak values of various wind-induced responses of a tall building to be estimated more accurately. Estimating design wind effects on tall buildings also requires synthesizing the wind tunnel data with local climatological data for the study site. After investigating the causes of significant uncertainties in currently used synthesizing techniques, a novel copula-based approach was developed for accurately synthesizing aerodynamic and climatological data. The improvement of the new approach over existing techniques is illustrated with a case study of a 50-story building. Finally, a practical dynamic optimization approach is suggested for tuning the structural properties of tall buildings toward optimum performance against wind loads with fewer design iterations.
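The copula idea mentioned above, joining two dependent quantities while keeping their individual distributions intact, can be sketched with a Gaussian copula. The marginals (Weibull wind speed, normally distributed aerodynamic coefficient) and the correlation are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(n, rho, marginal_u, marginal_v, rng):
    """Draw dependent pairs via a Gaussian copula: correlate in
    standard-normal space, map to uniforms through the normal CDF,
    then push through each marginal's inverse CDF."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)
    return marginal_u.ppf(u[:, 0]), marginal_v.ppf(u[:, 1])

rng = np.random.default_rng(4)
# Hypothetical marginals: Weibull wind speeds (m/s) and a normal
# aerodynamic response coefficient
speed, coeff = gaussian_copula_sample(
    n=20000, rho=0.6,
    marginal_u=stats.weibull_min(c=2.0, scale=8.0),
    marginal_v=stats.norm(loc=1.0, scale=0.2),
    rng=rng,
)
```

The appeal of this construction for data synthesis is that the dependence structure and the marginal distributions are specified separately, so each can be fitted to its own data source (wind tunnel versus climatological records).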
Abstract:
The Auger Engineering Radio Array (AERA) is part of the Pierre Auger Observatory and is used to detect the radio emission of cosmic-ray air showers. These observations are compared to the data of the surface detector stations of the Observatory, which provide well-calibrated information on the cosmic-ray energies and arrival directions. The response of the radio stations in the 30-80 MHz regime has been thoroughly calibrated to enable the reconstruction of the incoming electric field. For the latter, the energy deposit per unit area is determined from the radio pulses at each observer position and is interpolated using a two-dimensional function that takes into account signal asymmetries due to interference between the geomagnetic and charge-excess emission components. The spatial integral over the signal distribution gives a direct measurement of the energy transferred from the primary cosmic ray into radio emission in the AERA frequency range. We measure 15.8 MeV of radiation energy for a 1 EeV air shower arriving perpendicular to the geomagnetic field. This radiation energy, corrected for geometrical effects, is used as a cosmic-ray energy estimator. Performing an absolute energy calibration against the surface-detector information, we observe that this radio-energy estimator scales quadratically with the cosmic-ray energy, as expected for coherent emission. We find an energy resolution of the radio reconstruction of 22% for the full data set and 17% for a high-quality subset containing only events with at least five radio stations with signal.
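The quadratic scaling can be illustrated by fitting a power law in log space to synthetic calibration data. Only the 15.8 MeV normalization at 1 EeV is taken from the abstract; the sample size, energy range, and 20% scatter are assumptions:

```python
import numpy as np

# Hypothetical calibration sample: cosmic-ray energies (EeV) from the
# surface detector and radiation energies (MeV) with E_rad = A * E^2
rng = np.random.default_rng(5)
e_cr = 10 ** rng.uniform(-0.5, 1.0, size=300)            # ~0.3 to 10 EeV
e_rad = 15.8 * e_cr**2 * rng.lognormal(0.0, 0.2, 300)    # 20% scatter

# Fit log E_rad = log A + B * log E_cr by least squares;
# coherent emission predicts B close to 2
B, logA = np.polyfit(np.log(e_cr), np.log(e_rad), 1)
```

Inverting the fitted relation then turns a measured radiation energy into a cosmic-ray energy estimate, which is the role the radio-energy estimator plays in the analysis.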
Abstract:
A recently developed technique for determining past sea surface temperatures (SST), based on an analysis of the unsaturation ratio of long-chain C37 methyl alkenones produced by Prymnesiophyceae phytoplankton (U37K′), has been applied to an upper Quaternary sediment core from the equatorial Atlantic. U37K′ temperature estimates were compared to those obtained from δ18O of the planktonic foraminifer Globigerinoides sacculifer and from planktonic foraminiferal assemblages for the last glacial cycle. The alkenone method showed 1.8°C of cooling at the last glacial maximum, about 1/2 to 1/3 of the decrease shown by the isotopic method (6.3°C) and by foraminiferal modern analogue technique estimates for the warm season (3.8°C). Warm-season foraminiferal assemblage estimates based on transfer functions are out of phase with the other estimates, showing a 1.4°C drop at the last glacial maximum with an additional 0.9°C drop during the deglaciation. Increased alkenone abundances, total organic carbon percentages, and foraminiferal accumulation rates during the last glaciation indicate an increase in productivity of as much as four times over the present day. These changes are thought to be due to increased upwelling caused by enhanced winds during the glaciation. If the U37K′ estimates are correct, as much as 50-70% (up to 4.5°C) of the estimated δ18O and modern analogue temperature changes in the last glaciation may have been due to changes in thermocline depth, whereas transfer functions seem more strongly influenced by seasonality changes. This indicates that these estimates may be influenced as strongly by other factors as by SST, which in the equatorial Atlantic was only slightly reduced during the last glaciation.
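The arithmetic behind the method is simple: U37K′ is the ratio of di- to (di- plus tri-) unsaturated C37 alkenones, and SST is read off a linear calibration. A minimal sketch, where the calibration coefficients below are illustrative global core-top values and should be treated as an assumption, not the paper's calibration:

```python
def u37k_prime(c372, c373):
    """Alkenone unsaturation index: U37K' = C37:2 / (C37:2 + C37:3)."""
    return c372 / (c372 + c373)

def sst_from_u37k(u, a=0.033, b=0.044):
    """Invert an assumed linear calibration U37K' = a*SST + b
    for SST in degrees C."""
    return (u - b) / a

index = u37k_prime(3.0, 1.0)          # = 0.75
sst = sst_from_u37k(index)            # SST implied by the index
# Under this slope, the 1.8 C glacial cooling reported in the
# abstract corresponds to a U37K' drop of about a * 1.8
delta_u = 0.033 * 1.8
```

The small size of `delta_u` relative to measurement precision is why independent proxies (δ18O, assemblage transfer functions) are used as cross-checks.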
Abstract:
The map representation of an environment should be selected based on its intended application. For example, a geometrically accurate map describing the Euclidean space of an environment is not necessarily the best choice if only a small subset of its features is required. One possible subset is the orientations of the flat surfaces in the environment, represented by a special parameterization of normal vectors called axes. Devoid of positional information, the entries of an axis map form a non-injective relationship with the flat surfaces in the environment, so physically distinct flat surfaces are represented by a single axis. This drastically reduces the complexity of the map but retains important information about the environment that can be used in meaningful applications in both two and three dimensions. This thesis presents axis mapping, an algorithm that accurately and automatically estimates an axis map of an environment based on sensor measurements collected by a mobile platform. Furthermore, two major applications of axis maps are developed and implemented. First, the LiDAR compass is a heading estimation algorithm that compares measurements of axes with an axis map of the environment. Pairing the LiDAR compass with simple translation measurements forms the basis for an accurate two-dimensional localization algorithm. It is shown that this algorithm eliminates the growth of heading error in both indoor and outdoor environments, resulting in accurate localization over long distances. Second, in the context of geotechnical engineering, a three-dimensional axis map is called a stereonet, which is used as a tool to examine the strength and stability of a rock face. Axis mapping provides a novel approach for creating accurate stereonets safely, rapidly, and inexpensively compared to established methods.
The non-injective property of axis maps is leveraged to probabilistically describe the relationships between non-sequential measurements of the rock face. The automatic estimation of stereonets was tested in three separate outdoor environments. It is shown that axis mapping can accurately estimate stereonets while improving safety, requiring significantly less time and effort, and lowering costs compared to traditional and current state-of-the-art approaches.
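The LiDAR-compass idea of recovering heading from surface orientations alone can be sketched in two dimensions: wall-normal directions are folded by the axis symmetry (90° in a rectilinear environment) and their circular mean is compared against the map axis. This is a simplified stand-in for the thesis's algorithm, with synthetic measurements:

```python
import numpy as np

def heading_from_axes(measured_angles, map_axis, period=np.pi / 2):
    """Estimate sensor heading by folding measured wall-normal
    directions by the axis period and taking the circular mean of
    the residuals relative to the map axis."""
    folded = np.mod(measured_angles - map_axis, period)
    ang = folded * (2 * np.pi / period)      # stretch to a full circle
    mean = np.arctan2(np.sin(ang).mean(), np.cos(ang).mean())
    resid = mean * period / (2 * np.pi)      # shrink back
    return np.mod(resid + period / 2, period) - period / 2

# Synthetic scan: rectilinear walls, a 5 degree true heading, 1 degree noise
rng = np.random.default_rng(6)
true_heading = np.deg2rad(5.0)
walls = rng.choice([0.0, np.pi / 2, np.pi, 3 * np.pi / 2], size=200)
measured = walls + true_heading + rng.normal(0, np.deg2rad(1.0), size=200)
est = heading_from_axes(measured, map_axis=0.0)
```

Because every wall contributes to the same folded estimate regardless of where it is, heading error does not accumulate with distance traveled, which is the property the abstract highlights.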
Abstract:
This report discusses analytic second-order bias-correction techniques for the maximum likelihood estimates (MLEs) of the unknown parameters of distributions used in quality and reliability analysis. It is well known that MLEs are widely used to estimate the unknown parameters of probability distributions because of their many desirable properties; for example, MLEs are asymptotically unbiased, consistent, and asymptotically normal. However, many of these properties hold only for extremely large sample sizes, and some, such as unbiasedness, may not be valid for the small or moderate sample sizes that are more common in real data applications. Therefore, bias-correction techniques for MLEs are desirable in practice, especially when the sample size is small. Two popular techniques for reducing the bias of MLEs are the 'preventive' and 'corrective' approaches. Both reduce the bias of the MLEs to order O(n⁻²), but the 'preventive' approach does not have an explicit closed-form expression, so we focus on the 'corrective' approach in this report. To illustrate the importance of bias correction in practice, we apply the bias-corrected method to two popular lifetime distributions: the inverse Lindley distribution and the weighted Lindley distribution. Numerical studies on the two distributions show that the bias-corrected technique is highly recommended over commonly used estimators without bias correction. Therefore, special attention should be paid when estimating the unknown parameters of probability distributions from small or moderate sample sizes.
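The 'corrective' (Cox-Snell-type) approach subtracts an analytic estimate of the O(1/n) bias from the MLE. The exponential distribution, used here instead of the Lindley distributions as a simpler worked example, makes the effect easy to see: the MLE of the rate has bias λ/(n-1), and subtracting the estimated bias λ̂/n removes it:

```python
import numpy as np

def mle_rate(x):
    """MLE of the exponential rate: lambda_hat = 1 / sample mean."""
    return 1.0 / np.mean(x)

def bias_corrected_rate(x):
    """Corrective second-order bias correction for the exponential
    rate: subtract the estimated O(1/n) bias lambda_hat / n."""
    lam = mle_rate(x)
    return lam - lam / len(x)        # = lam * (n - 1) / n

rng = np.random.default_rng(7)
lam_true, n, reps = 2.0, 10, 20000
mle_avg = np.mean([mle_rate(rng.exponential(1 / lam_true, n))
                   for _ in range(reps)])
corr_avg = np.mean([bias_corrected_rate(rng.exponential(1 / lam_true, n))
                    for _ in range(reps)])
# mle_avg overshoots the true rate 2.0 (about 2.22 at n = 10);
# corr_avg sits essentially on 2.0
```

For the exponential rate the correction happens to remove the bias exactly; for the inverse and weighted Lindley distributions studied in the report the correction is only to order O(n⁻²), but the simulation pattern is the same.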
Abstract:
Group testing has long been considered a safe and sensible alternative to one-at-a-time testing in applications where the prevalence rate p is small. In this thesis, we apply the Bayes approach to estimating p using a Beta-type prior distribution. First, we derive two Bayes estimators of p from a prior on p under two different loss functions. Second, we present two more Bayes estimators of p from a prior on π under the same two loss functions. We also give credible and HPD intervals for p. In addition, intensive numerical studies were conducted. All results show that the Bayes estimator is preferred over the usual maximum likelihood estimator (MLE) for small p. We also present the optimal β for different values of p, m, and k.
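To make the setting concrete: with m pools of size k and a per-pool positive probability π = 1 - (1-p)^k, the MLE of p has a closed form, and a Beta prior on π (one of the two prior placements mentioned above) is conjugate. A sketch, with the Beta(1, 1) prior, squared-error loss, and the example counts all chosen for illustration:

```python
import numpy as np

def mle_p(T, m, k):
    """MLE of prevalence p from T positive pools out of m, pool size k:
    p_hat = 1 - (1 - T/m)^(1/k)."""
    return 1.0 - (1.0 - T / m) ** (1.0 / k)

def bayes_p(T, m, k, a=1.0, b=1.0, draws=100_000, rng=None):
    """Bayes estimate of p under squared-error loss with a Beta(a, b)
    prior on pi = 1 - (1-p)^k: the posterior for pi is
    Beta(a + T, b + m - T), so average the transform over draws."""
    rng = rng or np.random.default_rng()
    pi = rng.beta(a + T, b + m - T, size=draws)
    return np.mean(1.0 - (1.0 - pi) ** (1.0 / k))

# Example: 12 of 50 pools of size 5 test positive
p_mle = mle_p(T=12, m=50, k=5)
p_bayes = bayes_p(T=12, m=50, k=5, rng=np.random.default_rng(8))
```

When T = 0 or T = m the MLE degenerates to 0 or 1, while the Bayes estimator remains interior, which is one intuition for its better behavior at small p.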