993 results for Restorable load estimation
Abstract:
The main idea of the Load-Unload Response Ratio (LURR) is that when a system is stable, its response to loading corresponds to its response to unloading, whereas when the system approaches an unstable state, the responses to loading and unloading become quite different. High LURR values and observations of Accelerating Moment/Energy Release (AMR/AER) prior to large earthquakes have led different research groups to suggest that intermediate-term earthquake prediction is possible and imply that the LURR and AMR/AER observations may have a similar physical origin. To study this possibility, we conducted a retrospective examination of several Australian and Chinese earthquakes with magnitudes ranging from 5.0 to 7.9, including Australia's deadly Newcastle earthquake and the devastating Tangshan earthquake. Both LURR values and best-fit power-law time-to-failure functions were computed using data within a range of distances from the epicenter. As with the best-fit power-law fits in AMR/AER, the LURR value was optimal using data within a certain epicentral distance, implying a critical region for LURR. Furthermore, the LURR critical region size scales with mainshock magnitude and is similar to the AMR/AER critical region size. These results suggest a common physical origin for both the AMR/AER and LURR observations. Further research may provide clues that yield an understanding of this mechanism and help lay a solid foundation for intermediate-term earthquake prediction.
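For readers unfamiliar with how the LURR is usually evaluated, a minimal sketch follows. It assumes that each catalogue event has already been labelled as occurring during a loading or an unloading period (e.g. from tidally induced stress, which is outside this sketch), measures the response as Benioff strain via the standard Gutenberg-Richter energy relation, and uses an illustrative function name and toy catalogue.

```python
import numpy as np

def lurr(magnitudes, is_loading, m=0.5):
    """Load-Unload Response Ratio: a minimal sketch.

    LURR is commonly computed as the ratio of summed energy release E^m
    during loading periods to that during unloading periods (Y = X+ / X-);
    m = 0.5 corresponds to Benioff strain.  The labelling of events as
    loading/unloading is assumed to be given.
    """
    magnitudes = np.asarray(magnitudes, dtype=float)
    is_loading = np.asarray(is_loading, dtype=bool)
    # Seismic energy from magnitude (Gutenberg-Richter energy relation, E in J).
    energy = 10.0 ** (1.5 * magnitudes + 4.8)
    x_plus = np.sum(energy[is_loading] ** m)
    x_minus = np.sum(energy[~is_loading] ** m)
    return x_plus / x_minus if x_minus > 0 else np.inf

# Values near 1 indicate a stable regime; values much greater than 1 have
# been associated with the approach to failure.
print(lurr([3.1, 2.8, 3.5, 2.9, 3.0], [True, False, True, True, False]))
```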
Abstract:
This article presents Monte Carlo techniques for estimating network reliability. For highly reliable networks, techniques based on graph evolution models provide very good performance; however, they are known to have significant simulation cost. An existing hybrid scheme (based on partitioning the time space) is available to speed up the simulations, but there are difficulties with optimizing the important parameter associated with this scheme. To overcome these difficulties, a new hybrid scheme (based on partitioning the edge set) is proposed in this article. The proposed scheme shows orders-of-magnitude performance improvements over the existing techniques in certain classes of networks. It also provides reliability bounds with little overhead.
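As a point of reference for the variance-reduction schemes discussed above, a crude (non-hybrid) Monte Carlo estimator of all-terminal reliability might look like the sketch below. The graph, edge reliabilities and sample size are illustrative assumptions, and none of the evolution-based or partitioning machinery from the article is shown.

```python
import random
import networkx as nx

def crude_mc_reliability(graph, edge_rel, samples=10000, rng=None):
    """Crude Monte Carlo estimate of all-terminal network reliability.

    Each edge is up with its given probability; reliability is estimated as
    the fraction of samples in which the surviving subgraph is connected.
    This naive baseline becomes very inefficient for highly reliable
    networks, which is what the more advanced schemes address.
    """
    rng = rng or random.Random(0)
    nodes = list(graph.nodes)
    hits = 0
    for _ in range(samples):
        up_edges = [e for e in graph.edges if rng.random() < edge_rel[e]]
        sub = nx.Graph()
        sub.add_nodes_from(nodes)
        sub.add_edges_from(up_edges)
        hits += nx.is_connected(sub)
    return hits / samples

# Illustrative 4-node ring with identical edge reliabilities.
g = nx.cycle_graph(4)
rel = {e: 0.95 for e in g.edges}
print(crude_mc_reliability(g, rel))
```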
Abstract:
There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon and by Zhang and Einstein in recent publications. It is a very useful estimator in that it is non-parametric. However, despite some discussion regarding the statistical distribution of the estimator, none of the recent works or the original work by Pahl provide a rigorous basis for the determination of a confidence interval for the estimator, or of a confidence region for the estimator and the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example.
Abstract:
A number of authors concerned with the analysis of rock jointing have used the idea that the joint areal or diametral distribution can be linked to the trace length distribution through a theorem attributed to Crofton. This brief paper seeks to demonstrate why Crofton's theorem need not be used to link moments of the trace length distribution captured by scan line or areal mapping to the moments of the diametral distribution of joints represented as disks, and why it is incorrect to do so. The valid relationships, for areal or scan line mapping, between all the moments of the trace length distribution and those of the joint size distribution for joints modeled as disks are recalled and compared with those that would be obtained were Crofton's theorem assumed to apply. For areal mapping the relationship is fortuitously correct, but for scan line mapping it is incorrect.
Abstract:
Objective: To compare percentage body fat (%BF) for a given body mass index (BMI) among New Zealand European, Maori and Pacific Island children, and to develop prediction equations based on bioimpedance measurements for the estimation of fat-free mass (FFM) appropriate to children in these three ethnic groups. Design: Cross-sectional study. Purposive sampling of schoolchildren aimed at recruiting three children of each sex and ethnicity for each year of age. Double cross-validation of FFM prediction equations developed by multiple regression. Setting: Local schools in Auckland. Subjects: Healthy European, Maori and Pacific Island children (n = 172, 83 M, 89 F, mean age 9.4 ± 2.8 (s.d.) y, range 5-14 y). Measurements: Height, weight, age, sex and ethnicity were recorded. FFM was derived from measurements of total body water by deuterium dilution, and resistance and reactance were measured by bioimpedance analysis. Results: For fixed BMI, the Maori and Pacific Island girls averaged 3.7% lower %BF than European girls. For boys a similar relation was not found, since BMI did not significantly influence %BF of European boys (P = 0.18). Based on bioimpedance measurements, a single prediction equation was developed for all children: FFM (kg) = 0.622 × height (cm)²/resistance + 0.234 × weight (kg) + 1.166; R² = 0.96, s.e.e. = 2.44 kg. Ethnicity, age and sex were not significant predictors. Conclusions: A robust equation for estimation of FFM in New Zealand European, Maori and Pacific Island children in the 5-14 y age range, more suitable than BMI for the determination of body fatness in field studies, has been developed. Sponsorship: Maurice and Phyllis Paykel Trust, Auckland University of Technology Contestable Grants Fund and the Ministry of Health.
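The reported prediction equation can be applied directly; a small sketch follows, together with a %BF value derived from the predicted FFM for illustration. The example measurements are hypothetical.

```python
def fat_free_mass(height_cm, resistance_ohm, weight_kg):
    """FFM prediction equation reported in the abstract:
    FFM (kg) = 0.622 * height(cm)^2 / resistance + 0.234 * weight(kg) + 1.166
    (R^2 = 0.96, s.e.e. = 2.44 kg; developed for NZ children aged 5-14 y).
    """
    return 0.622 * height_cm ** 2 / resistance_ohm + 0.234 * weight_kg + 1.166

def percent_body_fat(height_cm, resistance_ohm, weight_kg):
    """%BF derived from the predicted FFM (illustrative use of the equation)."""
    ffm = fat_free_mass(height_cm, resistance_ohm, weight_kg)
    return 100.0 * (weight_kg - ffm) / weight_kg

# Hypothetical child: 140 cm tall, 620 ohm resistance, 35 kg.
print(round(fat_free_mass(140, 620, 35), 1), "kg FFM")
print(round(percent_body_fat(140, 620, 35), 1), "% body fat")
```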
Abstract:
This paper is concerned with evaluating the performance of loss networks. Accurate determination of loss network performance can assist in the design and dimensioning of telecommunications networks. However, exact determination can be difficult and generally cannot be done in reasonable time. For these reasons there is much interest in developing fast and accurate approximations. We develop a reduced load approximation that improves on the famous Erlang fixed point approximation (EFPA) in a variety of circumstances. We illustrate our results with reference to a range of networks for which the EFPA may be expected to perform badly.
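To make the reduced load idea concrete, here is a minimal sketch of the classical Erlang fixed point approximation that the paper improves upon: link blocking probabilities are computed by Erlang B from route traffic thinned by the blocking on the other links of each route, iterated to a fixed point. The link/route layout and offered traffic in the example are illustrative assumptions, not data from the paper.

```python
from math import prod

def erlang_b(load, capacity):
    """Erlang B blocking probability via the standard stable recursion."""
    b = 1.0
    for c in range(1, capacity + 1):
        b = load * b / (c + load * b)
    return b

def erlang_fixed_point(capacities, routes, traffic, iters=200):
    """Erlang fixed point approximation (EFPA): a minimal sketch.

    capacities[j]: circuits on link j
    routes[r]:     set of links used by route r
    traffic[r]:    offered traffic (Erlangs) on route r
    """
    B = [0.0] * len(capacities)
    for _ in range(iters):
        for j, C in enumerate(capacities):
            reduced_load = 0.0
            for r, links in enumerate(routes):
                if j in links:
                    # Route traffic thinned by blocking on the *other* links.
                    thinning = prod(1.0 - B[k] for k in links if k != j)
                    reduced_load += traffic[r] * thinning
            B[j] = erlang_b(reduced_load, C)
    # Route loss under the usual link-independence assumption.
    route_loss = [1.0 - prod(1.0 - B[k] for k in links) for links in routes]
    return B, route_loss

# Illustrative example: two 10-circuit links, two single-link routes and
# one two-link route.
print(erlang_fixed_point([10, 10], [{0}, {1}, {0, 1}], [6.0, 6.0, 2.0]))
```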
Abstract:
For modern consumer cameras often approximate calibration data is available, making applications such as 3D reconstruction or photo registration easier as compared to the pure uncalibrated setting. In this paper we address the setting with calibrateduncalibrated image pairs: for one image intrinsic parameters are assumed to be known, whereas the second view has unknown distortion and calibration parameters. This situation arises e.g. when one would like to register archive imagery to recently taken photos. A commonly adopted strategy for determining epipolar geometry is based on feature matching and minimal solvers inside a RANSAC framework. However, only very few existing solutions apply to the calibrated-uncalibrated setting. We propose a simple and numerically stable two-step scheme to first estimate radial distortion parameters and subsequently the focal length using novel solvers. We demonstrate the performance on synthetic and real datasets.
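The abstract does not specify the distortion model; assuming the one-parameter division model often used by radial-distortion minimal solvers, undistorting feature points might look like the following sketch. The image size and the value of lambda are illustrative, and whether the paper's solvers use exactly this parameterisation is an assumption.

```python
import numpy as np

def undistort_division_model(points_px, principal_point, lam):
    """One-parameter division model:  x_u = x_d / (1 + lam * ||x_d||^2),
    where x_d is the pixel coordinate relative to the principal point.
    """
    pp = np.asarray(principal_point, dtype=float)
    xd = np.asarray(points_px, dtype=float) - pp
    r2 = np.sum(xd ** 2, axis=-1, keepdims=True)
    return xd / (1.0 + lam * r2) + pp

# Illustrative call: 1000x750 image, distortion parameter lambda = -1e-7 px^-2.
pts = np.array([[100.0, 80.0], [900.0, 700.0]])
print(undistort_division_model(pts, (500.0, 375.0), -1e-7))
```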
Abstract:
The aim of this paper was to estimate the return on investment in QMS (quality management systems) certification undertaken in Portuguese firms according to the ISO 9000 series. A total of 426 certified Portuguese firms were surveyed, with a response rate of 61.03 percent. The different payback periods were validated through statistical analysis, and the relationship between expected and perceived payback periods was discussed. This study suggests that a firm's sector of activity, size and degree of internationalization are related to the length of the QMS certification investment recovery period. Furthermore, our findings suggest that the time taken to obtain the certification is not directly related to the economic component of the certification. The majority of Portuguese firms (58.9%) took up to three years to recoup their investment, and 35.5% of companies said they had not yet recovered the initial investment made. The recovery of the investment was measured by the increase in the number of customers and the consequent volume of deliveries, improved profitability and productivity of the company, improvement of competitive position and performance (cost savings), reduction in the number of external complaints and internal defects/scrap, and the acquisition of some important clients, among others. We compared our work to similar studies undertaken in other countries. This paper contributes to the research related to the return on investment in QMS certification according to ISO 9000 and is one of the first studies to undertake this type of analysis in Portugal.
Abstract:
In this paper, we present a method for estimating the local thickness distribution in finite element models, applied to injection molded and cast engineering parts. This method offers considerably improved performance compared to two previously proposed approaches and has been validated against thickness measured by different human operators. We also demonstrate that using this method to assign a distribution of local thickness in FEM crash simulations results in a much more accurate prediction of the real part performance, thus increasing the benefits of computer simulations in engineering design by enabling zero-prototyping and reducing product development costs. The simulation results have been compared to experimental tests, evidencing the advantage of the proposed method. The proposed approach for considering local thickness distribution in FEM crash simulations therefore has high potential in the product development process of complex and highly demanding injection molded and cast parts and is currently being used by Ford Motor Company.
Abstract:
Minimally invasive cardiovascular interventions guided by multiple imaging modalities are rapidly gaining clinical acceptance for the treatment of several cardiovascular diseases. These images are typically fused with richly detailed pre-operative scans through registration techniques, enhancing the intra-operative clinical data and easing the image-guided procedures. Nonetheless, rigid models have been used to align the different modalities, not taking into account the anatomical variations of the cardiac muscle throughout the cardiac cycle. In the current study, we present a novel strategy to compensate for the beat-to-beat physiological adaptation of the myocardium. To this end, we intend to prove that a complete myocardial motion field can be quickly recovered from the displacement field at the myocardial boundaries, therefore being an efficient strategy to locally deform the cardiac muscle. We address this hypothesis by comparing three different strategies to recover a dense myocardial motion field from a sparse one, namely, a diffusion-based approach, thin-plate splines, and multiquadric radial basis functions. Two experimental setups were used to validate the proposed strategy. First, an in silico validation was carried out on synthetic motion fields obtained from two realistic simulated ultrasound sequences. Then, 45 mid-ventricular 2D sequences of cine magnetic resonance imaging were processed to further evaluate the different approaches. The results showed that accurate boundary tracking combined with dense myocardial recovery via interpolation/diffusion is a potentially viable solution to speed up dense myocardial motion field estimation and, consequently, to deform/compensate the myocardial wall throughout the cardiac cycle.
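One of the three compared strategies, thin-plate-spline interpolation of a sparse boundary displacement field, can be sketched with off-the-shelf scattered-data interpolation. The boundary geometry and displacements below are synthetic illustrations, not the paper's data, and the diffusion-based and multiquadric variants are not shown.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def densify_motion_field(boundary_pts, boundary_disp, query_pts,
                         kernel="thin_plate_spline"):
    """Recover a dense 2D displacement field from sparse boundary
    displacements by radial-basis-function interpolation (here a
    thin-plate spline), evaluated at arbitrary query points."""
    interp = RBFInterpolator(boundary_pts, boundary_disp, kernel=kernel)
    return interp(query_pts)

# Illustrative data: displacements known on a ring (the myocardial boundary),
# interpolated onto hypothetical mid-wall points.
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
boundary = np.c_[np.cos(theta), np.sin(theta)]
disp = 0.05 * boundary            # purely radial motion, for example
interior = 0.8 * boundary         # mid-wall query points
print(densify_motion_field(boundary, disp, interior)[:3])
```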
Abstract:
The success of a dental implant-supported prosthesis is directly linked to the accuracy obtained during estimation of the implant's pose (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region of interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both the patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
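Step (3), the voxel-based rigid registration, could be prototyped along the following lines with SimpleITK. The metric, optimizer and their settings here are assumptions chosen for illustration, not the paper's exact configuration, and the file names are placeholders.

```python
import SimpleITK as sitk

def rigid_register(fixed, moving):
    """Voxel-based rigid registration of a simulated implant CBCT volume to
    the patient CBCT: a minimal sketch, not the paper's exact pipeline."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(50)          # histogram bins
    reg.SetOptimizerAsRegularStepGradientDescent(1.0,    # learning rate
                                                 1e-4,   # minimum step
                                                 200)    # iterations
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(initial)
    reg.SetInterpolator(sitk.sitkLinear)
    # The implant pose (rotation + translation) is read off the optimal
    # rigid transform returned by the registration.
    return reg.Execute(fixed, moving)

# Usage (volumes read from disk; paths are placeholders):
# fixed = sitk.ReadImage("patient_cbct.nii.gz", sitk.sitkFloat32)
# moving = sitk.ReadImage("simulated_implant_cbct.nii.gz", sitk.sitkFloat32)
# print(rigid_register(fixed, moving))
```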
Abstract:
The portfolio generating the iTraxx EUR index is modeled by coupled Markov chains. Each of the industries in the portfolio evolves according to its own Markov transition matrix. Using a variant of the method of moments, the model parameters are estimated from a Standard & Poor's data set. Swap spreads are evaluated by Monte Carlo simulations. Along with an actuarially fair spread, a least-squares spread is considered.
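As a rough illustration of Monte Carlo estimation of a fair index spread, the toy calculation below uses independent names with a constant per-period default probability rather than the paper's coupled Markov chains, and omits discounting; all parameter values are assumptions.

```python
import numpy as np

def mc_fair_spread(p_default_per_period, n_names=125, periods=20,
                   recovery=0.4, n_paths=20000, seed=0):
    """Monte Carlo fair spread of a CDS index under a toy default model.

    The fair (actuarial) spread equates the expected default-leg losses
    with the expected premium-leg notional accumulated over the horizon.
    """
    rng = np.random.default_rng(seed)
    alive = np.ones((n_paths, n_names), dtype=bool)   # names not yet defaulted
    loss_leg = np.zeros(n_paths)
    premium_notional = np.zeros(n_paths)
    for _ in range(periods):
        defaults = alive & (rng.random((n_paths, n_names)) < p_default_per_period)
        alive &= ~defaults
        loss_leg += (1.0 - recovery) * defaults.sum(axis=1) / n_names
        premium_notional += alive.sum(axis=1) / n_names  # paid on surviving notional
    return loss_leg.mean() / premium_notional.mean()

# Spread per period (multiply by periods per year for an annualised figure).
print(mc_fair_spread(0.002))
```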
Abstract:
Medium voltage (MV) load diagrams were defined based on a knowledge discovery in databases process. Clustering techniques were used to support agents in the electric power retail markets in obtaining specific knowledge of their customers' consumption habits. Each customer class resulting from the clustering operation is represented by its load diagram. The Two-step clustering algorithm and the WEACS approach based on evidence accumulation (EAC) were applied to electricity consumption data from a utility's customer database in order to form the customer classes and to find a set of representative consumption patterns. The WEACS approach is a clustering ensemble combination approach that uses subsampling and weights the partitions differently in the co-association matrix. As a complementary step to the WEACS approach, all the final data partitions produced by the different variations of the method are combined, and the Ward Link algorithm is used to obtain the final data partition. Experimental results showed that the WEACS approach led to better accuracy than many other clustering approaches. In this paper, the WEACS approach separates the customer population better than the Two-step clustering algorithm.
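The evidence accumulation idea behind WEACS can be sketched in a few lines: many base partitions vote into a co-association matrix, which is then cut hierarchically. The sketch below uses plain k-means base partitions and omits WEACS's subsampling and partition weighting; applying Ward linkage to the co-association-derived distances mirrors the final combination step described above, but is only an approximation of Ward's Euclidean definition. The example feature vectors are random stand-ins for daily load diagrams.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans

def evidence_accumulation(X, n_runs=30, k_range=(2, 10), final_k=4, seed=0):
    """Evidence accumulation clustering (EAC), a minimal sketch:
    combine many k-means partitions into a co-association matrix and cut
    a hierarchical tree built on the induced distances."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    coassoc = np.zeros((n, n))
    for _ in range(n_runs):
        k = int(rng.integers(k_range[0], k_range[1] + 1))
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=int(rng.integers(1 << 30))).fit_predict(X)
        coassoc += (labels[:, None] == labels[None, :])   # co-cluster votes
    coassoc /= n_runs
    dist = 1.0 - coassoc
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="ward")
    return fcluster(Z, t=final_k, criterion="maxclust")

# Illustrative use on random "daily load diagram" feature vectors (60 x 24).
X = np.random.default_rng(1).random((60, 24))
print(evidence_accumulation(X))
```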