225 results for Tests accuracy


Relevance:

20.00%

Publisher:

Abstract:

Currently, finite element analyses are usually performed with commercial software tools. Accuracy of analysis and computational time are two important factors in the efficiency of these tools. This paper studies the parameters affecting the computational time and accuracy of finite element analyses performed in ANSYS and provides guidelines for users of this software when they apply it to the deformation of orthopedic bone plates or similar cases. It is not a fundamental scientific study and only shares the findings of the authors about structural analysis with ANSYS Workbench. It gives readers an idea of how to improve the performance of the software and avoid its traps. The solutions provided in this paper are not the only possible solutions of the problems; in similar cases there are other solutions which are not given here. The parameters of solution method, material model, geometric model, mesh configuration, number of analysis steps, program-controlled parameters and computer settings are discussed thoroughly in this paper.

Relevance:

20.00%

Publisher:

Abstract:

The pull-through/local dimpling failure strength of screwed connections is very important in the design of profiled steel cladding systems to help them resist storms and hurricanes. The current American and European provisions recommend four different test methods for screwed connections in tension, but the accuracy of these methods in determining the connection strength is not known. It is unlikely that the four test methods are equivalent in all cases, and thus it is necessary to reduce the number of methods recommended. This paper presents a review of these test methods based on laboratory tests on crest- and valley-fixed claddings and then recommends alternative test methods that reproduce the real behavior of the connections, including the bending and membrane deformations of the cladding around the screw fasteners and the tension load in the fastener.

Relevance:

20.00%

Publisher:

Abstract:

This study explores the accuracy and valuation implications of the application of a comprehensive list of equity multiples in the takeover context. Motivating the study is the prevalent use of equity multiples in practice, the observed long-run underperformance of acquirers following takeovers, and the scarcity of multiples-based research in the merger and acquisition setting. In exploring the application of equity multiples in this context, three research questions are addressed: (1) how accurate are equity multiples (RQ1); (2) which equity multiples are more accurate in valuing the firm (RQ2); and (3) which equity multiples are associated with greater misvaluation of the firm (RQ3). Following a comprehensive review of the extant multiples-based literature, it is hypothesised that the accuracy of multiples in estimating stock market prices in the takeover context will rank as follows (from best to worst): (1) forecasted earnings multiples, (2) multiples closer to bottom line earnings, (3) multiples based on Net Cash Flow from Operations (NCFO) and trading revenue. The relative inaccuracies in multiples are expected to flow through to equity misvaluation (as measured by the ratio of estimated market capitalisation to residual income value, or P/V). Accordingly, it is hypothesised that greater overvaluation will be exhibited for multiples based on Trading Revenue, NCFO, Book Value (BV) and earnings before interest, tax, depreciation and amortisation (EBITDA) versus multiples based on bottom line earnings; and that multiples based on Intrinsic Value will display the least overvaluation. The hypotheses are tested using a sample of 147 acquirers and 129 targets involved in Australian takeover transactions announced between 1990 and 2005. The results show that, first, the majority of computed multiples examined exhibit valuation errors within 30 percent of stock market values. 
Second, and consistent with expectations, the results provide support for the superiority of multiples based on forecasted earnings in valuing targets and acquirers engaged in takeover transactions. Although a gradual improvement in estimating stock market values is not entirely evident when moving down the Income Statement, historical earnings multiples perform better than multiples based on Trading Revenue or NCFO. Third, while multiples based on forecasted earnings have the highest valuation accuracy, they, along with Trading Revenue multiples for targets, produce the greatest overvaluation for acquirers and targets. Consistent with predictions, greater overvaluation is exhibited for multiples based on Trading Revenue for targets, and on NCFO and EBITDA for both acquirers and targets. Finally, as expected, multiples based on Intrinsic Value (along with BV) are associated with the least overvaluation. Given the widespread usage of valuation multiples in takeover contexts, these findings offer a unique insight into their relative effectiveness. Importantly, the findings add to the growing body of valuation accuracy literature, especially within Australia, and should assist market participants to better understand the relative accuracy and misvaluation consequences of various equity multiples used in takeover documentation and assist them in subsequent investment decision making.
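The core computation behind RQ1 can be sketched as follows, assuming the standard comparable-firms approach: a target's value driver is scaled by the median peer multiple, and accuracy is the estimate's relative deviation from the observed market value. All figures below are hypothetical.

```python
# Minimal sketch of multiple-based valuation and its error; the standard
# comparable-firms approach with hypothetical peer figures.

from statistics import median

def multiple_valuation(target_driver, peer_prices, peer_drivers):
    """Estimate equity value as (median peer multiple) x (target's driver)."""
    multiples = [p / d for p, d in zip(peer_prices, peer_drivers)]
    return median(multiples) * target_driver

def valuation_error(estimate, market_value):
    """Absolute valuation error relative to the observed market value."""
    return abs(estimate - market_value) / market_value

# Hypothetical peers: market capitalisations and forecasted earnings ($m)
peer_caps = [1200.0, 950.0, 1500.0]
peer_earnings = [100.0, 80.0, 120.0]

# Forward P/E-based estimate for a target with forecasted earnings of $90m
estimate = multiple_valuation(90.0, peer_caps, peer_earnings)  # 12.0 x 90 = 1080.0
error = valuation_error(estimate, market_value=1000.0)         # 0.08, i.e. within 30%
```

A valuation error under 0.30, as computed here, corresponds to the "within 30 percent of stock market values" criterion used in the study.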

Relevance:

20.00%

Publisher:

Abstract:

The IEC 61850 family of standards for substation communication systems was released in the early 2000s and includes IEC 61850-8-1 and IEC 61850-9-2, which enable Ethernet to be used for process-level connections between transmission substation switchyards and control rooms. This paper presents an investigation of process bus protection performance, as the in-service behavior of multi-function process buses is largely unknown. An experimental approach was adopted that used a Real Time Digital Simulator and 'live' substation automation devices. The effect of sampling synchronization error and network traffic on transformer differential protection performance was assessed and compared to conventional hard-wired connections. Ethernet was used for all sampled value measurements, circuit breaker tripping, transformer tap-changer position reports and Precision Time Protocol synchronization of sampled value merging unit sampling. Test results showed that the protection relay under investigation operated correctly with process bus network traffic approaching 100% capacity. The protection system was not adversely affected by synchronizing errors significantly larger than the standards permit, suggesting these requirements may be overly conservative. This 'closed loop' approach, using substation automation hardware, validated the operation of protection relays under extreme conditions. Digital connections using a single shared Ethernet network outperformed conventional hard-wired solutions.

Relevance:

20.00%

Publisher:

Abstract:

In many bridges, vertical displacements are one of the most relevant parameters for structural health monitoring in both the short and long terms. Bridge managers around the globe are always looking for a simple way to measure vertical displacements of bridges. However, it is difficult to carry out such measurements. On the other hand, in recent years, with the advancement of fibre-optic technologies, fibre Bragg grating (FBG) sensors have become more commonly used in structural health monitoring due to their outstanding advantages, including multiplexing capability, immunity to electromagnetic interference, and high resolution and accuracy. For these reasons, a methodology for measuring the vertical displacements of bridges using FBG sensors is proposed. The methodology includes two approaches: one based on curvature measurements, while the other utilises inclination measurements from successfully developed FBG tilt sensors. A series of simulation tests of a full-scale bridge was conducted, showing that both approaches can be implemented to measure the vertical displacements of bridges with various support conditions, varying stiffness along the spans and without any prior known loading. A static loading beam test with increasing loads at the mid-span and a beam test with different loading locations were conducted to measure vertical displacements using FBG strain sensors and tilt sensors. The results show that the approaches can successfully measure vertical displacements.
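The curvature approach rests on a simple relation: for small deflections, curvature is the second derivative of the vertical displacement, so deflection follows from integrating the measured curvature profile twice and enforcing the support conditions. A minimal sketch, with an illustrative grid and simply supported ends (not the paper's actual sensor layout):

```python
# Sketch: recover vertical deflection from a discrete curvature profile by
# double numerical integration, assuming kappa ~ v'' and simply supported
# ends (v = 0 at both supports). Grid and curvature values are illustrative.

def cumtrapz(y, x):
    """Cumulative trapezoidal integral, starting at zero."""
    out = [0.0]
    for i in range(1, len(x)):
        out.append(out[-1] + 0.5 * (y[i] + y[i - 1]) * (x[i] - x[i - 1]))
    return out

def deflection_from_curvature(kappa, x):
    """Integrate curvature twice, then enforce v = 0 at both supports."""
    slope = cumtrapz(kappa, x)   # first integration: rotation
    v = cumtrapz(slope, x)       # second integration: deflection
    # Subtract the straight line through the ends so both supports read zero
    c = v[-1] / (x[-1] - x[0])
    return [vi - c * (xi - x[0]) for vi, xi in zip(v, x)]

# Constant curvature over a 2 m span gives the expected parabolic deflection
x = [0.0, 0.5, 1.0, 1.5, 2.0]
d = deflection_from_curvature([1.0] * 5, x)   # mid-span: kappa*L^2/8 below chord
```

For constant curvature the trapezoidal rule is exact, so the discrete result matches the closed-form parabola; real FBG curvature data would be noisier and denser.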

Relevance:

20.00%

Publisher:

Abstract:

Advanced composite materials offer remarkable potential in the upgrade of civil engineering structures. The evolution of CFRP (carbon fibre reinforced polymer) technologies and their versatility for applications in civil constructions require comprehensive and reliable codes of practice. Guidelines are available on the rehabilitation and retrofit of concrete structures with advanced composite materials. However, there is a need to develop appropriate design guidelines for CFRP strengthened steel structures. It is important to understand the bond characteristics between CFRP and steel plates. This paper describes a series of double strap shear tests loaded in tension to investigate the bond between CFRP sheets and steel plates. Both normal modulus (240 GPa) and high modulus (640 GPa) CFRPs were used in the test program. Strain gauges were mounted to capture the strain distribution along the CFRP length. Different failure modes were observed for joints with normal modulus CFRP and those with high modulus CFRP. The strain distribution along the CFRP length is similar for the two cases. A shorter effective bond length was obtained for joints with high modulus CFRP whereas larger ultimate load carrying capacity can be achieved for joints with normal modulus CFRP when the bond length is long enough.

Relevance:

20.00%

Publisher:

Abstract:

This thesis investigated a range of factors underlying the impact of uncorrected refractive errors on laboratory-based tests related to driving. Results showed that refractive blur had a pronounced effect on recognition of briefly presented targets, particularly under low light conditions. Blur, in combination with audio distracters, also slowed a participant's reactions to road hazards in video presentations. This suggests that recognition of suddenly appearing road hazards might be slowed in the presence of refractive blur, particularly under conditions of distraction. These findings highlight the importance of correcting even small refractive errors for driving, particularly at night.

Relevance:

20.00%

Publisher:

Abstract:

iTRAQ (isobaric tags for relative or absolute quantitation) is a mass spectrometry technology that allows quantitative comparison of protein abundance by measuring peak intensities of reporter ions released from iTRAQ-tagged peptides by fragmentation during MS/MS. However, current data analysis techniques for iTRAQ struggle to report reliable relative protein abundance estimates and suffer from problems of precision and accuracy. The precision of the data is affected by variance heterogeneity: low signal data have higher relative variability, yet low abundance peptides dominate data sets. Accuracy is compromised as ratios are compressed toward 1, leading to underestimation of the ratio. This study investigated both issues and proposed a methodology that combines the peptide measurements to give a robust protein estimate even when the data for the protein are sparse or at low intensity. Our data indicated that ratio compression arises from contamination during precursor ion selection, which occurs at a consistent proportion within an experiment and thus results in a linear relationship between expected and observed ratios. We proposed that a correction factor can be calculated from spiked proteins at known ratios. We then demonstrated that variance heterogeneity is present in iTRAQ data sets irrespective of the analytical packages, LC-MS/MS instrumentation, and iTRAQ labeling kit (4-plex or 8-plex) used. We proposed using an additive-multiplicative error model for peak intensities in MS/MS quantitation and demonstrated that a variance-stabilizing normalization is able to address the error structure and stabilize the variance across the entire intensity range. The resulting uniform variance structure simplifies the downstream analysis. 
Heterogeneity of variance consistent with an additive-multiplicative model has been reported in other MS-based quantitation including fields outside of proteomics; consequently the variance-stabilizing normalization methodology has the potential to increase the capabilities of MS in quantitation across diverse areas of biology and chemistry.
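The proposed ratio-compression correction can be sketched as follows, assuming (as the study argues) that contamination acts at a consistent proportion, making observed log-ratios a linear function of expected ones. The spiked-in values here are hypothetical:

```python
# Sketch: fit a line to spiked-in proteins of known ratio, then invert it to
# de-compress measured log-ratios. The ~30% compression below is hypothetical.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def decompress(observed_logratio, slope, intercept):
    """Invert the fitted linear compression model."""
    return (observed_logratio - intercept) / slope

# Hypothetical spiked proteins: expected vs observed log2 ratios
expected = [-2.0, -1.0, 0.0, 1.0, 2.0]
observed = [-1.4, -0.7, 0.0, 0.7, 1.4]   # compressed toward 0

slope, intercept = fit_line(expected, observed)
corrected = decompress(0.7, slope, intercept)   # recovers the true log2 ratio
```

Because the compression is assumed linear and experiment-wide, the single fitted slope and intercept apply to every measured protein in that run.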

Relevance:

20.00%

Publisher:

Abstract:

Aim: To explore weight status perception and its relation to actual weight status in a contemporary cohort of 5- to 17-year-old children and adolescents. Methods: Body mass index (BMI), derived from height and weight measurements, and perception of weight status (‘too thin’, ‘about right’ and ‘too fat’) were evaluated in 3043 participants from the Healthy Kids Queensland Survey. In children less than 12 years of age, weight status perception was obtained from the parents, whereas the adolescents self-reported their perceived weight status. Results: Compared with measured weight status by established BMI cut-offs, just over 20% of parents underestimated their child's weight status and only 1% overestimated. Adolescent boys were more likely to underestimate their weight status compared with girls (26.4% vs. 10.2%, P < 0.05) whereas adolescent girls were more likely to overestimate than underestimate (11.8% vs. 3.4%, P < 0.05). Underestimation was greater by parents of overweight children compared with those of obese children, but still less than 50% of parents identified their obese child as ‘too fat’. There was greater recognition of overweight status in the adolescents, with 83% of those who were obese reporting they were ‘too fat’. Conclusion: Whilst there was a high degree of accuracy of weight status perception in those of healthy weight, there was considerable underestimation of weight status, particularly by parents of children who were overweight or obese. Strategies are required that enable parents to identify what a healthy weight looks like and help them understand when intervention is needed to prevent further weight gain as the child gets older.
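The comparison of perceived against measured weight status can be illustrated with a small sketch; note the BMI cut-offs below are simplified adult-style thresholds, not the age- and sex-specific cut-offs the survey used, and the sample records are hypothetical:

```python
# Sketch: classify measured BMI into the survey's three perception categories
# and tally under-, correct and over-estimation. Cut-offs are simplified
# adult-style thresholds; records are hypothetical.

def measured_status(bmi, thin_cutoff=18.5, fat_cutoff=25.0):
    """Classify BMI into 'too thin' / 'about right' / 'too fat'."""
    if bmi < thin_cutoff:
        return "too thin"
    if bmi >= fat_cutoff:
        return "too fat"
    return "about right"

def perception_agreement(records):
    """Tally whether perceived status falls below, matches, or exceeds measured."""
    order = {"too thin": 0, "about right": 1, "too fat": 2}
    tally = {"under": 0, "correct": 0, "over": 0}
    for bmi, perceived in records:
        diff = order[perceived] - order[measured_status(bmi)]
        key = "under" if diff < 0 else "over" if diff > 0 else "correct"
        tally[key] += 1
    return tally

# Hypothetical responses: (measured BMI, perceived status)
sample = [(17.0, "about right"), (22.0, "about right"),
          (27.0, "about right"), (31.0, "too fat")]
result = perception_agreement(sample)   # 1 under, 2 correct, 1 over
```

The third record mirrors the paper's key finding: an overweight child perceived as "about right" counts as underestimation.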

Relevance:

20.00%

Publisher:

Abstract:

Currently, the GNSS computing modes are of two classes: network-based data processing and user receiver-based processing. A GNSS reference receiver station essentially contributes raw measurement data in either the RINEX file format or as real-time data streams in the RTCM format. Very little computation is carried out by the reference station. The existing network-based processing modes, regardless of whether they are executed in real-time or post-processed modes, are centralised or sequential. This paper describes a distributed GNSS computing framework that incorporates three GNSS modes: reference station-based, user receiver-based and network-based data processing. Raw data streams from each GNSS reference receiver station are processed in a distributed manner, i.e., either at the station itself or at a hosting data server/processor, to generate station-based solutions, or reference receiver-specific parameters. These may include precise receiver clock, zenith tropospheric delay, differential code biases, ambiguity parameters, ionospheric delays, as well as line-of-sight information such as azimuth and elevation angles. Covariance information for estimated parameters may also be optionally provided. In such a mode the nearby precise point positioning (PPP) or real-time kinematic (RTK) users can directly use the corrections from all or some of the stations for real-time precise positioning via a data server. At the user receiver, PPP and RTK techniques are unified under the same observation models, and the distinction is how the user receiver software deals with corrections from the reference station solutions and the ambiguity estimation in the observation equations. Numerical tests demonstrate good convergence behaviour for differential code bias and ambiguity estimates derived individually with single reference stations. 
With station-based solutions from three reference stations within distances of 22–103 km the user receiver positioning results, with various schemes, show an accuracy improvement of the proposed station-augmented PPP and ambiguity-fixed PPP solutions with respect to the standard float PPP solutions without station augmentation and ambiguity resolutions. Overall, the proposed reference station-based GNSS computing mode can support PPP and RTK positioning services as a simpler alternative to the existing network-based RTK or regionally augmented PPP systems.
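The per-station record such a distributed framework might publish can be sketched as a simple data structure; the field names and units here are illustrative, not taken from any standard message format:

```python
# Sketch of a station-based solution record a reference station could publish
# for PPP/RTK users. Field names, units and the example values are
# illustrative assumptions, not a standard format.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class StationSolution:
    station_id: str
    epoch: float                        # e.g. GPS seconds of week
    receiver_clock_m: float             # precise receiver clock (metres)
    zenith_trop_delay_m: float          # zenith tropospheric delay (metres)
    code_biases_m: Dict[str, float]     # differential code bias per signal
    ambiguities: Dict[str, float]       # per-satellite ambiguity estimates
    iono_delays_m: Dict[str, float]     # slant ionospheric delay per satellite
    azimuth_deg: Dict[str, float] = field(default_factory=dict)
    elevation_deg: Dict[str, float] = field(default_factory=dict)
    covariance: Optional[List[List[float]]] = None  # optional, per the paper

def usable_satellites(sol: StationSolution, min_elevation_deg: float = 10.0):
    """Satellites whose line-of-sight clears a simple elevation mask."""
    return sorted(sv for sv, el in sol.elevation_deg.items()
                  if el >= min_elevation_deg)

sol = StationSolution("REF1", 345600.0, 1.2, 2.31,
                      code_biases_m={"C1C": 0.4},
                      ambiguities={"G05": 3.0},
                      iono_delays_m={"G05": 1.1},
                      elevation_deg={"G05": 35.0, "G12": 5.0})
```

A nearby PPP or RTK user would pull records like this from all or some stations via the data server and feed them into its own observation equations.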

Relevance:

20.00%

Publisher:

Abstract:

Courts set guidelines for when genetic testing would be ordered - medical testing - life insurers - use of test results - confidentiality.

Relevance:

20.00%

Publisher:

Abstract:

An Artificial Neural Network (ANN) is a computational modeling tool which has found extensive acceptance in many disciplines for modeling complex real world problems. An ANN can model problems through learning by example, rather than by fully understanding the detailed characteristics and physics of the system. In the present study, the accuracy and predictive power of an ANN were evaluated in predicting the kinematic viscosity of biodiesels over the wide range of temperatures typically encountered in diesel engine operation. In this model, temperature and chemical composition of the biodiesel were used as input variables. In order to obtain the necessary data for model development, the chemical composition and temperature-dependent fuel properties of ten different types of biodiesel were measured experimentally using laboratory standard testing equipment following internationally recognized testing procedures. The Neural Networks Toolbox of MATLAB R2012a was used to train, validate and simulate the ANN model on a personal computer. The network architecture was optimised by trial and error to obtain the best prediction of the kinematic viscosity. The predictive performance of the model was determined by calculating the absolute fraction of variance (R2), root mean squared (RMS) error and maximum average error percentage (MAEP) between predicted and experimental results. This study found that the ANN is highly accurate in predicting the viscosity of biodiesel and demonstrates the ability of the ANN model to find a meaningful relationship between biodiesel chemical composition and fuel properties at different temperature levels. Therefore the model developed in this study can be a useful tool for accurately predicting biodiesel fuel properties instead of undertaking costly and time consuming experimental tests.
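The evaluation metrics named above can be made concrete with a short sketch; MAEP is read here as a mean absolute error percentage (one plausible interpretation), and the viscosity values are hypothetical:

```python
# Sketch of the goodness-of-fit measures used to judge the ANN: R^2, RMS error
# and an absolute error percentage, computed between predicted and measured
# kinematic viscosities. Sample values are hypothetical.

from math import sqrt

def r_squared(pred, meas):
    """Coefficient of determination between predictions and measurements."""
    mean = sum(meas) / len(meas)
    ss_res = sum((m - p) ** 2 for p, m in zip(pred, meas))
    ss_tot = sum((m - mean) ** 2 for m in meas)
    return 1.0 - ss_res / ss_tot

def rms_error(pred, meas):
    """Root mean squared prediction error."""
    return sqrt(sum((m - p) ** 2 for p, m in zip(pred, meas)) / len(meas))

def abs_error_pct(pred, meas):
    """Mean absolute prediction error as a percentage of the measured value."""
    return 100.0 * sum(abs(m - p) / m for p, m in zip(pred, meas)) / len(meas)

# Hypothetical kinematic viscosities (mm^2/s) at four temperatures
measured  = [6.2, 4.8, 3.9, 3.1]
predicted = [6.0, 4.9, 3.8, 3.2]

r2 = r_squared(predicted, measured)
rms = rms_error(predicted, measured)
maep = abs_error_pct(predicted, measured)
```

An R2 near 1 with small RMS and percentage errors, as here, is the pattern the study reports for its trained network.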