155 results for Experimental Analysis


Relevance: 30.00%

Publisher:

Abstract:

There is a notable shortage of empirical research directed at measuring the magnitude and direction of stress effects on performance in a controlled environment. One reason for this is the inherent difficulty in identifying and isolating direct performance measures for individuals. Additionally, most traditional work environments contain a multitude of exogenous factors impacting individual performance, and controlling for all such factors is generally unfeasible (omitted variable bias). Moreover, instead of asking individuals about their self-reported stress levels, we observe workers' behavior in situations that can be classified as stressful. For this reason we have stepped outside the traditional workplace, using the sports environment as our experimental space to gain greater control over these factors. We empirically investigate the relationship between stress and performance in an extreme pressure situation (football penalty kicks) in a winner-take-all sporting environment (the FIFA World Cup and UEFA European Cup competitions). Specifically, we examine all penalty shootouts between 1976 and 2008, covering 16 events in total. The results indicate that extreme stressors can have a positive or negative impact on individuals' performance, whereas more commonly experienced stressors do not affect professionals' performance.

Relevance: 30.00%

Publisher:

Abstract:

Today's evolving networks experience a large number of different attacks, ranging from system break-ins and infection by automatic attack tools such as worms, viruses and Trojan horses to denial of service (DoS). One important aspect of such attacks is that they are often indiscriminate and target Internet addresses without regard to whether they are bona fide allocated or not. Because no host services are advertised on unused IP addresses, the traffic observed there is by definition unsolicited and likely to be either opportunistic or malicious. The analysis of large repositories of such traffic can be used to extract useful information about both ongoing and new attack patterns and to unearth unusual attack behaviors. However, such analysis is difficult due to the size and nature of the traffic collected on unused address spaces. In this dissertation, we present a network traffic analysis technique that uses traffic collected from unused address spaces and relies on the statistical properties of the collected traffic to accurately and quickly detect new and ongoing network anomalies. Detection is based on the observation that anomalous activity usually transforms the network parameters in such a way that their statistical properties no longer remain constant, resulting in abrupt changes. We use sequential analysis techniques to identify changes in the behavior of network traffic targeting unused address spaces and thereby unveil both ongoing and new attack patterns. Specifically, we have developed a dynamic sliding-window, non-parametric cumulative sum (CUSUM) change detection technique for identifying changes in network traffic. Furthermore, we have introduced dynamic thresholds both to detect changes in network traffic behavior and to detect when a particular change has ended.
Experimental results are presented that demonstrate the operational effectiveness and efficiency of the proposed approach, using both synthetically generated datasets and real network traces collected from a dedicated block of unused IP addresses.
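The abstract does not give the dissertation's exact formulation, but the core mechanism (a non-parametric CUSUM whose reference statistics and decision threshold are re-estimated over a sliding window) can be sketched as follows; the window size, drift allowance `k`, and threshold factor `h_factor` are illustrative assumptions, not the dissertation's values.

```python
import numpy as np

def cusum_detect(x, window=50, k=0.5, h_factor=8.0):
    """Non-parametric CUSUM over a sliding window.

    Reference statistics (mean, std) are re-estimated from the most recent
    `window` samples, and the decision threshold is a multiple of the
    windowed std -- a simplified stand-in for a dynamic-threshold scheme.
    Returns the index of the first alarm, or -1 if no change is detected.
    """
    s = 0.0
    for t in range(window, len(x)):
        ref = x[t - window:t]
        mu, sigma = ref.mean(), ref.std() + 1e-9
        z = (x[t] - mu) / sigma          # standardised deviation
        s = max(0.0, s + z - k)          # accumulate, allowing drift k
        if s > h_factor:                 # threshold exceeded -> alarm
            return t
    return -1

# Synthetic traffic rate: stable background, then an abrupt surge
rng = np.random.default_rng(0)
traffic = np.concatenate([rng.normal(100, 5, 300), rng.normal(160, 5, 100)])
alarm = cusum_detect(traffic)
```

With the surge starting at sample 300, the statistic accumulates almost immediately, so the alarm fires within a few samples of the change.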

Relevance: 30.00%

Publisher:

Abstract:

Background: In health-related research, it is critical not only to demonstrate the efficacy of an intervention but also to show that the effect is not due to chance or confounding variables. Content: Single-case experimental design is a useful quasi-experimental design and method for achieving these goals when participants and research funds are limited. This type of design has various advantages over group experimental designs, one of which is the capacity to focus on individual rather than group performance outcomes. Conclusions: This comprehensive review demonstrates the benefits and limitations of single-case experimental design, its various design methods, and its approaches to data collection and analysis for research purposes.

Relevance: 30.00%

Publisher:

Abstract:

During the past decade, a significant amount of research has been conducted internationally with the aim of developing, implementing, and verifying "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures. Application of these methods permits comprehensive assessment of the actual failure modes and ultimate strengths of structural systems in practical design situations, without resort to simplified elastic methods of analysis and semi-empirical specification equations. Advanced analysis has the potential to extend the creativity of structural engineers and simplify the design process, while ensuring greater economy and more uniform safety with respect to the ultimate limit state. The application of advanced analysis methods has previously been restricted to steel frames comprising only members with compact cross-sections that are not subject to the effects of local buckling. This precluded the use of advanced analysis from the design of steel frames comprising a significant proportion of the most commonly used Australian sections, which are non-compact and subject to the effects of local buckling. This thesis contains a detailed description of research conducted over the past three years in an attempt to extend the scope of advanced analysis by developing methods that include the effects of local buckling in a non-linear analysis formulation, suitable for practical design of steel frames comprising non-compact sections. Two alternative concentrated plasticity formulations are presented in this thesis: the refined plastic hinge method and the pseudo plastic zone method. Both methods implicitly account for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. 
The accuracy and precision of the methods for the analysis of steel frames comprising non-compact sections have been established by comparison with a comprehensive range of analytical benchmark frame solutions. Both the refined plastic hinge and pseudo plastic zone methods are more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations. For example, the pseudo plastic zone method predicts the ultimate strength of the analytical benchmark frames with an average conservative error of less than one percent, and has an acceptable maximum unconservative error of less than five percent. The pseudo plastic zone model can allow the design capacity to be increased by up to 30 percent for simple frames, mainly due to the consideration of inelastic redistribution. The benefits may be even more significant for complex frames with significant redundancy, which provides greater scope for inelastic redistribution. The analytical benchmark frame solutions were obtained using a distributed plasticity shell finite element model. A detailed description of this model and the results of all 120 benchmark analyses are provided. The model explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. Its accuracy was verified by comparison with a variety of analytical solutions and the results of three large-scale experimental tests of steel frames comprising non-compact sections. A description of the experimental method and test results is also provided.
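As a plain reading of the error metrics quoted above (not the thesis' exact definitions), a conservative error means the predicted strength falls below the benchmark (safe side) and an unconservative error means it falls above; the strength values below are hypothetical:

```python
# Hypothetical predicted vs. benchmark ultimate strengths (kN)
predicted = [980.0, 1015.0, 1190.0, 755.0]
benchmark = [1000.0, 1000.0, 1200.0, 750.0]

# Signed percentage errors relative to the benchmark solutions
errors = [(p - b) / b * 100.0 for p, b in zip(predicted, benchmark)]
conservative = [-e for e in errors if e < 0]    # prediction below benchmark
unconservative = [e for e in errors if e > 0]   # prediction above benchmark

avg_conservative_error = sum(conservative) / len(conservative)
max_unconservative_error = max(unconservative)
```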

Relevance: 30.00%

Publisher:

Abstract:

Stereo vision is a method of depth perception in which depth information is inferred from two (or more) images of a scene taken from different perspectives. Practical applications for stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem: locating corresponding points or features in two images. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census, were also compared. Both the rank and the census transforms were found to improve the reliability of matching in the presence of radiometric distortion, which is significant since radiometric distortion commonly arises in practice. In addition, they have low computational complexity, making them amenable to fast hardware implementation. Therefore, matching algorithms using these transforms became the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation process yielded a constraint which must be satisfied for a correct match. This was termed the rank constraint.
The theoretical derivation of this constraint is in contrast to existing matching constraints, which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint has been proposed. This algorithm was tested using a number of stereo pairs, and in all cases the modified algorithm consistently resulted in an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas: the use of an image pyramid for match prediction, and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results obtained from the new algorithm showed that it is able to remove a large proportion of invalid matches and improve match accuracy.
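To illustrate why the rank and census transforms resist radiometric distortion, here is a minimal sketch (the 3x3 window, toy image, and gain/offset values are illustrative): both transforms depend only on the local ordering of intensities, so a monotonic intensity change leaves them unchanged.

```python
import numpy as np

def rank_transform(img, r=1):
    """Rank transform: each pixel becomes the count of pixels in its
    (2r+1)x(2r+1) window that are strictly less than the centre pixel."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.sum(win < img[y, x])
    return out

def census_transform(img, r=1):
    """Census transform: a bit string encoding which neighbours are less
    than the centre pixel, packed into an integer per pixel."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int64)
    for y in range(r, h - r):
        for x in range(r, w - r):
            bits = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    bits = (bits << 1) | int(img[y + dy, x + dx] < img[y, x])
            out[y, x] = bits
    return out

# A gain/offset (radiometric) change preserves intensity ordering,
# so both transforms are identical for the original and distorted image.
img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]], dtype=np.float64)
distorted = 2.0 * img + 15.0
```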

Relevance: 30.00%

Publisher:

Abstract:

Identifying crash "hotspots", "blackspots", "sites with promise", or "high risk" locations is standard practice in departments of transportation throughout the US. The literature is replete with the development and discussion of statistical methods for hotspot identification (HSID). Theoretical derivations and empirical studies have been used to weigh the benefits of various HSID methods; however, only a small number of studies have used controlled experiments to systematically assess them. Using experimentally derived simulated data, which are argued to be superior to empirical data for this purpose, three hotspot identification methods observed in practice are evaluated: simple ranking, confidence interval, and Empirical Bayes. With simulated data, sites with promise are known a priori, in contrast to empirical data, where high risk sites are not known for certain. To conduct the evaluation, properties of observed crash data are used to generate simulated crash frequency distributions at hypothetical sites. A variety of factors are manipulated to simulate a host of 'real world' conditions. Various levels of confidence are explored, and false positives (identifying a safe site as high risk) and false negatives (identifying a high risk site as safe) are compared across methods. Finally, the effects of crash history duration on the three HSID approaches are assessed. The results illustrate that the Empirical Bayes technique significantly outperforms the ranking and confidence interval techniques (with certain caveats). As found by others, false positives and negatives are inversely related. Three years of crash history appears, in general, to provide an appropriate duration.
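The Empirical Bayes idea can be sketched as follows: observed counts are shrunk toward an expected value (e.g. from a safety performance function), so a low-volume site with a fluke spike is demoted relative to a genuinely high-exposure site. The site values and the overdispersion parameter are hypothetical; the weight form is the standard negative-binomial result.

```python
import numpy as np

def empirical_bayes(observed, expected, overdispersion):
    """Shrink observed counts toward the expected value; the weight
    w = 1/(1 + expected/phi) is the standard negative-binomial form."""
    w = 1.0 / (1.0 + expected / overdispersion)
    return w * expected + (1.0 - w) * observed

def top_site(scores):
    """Simple ranking: index of the highest-scoring site."""
    return int(np.argmax(scores))

# Two hypothetical sites: site 0 has low expected crashes but a fluke
# spike in observed counts; site 1 is a genuinely high-exposure site.
expected = np.array([1.0, 8.0])
observed = np.array([7.0, 6.0])
eb = empirical_bayes(observed, expected, overdispersion=1.0)

ranked_hotspot = top_site(observed)   # simple ranking flags the fluke
eb_hotspot = top_site(eb)             # EB flags the high-exposure site
```

This divergence is exactly the false-positive behaviour the study measures: simple ranking chases random spikes, while EB discounts them.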

Relevance: 30.00%

Publisher:

Abstract:

This paper presents an extended study on the implementation of support vector machine (SVM) based speaker verification in systems that employ continuous progressive model adaptation using the weight-based factor analysis model. The weight-based factor analysis model compensates for session variations in unsupervised scenarios by incorporating trial confidence measures into the general statistics used in the inter-session variability modelling process. Employing weight-based factor analysis in Gaussian mixture models (GMMs) was recently found to provide significant performance gains in unsupervised classification, and further improvements were found through the integration of SVM-based classification by means of GMM supervectors. This study focuses particularly on the way in which a client is represented in the SVM kernel space using single and multiple target supervectors. Experimental results indicate that training client SVMs using a single target supervector maximises performance while exhibiting a certain robustness to the inclusion of impostor training data in the model. Furthermore, the inclusion of low-scoring target trials in the adaptation process is investigated and found to significantly aid performance.
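The GMM-supervector representation referenced above can be sketched as follows; this is a simplified illustration (hard frame-to-component assignment instead of posterior-weighted soft counts, and cosine scoring instead of a trained SVM), with all values hypothetical.

```python
import numpy as np

def gmm_supervector(frames, ubm_means, relevance=16.0):
    """MAP-adapt UBM component means toward the data and concatenate
    them into a supervector (hard assignment for brevity; real systems
    use posterior-weighted soft counts)."""
    sv = ubm_means.copy()
    # assign each frame to its nearest UBM component
    d = ((frames[:, None, :] - ubm_means[None, :, :]) ** 2).sum(axis=2)
    comp = d.argmin(axis=1)
    for k in range(len(ubm_means)):
        sel = frames[comp == k]
        if len(sel):
            alpha = len(sel) / (len(sel) + relevance)
            sv[k] = alpha * sel.mean(axis=0) + (1 - alpha) * ubm_means[k]
    return sv.ravel()

def linear_score(a, b):
    """Linear-kernel (cosine) score between UBM-centred supervectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
ubm_means = np.array([[0.0, 0.0], [5.0, 5.0]])
ubm_sv = ubm_means.ravel()
# Target client, a same-speaker trial, and an impostor trial (toy data)
target   = gmm_supervector(rng.normal([1, 1], 0.1, (20, 2)), ubm_means) - ubm_sv
same_spk = gmm_supervector(rng.normal([1, 1], 0.1, (20, 2)), ubm_means) - ubm_sv
impostor = gmm_supervector(rng.normal([6, 6], 0.1, (20, 2)), ubm_means) - ubm_sv

score_same = linear_score(target, same_spk)
score_imp = linear_score(target, impostor)
```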

Relevance: 30.00%

Publisher:

Abstract:

One of the main causes of above-knee or transfemoral amputation (TFA) in the developed world is trauma to the limb. The number of people undergoing TFA due to limb trauma, particularly war injuries, has been increasing. Typically, trauma amputees, including war-related amputees, are otherwise healthy and active and desire to return to employment and their usual lifestyle. Consequently there is a growing need to restore long-term mobility and limb function to this population. Traditionally, transfemoral amputees are provided with an artificial or prosthetic leg consisting of a fabricated socket, a knee joint mechanism and a prosthetic foot. Amputees have reported several problems related to the socket of their prosthetic limb, including pain in the residual limb, poor socket fit, discomfort and poor mobility. Removing the socket from the prosthetic limb could eliminate or reduce these problems. One solution is the direct attachment of the prosthesis to the residual bone (femur) inside the residual limb, a technique that has been used on a small population of transfemoral amputees since 1990. A threaded titanium implant is screwed into the shaft of the femur, and a second component connects the implant to the prosthesis. A period of time is required for the implant to become fully attached to the bone, a process called osseointegration (OI), before it can withstand applied load; the prosthesis can then be attached. The advantages of transfemoral osseointegration (TFOI) over conventional prosthetic sockets include better hip mobility, sitting comfort and prosthetic retention, and fewer skin problems on the residual limb. However, because of the time required for OI to progress and for the rehabilitation exercises to be completed, it can take up to twelve months after implant insertion before an amputee can bear load and walk unaided.
The long rehabilitation time is a significant disadvantage of TFOI and may be impeding wider adoption of the technique. There is a need for a non-invasive method of assessing the degree of osseointegration between the bone and the implant. If such a method could determine the progression of TFOI and assess when the implant was able to withstand physiological load, it could reduce the overall rehabilitation time. Vibration analysis has been suggested as a potential technique: it is a non-destructive method of assessing the dynamic properties of a structure, and changes in the physical properties of a structure can be identified from changes in its dynamic properties. Consequently vibration analysis, both experimental and computational, has been used to assess bone fracture healing, prosthetic hip loosening and dental implant OI with varying degrees of success. More recently, experimental vibration analysis has been used in TFOI, but further work is needed to assess the potential of the technique and to fully characterise the femur-implant system. The overall aim of this study was to develop physical and computational models of the TFOI femur-implant system and to use these models to investigate the feasibility of vibration analysis for detecting the process of OI. Femur-implant physical models were developed and manufactured from synthetic materials to represent four key stages of OI development (identified from a physiological model), simulated using different interface conditions between the implant and the femur. Experimental vibration analysis (modal analysis) was then conducted using the physical models: the models representing stages one to four of OI development were excited and the modal parameters obtained over the range 0-5 kHz. The results indicated that the technique had limited capability in distinguishing between different interface conditions; in particular, the fundamental bending mode did not alter with interfacial changes.
However higher modes were able to track chronological changes in interface condition by the change in natural frequency, although no one modal parameter could uniquely distinguish between each interface condition. The importance of the model boundary condition (how the model is constrained) was the key finding; variations in the boundary condition altered the modal parameters obtained. Therefore the boundary conditions need to be held constant between tests in order for the detected modal parameter changes to be attributed to interface condition changes. A three dimensional Finite Element (FE) model of the femur-implant model was then developed and used to explore the sensitivity of the modal parameters to more subtle interfacial and boundary condition changes. The FE model was created using the synthetic femur geometry and an approximation of the implant geometry. The natural frequencies of the FE model were found to match the experimental frequencies within 20% and the FE and experimental mode shapes were similar. Therefore the FE model was shown to successfully capture the dynamic response of the physical system. As was found with the experimental modal analysis, the fundamental bending mode of the FE model did not alter due to changes in interface elastic modulus. Axial and torsional modes were identified by the FE model that were not detected experimentally; the torsional mode exhibited the largest frequency change due to interfacial changes (103% between the lower and upper limits of the interface modulus range). Therefore the FE model provided additional information on the dynamic response of the system and was complementary to the experimental model. The small changes in natural frequency over a large range of interface region elastic moduli indicated the method may only be able to distinguish between early and late OI progression. 
The boundary conditions applied to the FE model influenced the modal parameters to a far greater extent than the interface condition variations. Therefore the FE model, as well as the experimental modal analysis, indicated that the boundary conditions need to be held constant between tests in order for the detected changes in modal parameters to be attributed to interface condition changes alone. The results of this study suggest that in a clinical setting it is unlikely that the in vivo boundary conditions of the amputated femur could be adequately controlled or replicated over time and consequently it is unlikely that any longitudinal change in frequency detected by the modal analysis technique could be attributed exclusively to changes at the femur-implant interface. Therefore further development of the modal analysis technique would require significant consideration of the clinical boundary conditions and investigation of modes other than the bending modes.
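The qualitative findings above (fundamental mode insensitive to the interface, higher modes sensitive) can be reproduced with a deliberately crude two-degree-of-freedom lumped model; all masses and stiffnesses below are illustrative, not physiological.

```python
import numpy as np

def natural_frequencies(k_interface):
    """2-DOF sketch: a femur segment (m1) on a soft support spring k0
    (the boundary condition) coupled to the implant (m2) through the
    interface stiffness. Returns the two natural frequencies in Hz."""
    m1, m2 = 0.4, 0.1            # kg, illustrative
    k0 = 1.0e5                   # N/m, support (boundary) stiffness
    K = np.array([[k0 + k_interface, -k_interface],
                  [-k_interface,      k_interface]])
    M = np.diag([m1, m2])
    evals = np.linalg.eigvals(np.linalg.inv(M) @ K)
    return np.sqrt(np.sort(evals.real)) / (2 * np.pi)

f_early = natural_frequencies(1.0e6)   # poorly integrated interface
f_late  = natural_frequencies(1.0e8)   # well integrated interface
```

In this toy model the fundamental frequency is set almost entirely by the boundary spring and barely moves with interface stiffness, while the higher mode shifts by an order of magnitude, mirroring the experimental and FE observations.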

Relevance: 30.00%

Publisher:

Abstract:

Modelling of water flow and the associated deformation in unsaturated reactive (shrinking/swelling) soils is important in many applications. This paper presents a method to capture soil swelling deformation during water infiltration using Particle Image Velocimetry (PIV). The model soil material used is a commercially available bentonite. A swelling chamber was set up to determine the water content profile and the extent of soil swelling. The test was run for 61 days, during which the soil swelled, on average across its width, by about 26% of the height of the soil column. PIV analysis was able to determine the amount of swelling that occurred over the entire observed face of the soil box. The swelling was most apparent in the top layers, with strains in most cases over 100%.
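A basic PIV displacement estimate works by cross-correlating interrogation windows between successive images; a minimal FFT-based sketch (window size and synthetic texture are illustrative) follows.

```python
import numpy as np

def piv_displacement(window_a, window_b):
    """Estimate the integer-pixel displacement between two interrogation
    windows via FFT-based circular cross-correlation, as in basic PIV."""
    fa = np.fft.fft2(window_a - window_a.mean())
    fb = np.fft.fft2(window_b - window_b.mean())
    corr = np.fft.ifft2(fa.conj() * fb).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # wrap indices above Nyquist back to negative shifts
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)   # (dy, dx), positive = motion down/right

# Synthetic texture shifted upward by 3 px, mimicking upward soil swelling
rng = np.random.default_rng(1)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, -3, axis=0)
dy, dx = piv_displacement(frame1, frame2)
```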

Relevance: 30.00%

Publisher:

Abstract:

Acoustic emission (AE) is the phenomenon in which high frequency stress waves are generated by the rapid release of energy within a material from sources such as crack initiation or growth. The AE technique involves recording these stress waves by means of sensors placed on the surface and subsequently analysing the recorded signals to gather information such as the nature and location of the source. It is one of several diagnostic techniques currently used for structural health monitoring (SHM) of civil infrastructure such as bridges. Its advantages include the ability to provide continuous in-situ monitoring and high sensitivity to crack activity, but several challenges still exist. Due to the high sampling rate required for data capture, a large amount of data is generated during AE testing. This is further complicated by a number of spurious sources that can produce AE signals which mask the desired signals. Hence, an effective data analysis strategy is needed to achieve source discrimination; this is also important for long term monitoring applications, to avoid massive data overload. Analysis of the frequency content of recorded AE signals together with the use of pattern recognition algorithms are among the advanced and promising data analysis approaches for source discrimination. This paper explores the use of various signal processing tools for the analysis of experimental data, with the overall aim of finding an improved method for source identification and discrimination, with particular focus on the monitoring of steel bridges.
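As a minimal example of using frequency content for source discrimination, a spectral-centroid feature separates a synthetic high-frequency, crack-like burst from a low-frequency, rubbing-like one; the sampling rate, burst frequencies, and envelope are illustrative assumptions, not values from the paper.

```python
import numpy as np

def spectral_centroid(signal, fs):
    """Frequency-domain feature for AE source discrimination: the
    amplitude-weighted mean frequency of the signal spectrum (Hz)."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float((freqs * spec).sum() / (spec.sum() + 1e-12))

# Two synthetic AE-like bursts with the same decaying envelope
fs = 1_000_000                          # 1 MHz sampling
t = np.arange(2048) / fs
envelope = np.exp(-t * 2e4)
crack = envelope * np.sin(2 * np.pi * 300_000 * t)   # crack-like burst
rub = envelope * np.sin(2 * np.pi * 60_000 * t)      # rubbing-like burst

sc_crack = spectral_centroid(crack, fs)
sc_rub = spectral_centroid(rub, fs)
```

In a real pipeline this feature (with others, e.g. rise time or energy) would feed a pattern recognition algorithm; here a single threshold already separates the two classes.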

Relevance: 30.00%

Publisher:

Abstract:

In this paper, two ideal formation models of serrated chips, the symmetric formation model and the unilateral right-angle formation model, are established for the first time. Based on these ideal models and the related adiabatic shear theory of serrated chip formation, the theoretical relationships among average tooth pitch, average tooth height and chip thickness are obtained. Further, the theoretical relation between the passivation coefficient of the chip sawtooth and the chip thickness compression ratio is deduced. The comparison between the theoretical prediction curves and experimental data shows good agreement, validating the robustness of the ideal chip formation models and the correctness of the theoretical derivation. The proposed ideal models provide a simple but effective theoretical basis for subsequent research on serrated chip morphology. Finally, the influences of the principal cutting factors on serrated chip formation are discussed on the basis of a series of finite element simulation results, yielding practical advice for controlling serrated chips in engineering applications.

Relevance: 30.00%

Publisher:

Abstract:

A computational fluid dynamics (CFD) analysis has been performed for a flat plate photocatalytic reactor using the CFD code FLUENT. Under the simulated conditions (Reynolds number, Re, around 2650), a detailed time-accurate computation shows the different stages of flow evolution and the effects of the finite length of the reactor in creating flow instability, which is important for improving the performance of the reactor for storm water and wastewater reuse. The efficiency of a photocatalytic reactor for pollutant decontamination depends on the reactor hydrodynamics and configuration. This study aims to investigate the role of different parameters in optimizing the reactor design for improved performance. In this regard, further modelling and experimental efforts are ongoing to better understand the interplay of the parameters that influence the performance of the flat plate photocatalytic reactor.
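For reference, the Reynolds number quoted above is Re = UL/ν; the velocity, length scale, and viscosity below are illustrative values chosen to reproduce Re ≈ 2650, not the paper's actual geometry.

```python
def reynolds_number(velocity, length, kinematic_viscosity):
    """Re = U*L/nu. Around Re = 2650 channel flow sits in the
    transitional regime, which is consistent with the flow instability
    the simulation is examining."""
    return velocity * length / kinematic_viscosity

# Illustrative water flow: U = 0.265 m/s, L = 10 mm, nu = 1e-6 m^2/s
re = reynolds_number(velocity=0.265, length=0.01, kinematic_viscosity=1.0e-6)
```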

Relevance: 30.00%

Publisher:

Abstract:

Condition monitoring of diesel engines can prevent unpredicted engine failures and their associated consequences. This paper presents an experimental study of the signal characteristics of a four-cylinder diesel engine under various loading conditions. Acoustic emission, vibration and in-cylinder pressure signals were employed to study the effectiveness of these techniques for condition monitoring and for identifying symptoms of incipient failures. An event-driven synchronous averaging technique was employed to average the quasi-periodic diesel engine signal in the time domain, to eliminate or minimize the effect of engine speed and amplitude variations on the analysis of the condition monitoring signal. It was shown that acoustic emission (AE) is a better technique than vibration analysis for condition monitoring of diesel engines, owing to its ability to produce high quality signals (i.e., excellent signal-to-noise ratio) in a noisy diesel engine environment. It was found that the peak amplitude of the AE RMS signal corresponding to impact-like, combustion-related events generally decreases as loading increases, reflecting a more stable mechanical process in the engine. A small shift in the exhaust valve closing time was observed as the engine load increased, indicating a prolonged combustion process in the cylinder (to produce more power). In contrast, the peak amplitudes of the AE RMS attributed to fuel injection increase with loading, which can be explained by the increased fuel friction caused by the increased volume flow rate during injection. Multiple AE pulses during the combustion process were identified in the study; these were generated by the piston rocking motion and the interaction between the piston and the cylinder wall. The piston rocking motion is caused by the non-uniform pressure distribution acting on the piston head as a result of the non-linear combustion process of the engine.
The rocking motion ceased when the pressure in the cylinder chamber stabilized.
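Event-driven synchronous averaging, as used above, aligns signal segments on detected cycle-start events and averages them in the time domain so that non-coherent noise cancels; a sketch with a synthetic quasi-periodic signal (all parameters illustrative) follows.

```python
import numpy as np

def synchronous_average(signal, event_indices, cycle_length):
    """Event-driven synchronous average: align fixed-length segments on
    detected cycle-start events (e.g. a crank trigger pulse) and average
    them, suppressing noise that is not phase-locked to the cycle."""
    segments = [signal[i:i + cycle_length] for i in event_indices
                if i + cycle_length <= len(signal)]
    return np.mean(segments, axis=0)

# Synthetic "engine" signal: a repeated combustion-like burst buried in
# heavy broadband noise, with ideal (noise-free) event triggering
rng = np.random.default_rng(2)
cycle = np.zeros(200)
cycle[50:60] = 1.0                       # combustion-like event
events = np.arange(0, 200 * 50, 200)     # 50 cycle-start events
raw = np.tile(cycle, 50) + rng.normal(0, 1.0, 200 * 50)
avg = synchronous_average(raw, events, 200)
```

Averaging 50 cycles reduces the noise floor by roughly a factor of 7 (sqrt(50)), so the burst at samples 50-60 stands out clearly in `avg` even though it is invisible in any single noisy cycle.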