897 results for Load rejection test data


Relevance: 100.00%

Abstract:

This document does NOT address the issue of oxygen data quality control (either real-time or delayed mode). As a preliminary step towards that goal, this document seeks to ensure that all countries deploying floats equipped with oxygen sensors document the data and metadata related to these floats properly. We produced this document in response to action item 14 from the AST-10 meeting in Hangzhou (March 22-23, 2009). Action item 14: Denis Gilbert to work with Taiyo Kobayashi and Virginie Thierry to ensure DACs are processing oxygen data according to recommendations. If the recommendations contained herein are followed, we will end up with a more uniform set of oxygen data within the Argo data system, allowing users to begin analysing not only their own oxygen data, but also those of others, in the true spirit of Argo data sharing. Indications provided in this document are valid as of the date of writing. It is very likely that changes in sensors, calibrations and conversion equations will occur in the future. Please contact V. Thierry (vthierry@ifremer.fr) about any inconsistencies or missing information. A dedicated webpage on the Argo Data Management website (www) contains all information regarding Argo oxygen data management: current and previous versions of this cookbook, oxygen sensor manuals, calibration sheet examples, examples of Matlab code to process oxygen data, test data, etc.

Relevance: 100.00%

Abstract:

Mechanical fatigue is a failure phenomenon that occurs due to repeated application of mechanical loads. Very High Cycle Fatigue (VHCF) is considered the domain of fatigue life greater than 10 million load cycles. Increasing numbers of structural components have service lives in the VHCF regime, for instance in automotive and high-speed train transportation, gas turbine disks, and components of paper production machinery. Safe and reliable operation of these components depends on knowledge of their VHCF properties. In this thesis, both experimental tools and theoretical modelling were utilized to develop a better understanding of VHCF phenomena. In the experimental part, ultrasonic fatigue testing at 20 kHz of cold rolled and hot rolled stainless steel grades was conducted, and fatigue strengths in the VHCF regime were obtained. The mechanisms of fatigue crack initiation and short crack growth were investigated using electron microscopes. For the cold rolled stainless steels, crack initiation and early growth occurred through the formation of the Fine Granular Area (FGA), observed on the fracture surface and in TEM observations of cross-sections. Crack growth in the FGA appears to control more than 90% of the total fatigue life. For the hot rolled duplex stainless steels, fatigue crack initiation occurred due to accumulation of plastic fatigue damage at the external surface, and early crack growth proceeded through a crystallographic growth mechanism. Theoretical modelling of complex cracks involving kinks and branches in an elastic half-plane under static loading was carried out using the Distributed Dislocation Dipole Technique (DDDT). The technique was implemented for 2D crack problems, and both fully open and partially closed crack cases were analyzed. The main aim of the development of the DDDT was to compute stress intensity factors. An accuracy of 2% in the computations was attainable compared to solutions obtained by the Finite Element Method.
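
Numerical SIF solvers such as the DDDT or FEM are typically verified against closed-form reference solutions. As a minimal sketch of that kind of check (not the thesis's DDDT implementation; the solver output below is a hypothetical number), the snippet compares a computed value against the classical mode-I result K_I = σ√(πa) for a center crack in an infinite plate under remote tension:

```python
import math

def mode_i_sif_center_crack(sigma: float, a: float) -> float:
    """K_I = sigma * sqrt(pi * a): mode-I stress intensity factor for a
    through-thickness center crack of half-length a in an infinite plate
    under remote tension sigma."""
    return sigma * math.sqrt(math.pi * a)

# 100 MPa remote tension, 1 mm half-crack length (SI-consistent units).
k_analytical = mode_i_sif_center_crack(100.0, 1e-3)   # ~5.60 MPa*sqrt(m)

# A numerical SIF (from DDDT, FEM, ...) would be checked like this; the
# 2% tolerance mirrors the accuracy figure reported in the abstract.
k_numerical = 5.67  # hypothetical solver output, MPa*sqrt(m)
print(abs(k_numerical - k_analytical) / k_analytical <= 0.02)
```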

Relevance: 100.00%

Abstract:

Use of Unmanned Aerial Vehicles (UAVs) in support of government applications has already seen significant growth, and the use of UAVs in commercial applications is expected to expand rapidly in the near future. However, the question remains how such automated or operator-controlled aircraft can be safely integrated into current airspace. If the goal of integration is to be realized, issues regarding safe separation in densely populated airspace must be investigated. This paper investigates automated separation management concepts in uncontrolled airspace that may help prepare for an expected growth of UAVs in Class G airspace. Not only are such investigations helpful for the UAV integration issue, but the automated separation management concepts investigated by the authors can also be useful for the development of new or improved Air Traffic Control services in remote regions without any existing infrastructure. The paper also provides an overview of the Smart Skies program and discusses the corresponding Smart Skies research and development effort to evaluate aircraft separation management algorithms using simulations involving real-world data communication channels, verified against actual flight trials. This paper presents results from a unique flight test concept that streams real-time flight test data from Australia over existing commercial communication channels to a control center in Seattle for real-time separation management of actual and simulated aircraft. The paper also assesses the performance of an automated aircraft separation manager.
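
At its core, any separation manager must detect pairs of aircraft that violate a separation standard. The sketch below is a hypothetical, minimal conflict check, not the Smart Skies algorithm; the 0.5 NM horizontal minimum is an assumed illustrative value, and positions are compared with the haversine great-circle distance:

```python
import math

MIN_SEPARATION_M = 926.0  # illustrative 0.5 NM horizontal minimum (assumption)

def horizontal_distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon fixes via the haversine
    formula, in metres (mean Earth radius 6371 km)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def conflicts(tracks):
    """Return pairs of aircraft closer than the separation minimum.
    tracks: dict of callsign -> (lat, lon)."""
    ids = sorted(tracks)
    return [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
            if horizontal_distance_m(*tracks[a], *tracks[b]) < MIN_SEPARATION_M]

# Two closely spaced tracks near Brisbane: flagged as a conflict pair.
print(conflicts({"UAV1": (-27.4698, 153.0251), "UAV2": (-27.4700, 153.0260)}))
```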

Relevance: 100.00%

Abstract:

The motivation for secondary school principals in Queensland, Australia, to investigate curriculum change coincided with the commencement in 2005 of the state government’s publication of school exit test results as a measure of accountability. Aligning a school’s curriculum with the requirements of high-stakes testing is considered by many academics and teachers to be a negative outcome of accountability, for reasons such as ‘teaching to the test’ and narrowing the curriculum. However, this article outlines empirical evidence that principals are instigating curriculum change to improve published high-stakes test results. Three principals in this study offered several reasons as to why they wished to implement changes to school curricula. One reason articulated by all three was the pressure of accountability, particularly through the publication of high-stakes test data, which has now become commonplace in the education systems of many Western nations.

Relevance: 100.00%

Abstract:

Current graduates in education are entering a very different profession from the one in which most of their “baby-boomer” colleagues started. It is a profession in which accountability and national high-stakes testing (e.g. NAPLAN) have become catch-cries, and where the interpretation and use of educational data are an additional challenge. This has led to schools focusing on performance, and teachers now have to analyse test data and apply the findings to their teaching.

Relevance: 100.00%

Abstract:

My oldest daughter recently secured a position as a Science/Geography teacher in a P-12 Catholic College in regional Queensland. This paper looks at the teaching world into which she has graduated. Specifically, the paper outlines and discusses findings from a survey of graduating early childhood student teachers in relation to their knowledge and skills regarding the current regime of high-stakes testing in Australia. The paper argues that understanding accountability and possessing the skills to scrutinise test data are essential for new teachers as they enter a profession in which governments worldwide are demanding a return on their investment in education. The paper examines literature on accountability and surveillance in the form of high-stakes testing from global, school and classroom perspectives. It makes the claim that it is imperative for beginning teachers to be able to interpret high-stakes test data and considers the skills required to do this. The paper also draws on local research to comment on the readiness of graduates to meet this comparatively new professional demand.

Relevance: 100.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
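
The flipped-label trick in the last sentence can be made concrete. Below is a minimal illustrative sketch, not the paper's procedure: labels on the second half of the training set are flipped, a classifier is fitted to the modified sample as a stand-in for empirical risk minimization (exact 0-1 ERM is intractable, so logistic regression serves as a convex surrogate), and the difference between the fitted hypothesis's half-sample errors is returned as the penalty:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def maximal_discrepancy(X, y, fit):
    """Approximate the maximal discrepancy penalty: maximizing the
    difference between the error rates on the two halves of the sample
    is equivalent to minimizing the error on a copy of the sample whose
    second-half labels are flipped (binary labels in {0, 1})."""
    n = len(y) // 2 * 2                           # even number of points
    X, y = X[:n], y[:n]
    y_flipped = y.copy()
    y_flipped[n // 2:] = 1 - y_flipped[n // 2:]   # flip second-half labels
    clf = fit(X, y_flipped)                       # surrogate for 0-1 ERM
    pred = clf.predict(X)
    err_first = np.mean(pred[: n // 2] != y[: n // 2])
    err_second = np.mean(pred[n // 2:] != y[n // 2:])
    return err_second - err_first                 # achieved discrepancy

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)
print(maximal_discrepancy(X, y, lambda X, y: LogisticRegression().fit(X, y)))
```

For a low-capacity class on a reasonably large sample, as here, the achieved discrepancy is small, which is exactly why it works as a data-based complexity penalty.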

Relevance: 100.00%

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space, which are classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labeled part of the data, one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
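
As a minimal sketch of the idea (assuming cvxpy as the SDP interface; the toy data, the two base kernels, and the alignment objective are illustrative stand-ins rather than the paper's exact formulation), one can learn a combination of fixed base kernels over training and test points together by maximizing alignment with the labels on the training block, subject to positive semidefiniteness and a trace constraint:

```python
import cvxpy as cp
import numpy as np

# Toy transductive setup: 6 labeled training points and 4 unlabeled test
# points, embedded jointly through one kernel matrix over all 10 points.
rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
y = np.sign(X[:6, 0] + 0.1 * rng.normal(size=6))   # labels, training block

# Two fixed base kernels computed on ALL points (linear and Gaussian).
K1 = X @ X.T
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K2 = np.exp(-sq_dists / 2.0)

# Learn K = mu1*K1 + mu2*K2, constrained PSD with fixed trace, by
# maximizing the alignment <K_train, y y^T> on the labeled block.
mu = cp.Variable(2)
K = cp.Variable((10, 10), PSD=True)
constraints = [K == mu[0] * K1 + mu[1] * K2, cp.trace(K) == 10.0]
objective = cp.Maximize(cp.sum(cp.multiply(K[:6, :6], np.outer(y, y))))
cp.Problem(objective, constraints).solve()
print(mu.value)   # learned weights; K.value embeds all 10 points at once
```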

Relevance: 100.00%

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space, which are classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labelled part of the data, one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.

Relevance: 100.00%

Abstract:

In this paper, three metaheuristics are proposed for solving a class of job shop, open shop, and mixed shop scheduling problems. We evaluate the performance of the proposed algorithms by means of a set of Lawrence’s benchmark instances for the job shop problem, a set of randomly generated instances for the open shop problem, and combined job shop and open shop test data for the mixed shop problem. The computational results show that the proposed algorithms perform extremely well on all three types of shop scheduling problems. The results also reveal that the mixed shop problem is relatively easier to solve than the job shop problem, because the inclusion of more open shop jobs makes the scheduling procedure more flexible.
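
The abstract does not name the three metaheuristics, so the following is a generic illustration only: a minimal simulated-annealing sketch for the job shop variant on a hypothetical toy instance, using the operation-based encoding in which each job ID appears once per operation and any permutation decodes to a feasible schedule:

```python
import math
import random

# Toy job shop instance: job -> ordered list of (machine, duration).
JOBS = {0: [(0, 3), (1, 2), (2, 2)],
        1: [(0, 2), (2, 1), (1, 4)],
        2: [(1, 4), (2, 3), (0, 1)]}

def makespan(seq):
    """Decode an operation-based sequence into a non-delay schedule: each
    occurrence of job j starts j's next operation at the earliest time
    both the job and the required machine are free."""
    next_op = {j: 0 for j in JOBS}
    job_ready = {j: 0 for j in JOBS}
    mach_ready = {}
    for j in seq:
        m, d = JOBS[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(m, 0))
        job_ready[j] = mach_ready[m] = start + d
        next_op[j] += 1
    return max(job_ready.values())

def anneal(steps=20000, temp=5.0, cooling=0.9995, seed=0):
    """Simulated annealing with swap moves over the encoding."""
    rng = random.Random(seed)
    seq = [j for j in JOBS for _ in JOBS[j]]
    rng.shuffle(seq)
    cur = best = makespan(seq)
    best_seq = seq[:]
    for _ in range(steps):
        i, k = rng.randrange(len(seq)), rng.randrange(len(seq))
        seq[i], seq[k] = seq[k], seq[i]
        cand = makespan(seq)
        if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
            cur = cand
            if cur < best:
                best, best_seq = cur, seq[:]
        else:
            seq[i], seq[k] = seq[k], seq[i]  # undo rejected move
        temp *= cooling
    return best, best_seq

print(anneal()[0])  # makespan of the best schedule found
```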

Relevance: 100.00%

Abstract:

Several track-before-detect approaches for image-based aircraft detection have recently been examined in an important automated aircraft collision detection application. A particularly popular approach is a two-stage processing paradigm which involves: a morphological spatial filter stage (which aims to emphasize the visual characteristics of targets) followed by a temporal or track filter stage (which aims to emphasize the temporal characteristics of targets). In this paper, we propose new spot detection techniques for this two-stage processing paradigm that fuse together raw and morphological images, or fuse together various different morphological images (we call these approaches morphological reinforcement). On the basis of flight test data, the proposed morphological reinforcement operations are shown to offer superior signal-to-noise characteristics when compared to standard spatial filter options (such as the close-minus-open and adaptive contour morphological operations). However, system operating characteristic curves, which examine detection versus false alarm characteristics after both processing stages, illustrate that system performance is very data dependent.
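
The close-minus-open baseline mentioned above, and one possible raw-plus-morphological fusion of the kind the authors call morphological reinforcement, can be sketched as follows (a hypothetical illustration assuming scipy; the min-fusion rule is an assumption, not the paper's operator):

```python
import numpy as np
from scipy import ndimage

def close_minus_open(img, size=5):
    """Baseline CMO spatial filter: greyscale closing minus opening with
    a flat structuring element; emphasizes small spots against a slowly
    varying background."""
    closed = ndimage.grey_closing(img, size=(size, size))
    opened = ndimage.grey_opening(img, size=(size, size))
    return closed - opened

def reinforced(img, size=5):
    """A hypothetical 'morphological reinforcement' fusion: combine the
    (offset-normalised) raw image with its CMO response pixel-wise; the
    minimum keeps only spots that both images support."""
    return np.minimum(img - img.min(), close_minus_open(img, size))

# Toy frame: flat background with one 2x2 bright spot (a distant target).
frame = np.zeros((32, 32))
frame[15:17, 15:17] = 1.0
print(reinforced(frame).max())  # the spot survives the fused response
```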

Relevance: 100.00%

Abstract:

Background: Nurse practitioner education and practice have been guided by generic competency standards in Australia since 2006. Development of specialist competencies has been less structured, and there are no formal standards to guide education and continuing professional development for specialty fields. There is limited international research, and no Australian research, into the development of specialist nurse practitioner competencies; in particular, research into specialist emergency nurse practitioner competencies has not been conducted in Australia. This pilot study aimed to test data collection methods, tools and processes in preparation for a larger national study to investigate specialist competency standards for emergency nurse practitioners. Methods: Mixed methods research was conducted with a sample of experienced emergency nurse practitioners. Deductive analysis of data from a focus group workshop informed development of a draft specialty competency framework. The framework was subsequently subjected to systematic scrutiny for consensus validation through a two-round Delphi study. Results: The first round of the Delphi study had a 100% response rate; the second round had a 75% response rate. The scores for all items in both rounds were above the 80% cut-off, with the lowest mean score being 4.1 (82%) in the first round. Conclusion: The authors collaborated with emergency nurse practitioners to produce preliminary data on the formation of specialty competencies as a first step in developing an Australian framework.

Relevance: 100.00%

Abstract:

This thesis studies the water resources of the Laidley Creek catchment within the Lockyer Valley, where groundwater is used for intensive irrigation of crops. A holistic approach was used to consider groundwater within the total water cycle. The project mapped the geology, measured stream flows and groundwater levels, and analysed the chemistry of the waters. These data were integrated within a catchment-wide conceptual model, together with historical rainfall records. From this, a numerical simulation was produced to test data validity and develop predictions of behaviour, which can support management decisions, particularly in times of variable climate.

Relevance: 100.00%

Abstract:

The increasing prevalence of obesity in society has been associated with a number of atherogenic risk factors, such as insulin resistance. Aerobic training is often recommended as a strategy to induce weight loss, with a greater impact of high-intensity levels on cardiovascular function and insulin sensitivity, and a greater impact of moderate-intensity levels on fat oxidation. Anaerobic high-intensity (supramaximal) interval training has been advocated to improve cardiovascular function, insulin sensitivity and fat oxidation. However, obese individuals tend to have a lower tolerance of high-intensity exercise due to discomfort. Furthermore, some obese individuals may compensate for the increased energy expenditure by eating more and/or becoming less active. Recently, both moderate- and high-intensity aerobic interval training have been advocated as alternative approaches. However, it is still uncertain which approach is more effective in terms of increasing fat oxidation, given the issues with levels of fitness and motivation, and compensatory behaviours. Accordingly, the objective of this thesis was to compare the influence of moderate- and high-intensity interval training (MIIT and HIIT) on fat oxidation and eating behaviour in overweight/obese men.

Two exercise interventions were undertaken by 10-12 overweight/obese men to compare their responses to the study variables, including fat oxidation and eating behaviour, during MIIT and HIIT. The acute training intervention was a methodological study designed to examine the validity of using exercise intensity from the graded exercise test (GXT), which measured the intensity that elicits maximal fat oxidation (FATmax), to prescribe interval training during 30-min MIIT. The 30-min MIIT session involved 5-min repetitions of workloads 20% below and 20% above FATmax. The acute intervention was extended to involve HIIT in a cross-over design to compare the influence of MIIT and HIIT on eating behaviour, using subjective appetite sensations and food preference through a liking and wanting test. The HIIT consisted of 15-sec intervals at 85% VO2peak interspersed with 15-sec unloaded recovery, with total mechanical work equal to MIIT. The medium-term training intervention was a cross-over 4-week (12-session) MIIT and HIIT exercise training program with a 6-week detraining washout period. The MIIT sessions consisted of 5-min cycling stages at ±20% of the mechanical work at 45% VO2peak, and the HIIT sessions consisted of repeated 30-sec work bouts at 90% VO2peak with 30-sec interval rests, during identical exercise sessions of between 30 and 45 minutes. Assessments included a constant-load test (45% VO2peak for 45 min) followed by 60-min recovery, at baseline and at the end of the 4-week training, to determine fat oxidation rate. Participants’ responses to exercise were measured using blood lactate (BLa), heart rate (HR) and rating of perceived exertion (RPE), during the constant-load test and in the first intervention training session of every week during training. Eating behaviour responses were assessed by measuring subjective appetite sensations, liking and wanting, and ad libitum energy intake.

Results of the acute intervention showed that FATmax is a valid method to estimate VO2 and BLa, but not to estimate HR and RPE, in the MIIT session. While the average rate of fat oxidation during 30-min MIIT was comparable with the rate of fat oxidation at FATmax (0.16 ±0.09 and 0.14 ±0.08 g/min, respectively), fat oxidation was significantly higher at minute 25 of MIIT (P≤0.01). In addition, there was no significant difference between MIIT and HIIT in appetite sensations after exercise, although there was a tendency towards lower hunger after HIIT. Different intensities of interval exercise also did not affect explicit liking or implicit wanting. Results of the medium-term intervention indicated that the interval training did not affect body composition, fasting insulin or fasting glucose. Maximal aerobic capacity increased significantly during the GXT (by 2.8% and 7.0% after MIIT and HIIT, respectively; P≤0.01), and fat oxidation increased significantly during the acute constant-load exercise test (by 96% and 43% after MIIT and HIIT, respectively; P≤0.01). RPE decreased significantly more after HIIT than after MIIT (P≤0.05), and the decrease in BLa during the constant-load test was greater after HIIT than after MIIT, but this difference did not reach statistical significance (P=0.09). In addition, following constant-load exercise, exercise-induced hunger and desire to eat decreased more after HIIT than after MIIT, but the differences were not significant (the p value for desire to eat was 0.07). Exercise-induced liking of high-fat sweet (HFSW) and high-fat non-sweet (HFNS) foods increased after MIIT and decreased after HIIT (the p value for HFNS was 0.09). The intervention explained 12.4% of the change in fat intake (p = 0.07).

This research is significant in that it confirmed two points in the acute study: while the rate of fat oxidation increased during MIIT, the average rate of fat oxidation during 30-min MIIT was comparable with the rate of fat oxidation at FATmax; and manipulating the intensity of acute interval exercise did not affect appetite sensations or liking and wanting. In the medium-term intervention, constant-load exercise-induced fat oxidation increased significantly after interval training, independent of exercise intensity. In addition, desire to eat, explicit liking for HFNS foods and fat intake collectively indicated that MIIT is accompanied by greater compensation of eating behaviour than HIIT. Findings from this research will assist in developing exercise strategies to provide obese men with various training options. In addition, the finding that overweight/obese men expressed a lower RPE and decreased BLa after HIIT compared with MIIT is contrary to the view that obese individuals may not tolerate high-intensity interval training. Therefore, high-intensity interval training can be advocated for the obese adult male population. Future studies may extend this work by using a longer-term intervention.

Relevance: 100.00%

Abstract:

Background & aim: To understand whether any change in gastric emptying (GE) is physiologically relevant, it is important to identify its variability. Information regarding the variability of GE in overweight and obese individuals is lacking. The aim of this study was to determine the reproducibility of GE in overweight and obese males.

Methods: Fifteen overweight and obese males [body mass index 30.3 (4.9) kg/m2] completed two identical GE tests 7 days apart. GE of a standard pancake breakfast was assessed by the 13C-octanoic acid breath test. Data are presented as mean (±SD).

Results: There were no significant differences in GE between test days (half time (t1/2): 179 (15) and 176 (19) min, p = 0.56; lag time (tlag): 108 (14) and 104 (8) min, p = 0.26). The mean intra-individual coefficient of variation was 7.9% for t1/2 and 7.5% for tlag. Based on these findings, to detect a treatment effect in a paired design with a power of 80% and α = 0.05, minimum mean effect sizes would need to be ≥14.4 min for t1/2 and ≥8.1 min for tlag.

Conclusions: These data show that GE is reproducible in overweight and obese males and provide the minimum mean effect sizes required to detect a hypothetical treatment effect in this population.
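
The minimum detectable effect for a paired design can be sketched as follows (a generic illustration assuming scipy; the difference SD is a hypothetical input derived from the reported ~7.9% intra-individual CV around a ~177-min mean t1/2, and the paper's exact calculation may differ):

```python
from math import sqrt
from scipy import stats

def minimal_detectable_difference(sd_diff, n, alpha=0.05, power=0.80):
    """Smallest mean paired difference detectable by a paired t-test:
    (t_{1-alpha/2, n-1} + t_{power, n-1}) * sd_diff / sqrt(n)."""
    df = n - 1
    t_alpha = stats.t.ppf(1 - alpha / 2, df)
    t_beta = stats.t.ppf(power, df)
    return (t_alpha + t_beta) * sd_diff / sqrt(n)

# Hypothetical input: a ~7.9% CV around a ~177-min mean implies a
# within-subject SD near 14 min, so the SD of paired differences is
# roughly 14 * sqrt(2) min.
print(minimal_detectable_difference(sd_diff=14.0 * sqrt(2), n=15))
# ~15 min: the same order as the >=14.4 min reported for t1/2.
```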