906 results for Experimental performance metrics


Relevance:

40.00%

Publisher:

Abstract:

Aerobic Gymnastics is the ability to perform complex movement patterns derived from traditional aerobic exercise, continuously, at high intensity and in perfect time with the music. The sport is performed under aerobic/anaerobic lactic conditions and requires the execution of complex movements derived from traditional aerobics, integrated with difficulty elements executed at a high technical level. The name "aerobic" is itself something of a misnomer, because Aerobic Gymnastics does not rely solely on aerobic work during competition: routines last between 1′30″ and 1′45″ at a high tempo. Competitive aerobics exploits the basic movements and coordination schemes of amateur aerobics, yet it is so much more intense that it requires a completely different mix of energy systems. Given the complexity and speed with which the technical elements of Aerobic Gymnastics are performed, the introduction of video analysis is essential for a qualitative and quantitative evaluation of athletes' performance during training. Performance analysis allows the accurate description and explanation of the evolution and dynamics of a sporting activity over time. Notational analysis is used by technical staff to obtain an objective assessment of performance: tactics, technique and individual movements can be analysed to help coaches and athletes re-evaluate their performance and gain an advantage in competition. The present experimental work is intended as a starting point for analysing athletes' performance objectively, not only during competition but especially during training. It is therefore advisable to introduce video analysis and notational analysis for a more quantitative and qualitative examination of technical movements.
The goal is to improve both the athlete's technique and the coach's teaching.
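As a hedged illustration of what notational analysis might look like in practice, the sketch below tallies technical elements from a hypothetical annotated video time-code log. The element names, timestamps and execution scores are invented for the example and are not data from this study.

```python
from collections import Counter

# Hypothetical annotation log: (time in seconds, element, execution score 0-10).
# Element names and scores are illustrative, not taken from the study.
annotations = [
    (4.2, "straddle jump", 8),
    (12.8, "push-up", 7),
    (21.5, "straddle jump", 9),
    (33.0, "illusion", 6),
    (47.3, "push-up", 8),
]

# Frequency of each technical element across the routine.
counts = Counter(el for _, el, _ in annotations)

# Mean execution score per element, a simple objective training indicator.
mean_scores = {
    el: sum(s for _, e, s in annotations if e == el) / counts[el]
    for el in counts
}

for el in sorted(counts):
    print(f"{el}: {counts[el]} executions, mean score {mean_scores[el]:.1f}")
```

A coach could compare such tallies across training sessions to track how often each difficulty element is attempted and how its execution quality evolves.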

Relevance:

40.00%

Publisher:

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas: in medicine, for the volumetric reconstruction of tomography data; in robotics, to reconstruct surfaces or scenes from range-sensor information; in industrial systems, for quality control of manufactured objects; and even in biology, to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, must be processed. Many variants have been proposed in the literature aimed at improving performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbour search. Despite lowering the computational cost, some of these variants tend to degrade the final registration precision or shrink the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems among those described above can be addressed. To that end, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with lower computational cost than the Euclidean distance, the de facto standard in implementations of the algorithm.
In this analysis, the behaviour of the algorithm in diverse topological spaces, each characterized by a different metric, has been studied to assess the convergence, efficacy and cost of the method and to determine which metric offers the best results. Given that distance computation accounts for a significant share of the algorithm's total work, any reduction in the cost of that operation can be expected to improve the overall performance of the method significantly. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analysed and experimentally validated as comparable to the Euclidean distance on a heterogeneous set of objects, scenarios and initial configurations.
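A minimal sketch of the idea can be written in Python: a brute-force ICP whose closest-point search can be switched between the Euclidean distance and a cheaper metric such as the Manhattan (L1) distance. This is an illustrative toy implementation, not the optimized variant developed in the work; the point counts, iteration budget and toy registration problem are all assumptions.

```python
import numpy as np

def nearest(P, Q, metric="euclidean"):
    # Brute-force closest-point search under the chosen metric (O(n*m)).
    if metric == "manhattan":
        D = np.abs(P[:, None, :] - Q[None, :, :]).sum(axis=2)
    else:
        D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    return Q[D.argmin(axis=1)]

def best_rigid(P, Q):
    # Kabsch/SVD: least-squares rotation + translation mapping P onto Q.
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=30, metric="euclidean"):
    X = P.copy()
    for _ in range(iters):
        R, t = best_rigid(X, nearest(X, Q, metric))
        X = X @ R.T + t
    return X

# Toy problem: register a slightly rotated, translated copy back onto the original.
rng = np.random.default_rng(0)
Q = rng.uniform(0, 1, (60, 2))
ang = 0.05
R0 = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
P = Q @ R0.T + 0.02
X = icp(P, Q, metric="manhattan")
print(np.abs(X - Q).max())   # small residual if the registration converged
```

Swapping the metric only changes the correspondence search, which is where most of the computation happens; the transform estimation step is unchanged, which is why a cheaper metric can speed up the whole method.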

Relevance:

40.00%

Publisher:

Abstract:

"DOT-VNTSC-FHWA-94-16."

Relevance:

40.00%

Publisher:

Abstract:

Federal Highway Administration, Office of Research, Washington, D.C.

Relevance:

40.00%

Publisher:

Abstract:

The data structure of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. This research develops a methodology for evaluating, ex ante, the relative desirability of alternative data structures for end user queries. It theorizes that the data structure yielding the lowest weighted-average complexity for a representative sample of information requests is the most desirable data structure for end user queries. The theory was tested in an experiment that compared queries against two different relational database schemas. As theorized, end users querying the data structure associated with the less complex queries performed better. Complexity was measured using three different Halstead metrics, each of which provided excellent predictions of end user performance. This research supplies strong evidence that organizations can use complexity metrics to evaluate, ex ante, the desirability of alternative data structures, and can use these evaluations to enhance the efficient and effective retrieval of information by creating data structures that minimize end user query complexity.
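Halstead metrics are computed from operator and operand counts, so they can in principle be applied to SQL query text. The sketch below is a simplified illustration, not the instrument used in the experiment: the tokenization and the operator/operand classification are crude assumptions, and the example queries are invented.

```python
import math, re

# Simplifying assumption: SQL keywords and punctuation count as operators,
# identifiers and literals count as operands.
SQL_OPERATORS = {"select", "from", "where", "join", "on", "and", "or",
                 "group", "by", "order", "=", "<", ">", ",", ".", "(", ")"}

def halstead(query):
    tokens = re.findall(r"[A-Za-z_][A-Za-z_0-9]*|\d+|[=<>,.()]", query.lower())
    ops = [t for t in tokens if t in SQL_OPERATORS]
    opnds = [t for t in tokens if t not in SQL_OPERATORS]
    n1, n2 = len(set(ops)), len(set(opnds))   # distinct operators / operands
    N1, N2 = len(ops), len(opnds)             # total operators / operands
    volume = (N1 + N2) * math.log2(n1 + n2)
    difficulty = (n1 / 2) * (N2 / max(n2, 1))
    return {"vocabulary": n1 + n2, "length": N1 + N2,
            "volume": volume, "difficulty": difficulty,
            "effort": volume * difficulty}

# Hypothetical queries against two schemas: one requires a join, one does not.
q1 = "SELECT name FROM employee WHERE dept = 10"
q2 = ("SELECT e.name FROM employee e JOIN dept d ON e.dept_id = d.id "
      "WHERE d.budget > 100 AND e.salary < 50")
m1, m2 = halstead(q1), halstead(q2)
print(m1["volume"], m2["volume"])  # the join query scores as more complex
```

Averaging such scores over a representative sample of information requests, weighted by request frequency, gives the ex ante comparison the research describes.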

Relevance:

40.00%

Publisher:

Abstract:

The n-tuple recognition method is briefly reviewed and the main theoretical results summarized. Large-scale experiments carried out on StatLog project datasets confirm this method as a viable competitor to more popular methods, thanks to its speed, simplicity and accuracy on the majority of a wide variety of classification problems. A further investigation into the failure of the method on certain datasets traces the problem largely to a mismatch between the scales that describe generalization and data sparseness.
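A minimal WiSARD-style n-tuple classifier can convey the method's simplicity: random n-tuples of input bits address per-class memories that record the addresses seen in training, and classification counts matching addresses per class. The parameters and toy patterns below are illustrative assumptions, not the experimental setup of the paper.

```python
import random

class NTupleClassifier:
    # Minimal n-tuple (WiSARD-style) classifier over binary feature vectors.
    # The tuple size n and tuple count are illustrative choices.
    def __init__(self, input_bits, n=3, n_tuples=20, seed=0):
        rng = random.Random(seed)
        self.tuples = [tuple(rng.sample(range(input_bits), n))
                       for _ in range(n_tuples)]
        self.memory = {}   # (class label, tuple index) -> set of seen addresses

    def _address(self, x, idx):
        # The address is just the bit values at this tuple's sampled positions.
        return tuple(x[i] for i in self.tuples[idx])

    def train(self, x, label):
        for k in range(len(self.tuples)):
            self.memory.setdefault((label, k), set()).add(self._address(x, k))

    def classify(self, x):
        labels = {lab for lab, _ in self.memory}
        # Score = number of tuples whose address was seen for that class.
        return max(labels, key=lambda lab: sum(
            self._address(x, k) in self.memory.get((lab, k), set())
            for k in range(len(self.tuples))))

# Toy patterns: class 0 = mostly zeros, class 1 = mostly ones.
clf = NTupleClassifier(input_bits=16)
clf.train([0] * 16, 0)
clf.train([1] * 16, 1)
print(clf.classify([0] * 14 + [1, 1]))  # expected to score closer to class 0
```

Training is a single pass of set insertions and classification is a handful of lookups, which is the source of the speed the abstract highlights.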

Relevance:

40.00%

Publisher:

Abstract:

Background: As light-emitting diodes become more common as the light source in low vision aids, the effect of illumination colour temperature on magnifier reading performance was investigated. Methods: Reading ability (maximum reading speed, critical print size, threshold near visual acuity) using Radner charts, together with subjective preference, was assessed for 107 participants with visual impairment using three stand magnifiers with light-emitting diode illumination colour temperatures of 2,700 K, 4,500 K and 6,000 K. The results were compared with distance visual acuity, prescribed magnification, age and the primary cause of visual impairment. Results: Reading speed, critical print size and near visual acuity were unaffected by illumination colour temperature (p > 0.05). Reading metrics decreased with worsening acuity and higher levels of prescribed magnification, but acuity was unaffected by age. Each colour temperature was preferred and disliked by a similar number of patients, and preference was unrelated to distance visual acuity, prescribed magnification and age (p > 0.05). Patients had better near acuity (p = 0.002), critical print size (p = 0.034) and maximum reading speed (p < 0.001), and the improvement from distance to near acuity was greater (p = 0.004), with their preferred rather than least-liked colour temperature illumination. Conclusion: A range of colour temperature illuminations should be offered to all visually impaired individuals prescribed an optical magnifier for near tasks, to optimise subjective and objective benefits.

Relevance:

40.00%

Publisher:

Abstract:

De-inking sludge can be converted into useful forms of energy to provide economic and environmental benefits. In this study, pyrolysis oil produced from de-inking sludge through an intermediate pyrolysis technique was blended with biodiesel derived from waste cooking oil and tested in a multi-cylinder indirect-injection CI engine. The physical and chemical properties of the pyrolysis oil and its blends (20 and 30 vol.%) were measured and compared with those of fossil diesel and pure biodiesel (B100). Full engine power was achieved with both blends, and very little difference in engine performance and emission results was observed between the 20% and 30% blends. At full engine load, the brake specific fuel consumption on a volume basis was around 6% higher for the blends than for fossil diesel. The brake thermal efficiencies were about 3-6% lower than for biodiesel and similar to fossil diesel. Exhaust gas emissions of the blends contained 4% more CO2 and 6-12% less NOx than fossil diesel. At full load, CO emissions of the blends were 5-10 times lower. The cylinder gas pressure diagram showed stable engine operation with the 20% blend but indicated minor knocking with the 30% blend. Peak cylinder pressure of the 30% blend was about 5-6% higher than fossil diesel. At full load, the peak burn rate of combustion of the 30% blend was about 26% and 12% higher than fossil diesel and biodiesel respectively. Compared to fossil diesel, the combustion duration was decreased for both blends; for the 30% blend at full load, the duration was almost 12% lower. The study concludes that up to a 20% blend of de-inking sludge pyrolysis oil with biodiesel can be used in an indirect-injection CI engine without adding any ignition additives or surfactants.

Relevance:

40.00%

Publisher:

Abstract:

External metrology systems are increasingly being integrated with traditional industrial articulated robots, especially in the aerospace industry, to improve their absolute accuracy for precision operations such as drilling, machining and jigless assembly. Most current metrology-assisted robot control systems are limited in their position update rate, such that the robot has to be stopped in order to receive a metrology coordinate update; some recent efforts, however, are directed toward controlling robots using real-time metrology data. The indoor GPS (iGPS) is one metrology system that may be used to provide real-time 6DOF data to a robot controller. Although there is noteworthy literature evaluating iGPS performance, there is a lack of literature on how well the iGPS performs under dynamic conditions. This paper presents an experimental evaluation of the dynamic measurement performance of the iGPS, tracking the trajectories of an industrial robot. The same experiment is also repeated using a laser tracker. In addition to the experimental results, this paper proposes a novel method for dynamic repeatability comparisons of tracking instruments. © 2011 Springer-Verlag London Limited.

Relevance:

40.00%

Publisher:

Abstract:

Communication through relay channels in wireless sensor networks can create diversity and consequently improve the robustness of data transmission for ubiquitous computing and networking applications. In this paper, we investigate the performance of relay channels in terms of diversity gain and throughput via both experimental research and theoretical analysis. Two relaying algorithms, dynamic relaying and fixed relaying, are implemented and tested to determine what relay channels can contribute to system performance. The tests are based on a wireless relay sensor network comprising a source node, a destination node and a couple of relay nodes, and are carried out in an indoor environment with little movement of nearby objects. The tests confirm, in line with the analytical results, that more relay nodes lead to higher diversity gain in the network. The test results also show that the data throughput between the source node and the destination node is enhanced by the presence of the relay nodes. Energy consumption associated with the relaying strategy is also analysed. Copyright © 2009 John Wiley & Sons, Ltd.
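The diversity effect of adding relay nodes can be sketched with a small Monte Carlo simulation of selection relaying over i.i.d. Rayleigh fading: the destination uses the best of the available relayed links, and outage occurs only when even the best link falls below a threshold SNR. This is a textbook-style model assumed for illustration, not the indoor sensor-network testbed of the study.

```python
import numpy as np

def outage_prob(n_relays, snr_db, threshold_db=3.0, trials=200_000, seed=1):
    # Selection relaying over i.i.d. Rayleigh fading: channel power gains
    # |h|^2 are exponentially distributed; outage when the best relayed
    # link's SNR is below the threshold. All parameter values are assumed.
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    thr = 10 ** (threshold_db / 10)
    gains = rng.exponential(1.0, size=(trials, n_relays))
    best = (snr * gains).max(axis=1)
    return (best < thr).mean()

for n in (1, 2, 3):
    print(n, outage_prob(n, snr_db=10))  # outage falls steeply as relays are added
```

With n relays the outage probability behaves like the single-link outage raised to the n-th power, which is the diversity gain the experiments confirm.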

Relevance:

40.00%

Publisher:

Abstract:

Back-pressure on a diesel engine equipped with an aftertreatment system is a function of the pressure drop across the individual components of the aftertreatment system: typically a diesel oxidation catalyst (DOC), catalyzed particulate filter (CPF) and selective catalytic reduction (SCR) catalyst. Pressure drop across the CPF is a function of the mass flow rate and temperature of the exhaust flowing through it, as well as of the mass of particulate matter (PM) retained in the substrate wall and in the cake layer that forms on the substrate wall. Therefore, in order to keep the back-pressure on the engine low and to minimize fuel consumption, it is important to control the PM mass retained in the CPF. Chemical reactions involving the oxidation of PM under passive oxidation and active regeneration conditions can be utilized, together with numerical models in the engine control unit (ECU), to control the pressure drop across the CPF. Hence, understanding and predicting the filtration and oxidation of PM in the CPF, and the effect of these processes on the pressure drop across it, are necessary for developing aftertreatment control strategies that reduce engine back-pressure and, in turn, fuel consumption, particularly from active regeneration. Numerical modeling of CPFs has been proven to reduce the development time and cost of production aftertreatment systems, as well as to facilitate understanding of the internal processes occurring under the different operating conditions to which the particulate filter is subjected. In this research work, a numerical model of the CPF was developed and calibrated to data from passive oxidation and active regeneration experiments in order to determine the kinetic parameters for the oxidation of PM and nitrogen oxides, along with the model's filtration parameters.
The results include comparisons between the model and the experimental data for pressure drop, PM mass retained, filtration efficiency, CPF outlet gas temperature and species (NO2) concentrations out of the CPF. Comparisons of the PM oxidation reaction rates obtained from the model calibration with the experimental data for ULSD and for 10% and 20% biodiesel-blended fuels are presented.
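The dependence of CPF pressure drop on exhaust flow, temperature and retained PM described above can be sketched with a simple Darcy-law estimate of the wall and cake contributions. All geometry, permeability and density values below are assumed placeholder numbers, not the calibrated parameters of this model.

```python
def cpf_pressure_drop(q_exh, t_exh, m_pm,
                      area=2.0,        # total filtration area, m^2 (assumed)
                      w_wall=4e-4,     # substrate wall thickness, m (assumed)
                      k_wall=5e-13,    # wall permeability, m^2 (assumed)
                      k_cake=1e-14,    # cake permeability, m^2 (assumed)
                      rho_cake=100.0): # cake packing density, kg/m^3 (assumed)
    """Illustrative Darcy-law estimate of wall + cake pressure drop, in Pa."""
    # Sutherland's law for (air-like) exhaust-gas dynamic viscosity, Pa*s.
    mu = 1.716e-5 * (t_exh / 273.15) ** 1.5 * (273.15 + 110.4) / (t_exh + 110.4)
    u_wall = q_exh / area                 # superficial wall velocity, m/s
    w_cake = m_pm / (rho_cake * area)     # cake layer thickness, m
    return mu * u_wall * (w_wall / k_wall + w_cake / k_cake)

# Higher retained PM mass -> thicker cake -> higher back-pressure, which is
# why the ECU needs a model of the retained PM mass.
print(cpf_pressure_drop(q_exh=0.1, t_exh=600.0, m_pm=0.0))
print(cpf_pressure_drop(q_exh=0.1, t_exh=600.0, m_pm=0.01))
```

The sketch also shows why temperature matters even at fixed flow: gas viscosity rises with temperature, raising the drop across both the wall and the cake.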

Relevance:

40.00%

Publisher:

Abstract:

Fibre reinforced concretes are innovative composite materials whose applications are growing considerably nowadays. Being composite materials, their performance depends on the mechanical properties of both components, fibre and matrix, and, above all, on the interface. The variables to account for in the mechanical characterization of the material may be intrinsic to the material itself, i.e. fibre and concrete type, or external, i.e. environmental conditions. The first part of the research presented is focused on the experimental and numerical characterization of the interface properties and short-term response of fibre reinforced concretes with macro-synthetic fibres. The experimental database produced is the starting point for the calibration and validation of numerical models with two principal purposes: calibrating a local constitutive law, and calibrating and validating a model that predicts the whole material response. With a view to designing sustainable admixtures, the matrix of the cement-based fibre reinforced composites is optimized by partially substituting the cement content. In the second part of the research, the effect of time-dependent phenomena on the response of MSFRCs is studied. An extensive experimental campaign of creep tests is performed, analysing the effect of time and temperature variations under different loading conditions. On the basis of these results, a numerical model able to account for the viscoelastic nature of both concrete and reinforcement, together with the environmental conditions, is calibrated within the LDPM theory. Different types of regression model are also developed, correlating the mechanical properties investigated (bond strength and residual flexural behaviour for the short-term analysis, and the creep coefficient over time for the time-dependent behaviour) with the variables investigated.
The experimental studies carried out emphasize the several aspects influencing the material's mechanical performance, and also identify the properties that a numerical approach should consider in order to be reliable.
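As a sketch of the kind of regression mentioned, the snippet below fits a power-law creep-coefficient curve phi(t) = a * t^b by log-log least squares. The data points are invented for illustration and are not results from this experimental campaign.

```python
import numpy as np

# Hypothetical creep-coefficient measurements: (age under load in days, phi).
# Values are illustrative, not data from the study.
t = np.array([1.0, 7.0, 28.0, 90.0, 180.0])
phi = np.array([0.21, 0.48, 0.80, 1.25, 1.55])

# Power-law regression phi(t) = a * t**b, linearized as a log-log
# least-squares fit (np.polyfit returns [slope, intercept] for degree 1).
b, log_a = np.polyfit(np.log(t), np.log(phi), 1)
a = np.exp(log_a)
pred = a * t ** b

print(f"phi(t) ~ {a:.3f} * t^{b:.3f}")
```

A fit of this form gives a compact creep-coefficient-versus-time relation that can be compared across temperature and loading conditions, in the spirit of the regression models described.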

Relevance:

40.00%

Publisher:

Abstract:

Numerous types of acute respiratory failure are routinely treated using non-invasive ventilatory support (NIV). Its efficacy is well documented: NIV lowers intubation and death rates in various respiratory disorders. It can be delivered by means of face masks or head helmets. Currently the scientific community's interest in NIV helmets is mostly focused on optimising the mixing of CO2 and clean air and on improving patient comfort. To this end, fluid dynamic analysis plays a particularly important role, and a two-pronged approach is frequently employed. While numerical simulations provide information about the entire flow field and can explore different geometries, they require huge temporal and computational resources. Experiments, on the other hand, help to validate simulations and provide results with a much smaller time investment, and thus remain at the core of research in fluid dynamics. The aim of this thesis work was to develop a flow bench and to utilise it for the analysis of NIV helmets. A flow test bench and an instrumented mannequin were successfully designed, produced and put into use. Experiments were performed to characterise the helmet interface in terms of pressure drop and flow rate drop over different inlet flow rates and outlet pressure set points. Velocity measurements by means of Particle Image Velocimetry (PIV) were performed. The pressure drop and flow rate characteristics from the experiments were compared with CFD data, and sufficient agreement was observed between the numerical and experimental results. The PIV studies permitted qualitative and quantitative comparisons with numerical simulation data and offered a clear picture of the internal flow behaviour, aiding the identification of coherent flow features.

Relevance:

30.00%

Publisher:

Abstract:

The pathological mechanisms underlying cognitive dysfunction in multiple sclerosis (MS) are not yet fully understood and, in addition to demyelinating lesions and gray-matter atrophy, subclinical disease activity may play a role. The aim was to evaluate the contribution of asymptomatic gadolinium-enhancing lesions to cognitive dysfunction, along with gray-matter damage and callosal atrophy, in relapsing-remitting MS (RRMS) patients. Forty-two treated RRMS patients and 30 controls were evaluated. MRI (3T) variables of interest were brain white-matter and cortical lesion load, cortical and deep gray-matter volumes, corpus callosum volume and the presence of gadolinium-enhancing lesions. Outcome variables included the EDSS, MS Functional Composite (MSFC) subtests and the Brief Repeatable Battery of Neuropsychological Tests. Cognitive dysfunction was defined as deficits in two or more cognitive subtests. Multivariate regression analyses assessed the contribution of MRI metrics to outcomes. Patients with cognitive impairment (45.2%) had more cortical lesions and lower gray-matter and callosal volumes. Patients with subclinical MRI activity (15%) had worse cognitive performance. Clinical disability on the MSFC was mainly associated with putaminal atrophy. The main independent predictors of cognitive deficits were a high burden of cortical lesions and the number of gadolinium-enhancing lesions. Cognitive dysfunction was especially related to a high burden of cortical lesions and to subclinical disease activity. Cognitive studies in MS should consider subclinical disease activity as a potential contributor to cognitive impairment.