975 results for fixed-time AI
Abstract:
INTRODUCTION: In alpine skiing, chronometry is currently the most common tool for assessing performance. It is widely used to rank competitors during races, as well as to manage athletes' training and to evaluate equipment. Usually, this measurement is performed accurately using timing cells. Nevertheless, these devices are too complex and expensive to allow timing of every gate crossing. Alternatively, differential GPS can be used to measure gate crossing times (Waegli et al.). However, this approach is complex (e.g., it requires recording gate positions with GPS) and is mainly used in research applications. The aim of this study was to propose a wearable system for timing gate crossings during alpine skiing slalom (SL) that is suitable for routine use. METHODS: The proposed system was composed of a 3D accelerometer (ADXL320®, Analog Devices, USA) placed at the sacrum of the athlete, a matrix of force sensors (Flexiforce®, Tekscan, USA) fixed on the right shin guard, and a data logger (Physilog®, BioAGM, Switzerland). The sensors were sampled at 500 Hz. The crossing times were calculated in two phases. First, the accelerometer was used to detect the turns by considering the maximum of the mediolateral peak acceleration. Then, the force sensors were used to detect the impacts with the gates by considering the maximum force variation. When no impact occurred, detection was based on the acceleration and on features measured at the other gates. To assess the efficiency of the system, two different SL courses were each monitored twice for two World Cup level skiers, a male SL expert and a female downhill expert. RESULTS AND DISCUSSION: The combination of the accelerometer and force sensors allowed the gate crossing times to be clearly identified. When comparing the runs of the SL expert and the downhill expert, we noticed that the SL expert was faster. For example, for the first SL course, the overall difference between the best run of each athlete was 5.47 s. The time gap grew more slowly at the beginning of the course (0.27 s/gate) than at the end (0.34 s/gate). Furthermore, when comparing the runs of the SL expert, a maximum time difference of 20 ms per gate was noticed, showing the high repeatability of the SL expert. In contrast, the downhill expert, with a maximum time difference of 1 s per gate, was clearly less repeatable. Neither skier was disturbed by the system. CONCLUSION: This study proposed a new wearable system combining force and accelerometer sensors to automatically time gate crossings during alpine skiing slalom. The system was evaluated with two professional World Cup skiers and showed high potential. It could be extended to measure other parameters. REFERENCES: Waegli A, Skaloud J (2007). Inside GNSS, Spring, 24-34.
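The two-phase detection lends itself to a short illustration. The Python sketch below mimics the pipeline under stated assumptions: only the 500 Hz sampling rate and the acceleration-then-force logic come from the abstract, while the function names, thresholds, and window sizes are hypothetical.

```python
# Minimal sketch of the two-phase gate-crossing detection described above.
# Thresholds and window sizes are illustrative assumptions, not values
# from the paper; only the 500 Hz rate and the two-phase logic are stated.
import numpy as np
from scipy.signal import find_peaks

FS = 500  # sampling rate in Hz, as stated in METHODS

def detect_gate_crossings(acc_ml, force, min_turn_interval=0.4):
    """Return estimated gate-crossing sample indices.

    acc_ml : 1D array of mediolateral acceleration (turn detection).
    force  : 1D array of summed shin-guard force (impact detection).
    """
    # Phase 1: candidate turns = peaks of mediolateral acceleration.
    turn_idx, _ = find_peaks(np.abs(acc_ml),
                             distance=int(min_turn_interval * FS))

    # Phase 2: refine each candidate with the largest force variation
    # (gate impact) in a window around the turn peak.
    dforce = np.abs(np.diff(force, prepend=force[0]))
    half_win = int(0.2 * FS)       # assumed +/-200 ms search window
    crossings = []
    for i in turn_idx:
        lo, hi = max(0, i - half_win), min(len(dforce), i + half_win)
        window = dforce[lo:hi]
        if window.max() > 5.0:     # assumed impact threshold
            crossings.append(lo + int(window.argmax()))
        else:
            crossings.append(i)    # no impact: fall back to acceleration
    return np.array(crossings)
```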
Abstract:
This study evaluated the start-up procedure of an anaerobic treatment system with three horizontal anaerobic reactors (R1, R2 and R3) installed in series, each with a volume of 1.2 L. R1 contained a sludge blanket, while R2 and R3 contained support media of bamboo and coconut fiber, respectively. The influent was synthetic wastewater simulating the effluent from mechanical pulping of coffee fruit by the wet method, with a mean total chemical oxygen demand (CODtotal) of 16,003 mg L-1. The hydraulic retention time (HRT) in each reactor was 30 h. The volumetric organic load (VOL) applied to R1 varied from 8.9 to 25.0 g CODtotal (L d)-1. The mean removal efficiency of CODtotal varied from 43 to 97% in the treatment system (R1+R2+R3), stabilizing above 80% after 30 days of operation. The mean methane content of the biogas was 70 to 76%, the mean volumetric production of the system was 1.7 L CH4 (L reactor d)-1, and the highest conversions were around 0.20 L CH4 (g CODremoved)-1 in R1 and R2. The mean pH of the effluents ranged from 6.8 to 8.3, and the mean concentration of total volatile acids remained below 200 mg L-1 in the effluent of R3. The total phenol concentration of the influent ranged from 45 to 278 mg L-1, with a mean removal efficiency of 52%. The start-up of the anaerobic treatment system was achieved after 30 days of operation as a result of inoculation with anaerobic sludge containing an active microbiota.
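As a consistency check on the reported operating range, the volumetric organic load follows directly from the influent strength and the HRT (a standard relation, not spelled out in the abstract):

```latex
\[
\mathrm{VOL} \;=\; \frac{S_0}{\mathrm{HRT}}
\;=\; \frac{16.0\ \mathrm{g\,COD\,L^{-1}}}{30/24\ \mathrm{d}}
\;\approx\; 12.8\ \mathrm{g\,COD\,(L\,d)^{-1}},
\]
```

which lies within the reported 8.9 to 25.0 g CODtotal (L d)-1 range; the variation in applied VOL then reflects variation in the influent strength around its mean.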
Abstract:
Seeking to couple waste treatment with the production of clean, renewable energy, this research evaluated the biological production of hydrogen from wastewater of the cassava starch industry, generated during the extraction and purification of starch. The experiment was carried out in a continuous anaerobic reactor with a working volume of 3 L, with bamboo stems as the support medium. The system was operated at a temperature of 36°C and an initial pH of 6.0, under varying organic loads. The highest hydrogen production rate, 1.1 L d-1 L-1, was obtained with an organic loading rate of 35 g L-1 d-1 in terms of total sugar content and a hydraulic retention time of 3 h, with butyric and acetic acids prevailing as the final products of the fermentation process. Low C/N ratios contributed to excessive growth of the biomass, causing a reduction of up to 35% in hydrogen production, low percentages of H2, and high concentrations of CO2 in the biogas.
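The stated operating point pins down the feed regime (a routine calculation, not given explicitly in the abstract):

```latex
\[
Q = \frac{V}{\mathrm{HRT}} = \frac{3\ \mathrm{L}}{0.125\ \mathrm{d}} = 24\ \mathrm{L\,d^{-1}},
\qquad
S_0 = \mathrm{OLR}\times\mathrm{HRT} = 35\ \mathrm{g\,L^{-1}\,d^{-1}} \times 0.125\ \mathrm{d} \approx 4.4\ \mathrm{g\,L^{-1}},
\]
```

i.e., at the best operating point the 3 L reactor was fed about 24 L of wastewater per day at roughly 4.4 g L-1 of total sugars.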
Abstract:
Two ultrasound-based fertility prediction methods were tested prior to embryo transfer (ET) and artificial insemination (AI) in cattle. Female bovines were submitted to estrous synchronization prior to ET and AI. Animals were scanned immediately before the ET and AI procedures to assess follicle and corpus luteum (CL) size and vascularity. In addition, inseminated animals were scanned again eleven days after insemination to assess CL size and vascularity. All data were compared with fertility using gestational diagnosis 35 days after ovulation. Prior to ET, CL vascularity showed a positive correlation with fertility, and no pregnancy occurred in animals with less than 40% CL vascularity. Prior to AI, and also eleven days after AI, none of the parameters analyzed (follicle and CL size and vascularity) showed a relationship with fertility; on the contrary, cows with CL vascularity greater than 70% exhibited lower fertility. In inseminated animals, follicle size and vascularity were positively related to CL size and vascularity: larger and more vascularized follicles gave rise to larger and more vascularized CLs. This is the first time that ultrasound-based fertility prediction methods have been tested prior to both ET and AI; they showed an application in ET, but not in AI, programs. Further studies, including hormone profile evaluation, are needed to strengthen these conclusions.
Abstract:
Over time, the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008, the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset and fund management companies, pension funds, and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but they also need sophisticated portfolio management models to outperform their benchmarks. Recent developments in portfolio management suggest that new innovations are slowly gaining ground but still need to be studied carefully. This thesis aims to provide a practical tactical asset allocation (TAA) application of the Black–Litterman (B–L) approach and an unbiased evaluation of the B–L model's qualities. The mean-variance framework, issues related to asset allocation decisions, and return forecasting are examined carefully to uncover issues affecting active portfolio management. European fixed income data is employed in an empirical study that investigates whether a B–L model based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation uses a vector autoregressive (VAR) model to create return forecasts from lagged values of asset classes as well as economic variables. The sample data (31.12.1999–31.12.2012) is divided in two: the in-sample data is used for calibrating a strategic portfolio, and the out-of-sample period is used for testing the tactical portfolio against the strategic benchmark. Results show that the B–L model based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return. The VAR model is able to pick up changes in investor sentiment, and the B–L model adjusts portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting allocation towards riskier assets while the market is turning bullish, but without overweighting high-beta investments. Based on the findings of this thesis, the Black–Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B–L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation is still highly dependent on the quality of the input estimates.
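The core of such a pipeline is compact. Below is a minimal numpy sketch of the standard Black–Litterman posterior, where a view (e.g., produced by a VAR forecast) is blended with equilibrium returns; the covariances, view, and risk-aversion value are hypothetical, not the thesis's calibration.

```python
# Minimal Black-Litterman sketch with hypothetical inputs.
import numpy as np

Sigma = np.array([[0.0040, 0.0012],      # assumed asset covariance
                  [0.0012, 0.0025]])
pi = np.array([0.020, 0.015])            # equilibrium (prior) returns
tau = 0.05                               # prior uncertainty scaling

# One view, e.g. from a VAR forecast: asset 1 outperforms asset 2 by 1%.
P = np.array([[1.0, -1.0]])
q = np.array([0.01])
Omega = np.array([[0.0005]])             # assumed view uncertainty

# B-L posterior mean: precision-weighted blend of prior and view.
A = np.linalg.inv(tau * Sigma)
B = P.T @ np.linalg.inv(Omega) @ P
mu_bl = np.linalg.solve(A + B, A @ pi + P.T @ np.linalg.inv(Omega) @ q)

# Unconstrained mean-variance tilt implied by the posterior.
risk_aversion = 2.5                      # assumed
w = np.linalg.solve(risk_aversion * Sigma, mu_bl)
print(mu_bl, w)
```

The design point is that Omega controls how far the posterior tilts from the benchmark, which is what keeps the weight adjustments "controlled" in the sense described above.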
Abstract:
An article written by Dorothy Rungeling about her experience flying a helicopter for the first time. She is instructed by Bert Ratliff of the Bell Helicopter Corp. in a Bell G2 Trooper.
Abstract:
As exploration of our solar system and outer space moves into the future, spacecraft are being developed to venture on increasingly challenging missions with bold objectives. The spacecraft tasked with completing these missions are becoming progressively more complex, which increases the potential for mission failure due to hardware malfunctions and unexpected spacecraft behavior. A solution to this problem lies in the development of an advanced fault management system. Fault management enables a spacecraft to respond to failures and take repair actions so that it may continue its mission. The two main approaches developed for spacecraft fault management have been rule-based and model-based systems. Rules map sensor information to system behaviors, thus achieving fast response times and making the actions of the fault management system explicit. These rules are developed by having a human reason through the interactions between spacecraft components, a process limited by the number of interactions a human can reason about correctly. In the model-based approach, the human provides component models, and the fault management system reasons automatically about system-wide interactions and complex fault combinations. This approach improves correctness and makes the underlying system models explicit, whereas they are implicit in the rule-based approach. We propose a fault detection engine, Compiled Mode Estimation (CME), that unifies the strengths of the rule-based and model-based approaches. CME uses a compiled model to determine spacecraft behavior more accurately. Reasoning related to fault detection is compiled off-line into a set of concurrent, localized diagnostic rules, which are then combined on-line with sensor information to reconstruct the diagnosis of the system. These rules enable a human to inspect the diagnostic consequences of CME. Additionally, CME is capable of reasoning through component interactions automatically while still providing fast and correct responses. The implementation of this engine has been tested against the NEAR spacecraft's advanced rule-based system, resulting in the detection of failures beyond those caught by the rules. This evolution in fault detection will enable future missions to explore the furthest reaches of the solar system without the burden of human intervention to repair failed components.
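To make the flavor of "concurrent, localized diagnostic rules" concrete, here is a toy sketch; the component names, rules, and sensor values are invented for illustration, and CME's actual compiled rules are far richer.

```python
# Toy illustration of localized diagnostic rules combined on-line.
# Components, rules, and readings are hypothetical, not from CME itself.

# Each rule looks only at its own component's sensors and proposes a mode.
def thruster_rule(sensors):
    if sensors["thrust_cmd"] and not sensors["accel_observed"]:
        return {"thruster": "failed-off"}
    return {"thruster": "nominal"}

def valve_rule(sensors):
    if sensors["valve_cmd_open"] and sensors["line_pressure"] < 10.0:
        return {"valve": "stuck-closed"}
    return {"valve": "nominal"}

LOCAL_RULES = [thruster_rule, valve_rule]

def diagnose(sensors):
    """Combine the local verdicts into one system-wide mode estimate."""
    diagnosis = {}
    for rule in LOCAL_RULES:
        diagnosis.update(rule(sensors))
    return diagnosis

# Example: commanded thrust plus low line pressure implicates both parts.
print(diagnose({"thrust_cmd": True, "accel_observed": False,
                "valve_cmd_open": True, "line_pressure": 2.0}))
# -> {'thruster': 'failed-off', 'valve': 'stuck-closed'}
```

The human-written rule-based approach stops at hand-authored functions like these; CME's contribution is generating such localized rules automatically from component models.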
Abstract:
We consider an online learning scenario in which the learner can make predictions on the basis of a fixed set of experts. The performance of each expert may change over time in a manner unknown to the learner. We formulate a class of universal learning algorithms for this problem by expressing them as simple Bayesian algorithms operating on models analogous to Hidden Markov Models (HMMs). We derive a new performance bound for such algorithms which is considerably simpler than existing bounds. The bound provides the basis for learning the rate at which the identity of the optimal expert switches over time. We find an analytic expression for the a priori resolution at which we need to learn the rate parameter. We extend our scalar switching-rate result to models of the switching-rate that are governed by a matrix of parameters, i.e. arbitrary homogeneous HMMs. We apply and examine our algorithm in the context of the problem of energy management in wireless networks. We analyze the new results in the framework of Information Theory.
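The Bayesian view of switching experts has a compact implementation. Below is a minimal sketch of the fixed-share style update that such HMM-based algorithms reduce to in the scalar switching-rate case; the squared-loss likelihood, learning rate eta, and switching rate alpha are illustrative assumptions.

```python
# Minimal Bayesian switching-experts sketch (fixed-share style update).
# The switching rate alpha and squared-loss likelihood are assumptions.
import numpy as np

def switching_experts(expert_preds, outcomes, alpha=0.05, eta=2.0):
    """expert_preds: (T, n) matrix of per-round expert predictions.
    outcomes: length-T array of observed values.
    Returns the learner's predictions (weighted expert averages)."""
    T, n = expert_preds.shape
    w = np.full(n, 1.0 / n)                        # uniform prior over experts
    learner = np.empty(T)
    for t in range(T):
        learner[t] = w @ expert_preds[t]           # predict
        loss = (expert_preds[t] - outcomes[t])**2  # observe per-expert losses
        w = w * np.exp(-eta * loss)                # Bayes / exp-weights update
        w = w / w.sum()
        # HMM transition: identity of the best expert switches w.p. alpha,
        # landing uniformly; this is the scalar switching-rate model.
        w = (1 - alpha) * w + alpha / n
    return learner
```

Learning alpha itself, rather than fixing it, is exactly what the a priori resolution result above addresses; the matrix-parameter extension replaces the last line with a full HMM transition matrix.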
Abstract:
In this report, a face recognition system capable of detecting and recognizing frontal and rotated faces was developed. Two face recognition methods focusing on the aspect of pose invariance are presented and evaluated: the whole-face approach and the component-based approach. The main challenge of this project is to develop a system that is able to identify faces under different viewing angles in real time. The development of such a system will enhance the capability and robustness of current face recognition technology. The whole-face approach recognizes faces by classifying a single feature vector consisting of the gray values of the whole face image. The component-based approach first locates the facial components and extracts them; these components are normalized and combined into a single feature vector for classification. The Support Vector Machine (SVM) is used as the classifier for both approaches. Extensive tests of the robustness against pose changes are performed on a database that includes faces rotated up to about 40 degrees in depth. The component-based approach clearly outperforms the whole-face approach on all tests. Although this approach is proven to be more reliable, it is still too slow for real-time applications, which is why a real-time face recognition system using the whole-face approach was implemented to recognize people in color video sequences.
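A minimal sketch of the component-based classification step, assuming the components have already been located and cropped upstream; scikit-learn's SVC stands in for whatever SVM implementation the report used, and the preprocessing details are assumptions.

```python
# Component-based face classification sketch; detection and extraction
# of the facial components is assumed to have happened upstream.
import numpy as np
from sklearn.svm import SVC

def build_feature_vector(component_crops):
    """Normalize each fixed-size gray-value crop and concatenate."""
    parts = []
    for crop in component_crops:          # e.g. eye, nose, mouth crops
        crop = crop.astype(np.float64)
        crop = (crop - crop.mean()) / (crop.std() + 1e-8)  # lighting norm
        parts.append(crop.ravel())
    return np.concatenate(parts)

# Hypothetical training data: per-face component crops + identity labels.
# X = np.stack([build_feature_vector(c) for c in train_components])
# y = np.array(train_identities)
# clf = SVC(kernel="linear").fit(X, y)   # one multi-class SVM classifier
# identity = clf.predict(build_feature_vector(test_components)[None, :])
```

The whole-face approach is the degenerate case of the same pipeline with a single "component": the entire face image.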
Abstract:
This paper describes a trainable system capable of tracking faces and facial features such as eyes and nostrils, and of estimating basic mouth features such as degree of openness and smile, in real time. In developing this system, we have addressed the twin issues of image representation and algorithms for learning. We have used the invariance properties of image representations based on Haar wavelets to robustly capture various facial features. Furthermore, unlike previous approaches, this system is entirely trained from examples and does not rely on a priori (hand-crafted) models of facial features based on optical flow or facial musculature. The system works in several stages that begin with face detection, followed by localization of facial features and estimation of mouth parameters. Each of these stages is formulated as a problem in supervised learning from examples. We apply the new and robust technique of support vector machines (SVM) for classification in the stages of skin segmentation, face detection, and eye detection. Estimation of mouth parameters is modeled as a regression from a sparse subset of coefficients (basis functions) of an overcomplete dictionary of Haar wavelets.
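The final regression stage can be sketched compactly: compute Haar-like responses of the mouth region at several scales and regress the openness parameter from them. Support-vector regression and the feature layout below are assumptions standing in for the paper's exact regression machinery.

```python
# Sketch: regress a mouth parameter from Haar-like wavelet responses.
# The feature layout and SVR hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR

def haar_responses(patch, scales=(2, 4, 8)):
    """Horizontal and vertical Haar-like responses at several scales."""
    feats = []
    for s in scales:
        for y in range(0, patch.shape[0] - 2 * s, s):
            for x in range(0, patch.shape[1] - 2 * s, s):
                block = patch[y:y + 2 * s, x:x + 2 * s].astype(float)
                top, bottom = block[:s].sum(), block[s:].sum()
                left, right = block[:, :s].sum(), block[:, s:].sum()
                feats += [top - bottom, left - right]  # two orientations
    return np.array(feats)

# Hypothetical training: mouth patches with labeled openness in [0, 1].
# X = np.stack([haar_responses(p) for p in mouth_patches])
# reg = SVR(kernel="rbf", C=1.0).fit(X, openness_labels)
# openness = reg.predict(haar_responses(new_patch)[None, :])
```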
Abstract:
Testing constraints for real-time systems are usually verified through the satisfiability of propositional formulae. In this paper, we propose an alternative in which the verification of timing constraints is done by counting the number of truth assignments instead of deciding boolean satisfiability. This number can also tell us how "far away" a given specification is from satisfying its safety assertion. Furthermore, specifications and safety assertions are often modified incrementally, with problematic bugs fixed one at a time. To support this style of development, we propose an incremental algorithm for counting satisfiability. Our proposed incremental algorithm is optimal in that no unnecessary nodes are created during each counting, and it works for the class of path RTL. To illustrate the application, we show how incremental satisfiability counting can be applied to the well-known railroad crossing example, particularly while its specification is still being refined.
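Model counting itself is easy to state, and a tiny non-incremental counter conveys the quantity being maintained; this brute-force splitting sketch is ours for illustration, not the paper's incremental algorithm.

```python
# Naive #SAT: count satisfying assignments of a CNF formula by
# recursive case-splitting. The paper's contribution is maintaining
# such counts *incrementally* as the specification is refined.

def count_models(clauses, variables):
    """clauses: list of clauses, each a list of non-zero ints
    (negative = negated variable). variables: iterable of variable ids."""

    def simplify(cls, lit):
        """Assign lit = True; drop satisfied clauses, shrink the rest."""
        out = []
        for c in cls:
            if lit in c:
                continue                    # clause already satisfied
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None                 # empty clause: contradiction
            out.append(reduced)
        return out

    def count(cls, vars_left):
        if cls is None:
            return 0
        if not cls:
            return 2 ** len(vars_left)      # all remaining vars are free
        v, rest = vars_left[0], vars_left[1:]
        return (count(simplify(cls, v), rest) +
                count(simplify(cls, -v), rest))

    return count(clauses, list(variables))

# (x1 or x2) and (not x1 or x3): 4 of the 8 assignments are models.
print(count_models([[1, 2], [-1, 3]], [1, 2, 3]))  # -> 4
```

The ratio of the count to 2^n is one natural reading of how "far away" a specification is from violating (or satisfying) an assertion.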
Abstract:
The proposal presented in this thesis is to provide designers of knowledge-based supervisory systems for dynamic processes with a framework that facilitates their tasks by avoiding interface, data-flow, and management problems among tools. The approach is intended to be useful to both control and process engineers in assisting their tasks. The use of AI technologies to diagnose and supervise control loops and, of course, to assist process supervisory tasks such as fault detection and diagnosis is within the scope of this work. Special effort has been put into the integration of tools for assisting the design of expert supervisory systems. With this aim, the experience of Computer Aided Control Systems Design (CACSD) frameworks has been analysed and used to design a Computer Aided Supervisory Systems Design (CASSD) framework. In this sense, some basic facilities are required in the proposed framework.
Abstract:
We discuss the implementation of a method for solving initial boundary value problems for integrable evolution equations in a time-dependent domain. The method is applied to a dispersive linear evolution equation with spatial derivatives of arbitrary order and to the defocusing nonlinear Schrödinger equation in the domain l(t) < x < ∞.
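For concreteness, the two model problems named above can be written in their standard forms (the abstract itself does not display them):

```latex
\[
\partial_t q + \alpha\,\partial_x^{\,n} q = 0, \qquad n \ge 2,
\]
\[
i\,\partial_t q + \partial_x^{2} q - 2\,|q|^2 q = 0 \quad \text{(defocusing NLS)},
\]
```

each posed for x in the moving domain with boundary l(t), together with appropriate initial and boundary data.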
Abstract:
A total of 86 growth profiles from meat and egg strains of chickens (male and female) were used in this study. Different flexible growth functions were evaluated for their ability to describe the relationship between live weight and age, and were compared with the Gompertz and logistic equations, which have a fixed point of inflection. Six growth functions were used: Gompertz, logistic, Lopez, Richards, France, and von Bertalanffy. A comparative analysis was carried out based on model behavior and statistical performance. The results of this study confirmed the initial concern about the limitation of a fixed point of inflection, such as in the Gompertz equation. Therefore, flexible growth functions are recommended as alternatives to the simpler equations (with a fixed point of inflection) for describing the relationship between live weight and age, for the following reasons: they are easy to fit; because of their flexibility they very often give a closer fit to the data points, and therefore a smaller RSS, than the simpler models; and, through the addition of an extra parameter, they encompass the simpler models, which is especially important when the behavior of a particular data set is not known in advance.
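The fixed-inflection limitation is easiest to see from the equations themselves (standard parameterizations, not necessarily the exact forms fitted in the study):

```latex
\[
\text{Gompertz: } W(t) = W_f \exp\!\bigl(-e^{-k(t-t^{*})}\bigr),
\qquad W(t^{*}) = W_f/e \approx 0.368\,W_f,
\]
\[
\text{Richards: } W(t) = \frac{W_f}{\bigl(1 + \nu\, e^{-k(t-t^{*})}\bigr)^{1/\nu}},
\qquad W_{\text{inflection}} = W_f\,(1+\nu)^{-1/\nu},
\]
```

so the Gompertz curve always inflects at about 37% of final weight, whereas the Richards shape parameter ν lets the inflection fraction vary, at the cost of one extra parameter.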
Abstract:
The successful implementation of just-in-time (JIT) purchasing policies in many industries has prompted many companies that still use the economic order quantity (EOQ) purchasing policy to ponder whether they should switch to JIT purchasing. Despite existing studies that directly compare the costs of the EOQ and JIT purchasing systems, this decision is still difficult to make, especially when price discounts have to be considered. JIT purchasing may not always be successful, even for plants that have experienced, or can take advantage of, a reduction in physical space through JIT operations. Hence, the objective of this study is to expand a classical EOQ model with price discounts in order to derive the EOQ–JIT cost indifference point. The objective was tested and achieved through a survey and a case study conducted in the ready-mixed concrete industry in Singapore.
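Setting the discount structure aside for a moment, the basic shape of such an indifference point follows from equating the classical EOQ annual cost with a JIT unit-price premium (a textbook-style sketch, not the paper's exact model):

```latex
\[
TC_{EOQ}(D) = P_E D + \sqrt{2\,D\,S\,h},
\qquad
TC_{JIT}(D) = P_J D,
\]
\[
TC_{EOQ}(D^{*}) = TC_{JIT}(D^{*})
\;\Longrightarrow\;
D^{*} = \frac{2\,S\,h}{(P_J - P_E)^{2}},
\]
```

where D is annual demand, S the ordering cost, h the unit holding cost, and P_E, P_J the EOQ and JIT unit prices: below D*, the JIT price premium is outweighed by the avoided ordering and holding costs, and above it EOQ purchasing is cheaper. Price discounts and space savings shift this break-even point, which is what the expanded model captures.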