1000 results for Dynamic Calibration
Abstract:
An accurate sense of time contributes to functions ranging from the perception and anticipation of sensory events to the production of coordinated movements. However, accumulating evidence demonstrates that time perception is subject to strong illusory distortion. In two experiments, we investigated whether the subjective speed of temporal perception is dependent on our visual environment. By presenting human observers with speed-altered movies of a crowded street scene, we modulated performance on subsequent production of "20s" elapsed intervals. Our results indicate that one's visual environment significantly contributes to calibrating our sense of time, independently of any modulation of arousal. This plasticity generates an assay for the integrity of our sense of time and its rehabilitation in clinical pathologies.
Abstract:
Received signal strength-based localization systems usually rely on a calibration process that aims at characterizing the propagation channel. However, due to changing environmental dynamics, the behavior of the channel may change over time; thus, recalibration processes are necessary to maintain positioning accuracy. This paper proposes a dynamic calibration method to initially calibrate and subsequently update the parameters of the propagation channel model using a Least Mean Squares approach. The method assumes that each anchor node in the localization infrastructure is characterized by its own propagation channel model. In practice, a set of sniffers is used to collect RSS samples, which are used to automatically calibrate each channel model by iteratively minimizing the positioning error. The proposed method is validated through numerical simulation, showing that the positioning error of the mobile nodes is effectively reduced. Furthermore, the method has a very low computational cost; therefore, it can be used in real-time operation on wireless resource-constrained nodes.
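A minimal sketch of the kind of update rule described above, assuming a log-distance path-loss model per anchor and an LMS step on its two parameters; for brevity the RSS prediction error is minimized here, whereas the paper minimizes the positioning error. The function name and default values are illustrative, not taken from the paper.

import numpy as np

def lms_calibrate_anchor(rss_samples, distances, p0=-40.0, n=2.0, mu=0.01, d0=1.0):
    # Iteratively adjust the per-anchor path-loss parameters (p0, n) with a
    # Least Mean Squares rule, given RSS samples collected by sniffers at
    # known distances from the anchor.
    for rss, d in zip(rss_samples, distances):
        predicted = p0 - 10.0 * n * np.log10(d / d0)   # log-distance model
        error = rss - predicted                        # instantaneous error
        p0 += mu * error                               # d(predicted)/d(p0) = 1
        n -= mu * error * 10.0 * np.log10(d / d0)      # d(predicted)/d(n) = -10*log10(d/d0)
    return p0, n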
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Purpose: Although the manufacturers of the SRM and Power Tap (PT) bicycle power monitoring devices claim accuracy to within 2.5%, there are limited scientific data available in support. The purpose of this investigation was to assess the accuracy of the SRM and PT under different conditions. Methods: First, 19 SRM were calibrated, raced for 11 months, and retested using a dynamic CALRIG (50-1000 W at 100 rpm). Second, using the same procedure, five PT were repeat tested on alternate days. Third, the most accurate SRM and PT were tested for the influence of cadence (60, 80, 100, 120 rpm), temperature (8 and 21°C), and time (1 h at ~300 W) on accuracy. Finally, the same SRM and PT were downloaded and compared after random cadence and gear surges using the CALRIG and on a training ride. Results: The mean error scores for the SRM and PT factory calibrations over a range of 50-1000 W were 2.3 +/- 4.9% and -2.5 +/- 0.5%, respectively. A second set of trials provided stable results for 15 calibrated SRM after 11 months (-0.8 +/- 1.7%), and follow-up testing of all PT units confirmed these findings (-2.7 +/- 0.1%). Accuracy for the SRM and PT was not largely influenced by time and cadence; however, power output readings were noticeably influenced by temperature (5.2% for SRM and 8.4% for PT). During field trials, SRM average and max power were 4.8% and 7.3% lower, respectively, compared with PT. Conclusions: When operated according to the manufacturers' instructions, both the SRM and PT offer the coach, athlete, and sport scientist the ability to accurately monitor power output in the lab and the field. Calibration procedures matching performance tests (duration, power, cadence, and temperature) are, however, advised, as the error associated with each unit may vary.
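As a small worked example of how error scores of this kind can be computed, assuming "mean error" denotes the mean and standard deviation of the relative error of a unit's readings against the CALRIG reference at each power step (the exact definition is not given in the abstract); all readings below are made up.

def mean_error_percent(measured, reference):
    # Relative error (%) of power-meter readings against the calibration rig
    errors = [100.0 * (m - r) / r for m, r in zip(measured, reference)]
    mean = sum(errors) / len(errors)
    sd = (sum((e - mean) ** 2 for e in errors) / (len(errors) - 1)) ** 0.5
    return mean, sd

reference = [50, 150, 250, 350, 450, 550, 650, 750, 850, 950]   # rig steps (W)
measured  = [51, 153, 255, 356, 459, 561, 663, 764, 866, 969]   # unit readings (W)
print(mean_error_percent(measured, reference))   # roughly +2% mean error for these fabricated numbers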
Abstract:
Report published in the Proceedings of the National Conference on "Education in the Information Society", Plovdiv, May, 2013
Abstract:
Flash floods pose a significant danger to life and property. Unfortunately, in arid and semiarid environments runoff generation shows complex non-linear behavior with strong spatial and temporal non-uniformity. As a result, the predictions made by physically-based simulations in semiarid areas are subject to great uncertainty, and failure in the predictive behavior of existing models is common. Thus, better descriptions of physical processes at the watershed scale need to be incorporated into hydrological model structures. For example, terrain relief has been systematically considered static in flood modelling at the watershed scale. Here, we show that the integrated effect of small distributed relief variations, originating through concurrent hydrological processes within a storm event, was significant on the watershed-scale hydrograph. We model these observations by introducing dynamic formulations of two relief-related parameters at diverse scales: maximum depression storage, and the roughness coefficient in channels. In the final (a posteriori) model structure these parameters are allowed to be either time-constant or time-varying. The case under study is a convective storm in a semiarid Mediterranean watershed with ephemeral channels and high agricultural pressure (the Rambla del Albujón watershed; 556 km²), which showed a complex multi-peak response. First, to obtain quasi-sensible simulations in the (a priori) model with time-constant relief-related parameters, a spatially distributed parameterization was strictly required. Second, a generalized likelihood uncertainty estimation (GLUE) inference applied to the improved model structure, and conditioned on observed nested hydrographs, showed that accounting for dynamic relief-related parameters led to improved simulations. The discussion is finally broadened by considering the use of the calibrated model both to analyze the sensitivity of the watershed to storm motion and to attempt flood forecasting of a stratiform event with highly different behavior.
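A compact sketch of the GLUE step mentioned above, assuming the Nash-Sutcliffe efficiency is used as the informal likelihood measure with a fixed behavioural threshold; the paper conditions on observed nested hydrographs, and the measure and threshold here are illustrative choices only.

import numpy as np

def nash_sutcliffe(sim, obs):
    # Efficiency of a simulated hydrograph against the observed one
    sim = np.asarray(sim, float); obs = np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue_weights(ensemble, obs, threshold=0.3):
    # Keep "behavioural" parameter sets (NSE above threshold) and turn their
    # likelihoods into weights for the uncertainty bounds
    scores = np.array([nash_sutcliffe(sim, obs) for sim in ensemble])
    w = np.where(scores > threshold, scores, 0.0)
    return w / w.sum() if w.sum() > 0 else w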
Abstract:
The problem of dynamic camera calibration in close-range environments with moving objects, using straight lines as references, is addressed. A mathematical model for the correspondence of a straight line in the object and image spaces is discussed. This model is based on the equivalence between the vector normal to the interpretation plane in the image space and the vector normal to the rotated interpretation plane in the object space. To solve the dynamic camera calibration, Kalman Filtering is applied; an iterative process based on the recursive property of the Kalman Filter is defined, using the sequentially estimated camera orientation parameters to feed back into the feature extraction process in the image. For the dynamic case, e.g. an image sequence of a moving object, a state prediction and a covariance matrix for the next instant are obtained using the available estimates and the system model. Filtered state estimates of good quality can then be computed from these predictions for each instant of the image sequence, using the Kalman Filter update and the system model parameters. The proposed approach was tested with simulated and real data. Experiments with real data were carried out in a controlled environment, considering a sequence of images of a moving cube following a linear trajectory over a flat surface.
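A minimal sketch of the Kalman Filter recursion used for the dynamic case, written as a generic linear predict/update pair. In the line-based calibration the measurement model relating interpretation-plane normals to the orientation parameters is nonlinear, so in practice it would be linearized around the prediction (extended filtering); that step is omitted here, and all matrix names are generic.

import numpy as np

def kf_predict(x, P, F, Q):
    # Propagate the orientation state and its covariance to the next frame
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    # Correct the prediction with the line-based observation z
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P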
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models have been proposed to translate commanded air-handling parameters into those actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control. Parameters for this second-order system have been extracted from real transient data. The model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from corresponding emissions based on measured air-handling parameters.
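A rough sketch of a dynamic constraint model of this kind, treating the commanded-to-achieved translation of one air-handling parameter as a discretized linear second-order system; the natural frequency and damping values below are placeholders for the parameters extracted from real transient data, and the PD controller is folded into the closed-loop dynamics as a simplification.

import numpy as np

def second_order_response(u_cmd, dt=0.1, wn=2.0, zeta=0.8):
    # Translate a commanded air-handling trajectory u_cmd into the trajectory
    # actually achieved, assuming closed-loop second-order dynamics:
    #   y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u
    y, ydot = u_cmd[0], 0.0
    achieved = []
    for u in u_cmd:
        yddot = wn ** 2 * (u - y) - 2.0 * zeta * wn * ydot
        ydot += dt * yddot
        y += dt * ydot
        achieved.append(y)
    return np.array(achieved)

# e.g. a step in commanded boost pressure lags in the achieved trace:
print(second_order_response([1.0] * 5 + [2.0] * 20)[-1])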
Abstract:
Current nanometer technologies suffer from within-die parameter uncertainties, varying workload conditions, aging, and temperature effects that cause a serious reduction in yield and performance. In this scenario, monitoring, calibration, and dynamic adaptation become essential, demanding systems with a collection of multi-purpose monitors and exposing the need for lightweight monitoring networks. This paper presents a new monitoring network paradigm able to perform an early prioritization of the information. This is achieved by the introduction of a new hierarchy level, the threshing level. Targeting it, we propose a time-domain signaling scheme over a single wire that minimizes the network switching activity as well as the routing requirements. To validate our approach, we make a thorough analysis of the architectural trade-offs and present two complete monitoring systems that achieve an area improvement of 40% and a power reduction of three orders of magnitude compared to previous works.
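A loose, purely illustrative sketch (in Python rather than hardware) of the two ideas above: a monitor reading carried as a pulse-to-pulse delay for single-wire time-domain signalling, and a threshing stage that forwards only out-of-band readings so information is prioritized early; all names and values are assumptions.

def encode_as_delay(reading, lsb_ns=10):
    # Time-domain signalling: the reading is represented by the delay (ns)
    # between two pulses on the shared single wire
    return reading * lsb_ns

def threshing_filter(reading, low, high):
    # Threshing level: discard readings inside the expected band so only
    # prioritized (out-of-range) information travels up the hierarchy
    return reading if reading < low or reading > high else None

# hypothetical temperature monitor reporting in degrees C
for r in (48, 52, 97):
    print(r, encode_as_delay(r), threshing_filter(r, low=20, high=90))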
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still surround many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, these few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values. This necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, in modern many-core systems cores have support for dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach is also proposed to calibrate thermal sensors across a range of voltage levels.
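A minimal sketch of the pull-based dynamic load balancing idea: idle workers take the next chunk of faults from a shared queue, so faster cores naturally process more chunks than cores slowed by network congestion. The Single-Chip Cloud Computer actually communicates by message passing rather than shared-memory threads, so this Python version only illustrates the balancing scheme; simulate_faults is a hypothetical stand-in for the per-chunk fault-simulation kernel.

from queue import Queue
from threading import Thread

def simulate_faults(chunk):
    # stand-in for the per-chunk fault simulation work
    return [f * 2 for f in chunk]

def run_balanced(fault_list, n_workers=4, chunk_size=8):
    tasks, results = Queue(), []
    def worker():
        while True:
            chunk = tasks.get()
            if chunk is None:          # sentinel: no more work
                return
            results.append(simulate_faults(chunk))
    threads = [Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for i in range(0, len(fault_list), chunk_size):
        tasks.put(fault_list[i:i + chunk_size])
    for _ in threads:
        tasks.put(None)
    for t in threads:
        t.join()
    return results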
Abstract:
The semiarid region of northeastern Brazil, the Caatinga, is extremely important due to its biodiversity and endemism. Measurements of plant physiology are crucial for the calibration of Dynamic Global Vegetation Models (DGVMs), which are currently used to simulate the responses of vegetation in the face of global changes. In field work carried out in an area of preserved Caatinga forest located in Petrolina, Pernambuco, measurements of carbon assimilation (in response to light and CO2) were performed on 11 individuals of Poincianella microphylla, a native species that is abundant in this region. These data were used to calibrate the maximum carboxylation velocity (Vcmax) used in the INLAND model. The calibration techniques used were Multiple Linear Regression (MLR) and data mining techniques such as Classification And Regression Trees (CART) and K-MEANS clustering. The results were compared to the uncalibrated model. It was found that simulated Gross Primary Productivity (GPP) reached 72% of observed GPP when using the calibrated Vcmax values, whereas the uncalibrated approach accounted for only 42% of observed GPP. Thus, this work shows the benefits of calibrating DGVMs using field ecophysiological measurements, especially in areas where field data are scarce or non-existent, such as the Caatinga.
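As a sketch of the MLR variant of the calibration, a least-squares fit of Vcmax against leaf-level predictors; the choice of predictors (e.g. leaf nitrogen and specific leaf area) and the numbers below are hypothetical, since the abstract does not state them.

import numpy as np

def fit_vcmax_mlr(predictors, vcmax_obs):
    # Multiple linear regression: Vcmax ~ b0 + b . x, fitted to the values
    # derived from the light- and CO2-response measurements
    X = np.column_stack([np.ones(len(predictors)), predictors])
    coef, *_ = np.linalg.lstsq(X, vcmax_obs, rcond=None)
    return coef

X = np.array([[2.1, 10.5], [1.8, 12.0], [2.4, 9.8], [2.0, 11.1]])   # hypothetical predictors
y = np.array([55.0, 48.0, 61.0, 52.0])                              # Vcmax (umol m-2 s-1)
print(fit_vcmax_mlr(X, y))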
Abstract:
Highly redundant or statically indeterminate structures, such as cable-stayed bridges, have been of particular concern to the engineering community because of the complex parameters that must be taken into account for health monitoring. The purpose of this study was to verify the reliability and practicability of using GPS to characterize the dynamic oscillations of small-span bridges. The test was carried out on a cable-stayed wood footbridge at the Escola de Engenharia de Sao Carlos, Universidade de Sao Paulo, Brazil. Initially, a static load trial was carried out to get an idea of the deck amplitude and oscillation frequency. After that, a calibration trial was carried out by applying a well-known oscillation to the rover antenna to check the detectable limits of the method in that environment. Finally, a dynamic load trial was carried out using GPS and a displacement transducer to measure the deck oscillation. The displacement transducer was used only to confirm the results obtained by the GPS. The results have shown that the frequencies and amplitude displacements obtained by the GPS are in good agreement with the displacement transducer responses. GPS can be used as a reliable tool to characterize the dynamic behavior of large structures such as cable-stayed footbridges undergoing dynamic loads.
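A short sketch of how the dominant oscillation frequency could be extracted from the GPS displacement series for comparison with the displacement transducer, assuming a uniformly sampled record; the paper's actual processing chain is not described in the abstract.

import numpy as np

def dominant_frequency(displacement, fs):
    # Estimate the dominant oscillation frequency (Hz) from a displacement
    # time series sampled at fs Hz
    x = np.asarray(displacement, float)
    x = x - x.mean()                            # remove the static offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin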
Abstract:
The use of computational fluid dynamics simulations for calibrating a flush air data system is described. In particular, the flush air data system of the HYFLEX hypersonic vehicle is used as a case study. The HYFLEX air data system consists of nine pressure ports located flush with the vehicle nose surface, connected to onboard pressure transducers. After appropriate processing, surface pressure measurements can be converted into useful air data parameters. The processing algorithm requires an accurate pressure model, which relates air data parameters to the measured pressures. In the past, such pressure models have been calibrated using combinations of flight data, ground-based experimental results, and numerical simulation. We perform a calibration of the HYFLEX flush air data system using computational fluid dynamics simulations exclusively. The simulations are used to build an empirical pressure model that accurately describes the HYFLEX nose pressure distribution over a range of flight conditions. We believe that computational fluid dynamics provides a quick and inexpensive way to calibrate the air data system and is applicable to a broad range of flight conditions. When tested with HYFLEX flight data, the calibrated system is found to work well. It predicts vehicle angle of attack and angle of sideslip to accuracy levels that generally satisfy flight control requirements. Dynamic pressure is predicted to within the resolution of the onboard inertial measurement unit. We find that wind-tunnel experiments and flight data are not necessary to accurately calibrate the HYFLEX flush air data system for hypersonic flight.
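A rough sketch of building an empirical per-port pressure model from CFD results, assuming a quadratic dependence on angle of attack and sideslip scaled by dynamic pressure; the functional form and variable names are assumptions for illustration, not the HYFLEX model itself.

import numpy as np

def fit_port_model(alpha, beta, qbar, p_cfd):
    # Least-squares fit of an empirical per-port pressure model
    #   p ~ qbar * (c0 + c1*alpha + c2*beta + c3*alpha^2 + c4*beta^2)
    # to CFD-computed surface pressures over the flight envelope
    a = np.asarray(alpha, float)
    b = np.asarray(beta, float)
    q = np.asarray(qbar, float)
    basis = np.column_stack([np.ones_like(a), a, b, a ** 2, b ** 2]) * q[:, None]
    coef, *_ = np.linalg.lstsq(basis, np.asarray(p_cfd, float), rcond=None)
    return coef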
Abstract:
The Our Lady of Conception church is located in the village of Monforte (Portugal) and is not in use nowadays. The church presents structural damage and, consequently, a study was carried out. The study involved the survey of the damage, dynamic identification tests under ambient vibration, and numerical analysis. The church consists of the central nave, the chancel, the sacristy and the corridor that gives access to the pulpit. The masonry walls have different thicknesses, namely 0.65 m in the chancel, 0.70 m in the sacristy, 0.92 m in the central nave and 0.65 m in the corridor, and include 8 buttresses with different dimensions. The total longitudinal and transversal dimensions of the church are equal to 21.10 m and 14.26 m, respectively. The survey of the damage showed that, in general, the masonry walls are in good condition, with the exception of the transversal walls of the nave, which present severe cracks. The arches of the vault also present severe cracks along the central nave. As a consequence, water infiltration has increased the degradation of the vault and paintings. Furthermore, the foundations present settlement in the southwest direction. The dynamic identification tests were carried out under the ambient excitation of the wind, using 12 piezoelectric accelerometers of high sensitivity. These tests allowed the dynamic properties of the church to be estimated, namely frequencies, mode shapes and damping ratios. A FEM numerical model was prepared and calibrated based on the first four experimental modes estimated in the dynamic identification tests. The average error between the experimental and numerical frequencies of the first four modes is equal to 5%. After calibration of the numerical model, pushover analyses with a load pattern proportional to the mass, in the transversal and longitudinal directions of the church, were performed. The results of the numerical analyses allow the conclusion that the most vulnerable direction of the church is the transversal one and that the maximum load factor is equal to 0.35.
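A small sketch of the model-calibration metric quoted above, i.e. the average relative error between experimental and numerical frequencies for the paired modes; the frequency values in the example are hypothetical.

def average_frequency_error(f_exp, f_num):
    # Average relative error (%) between experimental and numerical
    # frequencies for the paired mode shapes used in the calibration
    errors = [abs(fn - fe) / fe * 100.0 for fe, fn in zip(f_exp, f_num)]
    return sum(errors) / len(errors)

# hypothetical first four mode pairs (Hz): experimental vs. calibrated FEM
print(average_frequency_error([3.2, 4.1, 5.6, 7.0], [3.3, 4.0, 5.9, 7.3]))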