996 results for automated meter reading (AMR)
Measurement of acceleration while walking as an automated method for gait assessment in dairy cattle
Abstract:
The aims were to determine whether measures of acceleration of the legs and back of dairy cows while they walk could help detect changes in gait or locomotion associated with lameness and with differences in the walking surface. In 2 experiments, 12 or 24 multiparous dairy cows were fitted with five 3-dimensional accelerometers, 1 attached to each leg and 1 to the back, and acceleration data were collected while cows walked in a straight line on concrete (experiment 1) or on both concrete and rubber (experiment 2). Cows were video-recorded while walking to assess overall gait, asymmetry of the steps, and walking speed. In experiment 1, cows were selected to maximize the range of gait scores, whereas no clinically lame cows were enrolled in experiment 2. For each accelerometer location, overall acceleration was calculated as the magnitude of the 3-dimensional acceleration vector, along with the variance of overall acceleration and the asymmetry of variance of acceleration within the front and rear pairs of legs. In experiment 1, the asymmetry of variance of acceleration in the front and rear legs was positively correlated with overall gait and with the visually assessed asymmetry of the steps (r ≥ 0.6). Walking speed was negatively correlated with the asymmetry of variance of the rear legs (r = −0.8) and positively correlated with the acceleration and the variance of acceleration of each leg and the back (r ≥ 0.7). In experiment 2, cows had lower gait scores [2.3 vs. 2.6; standard error of the difference (SED) = 0.1, measured on a 5-point scale] and lower scores for asymmetry of the steps (18.0 vs. 23.1; SED = 2.2, measured on a continuous 100-unit scale) when they walked on rubber rather than concrete, and their walking speed increased (1.28 vs. 1.22 m/s; SED = 0.02). The acceleration of the front (1.67 vs. 1.72 g; SED = 0.02) and rear (1.62 vs. 1.67 g; SED = 0.02) legs and the variance of acceleration of the rear legs (0.88 vs. 0.94 g; SED = 0.03) were lower when cows walked on rubber rather than concrete. Despite the improvement in gait score on rubber, the asymmetry of variance of acceleration of the front legs was higher (15.2 vs. 10.4%; SED = 2.0). The difference in walking speed between concrete and rubber correlated with the difference in mean acceleration and the difference in variance of acceleration of the legs and back (r ≥ 0.6). Three-dimensional accelerometers, especially when attached to a leg, appear to be a promising tool for on-farm lameness detection and for studying walking surfaces.
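The acceleration features described above are simple to compute from raw accelerometer samples. Below is a minimal sketch in Python; the synthetic data and the exact asymmetry formula (absolute difference of the pair's variances relative to the larger one, expressed as a percentage) are illustrative assumptions rather than the paper's definitions:

```python
import numpy as np

def overall_acceleration(ax, ay, az):
    """Magnitude of the 3-D acceleration vector for each sample (in g)."""
    return np.sqrt(ax**2 + ay**2 + az**2)

def asymmetry_of_variance(left, right):
    """Asymmetry (%) between the variances of a pair of legs.
    The formula is an assumption; the paper reports it as a percentage."""
    v_l, v_r = np.var(left), np.var(right)
    return 100.0 * abs(v_l - v_r) / max(v_l, v_r)

# Example with synthetic data for one front pair of legs:
rng = np.random.default_rng(0)
front_left = overall_acceleration(*rng.normal(0, 1.0, (3, 500)))
front_right = overall_acceleration(*rng.normal(0, 1.1, (3, 500)))
print(f"mean acc: {front_left.mean():.2f} g, "
      f"variance: {front_left.var():.2f}, "
      f"asymmetry: {asymmetry_of_variance(front_left, front_right):.1f} %")
```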
Abstract:
The object of this work is Hegel's Logic, which comprises the first third of his philosophical System, the other parts being the Philosophy of Nature and the Philosophy of Spirit. The work is divided into two parts. The first part investigates Hegel's Logic in itself, without explicit reference to the rest of Hegel's System. It is argued there that Hegel's Logic contains a methodology for constructing examples of basic ontological categories. The starting point on which this construction is based is a structure Hegel calls Nothing, which I argue to be identical with an empty situation, that is, a situation with no objects in it. Examples of further categories are constructed, firstly, by making previous structures objects of new situations; this rule makes it possible for Hegel to introduce examples of ontological structures that contain objects as constituents. Secondly, Hegel also takes the very constructions he uses as constituents of further structures; he is thus able to exemplify ontological categories involving causal relations. The final result of Hegel's Logic should then be a model of Hegel's Logic itself, or at least of its basic methods. The second part of the work focuses on the relation of Hegel's Logic to the other parts of his System. My interpretation tries to avoid, firstly, the extreme of taking Hegel's System as a grand metaphysical attempt to deduce what exists through abstract thinking, and secondly, the extreme of seeing it as mere diluted Kantianism, a second-order investigation of theories about objects rather than of actual objects. I suggest a third way of reading Hegel's System, based on extending the constructivism of Hegel's Logic to the whole of his philosophical System. On this interpretation, transitions between parts of Hegel's System should be understood not as proofs of any sort, but as constructions of one structure, or of its model, from another structure. These transitions involve, at the very least (especially within the Philosophy of Nature), the modelling of one type of object or phenomenon through the characteristics of an object or phenomenon of another type, and, in the best case (especially within the Philosophy of Spirit), the transformation of an object or phenomenon of one type into an object or phenomenon of another type. Thus, the transitions and descriptions within Hegel's System concern actual objects and not mere theories, yet they involve no fallacious deductions.
Abstract:
Research on reading has been successful in revealing how attention guides eye movements when people read single sentences or text paragraphs under simplified and strictly controlled experimental conditions. Less is known, however, about reading processes in more naturalistic and applied settings, such as reading Web pages. This thesis investigates online reading processes by recording participants' eye movements. The thesis consists of four experimental studies that examine how written language processing is affected by the location of stimuli presented outside the currently fixated region (Studies I and III), text format (Study II), animation and abrupt onset of online advertisements (Study III), and the phase of an online information search task (Study IV). Furthermore, the studies investigate how the goal of the reading task affects attention allocation during reading, by comparing reading for comprehension with free browsing and by varying the difficulty of an information search task. The results show that text format affects the reading process: vertical text (one word per line) is read at a slower rate than standard horizontal text, and mean fixation durations are longer for vertical than for horizontal text. Furthermore, animated online ads and abrupt ad onsets capture online readers' attention, direct their gaze toward the ads, and disrupt the reading process. Compared with a reading-for-comprehension task, online ads are attended to more in a free browsing task. In both tasks, abrupt ad onsets result in rather immediate fixations toward the ads, and this effect is enhanced when the ad is presented near the text being read. In addition, reading processes vary as Web users proceed through online information search tasks, for example when searching for a specific keyword, looking for an answer to a question, or trying to find the subjectively most interesting topic. A scanning type of behavior is typical at the beginning of these tasks, after which participants tend to switch to a more careful reading state before finishing in what are referred to as decision states. Finally, the results provide evidence that left-to-right readers extract more parafoveal information to the right of the fixated word than to the left, suggesting that learning biases attentional orienting toward the reading direction.
Abstract:
We describe an automated calorimeter for the measurement of specific heat in the temperature range 0.5 K < T < 10 K. It uses samples of moderate size (100–1000 mg), has moderate precision and accuracy (2%–5%), is easy to operate, and allows measurements to be done quickly with economical consumption of 4He. The accuracy of the calorimeter was checked by measuring the specific heat of copper and that of aluminium near its superconducting transition temperature.
Abstract:
A fully automated, versatile system for Temperature Programmed Desorption (TPD), Temperature Programmed Reaction (TPR), and Evolved Gas Analysis (EGA) has been designed and fabricated. The system consists of a micro-reactor that can be evacuated to 10⁻⁶ Torr and heated from 30 to 750°C at a rate of 5 to 30°C per minute. The gas evolved from the reactor is analysed by a quadrupole mass spectrometer (1–300 amu). Each mass scan and the temperature at a given time are acquired by a PC/AT system to generate thermograms. The functioning of the system is exemplified by the temperature programmed desorption of oxygen from YBa2Cu3−xCoxO7±δ, catalytic ammonia oxidation to NO over YBa2Cu3O7−δ, and the anaerobic oxidation of methanol to CO2, CO, and H2O over the YBa2Cu3O7−δ (Y123) and PrBa2Cu3O7−δ (Pr123) systems.
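A thermogram from such a run is essentially a plot of ion current at a chosen mass against the programmed temperature. Below is a minimal sketch in Python; the ramp rate, data layout, and synthetic desorption peak are illustrative assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed layout: one temperature reading and one mass scan per time step.
time = np.linspace(0, 90, 500)        # minutes
temperature = 30 + 8.0 * time         # 8 °C/min ramp, within the 5-30 °C/min range
# Synthetic O2 (m/z = 32) desorption peak centred near 450 °C:
ion_current_m32 = np.exp(-((temperature - 450) / 60.0) ** 2)

plt.plot(temperature, ion_current_m32)
plt.xlabel("Temperature (°C)")
plt.ylabel("Ion current, m/z = 32 (arb. units)")
plt.title("TPD thermogram (synthetic example)")
plt.show()
```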
Abstract:
We present a framework for performance evaluation of manufacturing systems subject to failure and repair. In particular, we determine the mean and variance of accumulated production over a specified time frame and show the usefulness of these results in system design and in evaluating operational policies for manufacturing systems. We extend this analysis to lead time as well. A detailed performability study is carried out for the generic model of a manufacturing system with centralized material handling. Several numerical results are presented, and the relevance of performability analysis in resolving system design issues is highlighted. Specific problems addressed include computing the distribution of total production over a shift period, determining the shift length necessary to deliver a given production target with a desired probability, and obtaining the distribution of manufacturing lead time, all in the face of potential subsystem failures.
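To make the shift-production questions concrete, here is a hedged Monte Carlo sketch (Python) for a single machine alternating exponentially distributed up and down periods; the rates and the produce-while-up model are illustrative assumptions, not the paper's analytical performability method:

```python
import numpy as np

def shift_production(shift_len, rate, mttf, mttr, rng):
    """Units produced during one shift for a machine that alternates
    exponentially distributed up (MTTF) and down (MTTR) periods."""
    t, up, produced = 0.0, True, 0.0
    while t < shift_len:
        dur = min(rng.exponential(mttf if up else mttr), shift_len - t)
        if up:
            produced += rate * dur
        t += dur
        up = not up
    return produced

rng = np.random.default_rng(1)
samples = np.array([shift_production(8.0, 100.0, mttf=4.0, mttr=0.5, rng=rng)
                    for _ in range(20000)])
# Empirical distribution answers both questions from the abstract:
print(f"mean = {samples.mean():.0f}, variance = {samples.var():.0f}")
print(f"P(production >= 700 units in an 8 h shift) = {(samples >= 700).mean():.3f}")
```

Sweeping `shift_len` until the estimated probability reaches the desired level gives the shift length needed for a given production target.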
Abstract:
This paper addresses the problem of automated multiagent search in an unknown environment. Autonomous agents equipped with sensors carry out a search operation in a search space where the uncertainty, or lack of information about the environment, is known a priori as an uncertainty density distribution function. The agents are deployed in the search space to maximize single-step search effectiveness. The centroidal Voronoi configuration, which achieves a locally optimal deployment, forms the basis for the proposed sequential deploy and search strategy. It is shown that, with the proposed control law, the agent trajectories converge globally and asymptotically to the centroidal Voronoi configuration. Simulation experiments are provided to validate the strategy. Note to Practitioners: In this paper, searching an unknown region to gather information about it is modeled as a problem of using search as a means of reducing information uncertainty about the region. Multiple automated searchers, or agents, are used to carry out this operation optimally. The problem has many applications in search and surveillance operations using multiple autonomous UAVs or mobile robots. The concept of agents converging to the centroids of their Voronoi cells, weighted with the uncertainty density, is used to design a search strategy named sequential deploy and search. Finally, the performance of the strategy is validated using simulations.
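The deploy step of such a strategy can be sketched by moving each agent to the centroid of its Voronoi cell weighted by the uncertainty density. In the Python sketch below, Voronoi cells are approximated by nearest-agent assignment on a grid, the Gaussian density is an illustrative assumption, and the search step that would reduce the density after each deployment is omitted:

```python
import numpy as np

def weighted_voronoi_centroids(agents, grid, density):
    """Centroid of each agent's Voronoi cell, weighted by the density.
    Cells are approximated by assigning each grid point to its nearest agent."""
    d = np.linalg.norm(grid[:, None, :] - agents[None, :, :], axis=2)
    owner = d.argmin(axis=1)
    centroids = agents.copy()
    for i in range(len(agents)):
        w = density[owner == i]
        if w.sum() > 0:
            centroids[i] = (grid[owner == i] * w[:, None]).sum(0) / w.sum()
    return centroids

# Uncertainty density: a Gaussian bump on the unit square (an assumption).
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
density = np.exp(-20 * ((grid[:, 0] - 0.7) ** 2 + (grid[:, 1] - 0.3) ** 2))

rng = np.random.default_rng(2)
agents = rng.random((5, 2))
for _ in range(50):                  # Lloyd-style deploy iterations
    agents = weighted_voronoi_centroids(agents, grid, density)
print(agents.round(3))               # agents cluster where uncertainty is high
```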
Abstract:
The theory, design, and performance of a solid electrolyte twin thermocell for the direct determination of the partial molar entropy of oxygen in a single-phase or multiphase mixture are described. The difference between the Seebeck coefficients of the concentric thermocells is directly related to the difference in the partial molar entropy of oxygen in the electrodes of each thermocell. The measured potentials are sensitive to small deviations from equilibrium at the electrodes. Small electrical disturbances from simultaneous potential measurements, or oxygen fluxes driven by large oxygen potential gradients between the electrodes, also perturb the thermoelectric potential. An accuracy of ±0.5 cal_th K⁻¹ mol⁻¹ has been obtained by this method for the entropies of formation of NiO and NiAl2O4. This "entropy meter" may be used for measuring the entropies of formation of simple or complex oxides with significant residual contributions that cannot be detected by heat-capacity measurements.
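A hedged reconstruction of the central relation (sign convention may differ from the paper's): when the two concentric thermocells share the same electrolyte and leads, the transported-entropy terms cancel on subtraction, and with four electrons transferred per O2 molecule,

```latex
% Difference between Seebeck coefficients of the twin thermocells
% (sign convention is an assumption; 4 electrons per O2 molecule):
\[
  \alpha_1 - \alpha_2
  = \frac{\mathrm{d}E_1}{\mathrm{d}T} - \frac{\mathrm{d}E_2}{\mathrm{d}T}
  = \frac{\bar{S}_{\mathrm{O}_2}^{(2)} - \bar{S}_{\mathrm{O}_2}^{(1)}}{4F}
\]
```

so the measured difference in Seebeck coefficients reads off the difference in partial molar entropy of oxygen directly.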
Abstract:
In this paper, we propose an approach to modelling flexible manufacturing systems using Coloured Petri Nets (CPN). We illustrate our methodology on a Flexible Manufacturing Cell (FMC) with three machines and three robots. We also analyse the FMC for deadlocks using the invariant analysis of CPNs.
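The invariant analysis mentioned above reduces, for an ordinary Petri net (and for a CPN after unfolding), to finding vectors in the null space of the transposed incidence matrix. A minimal sketch in Python; the two-place toy net is an illustrative assumption:

```python
from sympy import Matrix

# Incidence matrix C (places x transitions) of a toy net:
# p0 --t0--> p1 --t1--> p0   (a simple cycle; tokens are conserved)
C = Matrix([[-1,  1],
            [ 1, -1]])

# Place invariants are vectors y with y^T C = 0,
# i.e. vectors in the null space of C^T.
invariants = C.T.nullspace()
print(invariants)   # [Matrix([[1], [1]])] -> m(p0) + m(p1) is constant
```

Invariants like this establish which token counts are conserved; combined with reachability arguments, they help rule deadlocked markings in or out.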
Abstract:
We propose a new paradigm for displaying comments: showing comments alongside the parts of the article to which they correspond. We evaluate the effectiveness of various approaches to this task and show that a combination of bag-of-words and topic models performs best.
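The bag-of-words half of such a system can be sketched as TF-IDF cosine similarity between each comment and each article paragraph (Python; the topic-model component is omitted, and the sample texts are illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

paragraphs = [
    "The new stadium will be financed through municipal bonds.",
    "Critics argue the team should cover construction costs itself.",
]
comments = ["Why should taxpayers pay for a billionaire's stadium?"]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(paragraphs + comments)
# Similarity of each comment to each paragraph:
sim = cosine_similarity(X[len(paragraphs):], X[:len(paragraphs)])
best = sim.argmax(axis=1)
print(best)   # index of the paragraph each comment is placed alongside
```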
Abstract:
Western Blot analysis is an analytical technique used in molecular biology, biochemistry, immunogenetics, and related fields to separate proteins by electrophoresis. The procedure produces images containing nearly rectangular blots. In this paper, we address the problem of blot quantitation using automated image processing techniques. We formulate a special active contour (or snake), called an Oblong, which locks on to rectangular objects. Oblongs depend on five free parameters, which is also the minimum number of parameters required for a unique characterization. Unlike many snake formulations, Oblongs do not require explicit gradient computations, so the optimization is carried out quickly. The performance of Oblongs is assessed on synthesized data and on Western Blot images.
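The five-parameter, gradient-free character of the fit can be sketched as follows (Python). The energy used here, the mean intensity inside the rectangle minus the mean outside, is an illustrative stand-in for the paper's Oblong energy, and Nelder-Mead stands in for its optimizer:

```python
import numpy as np
from scipy.optimize import minimize

def rectangle_mask(params, shape):
    """Boolean mask of a rotated rectangle: centre (cx, cy), half-sizes
    (w, h), and rotation angle -- the five free parameters."""
    cx, cy, w, h, theta = params
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    c, s = np.cos(theta), np.sin(theta)
    u = c * (xx - cx) + s * (yy - cy)
    v = -s * (xx - cx) + c * (yy - cy)
    return (np.abs(u) <= w) & (np.abs(v) <= h)

def energy(params, img):
    """Dark blot on light background: minimize inside mean minus outside mean."""
    m = rectangle_mask(params, img.shape)
    if m.sum() == 0 or (~m).sum() == 0:
        return 1e9
    return img[m].mean() - img[~m].mean()

# Synthetic blot image:
img = np.ones((60, 80))
img[20:35, 25:55] = 0.2
res = minimize(energy, x0=[40, 30, 20, 10, 0.0], args=(img,),
               method="Nelder-Mead")   # gradient-free, matching the paper's spirit
print(np.round(res.x, 1))              # fitted (cx, cy, w, h, theta)
```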
Abstract:
Faraday-type electromagnetic flow meters are employed for measuring the flow rate of liquid sodium in fast breeder reactors. The calibration of such flow meters is rather difficult, owing to the elaborate arrangements required. The theoretical approach, on the other hand, requires the solution of two coupled electromagnetic partial differential equations, with the flow profile and the applied magnetic field as inputs; this is also quite involved, owing to the 3D nature of the problem. Alternatively, a numerical solution based on the Galerkin finite element method has been suggested in the literature as an attractive option for the required calibration. On this basis, a computer code has been developed in this work on the Matlab platform, with both 20- and 27-node brick elements. The boundary conditions are correctly defined, and several intermediate validation exercises are carried out. Finally, it is shown that the sensitivities predicted by the code for flow meters of four different dimensions agree well with the results given by the analytical expression, providing strong validation. The sensitivity at higher flow rates, for which no analytical approach exists, is shown to decrease with increasing flow velocity.
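For readers unfamiliar with the Galerkin workflow such a code automates, here is a minimal 1D sketch (Python) that assembles and solves −u″ = 1 with linear elements and fixed ends. It illustrates only the assembly-and-boundary-condition pattern, not the 3D brick elements or the coupled electromagnetic equations of the actual code:

```python
import numpy as np

n = 10                        # number of elements on [0, 1]
h = 1.0 / n
K = np.zeros((n + 1, n + 1))  # global stiffness matrix
f = np.zeros(n + 1)           # global load vector

ke = np.array([[1, -1], [-1, 1]]) / h   # element stiffness, linear elements
fe = np.array([0.5, 0.5]) * h           # element load for a unit source
for e in range(n):                      # assembly loop
    K[e:e + 2, e:e + 2] += ke
    f[e:e + 2] += fe

# Dirichlet boundary conditions u(0) = u(1) = 0:
K[0, :], K[0, 0], f[0] = 0, 1, 0
K[-1, :], K[-1, -1], f[-1] = 0, 1, 0

u = np.linalg.solve(K, f)
exact = lambda x: 0.5 * x * (1 - x)     # analytical solution of -u'' = 1
print(np.abs(u - exact(np.linspace(0, 1, n + 1))).max())
```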
Abstract:
Purpose: To develop a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. Methods: The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. Here it is deployed within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. Results: The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it the preferable technique. Conclusions: The LSQR-type method overcomes the computationally expensive nature of the MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment. (C) 2013 American Association of Physicists in Medicine. [http://dx.doi.org/10.1118/1.4792459]
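The flavour of the approach can be sketched in a few lines (Python): a damped LSQR solve provides the reconstruction for a given regularization parameter, and a simplex (Nelder-Mead) search picks the parameter. The hold-out residual criterion below is an illustrative assumption; the paper uses its own objective within the Lanczos bidiagonalization framework:

```python
import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

rng = np.random.default_rng(3)
J = rng.normal(size=(100, 50))            # stand-in for the DOT Jacobian
x_true = np.zeros(50); x_true[10:15] = 1.0
y = J @ x_true + 0.05 * rng.normal(size=100)

def objective(log_lam):
    """Hold-out residual as the model-selection criterion (an assumption)."""
    lam = np.exp(log_lam[0])
    x = lsqr(J[:80], y[:80], damp=lam)[0]       # damped LSQR solve
    return np.linalg.norm(J[80:] @ x - y[80:])  # residual on held-out rows

res = minimize(objective, x0=[0.0], method="Nelder-Mead")  # simplex search
print("chosen regularization parameter:", np.exp(res.x[0]))
```

Each objective evaluation costs only one LSQR solve, which is the source of the efficiency the abstract claims.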
Abstract:
Adaptive Mesh Refinement (AMR) is a method that dynamically varies the spatio-temporal resolution of localized mesh regions in numerical simulations, based on the strength of the solution features. In-situ visualization plays an important role in analyzing the time-evolving characteristics of the domain structures, and continuous visualization of the output data across timesteps enables better study of the underlying domain and of the model used to simulate it. In this paper, we develop strategies for continuous online visualization of time-evolving data for AMR applications executed on GPUs. We reorder the meshes for computation on the GPU based on the user's input about the subdomain to be visualized, which makes the data available for visualization at a faster rate. We then perform the visualization steps and fix-up operations asynchronously on the CPUs while the GPU advances the solution. Through experiments on Tesla S1070 and Fermi C2070 clusters, we found that our strategies yield a 60% improvement in response time and a 16% improvement in the rate of visualization of frames over the existing strategy of performing fix-ups and visualization at the end of the timesteps.
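The compute/visualize overlap can be sketched as a producer/consumer pattern (Python; the GPU step is simulated with a sleep, and device-buffer handling is reduced to a thread-safe queue):

```python
import threading, queue, time

frames = queue.Queue(maxsize=4)          # buffers copied off the device

def simulate_on_gpu(num_steps):
    """Stand-in for the GPU advancing the AMR solution; after each timestep
    it hands a (reordered) copy of the requested subdomain to the CPU side."""
    for step in range(num_steps):
        time.sleep(0.05)                 # 'GPU' compute for one timestep
        frames.put(("subdomain", step))  # asynchronous copy of frame data
    frames.put(None)                     # sentinel: simulation finished

def visualize_on_cpu():
    """CPU thread performing fix-ups and rendering while the GPU runs."""
    while (item := frames.get()) is not None:
        _, step = item
        time.sleep(0.02)                 # fix-up + render for this frame
        print(f"rendered frame for timestep {step}")

gpu = threading.Thread(target=simulate_on_gpu, args=(20,))
cpu = threading.Thread(target=visualize_on_cpu)
gpu.start(); cpu.start(); gpu.join(); cpu.join()
```

The bounded queue keeps the renderer at most a few frames behind the simulation, mirroring the continuous, online character of the visualization described above.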