228 results for Low Speed Switched Reluctance Machine
Abstract:
In this article we present an alternative theoretical perspective on contemporary cultural, political and economic practices in advanced countries. Like other articles in this issue of parallax, our focus is on conceptualising the economies of excess. However, our ideas do not draw on the writings of Georges Bataille in The Accursed Share, but principally on Virilio’s Speed & Politics: An Essay on Dromology and Marx’s Capital and the Grundrisse. Using a modest synthesis of tools provided by these theorists, we put forward a tentative conceptualisation of ‘dromoeconomics’, or a political economy of speed.
Abstract:
The portability and runtime safety of programs which are executed on the Java Virtual Machine (JVM) make the JVM an attractive target for compilers of languages other than Java. Unfortunately, the JVM was designed with the Java language in mind, and lacks many of the primitives required for a straightforward implementation of other languages. Here, we discuss how the JVM may be used to implement other object-oriented languages. As a practical example of the possibilities, we report on a comprehensive case study. The open source Gardens Point Component Pascal compiler compiles the entire Component Pascal language, a dialect of Oberon-2, to JVM bytecodes. This compiler achieves runtime efficiencies which are comparable to native-code implementations of procedural languages.
Abstract:
The care of low-vision patients is termed vision rehabilitation, and optometrists have an essential role to play in the provision of vision rehabilitation services. Ideally, if patients stay with one optometrist or practice, their low-vision care becomes part of a continuum of eye care extending from the time when they had normal vision. If progressive vision loss occurs, the role of the optometrist changes from providing primary eye care only to monitoring vision loss and gradually introducing low-vision care, especially magnification and advice on lighting and contrast, in conjunction with other vision rehabilitation professionals.
Abstract:
Modern machines are complex and often required to operate long hours to achieve production targets. The ability to detect symptoms of failure, and hence forecast the remaining useful life of the machine, is vital to preventing catastrophic failures. This is essential to reducing maintenance cost, operational downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognosis models that attempt to forecast machinery health based on either condition data or reliability data. In practice, failure condition trending data are seldom kept by industry, and data that ended with a suspension are sometimes treated as failure data. This paper presents a novel approach of incorporating historical failure data and suspended condition trending data in the prognostic model. The proposed model consists of a feed-forward neural network (FFNN) whose training targets are asset survival probabilities estimated using a variation of the Kaplan-Meier estimator and a degradation-based failure probability density function (PDF) estimator. The output survival probabilities collectively form an estimated survival curve. The viability of the model was tested using a set of industry vibration data.
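As a rough illustration of the idea (not the authors' implementation), the sketch below builds Kaplan-Meier survival probabilities from hypothetical failure/suspension histories and uses them as training targets for a small feed-forward network fed with condition-monitoring features; the lifetimes, censoring flags, vibration feature and network size are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # simple FFNN regressor

# Hypothetical histories: operating hours at failure/suspension and a censoring flag
lifetimes = np.array([120., 150., 150., 200., 240., 240., 300., 360.])
failed    = np.array([1,    1,    0,    1,    0,    1,    1,    0])  # 0 = suspended (censored)

def kaplan_meier(t_query, lifetimes, failed):
    """Kaplan-Meier estimate S(t) = prod_{t_i <= t} (1 - d_i / n_i), as a step function."""
    event_times = np.unique(lifetimes[failed == 1])
    s, steps = 1.0, []
    for t in event_times:
        n_at_risk = np.sum(lifetimes >= t)                 # suspensions still count here
        d = np.sum((lifetimes == t) & (failed == 1))
        s *= 1.0 - d / n_at_risk
        steps.append((t, s))
    out = np.ones_like(t_query, dtype=float)
    for i, tq in enumerate(t_query):
        for t, s in steps:
            if tq >= t:
                out[i] = s
    return out

# Hypothetical condition feature: a vibration RMS level trending upward with machine age
ages = np.linspace(0, 360, 50)
vib_rms = 0.5 + 0.004 * ages + 0.05 * np.random.default_rng(0).standard_normal(50)
X = np.column_stack([ages, vib_rms])
y = kaplan_meier(ages, lifetimes, failed)                  # survival-probability training targets

# Small feed-forward network mapping (age, vibration feature) -> survival probability
ffnn = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0).fit(X, y)
print(ffnn.predict(X[:5]))                                 # start of the estimated survival curve
```

The point mirrored here is that suspended (censored) histories still contribute to the at-risk counts of the survival estimate rather than being mislabelled as failures.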
Abstract:
Objectives. To evaluate the performance of the dynamic-area high-speed videokeratoscopy technique in the assessment of tear film surface quality with and without soft contact lenses on the eye. Methods. Retrospective data from a tear film study using basic high-speed videokeratoscopy, captured at 25 frames per second (Kopf et al., 2008, J Optom), were used. Eleven subjects had tear film analysis conducted in the morning, midday and evening on the first and seventh day of one week of no lens wear. Five of the eleven subjects then completed an extra week of hydrogel lens wear followed by a week of silicone hydrogel lens wear. Analysis was performed on a 6-second period of the inter-blink recording. The dynamic-area high-speed videokeratoscopy technique uses the maximum available area of the Placido ring pattern reflected from the tear interface and eliminates regions of disturbance due to shadows from the eyelashes. A value of tear film surface quality was derived using image processing techniques, based on the quality of the reflected ring pattern orientation. Results. The group mean tear film surface quality and the standard deviations for each of the conditions (bare eye, hydrogel lens, and silicone hydrogel lens) showed a much lower coefficient of variation than previous methods (average reduction of about 92%). Bare-eye measurements from the right and left eyes of the eleven individuals showed high correlation (Pearson’s correlation r = 0.73, p < 0.05). Repeated-measures ANOVA across the 6-second period of measurement in the normal inter-blink period for the bare-eye condition showed no statistically significant changes. However, across the 6-second inter-blink period with both contact lenses, statistically significant changes were observed (p < 0.001) for both types of contact lens material. Overall, wearing hydrogel and silicone hydrogel lenses caused the tear film surface quality to worsen compared with the bare-eye condition (repeated-measures ANOVA, p < 0.0001 for both hydrogel and silicone hydrogel). Conclusions. The results suggest that the dynamic-area method of high-speed videokeratoscopy was able to distinguish and quantify the subtle but systematic worsening of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between the bare-eye and contact lens wearing conditions.
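The abstract does not detail the image-processing step; as one plausible stand-in, the sketch below scores the orientation quality of the reflected ring pattern via structure-tensor coherence over a dynamically masked area. The mask, window size and coherence measure are hypothetical choices, not the published algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ring_pattern_quality(frame, valid_mask):
    """Toy TFSQ-style index: gradient-orientation coherence of the reflected
    Placido ring pattern, averaged over the dynamically estimated valid area.
    An illustrative stand-in, not the published measure."""
    gy, gx = np.gradient(frame.astype(float))
    # structure-tensor components, locally averaged over a 5x5 window
    jxx = uniform_filter(gx * gx, size=5)
    jyy = uniform_filter(gy * gy, size=5)
    jxy = uniform_filter(gx * gy, size=5)
    # coherence in [0, 1]: ~1 for clean, well-oriented rings; ~0 where the pattern breaks up
    coherence = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2) / (jxx + jyy + 1e-12)
    return float(coherence[valid_mask].mean())

# Hypothetical usage: synthetic concentric rings, with an eyelash-shadow strip masked out
yy, xx = np.mgrid[0:256, 0:256]
rings = np.sin(0.3 * np.hypot(yy - 128.0, xx - 128.0))
valid = np.ones(rings.shape, dtype=bool)
valid[:40, :] = False          # region excluded by the dynamic-area estimation
print(ring_pattern_quality(rings, valid))
```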
Abstract:
A new method for noninvasive assessment of tear film surface quality (TFSQ) is proposed. The method is based on high-speed videokeratoscopy in which the corneal area for the analysis is dynamically estimated in a manner that removes videokeratoscopy interference arising from the shadows of eyelashes but not that related to the poor quality of the precorneal tear film, which is of interest. The separation between the two types of seemingly similar videokeratoscopy interference is achieved by region-based classification, in which the overall noise is first separated from the useful signal (the unaltered videokeratoscopy pattern), followed by a dedicated interference classification algorithm that distinguishes between the two considered interferences. The proposed technique provides a much wider corneal area for the analysis of TFSQ than previously reported techniques. A preliminary study with the proposed technique, carried out for a range of anterior eye conditions, showed effective behavior in terms of noise-to-signal separation and interference classification, as well as consistent TFSQ results. Subsequently, the method proved able not only to discriminate between the bare-eye and lens-on-eye conditions, but also showed the potential to discriminate between the two types of contact lenses.
Abstract:
High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have traditionally been used for modeling corneal surfaces may not necessarily represent given corneal surface data correctly in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the RMS surface error and the point spread function cross-correlation. The parameters of the approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
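As an illustration of the fitting machinery described (not the authors' code), the sketch below fits a small Zernike-based rational function, S = (sum_i a_i Z_i) / (1 + sum_j b_j Z_j), to synthetic surface data with a Levenberg-Marquardt least-squares solve; the particular Zernike terms, data and starting point are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def zernike_basis(rho, theta):
    """A few low-order rotationally symmetric Zernike terms (piston, defocus, spherical)."""
    z0 = np.ones_like(rho)                                   # Z(0,0)
    z4 = np.sqrt(3.0) * (2.0 * rho**2 - 1.0)                 # Z(2,0) defocus
    z12 = np.sqrt(5.0) * (6.0 * rho**4 - 6.0 * rho**2 + 1.0) # Z(4,0) primary spherical
    return np.stack([z0, z4, z12], axis=-1)

def rational_surface(params, Z):
    """S = (Z @ a) / (1 + Z @ b): a rational function built from Zernike terms."""
    a, b = params[:3], params[3:]
    return (Z @ a) / (1.0 + Z @ b)

def residuals(params, Z, sag):
    return rational_surface(params, Z) - sag

# Hypothetical sampled corneal sag data on a unit polar grid
rng = np.random.default_rng(1)
rho = rng.uniform(0.0, 1.0, 2000)
theta = rng.uniform(0.0, 2.0 * np.pi, 2000)
Z = zernike_basis(rho, theta)
true_params = np.array([0.1, 0.8, -0.05, 0.02, 0.01, 0.0])
sag = rational_surface(true_params, Z) + 1e-4 * rng.standard_normal(rho.size)

# Levenberg-Marquardt nonlinear least squares, mirroring the fitting procedure described
fit = least_squares(residuals, x0=np.zeros(6), args=(Z, sag), method="lm")
rms_error = np.sqrt(np.mean(residuals(fit.x, Z, sag) ** 2))
print(fit.x, rms_error)
```

With the same number of coefficients, the denominator terms give the rational form extra shape flexibility over a plain Zernike expansion, which is the comparison the abstract reports.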
Abstract:
We describe the design and evaluation of a platform for networks of cameras in low-bandwidth, low-power wireless sensor networks (WSNs). In our work to date we have investigated two different DSP hardware/software platforms for undertaking the tasks of compression and object detection and tracking. We compare the relative merits of each of the hardware and software platforms in terms of both performance and energy consumption. Finally, we discuss what we believe are the ongoing research questions for image processing in WSNs.
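Neither the detection nor the compression algorithms are specified in the abstract; the sketch below merely illustrates the common "detect, then compress and transmit" pattern that such low-bandwidth camera nodes follow, with hypothetical thresholds and frame sizes.

```python
import io
import numpy as np
from PIL import Image

def detect_motion(prev, curr, threshold=25, min_pixels=200):
    """Cheap frame-differencing detector, the kind of lightweight test a
    low-power camera node can run before deciding whether to transmit."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return int((diff > threshold).sum()) >= min_pixels

def compress_frame(frame, quality=40):
    """JPEG-compress a grayscale frame to reduce the radio payload size."""
    buf = io.BytesIO()
    Image.fromarray(frame).save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

# Hypothetical node loop over two synthetic 160x120 grayscale frames
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (120, 160), dtype=np.uint8)
curr = prev.copy()
patch = curr[40:80, 60:100].astype(np.int16) + 60            # an object enters the scene
curr[40:80, 60:100] = np.clip(patch, 0, 255).astype(np.uint8)
if detect_motion(prev, curr):
    payload = compress_frame(curr)                            # only transmit when something changed
    print(f"would transmit {len(payload)} bytes")
```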