Abstract:
Analogue computers provide actual rather than virtual representations of model systems. They are powerful and engaging computing machines that are cheap and simple to build. This two-part Retronics article helps you build (and understand!) your own analogue computer to simulate the Lorenz butterfly that has become iconic for chaos theory.
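For orientation: the "Lorenz butterfly" refers to the Lorenz attractor, the solution of dx/dt = σ(y − x), dy/dt = x(ρ − z) − y, dz/dt = xy − βz. Below is a minimal digital sketch in Python (forward Euler with the classic parameters σ = 10, ρ = 28, β = 8/3; the step size and iteration count are illustrative choices, not the article's); the analogue machine integrates the same equations in continuous hardware.

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # One forward-Euler step of the Lorenz equations.
        x, y, z = state
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

    # Integrate from a point near the attractor; the trajectory traces the butterfly.
    state = np.array([1.0, 1.0, 1.0])
    trajectory = [state]
    for _ in range(5000):
        state = lorenz_step(state)
        trajectory.append(state)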
Abstract:
Lean construction is considered from a human resource management (HRM) perspective. It is contended that the UK construction sector is characterised by an institutionalised regressive approach to HRM. In the face of rapidly declining recruitment rates for built environment courses, the dominant HRM philosophy of utilitarian instrumentalism does little to attract the intelligent and creative young people that the industry so badly needs. Given this broader context, there is a danger that an uncritical acceptance of lean construction will exacerbate the industry's reputation for unrewarding jobs. Construction academics have strangely ignored the extensive literature that equates lean production with an HRM regime of control, exploitation and surveillance. The emphasis of lean thinking on eliminating waste and improving efficiency makes it easy to absorb into the best practice agenda because it conforms to the existing dominant way of thinking. 'Best practice' is seemingly judged by the extent to which it serves the interests of the industry's technocratic elite. Hence it acts as a conservative force in favour of maintaining the status quo. In this respect, lean construction is the latest manifestation of a long-established trend. In common with countless other improvement initiatives, the rhetoric leans heavily on the machine metaphor whilst exhorting others to be more efficient. If current trends in lean construction are extrapolated into the future, the ultimate destination may be uncomfortably close to Aldous Huxley's apocalyptic vision of a Brave New World. In the face of these trends, the lean construction research community pleads neutrality whilst confining its attention to the rational high ground. The future of lean construction is not yet predetermined. Many choices remain to be made. The challenge for the research community is to improve practice whilst avoiding the dehumanising tendencies of high utilitarianism.
Abstract:
Optical characteristics of stirred curd were simultaneously monitored during syneresis in a 10-L cheese vat using computer vision and colorimetric measurements. Curd syneresis kinetic conditions were varied using 2 levels of milk pH (6.0 and 6.5) and 2 agitation speeds (12.1 and 27.2 rpm). Measured optical parameters were compared with gravimetric measurements of syneresis, taken simultaneously. The results showed that computer vision and colorimeter measurements have potential for monitoring syneresis. The 2 different phases, curd and whey, were distinguished by means of color differences. As syneresis progressed, the backscattered light became increasingly yellow in hue for circa 20 min for the higher stirring speed and circa 30 min for the lower stirring speed. Syneresis-related gravimetric measurements of importance to cheese making (e.g., curd moisture content, total solids in whey, and yield of whey) correlated significantly with computer vision and colorimetric measurements.
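As a rough illustration of the optical approach (not the study's actual pipeline), the sketch below assumes OpenCV and tracks the mean hue of vat footage frame by frame, the quantity whose yellow shift the abstract reports during syneresis. The file name and region of interest are hypothetical.

    import cv2

    def mean_hue(frame_bgr, roi=None):
        # Mean hue (OpenCV range 0-179) inside an optional (x, y, w, h) region.
        if roi is not None:
            x, y, w, h = roi
            frame_bgr = frame_bgr[y:y + h, x:x + w]
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        return float(hsv[:, :, 0].mean())

    # Hypothetical vat-camera recording; one hue value per frame.
    cap = cv2.VideoCapture("vat_camera.avi")
    hues = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hues.append(mean_hue(frame))
    cap.release()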
Abstract:
The meltabilities of 14 process cheese samples were determined at 2 and 4 weeks after manufacture using sensory analysis, a computer vision method, and the Olson and Price test. Sensory analysis meltability correlated with both computer vision meltability (R² = 0.71, P < 0.001) and Olson and Price meltability (R² = 0.69, P < 0.001). There was a marked lack of correlation between the computer vision method and the Olson and Price test. This study showed that the Olson and Price test gave greater repeatability than the computer vision method. Results showed process cheese meltability decreased with increasing inorganic salt content and with lower moisture/fat ratios. There was very little evidence in this study to show that process cheese meltability changed between 2 and 4 weeks after manufacture.
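For readers unfamiliar with the statistic quoted above, R² (the coefficient of determination) measures how much of the variance in one measurement series is explained by a linear fit to the other. A minimal sketch with SciPy follows; the paired meltability values are placeholders, not the study's data.

    from scipy.stats import linregress

    def r_squared(x, y):
        # Coefficient of determination of a linear fit of y on x.
        return linregress(x, y).rvalue ** 2

    # Hypothetical paired scores (sensory panel vs. computer vision method):
    sensory = [3.1, 4.5, 2.8, 5.0, 3.9]
    vision = [2.9, 4.7, 3.0, 4.8, 4.1]
    print(r_squared(sensory, vision))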
Abstract:
Deception-detection is the crux of Turing's experiment to examine machine thinking, conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years to the month since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing's textual game of imitation, deception and machine intelligence. This research excavates, from the mine of philosophical claims, counter-claims and rebuttals, Turing's own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: a) a two-participant, interrogator-witness viva voce; b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using the 18th Loebner Prize for Artificial Intelligence contest and Colby et al.'s 1972 transcript-analysis paradigm, this research practicalised Turing's imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, machines achieved a deception rate of 8.33% (5 of 60 human-machine simultaneous comparison tests). Results also show that more than 1 in 3 Reviewers succumbed to hidden-interlocutor misidentification after reading transcripts from experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing's two tests can assist in understanding natural dialogue and mitigate the risk from cybercrime.
Abstract:
Automatically extracting interesting objects from videos is a very challenging task, applicable to many research areas such as robotics, medical imaging, content-based indexing and visual surveillance. Automated visual surveillance is a major research area in computational vision, and a commonly applied technique for extracting objects of interest is motion segmentation. Motion segmentation relies on the temporal changes that occur in video sequences to detect objects, but as a technique it presents many challenges that researchers have yet to surmount. Changes in real-time video sequences include not only interesting objects; environmental conditions such as wind, cloud cover, rain and snow may also be present, in addition to rapid lighting changes, poor footage quality, moving shadows and reflections. The list provides only a sample of the challenges present. This thesis explores the use of motion segmentation as part of a computational vision system and provides solutions for a practical, generic approach with robust performance, using current neuro-biological, physiological and psychological research in primate vision as inspiration.
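As a concrete illustration of the baseline technique the thesis builds on, here is a minimal frame-differencing motion segmentation sketch (assuming OpenCV; the footage file name is hypothetical). A practical system, as the abstract argues, must go well beyond this to cope with lighting changes, shadows, reflections and weather.

    import cv2

    def motion_mask(prev_gray, curr_gray, thresh=25):
        # Threshold the absolute frame difference into a binary motion mask.
        diff = cv2.absdiff(prev_gray, curr_gray)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        # Morphological opening suppresses isolated noise pixels.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    cap = cv2.VideoCapture("surveillance.avi")
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mask = motion_mask(prev_gray, gray)  # white pixels = detected motion
        prev_gray = gray
    cap.release()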
Abstract:
Letter identification is a critical front end of the reading process. In general, conceptualizations of the identification process have emphasized arbitrary sets of distinctive features. However, a richer view of letter processing incorporates principles from the field of type design, including an emphasis on uniformities across letters within a font. The importance of uniformities is supported by a small body of research indicating that consistency of font increases letter identification efficiency. We review design concepts and the relevant literature, with the goal of stimulating further thinking about letter processing during reading.