41 results for Human Computer Cryptography
Abstract:
Creative activities, including the arts, are characteristic of humankind. Our understanding of creativity is limited, yet there is substantial research trying to mimic human creativity in artificial systems and, in particular, to produce systems that automatically evolve art appreciated by humans. We propose here to model human visual preference by a set of aesthetic measures identified through observation of human selection of images, and then to use these measures for the automatic evolution of aesthetic images. © 2011 Springer-Verlag.
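As an illustration of the kind of evolutionary loop described above, the sketch below scores candidate greyscale images with a weighted combination of two hypothetical aesthetic measures (global contrast and tonal balance) and keeps the best candidates each generation. The measures, weights, image size and mutation scheme are illustrative assumptions, not the authors' actual model of human preference.

```python
import numpy as np

rng = np.random.default_rng(0)

def aesthetic_score(img):
    # Hypothetical fitness: weighted sum of two simple aesthetic measures.
    contrast = img.std()                          # global RMS contrast
    tonal_balance = 1.0 - abs(img.mean() - 0.5)   # prefer a mid-grey mean luminance
    return 0.6 * contrast + 0.4 * tonal_balance

def mutate(img, rate=0.05):
    # Perturb a random fraction of pixels with small Gaussian noise.
    mask = rng.random(img.shape) < rate
    return np.clip(img + mask * rng.normal(0.0, 0.1, img.shape), 0.0, 1.0)

# Evolve a small population of 32x32 greyscale images toward higher scores.
population = [rng.random((32, 32)) for _ in range(20)]
for generation in range(100):
    ranked = sorted(population, key=aesthetic_score, reverse=True)
    parents = ranked[:5]                                            # elitist selection
    offspring = [mutate(parents[rng.integers(len(parents))]) for _ in range(15)]
    population = parents + offspring

best = max(population, key=aesthetic_score)
print(f"best aesthetic score after evolution: {aesthetic_score(best):.3f}")
```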
Abstract:
Computer-integrated monitoring is a very large area of engineering in which on-line, real-time data acquisition with the aid of sensors solves many problems in the manufacturing industry, as opposed to the old method of data logging by graphical analysis. The raw data collected in this way is, however, useless in the absence of a proper computerised management system. In the past, the transfer of data between the management and shop-floor processes has been impossible unless all the computers in the system were totally compatible with each other. This limits the efficiency of such systems, because they become governed by the limitations of the computers. General Motors of the U.S.A. has recently started research on a new standard called the Manufacturing Automation Protocol (MAP), which is expected to allow data transfer between different types of computers. This standard is still in the early stages of development and is currently very expensive. This research programme shows how such a shop-floor data acquisition system and a complete management system running on entirely different computers can be integrated into a single system, achieving data transfer using a cheaper but superior alternative to MAP. Standard communication character sets and hardware, such as ASCII and UARTs, are used in this method, yet the technique is powerful enough that totally incompatible computers are shown to run different programs (in different languages) simultaneously while receiving data from each other and processing it in their own CPUs with no human intervention.
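A minimal sketch of the general idea of exchanging data as plain ASCII records, the lowest common denominator between otherwise incompatible machines. The frame layout, field names and checksum scheme below are illustrative assumptions, not the protocol developed in the thesis; in practice each frame would be written to and read from a UART serial link rather than passed in memory.

```python
def encode_record(sensor_id: str, value: float) -> bytes:
    # Frame: $<payload>*<hex checksum>\r\n, all printable ASCII.
    payload = f"{sensor_id},{value:.3f}"
    checksum = sum(payload.encode("ascii")) % 256
    return f"${payload}*{checksum:02X}\r\n".encode("ascii")

def decode_record(frame: bytes):
    text = frame.decode("ascii").strip()
    if not (text.startswith("$") and "*" in text):
        raise ValueError("malformed frame")
    payload, checksum = text[1:].rsplit("*", 1)
    if sum(payload.encode("ascii")) % 256 != int(checksum, 16):
        raise ValueError("checksum mismatch")
    sensor_id, raw_value = payload.split(",")
    return sensor_id, float(raw_value)

# Illustrative round trip; in a real installation the frame would travel over
# a serial link between the shop-floor machine and the management computer.
frame = encode_record("TEMP01", 72.5)
print(decode_record(frame))   # ('TEMP01', 72.5)
```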
Abstract:
Changes in modern structural design have created a demand for products which are light but possess high strength. The objective is a reduction in fuel consumption and in the weight of materials, to satisfy both economic and environmental criteria. Cold roll forming has the potential to fulfil this requirement. The bending process is controlled by the shape of the profile machined on the periphery of the rolls. A CNC lathe can machine complicated profiles to a high standard of precision, but the expertise of a numerical control programmer is required. A computer program was developed during this project, using the expert system concept, to calculate tool paths and consequently to expedite the procurement of the machine control tapes whilst removing the need for a skilled programmer. Codifying the expertise of a human and encapsulating that knowledge within computer memory removes the dependency on highly trained people, whose services can be costly, inconsistent and unreliable. A successful cold roll forming operation, in which the product is geometrically correct and free from visual defects, is not easy to attain. The geometry of the sheet after travelling through the rolling mill depends on the residual strains generated by the elastic-plastic deformation. Accurate evaluation of the residual strains can provide the basis for predicting the geometry of the section. A study of geometric and material non-linearity, yield criteria, material hardening and stress-strain relationships was undertaken in this research project. The finite element method was chosen to provide a mathematical model of the bending process and, to ensure efficient manipulation of the large stiffness matrices, the frontal solution was applied. A series of experimental investigations provided data for comparison with corresponding values obtained from the theoretical modelling. A computer simulation capable of predicting that a design will be satisfactory prior to the manufacture of the rolls would allow effort to be concentrated on devising an optimum design in which costs are minimised.
Abstract:
Conventional methods of form-roll design and manufacture for the cold roll-forming of thin-walled metal sections have been entirely manual, time consuming and prone to errors, resulting in inefficiency and high production costs. With the use of computers, lead time can be significantly improved, particularly for those aspects involving routine but tedious human decisions and actions. This thesis describes the development of computer-aided tools for producing form-roll designs for NC manufacture in the CAD/CAM environment. The work was undertaken to modernise the existing activity of a company manufacturing thin-walled sections. The investigated areas of the activity, including the design and drafting of the finished section, the flower patterns, the 10-to-1 templates, and the rolls complete with pinch-difference surfaces, side-rolls and extension-contours, have been successfully computerised through software development. Data generated by the developed software can be further processed for roll manufacturing using NC lathes. The software has been specially designed for portability to facilitate its implementation on different computers. The Opening-Radii method of forming was introduced as a substitute for the conventional method to achieve better forming. Most of the essential aspects of roll design have been successfully incorporated in the software. With computerisation, extensive standardisation of existing roll design practices and the use of more reliable and scientifically based methods have been achieved. Satisfactory and beneficial results have also been obtained by the company in using the software through a terminal linked to the University by a GPO line. Both lead time and productivity in roll design and manufacture have been significantly improved. It is therefore concluded that computerisation of form-roll design through software development is viable. The work also demonstrated the promising nature of the CAD/CAM approach.
Abstract:
This thesis presents a study of how edges are detected and encoded by the human visual system. The study begins with theoretical work on the development of a model of edge processing, and includes psychophysical experiments on humans and computer simulations of those experiments using the model. The first chapter reviews the literature on edge processing in biological and machine vision, and introduces the mathematical foundations of this area of research. The second chapter gives a formal presentation of a model of edge perception that detects edges and characterizes their blur, contrast and orientation using Gaussian derivative templates. This model has previously been shown to accurately predict human performance in blur-matching tasks with several different types of edge profile. The model provides veridical estimates of the blur and contrast of edges that have a Gaussian integral profile. Since blur and contrast are independent parameters of Gaussian edges, the model predicts that varying one parameter should not affect perception of the other. Psychophysical experiments showed that this prediction is incorrect: reducing the contrast makes an edge look sharper, and increasing the blur reduces the perceived contrast. Both of these effects can be explained by introducing a smoothed threshold to one of the processing stages of the model. It is shown that, with this modification, the model can predict the perceived contrast and blur of a number of edge profiles that differ markedly from the ideal Gaussian edge profiles on which the templates are based. With only a few exceptions, the results from all the experiments on blur and contrast perception can be explained reasonably well using one set of parameters for each subject. In the few cases where the model fails, possible extensions to the model are discussed.
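A minimal sketch of the scale-space idea behind Gaussian-derivative edge templates: a Gaussian-integral edge filtered by a first-derivative-of-Gaussian operator gives a peak response of c / sqrt(2*pi*(b^2 + sigma^2)), so responses at two template scales are enough to recover blur and contrast separately. The stimulus values and the two-scale recovery below are an illustrative simplification, not the thesis's full template model or its smoothed-threshold modification.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

# A Gaussian-integral edge with contrast c and blur b (illustrative values).
x = np.arange(-100, 101, dtype=float)
c, b = 0.4, 5.0
edge = 0.5 + 0.5 * c * erf(x / (b * np.sqrt(2)))

# Peak responses of first-derivative-of-Gaussian templates at two scales.
sigma1, sigma2 = 8.0, 16.0
r1 = np.abs(gaussian_filter1d(edge, sigma1, order=1)).max()
r2 = np.abs(gaussian_filter1d(edge, sigma2, order=1)).max()

# Peak response at scale sigma is c / sqrt(2*pi*(b**2 + sigma**2)), so the two
# measurements can be solved for blur and contrast independently.
est_b = np.sqrt((r2**2 * sigma2**2 - r1**2 * sigma1**2) / (r1**2 - r2**2))
est_c = r1 * np.sqrt(2 * np.pi * (est_b**2 + sigma1**2))
print(f"estimated blur {est_b:.2f} px (true {b}), contrast {est_c:.2f} (true {c})")
```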
Abstract:
Many workers have studied the ocular components that occur in eyes exhibiting differing amounts of central refractive error, but few have considered the additional information that could be derived from a study of peripheral refraction. Until now, peripheral refraction has either been measured in real eyes or modelled in schematic eyes of varying levels of sophistication. Several differences arise between measured and modelled results which, if accounted for, could yield more information regarding the nature of the optical and retinal surfaces and their asymmetries. Measurements of ocular components and peripheral refraction, however, have never been made in the same sample of eyes. In this study, ocular component and peripheral refractive measurements were made in a sample of young near-emmetropic, myopic and hyperopic eyes. The data for each refractive group were averaged. A computer program was written to construct spherically surfaced schematic eyes from these data. More sophisticated eye models were then developed using a linear algebraic ray-tracing program. This method allowed rays to be traced through toroidal aspheric surfaces which were translated or rotated with respect to each other. For simplicity, the gradient-index optical nature of the crystalline lens was neglected. Various alterations were made to these eye models to reproduce the measured peripheral refractive patterns. Excellent agreement was found between the modelled and measured peripheral refractive values over the central 70° of the visual field. This implied that the additional biometric features incorporated in each eye model were representative of those present in the measured eyes. As some of these features are not otherwise obtainable using in vivo techniques, it is proposed that the variation of refraction in the periphery offers a very useful optical method for studying human ocular component dimensions.
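The thesis traces rays through toroidal aspheric surfaces with a linear algebraic method; the sketch below shows only the basic paraxial (ray transfer matrix) version of that idea for a single-surface reduced eye, with illustrative dimensions rather than the study's measured ocular component data.

```python
import numpy as np

def refraction(power_d):
    # Paraxial refraction matrix for a surface of the given power (dioptres).
    return np.array([[1.0, 0.0], [-power_d, 1.0]])

def translation(distance_m, n):
    # Paraxial transfer matrix for a distance (metres) in a medium of index n.
    return np.array([[1.0, distance_m / n], [0.0, 1.0]])

# Illustrative reduced-eye values (not the study's data): a single refracting
# surface of about 60 D and a vitreous index of 4/3.
cornea_power = 60.0
n_vitreous = 4.0 / 3.0
axial_length = 0.0222   # metres, roughly the reduced eye's second focal length

system = translation(axial_length, n_vitreous) @ refraction(cornea_power)

# Trace a parallel ray entering 1 mm above the axis; state = [height, n * angle].
ray_in = np.array([0.001, 0.0])
ray_out = system @ ray_in
print(f"ray height at the retina: {1000 * ray_out[0]:.4f} mm")  # ~0 when in focus
```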
Abstract:
In perceptual terms, the human body is a complex 3D shape which has to be interpreted by the observer to judge its attractiveness. Both body mass and shape have been suggested as strong predictors of female attractiveness. Normally body mass and shape co-vary, and it is difficult to differentiate their separate effects. A recent study suggested that altering body mass does not modulate activity in the reward mechanisms of the brain, but that shape does. However, using computer-generated, female body-shaped greyscale images based on a Principal Component Analysis of female bodies, we were able to construct images which covary with real female body mass (indexed by BMI) and not with body shape (indexed by WHR), and vice versa. Twelve observers (6 male and 6 female) rated these images for attractiveness during an fMRI study. The attractiveness ratings were correlated with changes in BMI and not WHR. Our primary fMRI results demonstrated that, in addition to activation in higher visual areas (such as the extrastriate body area), changing BMI also modulated activity in the caudate nucleus and other parts of the brain reward system. This shows that BMI, not WHR, modulates reward mechanisms in the brain, and we infer that this may have important implications for judgements of ideal body size in eating-disordered individuals.
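One way such stimuli could be constructed, sketched here under assumptions (random stand-in data, an arbitrary number of retained components and a simple orthogonalisation step, none taken from the study): project bodies into a PCA space, regress BMI and WHR onto the component scores, and move along the part of the BMI direction that is orthogonal to the WHR direction, so apparent mass changes while shape is held constant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: flattened body silhouettes plus BMI and WHR values.
n_bodies, n_pixels = 100, 64 * 64
bodies = rng.random((n_bodies, n_pixels))
bmi = rng.normal(24.0, 4.0, n_bodies)
whr = rng.normal(0.8, 0.08, n_bodies)

# PCA of the body images via SVD of the mean-centred data.
mean_body = bodies.mean(axis=0)
U, S, Vt = np.linalg.svd(bodies - mean_body, full_matrices=False)
scores = U * S                      # per-body scores on each component

# Regress BMI and WHR onto the leading component scores, then keep only the
# part of the BMI direction orthogonal to the WHR direction.
k = 10
X = scores[:, :k]
bmi_dir, *_ = np.linalg.lstsq(X, bmi - bmi.mean(), rcond=None)
whr_dir, *_ = np.linalg.lstsq(X, whr - whr.mean(), rcond=None)
bmi_only = bmi_dir - (bmi_dir @ whr_dir) / (whr_dir @ whr_dir) * whr_dir

# Synthesize an image one step along the BMI-only direction.
new_image = mean_body + bmi_only @ Vt[:k]
print(new_image.shape)              # (4096,) -> reshape to 64x64 for display
```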
Abstract:
Computer-based simulation is frequently used to evaluate the capabilities of proposed manufacturing system designs. Unfortunately, the real systems are often found to perform quite differently from simulation predictions, and one possible reason for this is an over-simplistic representation of workers' behaviour within current simulation techniques. The accuracy of design predictions could be improved through a modelling tool that integrates with computer-based simulation and incorporates the factors and relationships that determine workers' performance. This paper explores the viability of developing such a tool based on our previously published theoretical modelling framework. It focuses on evolving this purely theoretical framework towards a practical modelling tool that can actually be used to expand the capabilities of current simulation techniques. Based on an industrial study, the paper investigates how the theoretical framework works in practice, analyses strengths and weaknesses in its formulation, and proposes developments that can contribute towards enabling human performance modelling in a practical way.
Abstract:
The process of manufacturing system design frequently includes modeling, and usually this means applying a technique such as discrete event simulation (DES). However, the computer tools currently available to apply this technique enable only a superficial representation of the people who operate within the systems. This is a serious limitation, because the performance of people remains central to the competitiveness of many manufacturing enterprises. Therefore, this paper explores the use of probability density functions to represent the variation of worker activity times within DES models.
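A minimal sketch of the idea, assuming a single worker, deterministic job arrivals and an illustrative lognormal distribution (none of these choices come from the paper): each job's activity time is sampled from a probability density function instead of being a fixed cycle time, so the simulated queue reflects worker variability.

```python
import random

random.seed(1)

# Illustrative lognormal parameters for one worker's activity time (minutes);
# in practice these would be fitted to observed activity-time data.
MEAN_LOG, SIGMA_LOG = 0.0, 0.35
ARRIVAL_INTERVAL = 1.2             # deterministic job arrivals, minutes

worker_free_at = 0.0
waits = []
for job in range(1000):
    arrival = job * ARRIVAL_INTERVAL
    start = max(arrival, worker_free_at)                # queue if the worker is busy
    activity_time = random.lognormvariate(MEAN_LOG, SIGMA_LOG)
    worker_free_at = start + activity_time
    waits.append(start - arrival)

print(f"mean queueing delay caused by activity-time variation: "
      f"{sum(waits) / len(waits):.2f} min")
```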
Abstract:
The processing conducted by the visual system requires the combination of signals that are detected at different locations in the visual field. The processes by which these signals are combined are explored here using psychophysical experiments and computer modelling. Most of the work presented in this thesis is concerned with the summation of contrast over space at detection threshold. Previous investigations of this sort have been confounded by the inhomogeneity in contrast sensitivity across the visual field. Experiments performed in this thesis find that the decline in log contrast sensitivity with eccentricity is bilinear, with an initial steep fall-off followed by a shallower decline. This decline is scale-invariant for spatial frequencies of 0.7 to 4 c/deg. A detailed map of the inhomogeneity is developed, and applied to area summation experiments both by incorporating it into models of the visual system and by using it to compensate stimuli in order to factor out the effects of the inhomogeneity. The results of these area summation experiments show that the summation of contrast over area is spatially extensive (occurring over 33 stimulus carrier cycles), and that summation behaviour is the same in the fovea, parafovea, and periphery. Summation occurs according to a fourth-root summation rule, consistent with a “noisy energy” model. This work is extended to investigate the visual deficit in amblyopia, finding that area summation is normal in amblyopic observers. Finally, the methods used to study the summation of threshold contrast over area are adapted to investigate the integration of coherent orientation signals in a texture. The results of this study are described by a two-stage model, with a mandatory local combination stage followed by flexible global pooling of these local outputs. In each study, the results suggest a more extensive combination of signals in vision than has been previously understood.
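A small sketch of the fourth-root (Minkowski) summation rule referred to above: local sensitivities are combined with an exponent of four, so overall sensitivity grows as the fourth root of the number of contributing signals. The equal local sensitivities below are illustrative; the thesis's actual models also incorporate the measured sensitivity inhomogeneity across the visual field.

```python
import numpy as np

def minkowski_sensitivity(local_sensitivities, m=4.0):
    # Combine local contrast sensitivities with Minkowski exponent m.
    # m = 4 is the fourth-root summation rule consistent with a "noisy energy"
    # model; m -> infinity would mean no summation at all (a max rule).
    s = np.asarray(local_sensitivities, dtype=float)
    return (s ** m).sum() ** (1.0 / m)

# Equal local sensitivities over an increasing number of carrier cycles
# (illustrative values): sensitivity rises as n ** (1/4), i.e. it doubles
# for every 16-fold increase in the number of combined signals.
for n_cycles in (1, 4, 16, 64):
    rel = minkowski_sensitivity(np.ones(n_cycles))
    print(f"{n_cycles:2d} cycles -> relative sensitivity {rel:.2f}")
```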
Abstract:
Research indicates that although students are the ultimate 'beneficiaries' of Information and Communication Technology (ICT)-based higher education learning, their voices have been neglected in its development. This paper attempts to redress this imbalance by illuminating students' perceptions of the use of Computer Assisted Learning (CAL) in an undergraduate accounting module. The findings suggest that students are in favour of using EQL in a supportive role only. Interviewees rejected the idea of replacing human tutors with machine tutors; they believed that most of their learning occurs in tutorials and ranked these as the most important component of the module.