934 results for Experimental Methods.
Abstract:
A combination of experimental methods was applied at a clogged, horizontal subsurface flow (HSSF) municipal wastewater tertiary treatment wetland (TW) in the UK, to quantify the extent of surface and subsurface clogging which had resulted in undesirable surface flow. The three-dimensional hydraulic conductivity profile was determined using a purpose-made device which recreates the constant-head permeameter test in-situ. The hydrodynamic pathways were investigated by performing dye tracing tests with Rhodamine WT and a novel multi-channel, data-logging, flow-through fluorimeter which allows synchronous measurements to be taken from a matrix of sampling points. Hydraulic conductivity varied in all planes, with the lowest measurement of 0.1 m d⁻¹ corresponding to the surface layer at the inlet, and the maximum measurement of 1550 m d⁻¹ located at a 0.4 m depth at the outlet. According to dye tracing results, the region where the overland flow ceased received five times the average flow, which then vertically short-circuited below the rhizosphere. The tracer breakthrough curve obtained from the outlet showed that this preferential flow path accounted for approximately 80% of the flow overall and arrived 8 h before a distinctly separate secondary flow path. The overall volumetric efficiency of the clogged system was 71%, and the hydrology was simulated using a dual-path, dead-zone storage model. It is concluded that uneven inlet distribution, continuous surface loading and high rhizosphere resistance are responsible for the clog formation observed in this system. The average inlet hydraulic conductivity was 2 m d⁻¹, suggesting that current European design guidelines, which predict that the system will reach an equilibrium hydraulic conductivity of 86 m d⁻¹, do not adequately describe the hydrology of mature systems.
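For readers unfamiliar with the constant-head permeameter principle mentioned above, the conversion from a steady discharge to a hydraulic conductivity follows Darcy's law, K = Q·L/(A·Δh). The sketch below is a minimal illustration with invented readings and dimensions; it does not describe the purpose-made in-situ device used in the study.

```python
# Minimal sketch of the constant-head permeameter calculation (Darcy's law);
# the device geometry and readings below are hypothetical illustrations only.

def hydraulic_conductivity(discharge_m3_per_s: float,
                           sample_length_m: float,
                           cross_section_m2: float,
                           head_difference_m: float) -> float:
    """K = Q * L / (A * dh), returned in m/s."""
    return (discharge_m3_per_s * sample_length_m) / (cross_section_m2 * head_difference_m)

# Example: 2e-6 m^3/s through a 0.1 m gravel column of 0.008 m^2 under 0.3 m of head.
k_m_per_s = hydraulic_conductivity(2e-6, 0.1, 0.008, 0.3)
print(f"K = {k_m_per_s * 86400:.1f} m/day")   # converted to the m/day units used above
```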
Abstract:
Experimental methods of policy evaluation are well-established in social policy and development economics but are rare in industrial and innovation policy. In this paper, we consider the arguments for applying experimental methods to industrial policy measures, and propose an experimental policy evaluation approach (which we call RCT+). This approach combines the randomised assignment of firms to treatment and control groups with a longitudinal data collection strategy incorporating quantitative and qualitative data (so-called mixed methods). The RCT+ approach is designed to provide a causative rather than purely summative evaluation, i.e. to assess both ‘whether’ and ‘how’ programme outcomes are achieved. In this paper, we assess the RCT+ approach through an evaluation of Creative Credits – a UK business-to-business innovation voucher initiative intended to promote new innovation partnerships between SMEs and creative service providers. The results suggest the potential value of the RCT+ approach to industrial policy evaluation, and the benefits of mixed methods and longitudinal data collection.
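As an illustration of the quantitative core of an RCT-style evaluation (randomised assignment plus an outcome comparison), the sketch below assigns a hypothetical pool of firms to treatment and control groups and computes a difference in means. All firm identifiers and outcome values are invented, and the longitudinal qualitative strand of RCT+ is not represented here.

```python
# Hedged sketch of randomised assignment and a difference-in-means estimate;
# firm identifiers and outcomes are fabricated placeholders.
import random
import statistics

firms = [f"firm_{i:03d}" for i in range(200)]          # hypothetical applicant pool
random.seed(42)
random.shuffle(firms)
treatment, control = firms[:100], firms[100:]           # randomised assignment

# Hypothetical follow-up outcome (e.g. whether a new innovation partnership formed).
outcome = {f: random.random() < (0.35 if f in treatment else 0.20) for f in firms}

effect = (statistics.mean(outcome[f] for f in treatment)
          - statistics.mean(outcome[f] for f in control))
print(f"Estimated treatment effect: {effect:+.2f}")
```

In the RCT+ design described above, this quantitative comparison would be complemented by repeated qualitative data collection to explain how any observed effect arose.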
Abstract:
This article uses a research project into the online conversations of sex offenders and the children they abuse to further the arguments for the acceptability of experimental work as a research tool for linguists. The research reported here contributes to the growing body of work within linguistics that has found experimental methods to be useful in answering questions about representation and constraints on linguistic expression (Hemforth 2013). The wider project examines online identity assumption in online paedophile activity and the policing of such activity, and involves dealing with the linguistic analysis of highly sensitive sexual grooming transcripts. Within the linguistics portion of the project, we examine theories of idiolect and identity through analysis of the ‘talk’ of perpetrators of online sexual abuse, and of the undercover officers that must assume alternative identities in order to investigate such crimes. The essential linguistic question in this article is methodological and concerns the applicability of experimental work to exploration of online identity and identity disguise. Although we touch on empirical questions, such as the sufficiency of linguistic description that will enable convincing identity disguise, we do not explore the experimental results in detail. In spite of the preference within a range of discourse analytical paradigms for ‘naturally occurring’ data, we argue that not only does the term prove conceptually problematic, but in certain contexts, and particularly in the applied forensic context described, a rejection of experimentally elicited data would limit the possible types and extent of analyses. Thus, it would restrict the contribution that academic linguistics can make in addressing a serious social problem.
Abstract:
Several problems arise when measuring the mode II interlaminar fracture toughness using a Transverse Crack Tension (TCT) specimen; in particular, the measured fracture toughness depends on the geometry of the specimen and cannot be considered a material parameter. A preliminary experimental campaign was conducted on TCT specimens of different sizes, but no fracture toughness could be measured because the specimens failed in an unacceptable manner, invalidating the tests. A comprehensive numerical and experimental investigation is conducted to identify the main causes of this behaviour, and a modification of the specimen geometry is proposed. It is believed that the obtained results represent a significant contribution to the understanding of the TCT test as a mode II characterization procedure and, at the same time, provide new guidelines for characterizing mode II crack propagation under tensile loads.
Abstract:
Thermal bone necrosis induced during drilling is a frequent phenomenon which contributes to post-operative problems. The frictional heat generated by contact between the drill bit and the hole wall is unavoidable. However, understanding advanced techniques for acquiring reliable thermal data on bone drilling is important to ensure the quality of the drilled hole. The purpose of this study is to present two different experimental methods to analyse the drilling conditions that generate the lowest temperatures, avoiding the occurrence of thermal bone necrosis. Ex-vivo bovine bones were used to simulate the drilling process, considering the effects of drill bit diameter, drill speed and feed rate. Different experiments were performed to assess the repeatability of the tests. The results identified the drill bit diameter as the most critical parameter for inducing higher temperatures in bone drilling.
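A minimal sketch of how a factorial drilling data set like the one described might be analysed to rank parameters by their effect on peak temperature, assuming a standard main-effects ANOVA; the temperature readings and factor levels below are fabricated placeholders, not the study's data.

```python
# Hedged sketch: rank drilling parameters by their effect on peak bone temperature
# using a main-effects ANOVA; all measurements are fabricated placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "diameter_mm": [2, 2, 3, 3, 4, 4, 2, 3, 4, 4, 3, 2],
    "speed_rpm":   [800, 1200, 800, 1200, 800, 1200, 800, 1200, 800, 1200, 800, 1200],
    "feed_mm_s":   [0.5, 1.0, 0.5, 1.0, 0.5, 1.0, 1.0, 0.5, 1.0, 0.5, 1.0, 0.5],
    "peak_temp_C": [38.1, 39.0, 41.2, 42.5, 45.6, 47.3, 38.8, 41.9, 46.4, 46.9, 42.1, 38.5],
})

# The factor with the largest F statistic dominates the temperature response.
model = smf.ols("peak_temp_C ~ C(diameter_mm) + C(speed_rpm) + C(feed_mm_s)", data=df).fit()
print(anova_lm(model, typ=2))
```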
Abstract:
Until recently, hot-rolled steel members were recognized as the most popular and widely used steel group, but in recent times the use of cold-formed high-strength steel members has increased rapidly. However, the structural behaviour of light-gauge, high-strength cold-formed steel members, characterized by various buckling modes, is not yet fully understood. Current cold-formed steel sections such as C- and Z-sections are commonly used because of their simple forming procedures and easy connections, but they suffer from certain buckling modes. It is therefore important that these buckling modes are either delayed or eliminated to increase the ultimate capacity of these members. This research is therefore aimed at developing a new cold-formed steel beam with two torsionally rigid rectangular hollow flanges and a slender web, formed using intermittent screw fastening, to enhance the flexural capacity while maintaining a minimum fabrication cost. This thesis describes a detailed investigation into the structural behaviour of this new Rectangular Hollow Flange Beam (RHFB) subjected to flexural action. The first phase of this research included experimental investigations using thirty full-scale lateral buckling tests and twenty-two section moment capacity tests using specially designed test rigs to simulate the required loading and support conditions. A detailed description of the experimental methods, RHFB failure modes including local, lateral distortional and lateral torsional buckling modes, and moment capacity results is presented. A comparison of experimental results with the predictions from the current design rules and other design methods is also given. The second phase of this research involved a methodical and comprehensive investigation aimed at widening the scope of finite element analysis to investigate the buckling and ultimate failure behaviour of RHFBs subjected to flexural actions. Accurate finite element models simulating the physical conditions of both lateral buckling and section moment capacity tests were developed. Comparison of experimental and finite element analysis results showed that the buckling and ultimate failure behaviour of RHFBs can be simulated well using appropriate finite element models. Finite element models simulating ideal simply supported boundary conditions and a uniform moment loading were also developed for use in a detailed parametric study. The parametric study results were used to review the current design rules and to develop new design formulae for RHFBs subjected to local, lateral distortional and lateral torsional buckling effects. Finite element analysis results indicate that the discontinuity due to screw fastening has a noticeable influence only for members in the intermediate slenderness region. Investigations into different combinations of thicknesses in the flange and web indicate that increasing the flange thickness is more effective than increasing the web thickness in enhancing the flexural capacity of RHFBs. The current steel design standards, AS 4100 (1998) and AS/NZS 4600 (1996), are found sufficient to predict the section moment capacity of RHFBs. However, the results indicate that AS/NZS 4600 is more accurate for slender sections whereas AS 4100 is more accurate for compact sections. The finite element analysis results further indicate that the current design rules given in AS/NZS 4600 are adequate in predicting the member moment capacity of RHFBs subject to lateral torsional buckling effects.
However, they were inadequate in predicting the capacities of RHFBs subject to lateral distortional buckling effects. This thesis has therefore developed a new design formula to predict the lateral distortional buckling strength of RHFBs. Overall, this thesis has demonstrated that the innovative RHFB sections can perform well as economically and structurally efficient flexural members. Structural engineers and designers should make use of the new design rules and the validated existing design rules to design optimum RHFB sections depending on the type of application. The intermittent screw fastening method has also been shown to be structurally adequate while minimising fabrication cost. Product manufacturers and builders should be able to make use of this in their applications.
Abstract:
This research has established, through ultrasound, near infrared spectroscopy and biomechanics experiments, parameters and parametric relationships that can form the framework for quantifying the integrity of the articular cartilage-on-bone laminate, and objectively distinguish between normal/healthy and abnormal/degenerated joint tissue, with a focus on articular cartilage. This has been achieved by: 1. using traditional experimental methods to produce new parameters for cartilage assessment; 2. using novel methodologies to develop new parameters; and 3. investigating the interrelationships between mechanical, structural and molecular properties to identify and select those parameters and methodologies that can be used in a future arthroscopic probe based on points 1 and 2. By combining the molecular, micro- and macro-structural characteristics of the tissue with its mechanical properties, we arrive at a set of critical benchmarking parameters for viable and early-stage non-viable cartilage. The interrelationships between these characteristics, examined using a multivariate analysis based on principal components analysis, multiple linear regression and general linear modelling, could then be used to determine those parameters and relationships which have the potential to be developed into a future clinical device. Specifically, this research has found that the ultrasound and near infrared techniques can subsume the mechanical parameters and combine to characterise the tissue at the molecular, structural and mechanical levels over the full depth of the cartilage matrix. It is the opinion of this thesis that by enabling the determination of the precise area of influence of a focal defect or disease in the joint, and by demarcating the boundaries of articular cartilage with different levels of degeneration around a focal defect, better surgical decisions will be made, advancing the processes of joint management and treatment. Providing the basis for a surgical tool, this research will contribute to the enhancement and quantification of arthroscopic procedures, extending to post-treatment monitoring; as a research tool, it will enable a robust method for evaluating developing (particularly focalised) treatments.
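The multivariate workflow described (principal components analysis feeding a regression of mechanical properties on spectroscopic parameters) might look roughly like the sketch below; the spectral and mechanical arrays are random placeholders rather than cartilage measurements, and the model choice is illustrative only.

```python
# Hedged sketch of the PCA-plus-regression analysis pattern; all arrays are
# random placeholders, not measured ultrasound/NIR or mechanical data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
spectra = rng.normal(size=(40, 200))         # 40 specimens x 200 NIR/US-derived features
stiffness = rng.normal(loc=5.0, size=40)     # hypothetical mechanical parameter (MPa)

components = PCA(n_components=5).fit_transform(spectra)   # reduce collinear features
model = LinearRegression().fit(components, stiffness)     # relate to mechanics
print("R^2 on the fitted components:", model.score(components, stiffness))
```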
Abstract:
The hydrodynamic environment “created” by bioreactors for the culture of a tissue engineered construct (TEC) is known to influence cell migration, proliferation and extracellular matrix production. However, tissue engineers have looked at bioreactors as black boxes within which TECs are cultured mainly by trial and error, as the complex relationship between the hydrodynamic environment and tissue properties remains elusive, yet is critical to the production of clinically useful tissues. It is well known in the chemical and biotechnology fields that a more detailed description of fluid mechanics and nutrient transport within process equipment can be achieved via the use of computational fluid dynamics (CFD) technology. Hence, the coupling of experimental methods and computational simulations forms a synergistic relationship that can potentially yield greater and more cohesive data sets for bioreactor studies. This review discusses the rationale for using CFD in bioreactor studies related to tissue engineering, as fluid flow processes and phenomena have direct implications for cellular responses such as migration and/or proliferation. We conclude that CFD should be seen by tissue engineers as an invaluable tool allowing us to analyse and visualise the impact of fluidic forces and stresses on cells and TECs.
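As a simple illustration of the kind of hydrodynamic quantity such simulations provide to tissue engineers, the sketch below estimates wall shear stress in a parallel-plate flow chamber from the classical laminar-flow expression τ = 6μQ/(bh²); the chamber dimensions and flow rate are assumed values, and realistic bioreactor geometries would require full CFD rather than this analytical shortcut.

```python
# Hedged illustration: wall shear stress in an idealised parallel-plate flow chamber,
# tau = 6 * mu * Q / (b * h^2). All dimensions and properties are assumed values.

def wall_shear_stress(flow_m3_s: float, viscosity_pa_s: float,
                      width_m: float, height_m: float) -> float:
    """Wall shear stress (Pa) for fully developed laminar flow between parallel plates."""
    return 6.0 * viscosity_pa_s * flow_m3_s / (width_m * height_m ** 2)

# Example: 2 mL/min of culture medium (~0.78 mPa.s) through a 10 mm x 0.5 mm channel.
tau = wall_shear_stress(flow_m3_s=2e-6 / 60, viscosity_pa_s=0.78e-3,
                        width_m=0.01, height_m=0.5e-3)
print(f"Estimated wall shear stress: {tau:.3f} Pa")
```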
Abstract:
Understanding the impacts of traffic and climate change on water quality helps decision makers to develop better policy and plans for dealing with unsustainable urban and transport development. This chapter presents detailed methodologies developed for sample collection and testing for heavy metals and total petroleum hydrocarbons, as part of a research study to investigate the impacts of climate change and changes to urban traffic characteristics on pollutant build-up and wash-off from urban road surfaces. Cadmium, chromium, nickel, copper, lead, iron, aluminium, manganese and zinc were the target heavy metals, and selected gasoline and diesel range organics were the target total petroleum hydrocarbons for this study. The study sites were selected to encompass the urban traffic characteristics of the Gold Coast region, Australia. An improved sample collection method referred to as ‘the wet and dry vacuum system’ for the pollutant build-up, and an effective wash-off plan to incorporate predicted changes to rainfall characteristics due to climate change, were implemented. The novel approach to sample collection for pollutant build-up helped to maintain the integrity of collection efficiency. The wash-off plan helped to incorporate the predicted impacts of climate change in the Gold Coast region. The robust experimental methods developed will help in field sample collection and chemical testing of different stormwater pollutants in build-up and wash-off.
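For context, pollutant wash-off from road surfaces is often described with an exponential relationship of the form W = B(1 − e^(−kIt)), where B is the available build-up, I the rainfall intensity, t the event duration and k an empirical wash-off coefficient. The sketch below illustrates that generic model with invented parameter values; it is not necessarily the formulation adopted in this chapter.

```python
# Hedged sketch of the exponential wash-off relationship commonly used in stormwater
# studies; all parameter values are invented placeholders.
import math

def wash_off_load(build_up_mg_m2: float, intensity_mm_h: float,
                  duration_h: float, k_per_mm: float) -> float:
    """Pollutant load (mg/m^2) removed from the surface during one rainfall event."""
    return build_up_mg_m2 * (1.0 - math.exp(-k_per_mm * intensity_mm_h * duration_h))

# Example: 120 mg/m^2 of built-up zinc, a 20 mm/h storm lasting 0.5 h, k = 0.08 per mm.
print(f"{wash_off_load(120, 20, 0.5, 0.08):.1f} mg/m^2 washed off")
```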
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model gives significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
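A minimal sketch of the fringe-counting logic behind the non-contact temperature measurement described above: each intensity oscillation of the probe beam corresponds to one wavelength of additional optical path difference, so the temperature change per fringe is λ/(L·|d(Δn)/dT|). The thermo-optic birefringence coefficient used below is an assumed order-of-magnitude value for LiNbO3, not a calibrated figure from the thesis.

```python
# Hedged sketch of inferring temperature change from counted birefringence fringes;
# the thermo-optic coefficient is an assumed order-of-magnitude value only.

def temperature_change(fringes_counted: int, wavelength_m: float,
                       crystal_length_m: float, dbirefringence_dT: float) -> float:
    """Temperature change (K) inferred from the number of observed intensity oscillations."""
    per_fringe = wavelength_m / (crystal_length_m * abs(dbirefringence_dT))
    return fringes_counted * per_fringe

# Example: 12 oscillations, 633 nm probe, 10 mm crystal, |d(dn)/dT| ~ 4e-5 per kelvin.
print(f"dT ~ {temperature_change(12, 633e-9, 0.01, 4e-5):.1f} K")
```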
Abstract:
The effects of tumour motion during radiation therapy delivery have been widely investigated. Motion effects have become increasingly important with the introduction of dynamic radiotherapy delivery modalities such as enhanced dynamic wedges (EDWs) and intensity modulated radiation therapy (IMRT), where a dynamically collimated radiation beam is delivered to the moving target, resulting in dose blurring and interplay effects which are a consequence of the combined tumour and beam motion. Prior to this work, reported studies on EDW-based interplay effects have been restricted to the use of experimental methods for assessing single-field, non-fractionated treatments. In this work, the interplay effects have been investigated for EDW treatments. Single and multiple field treatments have been studied using experimental and Monte Carlo (MC) methods. Initially, this work experimentally studies interplay effects for single-field, non-fractionated EDW treatments, using radiation dosimetry systems placed on a sinusoidally moving platform. A number of wedge angles (60°, 45° and 15°), field sizes (20 × 20, 10 × 10 and 5 × 5 cm²), amplitudes (10-40 mm in steps of 10 mm) and periods (2 s, 3 s, 4.5 s and 6 s) of tumour motion are analysed (using gamma analysis) for parallel and perpendicular motions (where the tumour and jaw motions are either parallel or perpendicular to each other). For parallel motion it was found that both the amplitude and period of tumour motion affect the interplay; this becomes more prominent where the collimator and tumour speeds become identical. For perpendicular motion the amplitude of tumour motion is the dominant factor, whereas varying the period of tumour motion has no observable effect on the dose distribution. The wedge angle results suggest that the use of a large wedge angle generates greater dose variation for both parallel and perpendicular motions. The use of a small field size with a large tumour motion results in the loss of the wedged dose distribution for both parallel and perpendicular motion. From these single field measurements, a motion amplitude and period have been identified which show the poorest agreement between the target motion and dynamic delivery, and these are used as the ‘worst case motion parameters’. The experimental work is then extended to multiple-field fractionated treatments. Here a number of pre-existing, multiple-field, wedged lung plans are delivered to the radiation dosimetry systems, employing the worst case motion parameters. Moreover, a four-field EDW lung plan (using a 4D CT data set) is delivered to the IMRT quality control phantom with a dummy tumour insert over four fractions using the worst case parameters, i.e. 40 mm amplitude and 6 s period. The analysis of the film doses using gamma analysis at 3%-3 mm indicates non-averaging of the interplay effects for this particular study, with a gamma pass rate of 49%. To enable Monte Carlo modelling of the problem, the DYNJAWS component module (CM) of the BEAMnrc user code is validated and automated. DYNJAWS has been recently introduced to model the dynamic wedges. DYNJAWS is therefore commissioned for 6 MV and 10 MV photon energies. It is shown that this CM can accurately model the EDWs for a number of wedge angles and field sizes. The dynamic and step-and-shoot modes of the CM are compared for their accuracy in modelling the EDW. It is shown that the dynamic mode is more accurate. An automation of the DYNJAWS-specific input file has been carried out.
This file specifies the probability of selection of a subfield and the respective jaw coordinates. This automation simplifies the generation of the BEAMnrc input files for DYNJAWS. The commissioned DYNJAWS model is then used to study multiple field EDW treatments using MC methods. The 4D CT data of an IMRT phantom with the dummy tumour is used to produce a set of Monte Carlo simulation phantoms, onto which the delivery of single field and multiple field EDW treatments is simulated. A number of static and motion multiple field EDW plans have been simulated. The comparison of dose volume histograms (DVHs) and gamma volume histograms (GVHs) for four-field EDW treatments (where the collimator and patient motion are in the same direction) using small (15°) and large (60°) wedge angles indicates a greater mismatch between the static and motion cases for the large wedge angle. Finally, to use gel dosimetry as a validation tool, a new technique called the ‘zero-scan method’ is developed for reading the gel dosimeters with x-ray computed tomography (CT). It has been shown that multiple scans of a gel dosimeter (in this case 360 scans) can be used to reconstruct a zero-scan image. This zero-scan image has a similar precision to an image obtained by averaging the CT images, without the additional dose delivered by the CT scans. In this investigation the interplay effects have been studied for single and multiple field fractionated EDW treatments using experimental and Monte Carlo methods. For the Monte Carlo methods, the DYNJAWS component module of the BEAMnrc code has been validated and automated, and further used to study the interplay for multiple field EDW treatments. The zero-scan method, a new gel dosimetry readout technique, has been developed for reading the gel images using x-ray CT without losing precision or accuracy.
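A hedged sketch of a one-dimensional global gamma comparison at 3%/3 mm, in the spirit of the film analysis reported above; the "planned" and "delivered" dose profiles below are fabricated for illustration, and real film analysis would be performed on two-dimensional dose maps.

```python
# Hedged sketch of a 1-D global gamma comparison at 3%/3 mm; dose profiles are fabricated.
import numpy as np

def gamma_1d(positions_mm, dose_eval, dose_ref, dta_mm=3.0, dose_crit_frac=0.03):
    """Return the gamma value at each evaluated point (global dose normalisation)."""
    dose_crit = dose_crit_frac * dose_ref.max()
    gammas = np.empty_like(dose_eval, dtype=float)
    for i, (x, d) in enumerate(zip(positions_mm, dose_eval)):
        dist2 = ((positions_mm - x) / dta_mm) ** 2
        dose2 = ((dose_ref - d) / dose_crit) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return gammas

x = np.arange(0, 100, 1.0)                             # positions in mm
reference = 100 * np.exp(-((x - 50) / 25) ** 2)        # "planned" wedged profile (made up)
measured = 0.97 * 100 * np.exp(-((x - 52) / 25) ** 2)  # shifted/scaled "delivered" profile

g = gamma_1d(x, measured, reference)
print(f"Gamma pass rate (gamma <= 1): {100 * np.mean(g <= 1):.0f}%")
```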
Abstract:
Mobile devices and smartphones have become a significant communication channel for everyday life. The sensing capabilities of mobile devices are expanding rapidly, and sensors embedded in these devices are cheaper and more powerful than before. It is evident that mobile devices have become the most suitable candidates to sense contextual information without needing extra tools. However, current research shows that only a limited number of sensors are being explored and investigated. As a result, it is still unclear what forms of contextual information extracted from mobile sensors are useful. Therefore, this research investigates context sensing using current mobile sensors. The study follows experimental methods: sensor data is evaluated and synthesised in order to deduce the value of individual sensors, and combinations of sensors, for use in context-aware mobile applications. This study aims to develop a context fusion framework that will enhance context-awareness in mobile applications, as well as exploring innovative techniques for context sensing on smartphone devices.
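Purely as an illustration of what context fusion from several smartphone sensors might look like (the abstract does not specify the framework's internals), the sketch below combines a few hypothetical readings into a single context label; the sensor fields, thresholds and labels are invented for illustration only.

```python
# Hedged, highly simplified sketch of rule-based context fusion from smartphone sensors;
# sensor names, thresholds and context labels are invented placeholders.
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    accel_magnitude_g: float   # from the accelerometer
    light_lux: float           # from the ambient light sensor
    speed_m_s: float           # derived from successive GPS fixes

def infer_context(s: SensorSnapshot) -> str:
    if s.speed_m_s > 3.0:
        return "commuting"
    if s.accel_magnitude_g > 1.3:
        return "walking"
    return "indoors, stationary" if s.light_lux < 100 else "outdoors, stationary"

print(infer_context(SensorSnapshot(accel_magnitude_g=1.05, light_lux=40, speed_m_s=0.1)))
```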
Abstract:
This thesis examines how psychosocial factors influence the report of persistent symptoms after mild traumatic brain injury. Using quasi-experimental methods, the research program demonstrates how factors unrelated to trauma-induced physiological brain damage can contribute to persistent symptoms after a mild traumatic brain injury. The results of this thesis highlight the possibility that outcome from mild traumatic brain injury could be improved by targeting psychosocial factors.
Abstract:
The aim of the thesis was to assess how closely the output of a computer-assisted embroidery design program corresponds to the original image. The study focused on embroidery with household machines and was carried out from a usability point of view using two automatic techniques of Brother's PE-design 6.0 embroidery design program: multicoloured fragment design and multicoloured stitch surface design. The subject is highly topical because of the rapid development of machine embroidery. The theoretical background covers the history of household sewing machines, embroidery sewing machines, stitch types in household sewing machines, embroidery design programs, and the six automatic techniques of the PE-design 6.0 program. The design of embroidery patterns was also covered: the original image, digitizing, punching, suitable sewing threads, and the relationship between embroidery designs and the materials used for embroidery. The correspondence of the sewn appearances was examined using experimental sewing methods. Eighteen research samples based on five original images were sewn with both techniques. The experiments were divided into four testing stages in the design program, and each testing stage was followed by experimental sewing with a Brother Super Galaxie 3100D embroidery machine. The experiments were documented in process files and in forms created for each technique. The research samples were analysed on an image-syntactic basis using sensory perception assessment. The original images and the correspondence of the embroidered appearances were analysed with a purpose-made form, divided into colour and shape assessment on a five-stage similarity scale. Based on this correspondence analysis, with both automatic techniques the best correspondence of colour and shape was achieved by changing the default settings and using the maker's own thread chart and an edited original image. The testing could not establish whether the image-editing capabilities of the program are sufficient or whether optimum correspondence requires a separate program. When aiming at correspondence between two images, the computer is unable to reproduce the appearance of the original image by itself; in processing a computer-assisted embroidery image, human perception and personal decision-making are unavoidable.