881 results for "large course design"


Relevance: 30.00%

Abstract:

Using current software engineering technology, the robustness required for safety-critical software is not assurable. However, different approaches are possible which can help to assure software robustness to some extent. To achieve high-reliability software, methods should be adopted which avoid introducing faults (fault avoidance); then testing should be carried out to identify any faults which persist (error removal). Finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). The verification of correctness in the system design specification and the performance analysis of the model are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration to achieve reliability and safety. By and large, fault avoidance requires human analysis, which is error prone; by reducing human involvement in the tedious aspects of modelling and analysis of the software it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language where interprocess interaction takes place by communication. This may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies, such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems. Where Occam is used as a design language, state-space methods, such as Petri-nets, can be used in analysis and simulation to determine the dynamic behaviour of the software, and to identify structures which may be prone to deadlock so that they may be eliminated from the design before the program is ever run. This design software tool consists of two parts. One takes an input program and translates it into a mathematical model (Petri-net), which is used for modelling and analysis of the concurrent software. The second part is the Petri-net simulator that takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies 'deadlock potential' which the user can explore further. Finally, the software tool has been applied to a number of Occam programs. Two examples were taken to show how the tool works in the early design phase for fault prevention before the program is ever run.
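
The thesis tool itself is not reproduced here, but the kind of analysis it performs can be sketched. The following minimal Python example (an illustration only, not the Occam-to-Petri-net translator described above) builds the reachability set of a tiny place/transition net and reports markings in which no transition is enabled as 'deadlock potential'; the example net models two processes acquiring two shared channels in opposite order, the classic circular-wait pattern.

```python
# Minimal sketch: breadth-first reachability analysis of a small
# place/transition Petri net. Dead markings (nothing enabled) are
# reported as 'deadlock potential' for the user to examine.

from collections import deque

# Each transition: (tokens consumed, tokens produced), as place -> count.
TRANSITIONS = {
    "p1_take_a": ({"chan_a": 1, "p1_idle": 1},  {"p1_has_a": 1}),
    "p1_take_b": ({"chan_b": 1, "p1_has_a": 1}, {"p1_done": 1}),
    "p2_take_b": ({"chan_b": 1, "p2_idle": 1},  {"p2_has_b": 1}),
    "p2_take_a": ({"chan_a": 1, "p2_has_b": 1}, {"p2_done": 1}),
}
INITIAL = {"chan_a": 1, "chan_b": 1, "p1_idle": 1, "p2_idle": 1}

def enabled(marking, consumed):
    return all(marking.get(place, 0) >= n for place, n in consumed.items())

def fire(marking, consumed, produced):
    new = dict(marking)
    for place, n in consumed.items():
        new[place] -= n
    for place, n in produced.items():
        new[place] = new.get(place, 0) + n
    return {place: n for place, n in new.items() if n > 0}

def reachability(initial, transitions):
    """Return (all reachable markings, dead markings with nothing enabled)."""
    seen, dead = [], []
    queue = deque([initial])
    while queue:
        marking = queue.popleft()
        if marking in seen:
            continue
        seen.append(marking)
        successors = [fire(marking, c, p)
                      for c, p in transitions.values() if enabled(marking, c)]
        if not successors:
            dead.append(marking)
        queue.extend(successors)
    return seen, dead

if __name__ == "__main__":
    reachable, dead = reachability(INITIAL, TRANSITIONS)
    print(f"{len(reachable)} reachable markings")
    for marking in dead:
        print("deadlock potential:", marking)
```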

Relevance: 30.00%

Abstract:

The objective of this work was to design, construct and commission a new ablative pyrolysis reactor and a high-efficiency product collection system. The reactor was to have a nominal throughput of 10 kg/hr of dry biomass and be inherently scalable up to an industrial-scale application of 10 tonnes/hr. The whole process consists of a bladed ablative pyrolysis reactor, two high-efficiency cyclones for char removal and a disk and doughnut quench column combined with a wet-walled electrostatic precipitator, which is directly mounted on top, for liquids collection. In order to aid design and scale-up calculations, detailed mathematical modelling was undertaken of the reaction system, enabling sizes, efficiencies and operating conditions to be determined. Specifically, a modular approach was taken due to the iterative nature of some of the design methodologies, with the output from one module being the input to the next. Separate modules were developed for the determination of the biomass ablation rate, specification of the reactor capacity, cyclone design, quench column design and electrostatic precipitator design. These models enabled a rigorous design protocol to be developed, capable of specifying the required reactor and product collection system size for specified biomass throughputs, operating conditions and collection efficiencies. The reactor proved capable of generating an ablation rate of 0.63 mm/s for pine wood at a temperature of 525 °C with a relative velocity between the heated surface and reacting biomass particle of 12.1 m/s. The reactor achieved a maximum throughput of 2.3 kg/hr, which was the maximum the biomass feeder could supply. The reactor is capable of being operated at a far higher throughput but this would require a new feeder and drive motor to be purchased. Modelling showed that the reactor is capable of achieving a throughput of approximately 30 kg/hr. This is an area that should be considered for the future as the reactor is currently operating well below its theoretical maximum. Calculations show that the current product collection system could operate efficiently up to a maximum feed rate of 10 kg/hr, provided the inert gas supply was adjusted accordingly to keep the vapour residence time in the electrostatic precipitator above one second. Operation above 10 kg/hr would require some modifications to the product collection system. Eight experimental runs were documented and considered successful; more were attempted but had to be abandoned due to equipment failure. This does not detract from the fact that the reactor and product collection system design was extremely efficient. The maximum total liquid yield was 64.9% on a dry-wood-fed basis. It is considered that the liquid yield would have been higher had there been sufficient development time to overcome certain operational difficulties, and if longer operating runs had been attempted to offset product losses occurring due to the difficulties in collecting all available product from a large-scale collection unit. The liquids collection system was highly efficient and modelling determined a liquid collection efficiency of above 99% on a mass basis. This was validated by a dry-ice/acetone condenser and a cotton-wool filter downstream of the collection unit, which enabled the condensable product escaping the collection unit to be weighed and confirmed a collection efficiency in excess of 99% on a mass basis.
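
The thesis's design modules are not reproduced in the abstract, but the link between the measured ablation rate and a throughput ceiling can be illustrated with a back-of-envelope calculation. In the Python sketch below, the ablation rate comes from the abstract, while the dry pine density and the effective blade/particle contact area are assumed values chosen only for illustration.

```python
# Back-of-envelope sketch (not the thesis design modules): relate the
# measured ablation rate to an approximate throughput ceiling.

ABLATION_RATE = 0.63e-3   # m/s, measured for pine at 525 C (from the abstract)
WOOD_DENSITY = 500.0      # kg/m3, assumed dry pine density
CONTACT_AREA = 0.017      # m2, assumed effective ablating contact area

mass_flux = ABLATION_RATE * WOOD_DENSITY           # kg/(m2 s) of wood consumed
throughput = mass_flux * CONTACT_AREA * 3600.0     # kg/hr

print(f"ablative mass flux : {mass_flux:.3f} kg/m2/s")
print(f"throughput ceiling : {throughput:.1f} kg/hr")
# With these assumed numbers the ceiling is roughly 19 kg/hr -- the same
# order of magnitude as the ~30 kg/hr predicted by the thesis modelling,
# and well above the 2.3 kg/hr limit imposed by the biomass feeder.
```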

Relevance: 30.00%

Abstract:

Investigations into the modelling techniques that depict the transport of discrete phases (gas bubbles or solid particles) and model biochemical reactions in a bubble column reactor are discussed here. The mixture model was used to calculate gas-liquid, solid-liquid and gas-liquid-solid interactions. Multiphase flow is a difficult phenomenon to capture, particularly in bubble columns where the major driving force is caused by the injection of gas bubbles. The gas bubbles cause a large density difference to occur that results in transient multi-dimensional fluid motion. Standard design procedures do not account for the transient motion, due to the simplifying assumptions of steady plug flow. Computational fluid dynamics (CFD) can assist in expanding the understanding of complex flows in bubble columns by characterising the flow phenomena for many geometrical configurations. Therefore, CFD has a role in the education of chemical and biochemical engineers, providing examples of flow phenomena that many engineers may not experience, even through experimentation. The performance of the mixture model was investigated for three domains (plane, rectangular and cylindrical) and three flow models (laminar, k-ε turbulence and Reynolds stresses). This investigation raised many questions about how gas-liquid interactions are captured numerically. To answer some of these questions the analogy between thermal convection in a cavity and gas-liquid flow in bubble columns was invoked. This involved modelling the buoyant motion of air in a narrow cavity for a number of turbulence schemes. The difference in density was caused by a temperature gradient that acted across the width of the cavity. Multiple vortices were obtained when the Reynolds stresses were utilised with the addition of a basic flow profile after each time step. To implement the three-phase models, an alternative mixture model was developed and compared against a commercially available mixture model for three turbulence schemes. The scheme in which just the Reynolds stress model was employed predicted the transient motion of the fluids quite well for both mixture models. Solid-liquid flow and then alternative formulations of the gas-liquid-solid model were compared against one another. The alternative form of the mixture model was found to perform particularly well for both gas and solid phase transport when calculating two- and three-phase flow. The improvement in the solutions obtained was a result of the inclusion of the Reynolds stress model and differences in the mixture models employed. The differences between the alternative mixture models were found in the volume fraction equation (flux and deviatoric stress tensor terms) and the viscosity formulation for the mixture phase.
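
The abstract does not state the equations used, so purely for orientation the standard drift-flux form of the mixture-model volume fraction (phase continuity) equation is shown below; the alternative formulation developed in the thesis is said to differ in the flux and deviatoric stress terms. Here α_k and ρ_k are the volume fraction and density of dispersed phase k, u_m is the mixture velocity and u_dr,k the drift velocity of phase k relative to the mixture.

```latex
% Standard mixture-model (drift-flux) continuity equation for the
% dispersed phase k -- orientation only; the thesis's alternative
% formulation differs in the flux and deviatoric stress terms.
\frac{\partial}{\partial t}\left(\alpha_k \rho_k\right)
  + \nabla \cdot \left(\alpha_k \rho_k \mathbf{u}_m\right)
  = -\,\nabla \cdot \left(\alpha_k \rho_k \mathbf{u}_{dr,k}\right),
\qquad
\mathbf{u}_{dr,k} = \mathbf{u}_k - \mathbf{u}_m
```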

Relevance: 30.00%

Abstract:

This work is concerned with the assessment of a newer version of the spout-fluid bed where the gas is supplied from a common plenum and the distributor controls the operational phenomenon. Thus the main body of the work deals with the effect of the distributor design on the mixing and segregation of solids in a spout-filled bed. The effects of distributor design in the conventional fluidised bed and of variation of the gas inlet diameter in a spouted bed were also briefly investigated for purposes of comparison. Large particles were selected for study because they are becoming increasingly important in industrial fluidised beds but have not been thoroughly investigated. The mean particle diameters of the fraction ranged from 550 to 2400 µm, and their specific gravity from 0.97 to 2.45. Only work carried out with binary systems is reported here. The effects of air velocity, particle properties, bed height, the relative amounts of jetsam and flotsam, and initial conditions on the steady-state concentration profiles were assessed with selected distributors. The work is divided into three sections. Sections I and II deal with the fluidised bed and spouted bed systems. Section III covers the development of the spout-filled bed and its behaviour with reference to distributor design, and it is shown how the benefits of both spouting and fluidising phenomena can be exploited. In the fluidisation zone, better mixing is achieved by distributors which produce a large initial bubble diameter. Some common features exist between the behaviour of unidensity jetsam-rich systems and different-density flotsam-rich systems. The shape factor does not seem to have an effect as long as it is restricted to the minor component. However, in the case of the major component, particle shape significantly affects the final results. Studies of aspect ratio showed that there is a maximum (1.5) above which slugging occurs and the effect of the distributor design is nullified. A mixing number was developed for unidensity spherical rich systems, which proved to be extremely useful in quantifying the variation in mixing and segregation with changes in distributor design.

Relevance: 30.00%

Abstract:

Computer-Based Learning systems of one sort or another have been in existence for almost 20 years, but they have yet to achieve real credibility within Commerce, Industry or Education. A variety of reasons could be postulated for this, typically:
- cost
- complexity
- inefficiency
- inflexibility
- tedium
Obviously different systems deserve different levels and types of criticism, but it still remains true that Computer-Based Learning (CBL) is falling significantly short of its potential. Experience of a small but highly successful CBL system within a large, geographically distributed industry (the National Coal Board) prompted an investigation into currently available packages, the original intention being to purchase the most suitable software and run it on existing computer hardware, alongside existing software systems. It became apparent that none of the available CBL packages were suitable, and a decision was taken to develop an in-house Computer-Assisted Instruction system according to the following criteria:
- cheap to run;
- easy to author course material;
- easy to use;
- requires no computing knowledge to use (as either an author or a student);
- efficient in the use of computer resources;
- has a comprehensive range of facilities at all levels.
This thesis describes the initial investigation, the resultant observations and the design, development and implementation of the SCHOOL system. One of the principal characteristics of SCHOOL is that it uses a hierarchical database structure for the storage of course material, thereby inherently providing a great deal of the power, flexibility and efficiency originally required. Trials using the SCHOOL system on IBM 303X series equipment are also detailed, along with proposed and current development work on what is essentially an operational CBL system within a large-scale industrial environment.
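
The actual SCHOOL database layout is not documented in the abstract, so the sketch below is only an illustration of the general idea of storing course material hierarchically (course → module → frame) and traversing it in presentation order. The node names and content are invented.

```python
# Illustrative sketch only, in the spirit of a hierarchical course store;
# not the SCHOOL system's actual database structure.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    content: str = ""                 # teaching text or question for leaf frames
    children: list["Node"] = field(default_factory=list)

    def add(self, child: "Node") -> "Node":
        self.children.append(child)
        return child

    def walk(self, depth: int = 0):
        """Depth-first traversal, i.e. the order a student would see frames."""
        yield depth, self
        for child in self.children:
            yield from child.walk(depth + 1)

course = Node("Safety Induction Course")
module = course.add(Node("Gas Detection"))
module.add(Node("Frame 1", "Why methane monitoring matters ..."))
module.add(Node("Frame 2", "Question: at what concentration ... ?"))

for depth, node in course.walk():
    print("  " * depth + node.name)
```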

Relevance: 30.00%

Abstract:

This thesis reports the development of a reliable method for the prediction of response to electromagnetically induced vibration in large electric machines. The machines of primary interest are DC ship-propulsion motors, but much of the work reported has broader significance. The investigation has involved work in five principal areas:
(1) the development and use of dynamic substructuring methods;
(2) the development of special elements to represent individual machine components;
(3) laboratory-scale investigations to establish empirical values for properties which affect machine vibration levels;
(4) experiments on machines on the factory test-bed to provide data for correlation with prediction;
(5) reasoning with regard to the effect of various design features.
The limiting factor in producing good vibration models of these machines is the time required for an analysis to take place. Dynamic substructuring methods were adopted early in the project to maximise the efficiency of the analysis. A review of existing substructure-representation and composite-structure assembly methods includes comments on which are most suitable for this application. In three appendices to the main volume, methods developed by the author to accelerate analyses are presented. Despite significant advances in this area, the limiting factor in machine analyses is still time. The representation of individual machine components was addressed as another means by which the time required for an analysis could be reduced. This has resulted in the development of special elements which are more efficient than their finite-element counterparts. The laboratory-scale experiments reported were undertaken to establish empirical values for the properties of three distinct features: lamination stacks, bolted-flange joints in rings and cylinders, and the shimmed pole-yoke joint. These are central to the preparation of an accurate machine model. The theoretical methods are tested numerically and correlated with tests on two machines (running and static). A system has been devised with which the general electromagnetic forcing may be split into its most fundamental components. This is used to draw some conclusions about the probable effects of various design features.
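
The abstract does not say which substructure representation was adopted, so the following is a generic orientation only: in the widely used fixed-interface (Craig-Bampton) reduction, a substructure's interior displacements u_i are approximated by a few kept fixed-interface normal modes Φ_k plus constraint modes Ψ_c (the static deflections for unit boundary displacements u_b), and the reduced mass and stiffness matrices are congruence transforms of the full ones. The thesis's own representation and assembly methods may differ.

```latex
% Generic fixed-interface (Craig-Bampton) substructure reduction --
% illustration of dynamic substructuring only, not the thesis's method.
\begin{pmatrix} \mathbf{u}_i \\ \mathbf{u}_b \end{pmatrix}
  \approx \mathbf{T}
  \begin{pmatrix} \mathbf{q} \\ \mathbf{u}_b \end{pmatrix},
\qquad
\mathbf{T} =
  \begin{pmatrix} \boldsymbol{\Phi}_k & \boldsymbol{\Psi}_c \\ \mathbf{0} & \mathbf{I} \end{pmatrix},
\qquad
\mathbf{M}_r = \mathbf{T}^{\mathsf{T}} \mathbf{M}\,\mathbf{T},
\quad
\mathbf{K}_r = \mathbf{T}^{\mathsf{T}} \mathbf{K}\,\mathbf{T}
```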

Relevance: 30.00%

Abstract:

Changes in modern structural design have created a demand for products which are light but possess high strength. The objective is a reduction in fuel consumption and in the weight of materials, to satisfy both economic and environmental criteria. Cold roll forming has the potential to fulfil this requirement. The bending process is controlled by the shape of the profile machined on the periphery of the rolls. A CNC lathe can machine complicated profiles to a high standard of precision, but the expertise of a numerical control programmer is required. A computer program was developed during this project, using the expert system concept, to calculate tool paths and consequently to expedite the procurement of the machine control tapes whilst removing the need for a skilled programmer. Codifying human expertise and encapsulating that knowledge within computer memory removes the dependency on highly trained people, whose services can be costly, inconsistent and unreliable. A successful cold roll forming operation, where the product is geometrically correct and free from visual defects, is not easy to attain. The geometry of the sheet after travelling through the rolling mill depends on the residual strains generated by the elastic-plastic deformation. Accurate evaluation of the residual strains can provide the basis for predicting the geometry of the section. A study of geometric and material non-linearity, yield criteria, material hardening and stress-strain relationships was undertaken in this research project. The finite element method was chosen to provide a mathematical model of the bending process and, to ensure efficient manipulation of the large stiffness matrices, the frontal solution method was applied. A series of experimental investigations provided data to compare with corresponding values obtained from the theoretical modelling. A computer simulation capable of predicting that a design will be satisfactory prior to the manufacture of the rolls would allow effort to be concentrated on devising an optimum design where costs are minimised.
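
The abstract notes that the formed geometry depends on the residual strains left after elastic-plastic bending. As a one-line orientation (not the thesis's finite-element formulation): for a fibre loaded to total strain ε at stress σ, elastic unloading removes σ/E, leaving the residual strain below; the through-thickness mismatch of these residual strains is what drives springback of the section.

```latex
% Orientation only; E is Young's modulus.
\varepsilon_{\mathrm{res}} \;=\; \varepsilon \;-\; \frac{\sigma}{E}
```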

Relevance: 30.00%

Abstract:

In the present work, the elastic scattering of fast neutrons from iron and concrete samples was studied at incident neutron energies of 14.0 and 14.4 MeV, using a neutron spectrometer based on the associated-particle time-of-flight technique. These samples were chosen because of their importance in fusion reactor shielding and construction. Using the S.A.M.E.S. accelerator and the 3 MV Dynamitron accelerator at the Radiation Centre, 14.0 and 14.4 MeV neutrons were produced by the T(d,n)⁴He reaction at incident deuteron energies of 140 keV and 900 keV (mass-3 ions) respectively. The time of origin of the neutron was determined by detecting the associated alpha particles. The samples used were extended flat plates of thicknesses up to 1.73 mean free paths for iron and 2.3 mean free paths for concrete. The associated alpha particles and fast neutrons were detected by means of a plastic scintillator mounted on a fast focused photomultiplier tube. The differential neutron elastic scattering cross-sections were measured for 14 MeV neutrons in various thicknesses of iron and concrete in the angular range from zero to 90°. In addition, the angular distributions of 14.4 MeV neutrons after passing through extended samples of iron were measured at several scattering angles in the same angular range. The measurements obtained for the thin sample of iron were compared with the results of Coon et al. The differential cross-sections for the thin iron sample were also analysed with the optical model using the computer code RAROMP. For the concrete sample, the angular distribution of the thin sample was compared with the cross-sections calculated from the major constituent elements of concrete, and with the predicted values of the optical model for those elements. No published data could be found to compare with the results of the concrete differential cross-sections. In the case of thick samples of iron and concrete, the numbers of scattered neutrons were compared with a phenomenological calculation based on the continuous slowing-down model. The variation of measured cross-sections with sample thickness was found to follow the empirical relation σ = σ₀ exp(αx). By using the universal constant "K", good fits were obtained to the experimental data. In parallel with the work at 14.0 and 14.4 MeV, an associated-particle time-of-flight spectrometer was investigated which used the ²H(d,n)³He reaction to produce 3.02 MeV neutrons at an incident deuteron energy of 1 MeV.
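
The empirical relation σ = σ₀ exp(αx) quoted above can be fitted by a straight-line fit of log σ against sample thickness x. The short Python sketch below shows the procedure on invented numbers; the values are not the thesis data and are included only to illustrate the fit.

```python
# Illustration only (hypothetical numbers, not the thesis data): fit
# sigma = sigma0 * exp(alpha * x) by linear regression of log(sigma) on x.

import numpy as np

thickness = np.array([0.5, 1.0, 1.7, 2.3])   # sample thickness, mean free paths
sigma = np.array([0.92, 0.71, 0.48, 0.35])   # measured cross-section, barn (made up)

alpha, log_sigma0 = np.polyfit(thickness, np.log(sigma), 1)
sigma0 = np.exp(log_sigma0)

print(f"sigma0 = {sigma0:.3f} barn, alpha = {alpha:.3f} per mean free path")
print("fitted values:", sigma0 * np.exp(alpha * thickness))
```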

Relevance: 30.00%

Abstract:

The pneumonia caused by Pneumocystis carinii is ultimately responsible for the death of many acquired immunodeficiency syndrome (AIDS) patients. Large doses of trimethoprim and pyrimethamine in combination with a sulphonamide and/or pentamidine suppress the infection but produce serious side-effects and seldom prevent recurrence after treatment withdrawal. However, the partial success of the aforementioned antifolates, and also of trimetrexate used alone, does suggest dihydrofolate reductase (DHFR) as a target for the development of antipneumocystis agents. From the DHFR inhibitory activities of 3'-substituted pyrimethamine analogues it was suggested that the 3'-(3'',3''-dimethyltriazen-1''-yl) substituent may be responsible for the greater activity against the P. carinii enzyme than the mammalian enzyme. Crystallographic and molecular modelling studies revealed considerable geometrical and electronic differences between the triazene and the chemically related formamidine functions that may account for the differences in DHFR inhibitory profiles. Structural and electronic parameters calculated for a series of 3'-(3'',3''-disubstitutedtriazen-1''-yl)pyrimethamine analogues did not correlate with the DHFR inhibitory activities. However, in vitro screening against P. carinii DHFR revealed that the 3''-hydroxyethyl-3''-benzyl analogue was the most active and selective. Models of the active sites of human and P. carinii DHFRs were constructed using DHFR sequence and structural homology data which had identified key residues involved in substrate and cofactor binding. Low-energy conformations of the 3'',3''-dimethyl and 3''-hydroxyethyl-3''-benzyl analogues, determined from nuclear magnetic resonance studies and theoretical calculations, were docked by superimposing the diaminopyrimidine fragment onto a previously docked pyrimethamine analogue. Enzyme kinetic data supported the 3''-hydroxyethyl-3''-benzyl moiety being located in the NADPH binding groove. The 3''-benzyl substituent was able to locate to within 3 Å of a valine residue in the active site of P. carinii DHFR, thereby producing a hydrophobic contact. The equivalent residue in human DHFR is threonine, which is more hydrophilic and less likely to be involved in such a contact. This difference may account for the greater inhibitory activity this analogue has for P. carinii DHFR and provide a basis for future drug design. From an in vivo model of PCP in immunosuppressed rats it was established that the 3''-hydroxyethyl-3''-benzyl analogue was able to reduce the P. carinii burden more effectively with increasing doses, without causing any visible signs of toxicity. However, equivalent doses were not as effective as pentamidine, a current treatment of choice for Pneumocystis carinii pneumonia.

Relevance: 30.00%

Abstract:

This study is concerned with quality and productivity aspects of traditional house building. The research focuses on these issues by concentrating on the services and finishing stages of the building process. These are work stages which have not been fully investigated in previous productivity-related studies. The primary objective of the research is to promote an integrated design- and construction-led approach to traditional house building based on an original concept of 'development cycles'. This process involves the following: site monitoring; the analysis of work operations; implementing design and construction changes founded on unique information collected during site monitoring; and subsequent re-monitoring to measure and assess the effect of change. A volume house-building firm has been involved in this applied research and has allowed access to its sites for production monitoring purposes. The firm also assisted in design detailing for a small group of 'experimental' production houses where various design and construction changes were implemented. Results from the collaborative research have shown certain quality and productivity improvements to be possible using this approach, albeit on a limited scale at this early experimental stage. The improvements have been possible because an improved activity sampling technique, developed for and employed by the study, has been able to describe why many quality- and productivity-related problems occur during site building work. Experience derived from the research has shown the following attributes to be important: positive attitudes towards innovation; effective communication; careful planning and organisation; and good coordination and control at site level. These are all essential aspects of quality-led management and determine to a large extent the overall success of this approach. Future work recommendations must include a more widespread use of innovative practices so that further design and construction modifications can be made. By doing this, productivity can be improved, cost savings made and better quality afforded.

Relevance: 30.00%

Abstract:

This work is the result of an action-research-type study of the diversification effort of part of a major U.K. industrial company. Work in contingency theory concerning the impact of environmental factors on organizational design, and the systemic model of viable systems put forward by Stafford Beer, form the theoretical basis of the work. The two streams of thought are compared and found to offer similar conclusions about the design of effective organizations. These findings are taken as the framework for an analysis both of organization structures for promoting innovation described in the literature, and of those employed by the company for this purpose in recent years. Much attention is given to the use of venture groups, and conclusions are drawn on particular factors which may influence their success or failure. Both theoretical considerations and the examination of the company's recent experience suggested that the formation of the policy of diversification, as well as the method of implementation of the policy, might affect its outcome. Attention is therefore focused on the policy-making and planning process, and in particular on possible problems that this process could generate in a multi-division company. The view finally taken of the diversification effort is that it should be regarded as a learning system. This view helps to expose some ambiguities in the concepts of success and failure in this area, and demonstrates considerable weaknesses in traditional project evaluation procedures.

Relevance: 30.00%

Abstract:

In the Operations Management field, sustainable procurement has emerged as a way to green the purchasing and supply process. This paper explores issues in sustainable procurement training. The authors formed an interdisciplinary team to design, deliver and evaluate a training programme to promote and develop sustainable procurement in the United Kingdom health sector. Particular features of the project were its engagement with evolving and contested understandings of sustainable procurement and of the underlying concept of sustainable development and its recognition that relevant knowledge in the field is both incomplete and widely diffused through the procurement community. Eight practitioner groups worked together on themes to develop their understanding of sustainable procurement using the Blackboard virtual learning environment. Group interviews were conducted upon completion of the course and again three months later to explore qualitatively participants' experience of learning and implementing sustainable procurement. Although the course was delivered to practitioners, it might be modified for undergraduate and graduate students as it comprised the use of online activities in virtual learning environments, case studies and a broad range of literature. The course was also particularly significant in the context of contemporary policy moves in the United Kingdom and elsewhere to promote the role of higher education institutions in delivering workplace-based, high-skills education consistent with strategic policy considerations (see, for example, DIUS, 2008).

Relevance: 30.00%

Abstract:

Background: We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific-purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology.

Objectives: The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience.

Methods: Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects of the experiment's internal validity are explained extensively.

Results: The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under investigation. The findings demonstrate that a content-based design outperforms the traditional VLE-based design. © 2011 Wessa et al.
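
The paper's exact statistical analysis is not given in the abstract. A common starting point for estimating a treatment effect in a nonequivalent (non-randomised) group design is to adjust post-test scores for a pre-test covariate; the Python sketch below shows that approach on simulated data with hypothetical variable names, purely to illustrate the design logic, not the study's own analysis.

```python
# Illustration only (simulated data): pretest-adjusted comparison of two
# nonequivalent cohorts, a standard quasi-experimental analysis.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
cohort = np.repeat(["classic_vle", "content_based_sle"], n)
pretest = rng.normal(60, 10, 2 * n)
effect = np.where(cohort == "content_based_sle", 5.0, 0.0)   # assumed true effect
posttest = 10 + 0.8 * pretest + effect + rng.normal(0, 8, 2 * n)

df = pd.DataFrame({"cohort": cohort, "pretest": pretest, "posttest": posttest})
model = smf.ols("posttest ~ pretest + C(cohort)", data=df).fit()
print(model.params)   # coefficient on the cohort term ~ the adjusted effect
```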

Relevance: 30.00%

Abstract:

T cell activation is the final step in a complex pathway through which pathogen-derived peptide fragments can elicit an immune response. For it to occur, peptides must form stable complexes with Major Histocompatibility Complex (MHC) molecules and be presented on the cell surface. Computational predictors of MHC binding are often used within in silico vaccine design pathways. We have previously shown that, paradoxically, most bacterial proteins known experimentally to elicit an immune response in disease models are depleted in peptides predicted to bind to human MHC alleles. The results presented here, derived using software proven through benchmarking to be the most accurate currently available, show that vaccine antigens contain fewer predicted MHC-binding peptides than control bacterial proteins from almost all subcellular locations, with the exception of cell wall and some cytoplasmic proteins. This effect is too large to be explained by the undoubted lack of precision of the software or by the amino acid composition of the antigens. Instead, we propose that pathogens have evolved under the influence of the host immune system so that surface proteins are depleted in potential MHC-binding peptides, and suggest that identification of a protein likely to contain a single immuno-dominant epitope is likely to be a productive strategy for vaccine design.
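
The specific prediction software used in the study cannot be reproduced from the abstract, so the sketch below only illustrates the general workflow: slide a 9-mer window along a protein sequence, score each peptide with a binding predictor, and count peptides below a binding threshold. The `predicted_ic50` argument is a placeholder for a real MHC binding predictor (none is implemented here), and the 500 nM threshold is simply the conventional cut-off for calling a peptide a binder.

```python
# Workflow sketch only: counting predicted MHC-binding 9-mers in a protein.

import random
from typing import Callable

def count_predicted_binders(sequence: str,
                            predicted_ic50: Callable[[str], float],
                            length: int = 9,
                            threshold_nm: float = 500.0) -> int:
    """Number of overlapping peptides predicted to bind a given MHC allele."""
    peptides = (sequence[i:i + length]
                for i in range(len(sequence) - length + 1))
    return sum(1 for pep in peptides if predicted_ic50(pep) < threshold_nm)

def dummy_scorer(peptide: str) -> float:
    return random.uniform(10, 50000)   # stand-in IC50 values in nM, not a real predictor

print(count_predicted_binders("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", dummy_scorer))
```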

Relevance: 30.00%

Abstract:

Huge advertising budgets are invested by firms to reach and convince potential consumers to buy their products. To optimize these investments, it is fundamental not only to ensure that the appropriate consumers will be reached, but also that they will be in appropriate reception conditions. Marketing research has focused on the way consumers react to advertising, as well as on some individual and contextual factors that could mediate or moderate the ad's impact on consumers (e.g. motivation and ability to process information, or attitudes toward advertising). Nevertheless, a factor that potentially influences consumers' reactions to advertising has not yet been studied in marketing research: fatigue. Yet fatigue can affect key variables of advertising processing, such as the availability of cognitive resources (Lieury 2004). Fatigue is felt when the body warns us to stop an activity (or inactivity) and rest, allowing the individual to compensate for the effects of fatigue. Dittner et al. (2004) define it as "the state of weariness following a period of exertion, mental or physical, characterized by a decreased capacity for work and reduced efficiency to respond to stimuli." It signals that resources will run short if the ongoing activity continues. According to Schmidtke (1969), fatigue impairs information reception, perception, coordination, attention, concentration and thinking. In addition, for Markle (1984), fatigue reduces memory and communication ability while increasing reaction time and the number of errors. Thus, fatigue may have large effects on advertising processing. We suggest that fatigue determines the level of available resources.

Some research on consumer responses to advertising claims that complexity is a fundamental element to take into consideration. Complexity determines the cognitive effort the consumer must provide to understand the message (Putrevu et al. 2004). Thus, we suggest that complexity determines the level of required resources. To study this question of the supply of and demand for cognitive resources, we draw upon Resource Matching Theory. Anand and Sternthal (1989, 1990) were the first to state the Resource Matching principle: an ad is most persuasive when the resources required to process it match the resources the viewer is willing and able to provide. They show that when the required resources exceed those available, the message is not fully processed by the consumer, and when the available resources greatly exceed those required, the viewer elaborates critical or unrelated thoughts. According to Resource Matching Theory, the level of resources demanded by an ad can be high or low and is mostly determined by the ad's layout (Peracchio and Myers-Levy, 1997). We manipulate the level of required resources using three levels of ad complexity (low, high, extremely high). The resources available to an ad viewer, on the other hand, are determined by many contextual and individual variables. We manipulate the level of available resources using two levels of fatigue (low, high). Tired viewers want to limit processing effort to minimal resource requirements by relying on heuristics, forming an overall impression at first glance. It will be easier for them to decode the message when ads are very simple. On the contrary, the most effective ads for viewers who are not tired are complex enough to draw their attention and fully use their resources. They will use more analytical strategies, looking at the details of the ad. However, if ads are too complex they will be too difficult to understand; the viewer will be discouraged from processing the information and will overlook the ad.

The objective of our research is to study fatigue as a moderating variable of advertising information processing. We ran two experimental studies to assess the effect of fatigue on visual strategies, comprehension, persuasion and memorization. In Study 1, thirty-five undergraduate students enrolled in a marketing research course participated in the experiment. The experimental design is 2 (fatigue level: between-subjects) x 3 (ad complexity level: within-subjects). Participants were randomly assigned a schedule time (morning: 8-10 am, or evening: 10-12 pm) to perform the experiment. We chose to test subjects at various moments of the day to obtain maximum variance in their fatigue level, and we used participants' morningness/eveningness tendency (Horne & Ostberg, 1976) as a control variable. We assessed fatigue level using subjective measures (a questionnaire with fatigue scales) and objective measures (reaction time and number of errors). Regarding complexity levels, we designed our own ads in order to keep aspects other than complexity equal. To check the complexity manipulation, we ran a pretest using the Resource Demands scale (Keller and Bloch 1997) together with complexity ratings in the manner of Morrison and Dainoff (1972), and found three significantly different levels. After completing the fatigue scales, participants were asked to view the ads on a screen while their eye movements were recorded by an eye-tracker. Eye-tracking reveals patterns of visual attention (Pieters and Warlop 1999), from which we can infer respondents' visual strategies according to their level of fatigue. Comprehension is assessed with a comprehension test; we collect measures of attitude change for persuasion and measures of recall and recognition at various points in time for memorization.

Once the effect of fatigue has been established across the student population, it is of interest to account for individual differences in fatigue severity and perception. We therefore ran Study 2, which is similar to the previous one except for the design: time of day is now within-subjects and complexity becomes between-subjects.
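
The abstract does not specify how the 2 (between) x 3 (within) design is analysed. One hedged way to handle such a design is a mixed model with a random intercept per participant; the Python sketch below shows this on simulated data with hypothetical variable names, as an illustration of the design logic rather than the authors' actual analysis.

```python
# Sketch only (simulated data, hypothetical variables): analysing a
# 2 (fatigue, between-subjects) x 3 (ad complexity, within-subjects)
# design with a random intercept per participant.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for pid in range(36):
    fatigue = "high" if pid % 2 else "low"
    subject_shift = rng.normal(0, 1)          # participant-level random effect
    for complexity in ("low", "high", "extreme"):
        score = (5
                 + (1.0 if fatigue == "low" else 0.0)
                 - (1.5 if fatigue == "high" and complexity == "extreme" else 0.0)
                 + subject_shift + rng.normal(0, 1))
        rows.append({"participant": pid, "fatigue": fatigue,
                     "complexity": complexity, "comprehension": score})

df = pd.DataFrame(rows)
model = smf.mixedlm("comprehension ~ fatigue * complexity",
                    data=df, groups=df["participant"]).fit()
print(model.summary())
```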