954 results for Exceed
Abstract:
Previous research suggests that changing consumer and producer knowledge structures play a role in market evolution and that the sociocognitive processes of product markets are revealed in the sensemaking stories of market actors that are rebroadcast in commercial publications. In this article, the authors lend further support to the story-based nature of market sensemaking and the use of the sociocognitive approach in explaining the evolution of high-technology markets. They examine the content (i.e., subject matter or topic) and volume (i.e., the number) of market stories and the extent to which the content and volume of market stories evolve as a technology emerges. Data were obtained from a content analysis of 10,412 article abstracts, published in key trade journals, pertaining to Local Area Network (LAN) technologies and spanning the period 1981 to 2000. Hypotheses concerning the evolving nature (content and volume) of market stories in technology evolution are tested. The analysis identified four categories of market stories: technical, product availability, product adoption, and product discontinuation. The findings show that the emerging technology initially passes through a 'technical-intensive' phase, in which technology-related stories dominate; then through a 'supply-push' phase, in which stories presenting products embracing the technology tend to exceed technical stories while the number of product adoption stories rises; and finally to a 'product-focus' phase, with stories predominantly focusing on product availability. Overall story volume declines as a technology matures and the need for sensemaking diminishes. When stories about product discontinuation surface, they signal the decline of the current technology. New technologies that fail to maintain the 'product-focus' stage also reflect limited market acceptance. The article also discusses the theoretical and managerial implications of the study's findings. © 2002 Elsevier Science Inc. 
All rights reserved.
Abstract:
Purpose: To determine the validity of covering a corneal contact transducer probe with cling film as protection against the transmission of Creutzfeldt-Jakob disease (CJD). Methods: The anterior chamber depth, lens thickness and vitreous chamber depth of the right eyes of 10 subjects were recorded, under cycloplegia, with and without cling film covering the transducer probe of a Storz Omega Compu-scan Biometric Ruler. Measurements were repeated on two occasions. Results: Cling film covering did not influence bias or repeatability. Although the 95% limits of agreement between measurements made with and without cling film covering tended to exceed the intrasessional repeatability, they did not exceed the intersessional repeatability of measurements taken without cling film. Conclusions: The results support the use of cling film as a disposable covering for corneal contact A-scan ultrasonography to avoid the risk of spreading CJD from one subject to another. © 2003 The College of Optometrists.
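The "bias" and "95% limits of agreement" referred to in the Results are the Bland-Altman statistics for paired measurements. A minimal sketch of their computation, using made-up paired readings rather than the study's data:

```python
import numpy as np

# Hypothetical paired axial measurements (mm) with and without cling film;
# the values are illustrative only, not the study's data.
without_film = np.array([23.1, 23.4, 22.9, 23.8, 23.2, 23.6, 23.0, 23.5, 23.3, 23.7])
with_film    = np.array([23.2, 23.3, 23.0, 23.7, 23.3, 23.5, 23.1, 23.6, 23.2, 23.8])

diff = with_film - without_film
bias = diff.mean()                          # systematic offset introduced by the film
loa_low  = bias - 1.96 * diff.std(ddof=1)   # lower 95% limit of agreement
loa_high = bias + 1.96 * diff.std(ddof=1)   # upper 95% limit of agreement
print(f"bias = {bias:.3f} mm, 95% LoA = ({loa_low:.3f}, {loa_high:.3f}) mm")
```

The limits of agreement are then compared against the repeatability coefficients of the instrument, as done in the study.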
Abstract:
A local area network that can support both voice and data packets offers economic advantages: only a single network is needed for both types of traffic, there is greater flexibility to meet changing user demands, and efficient use can be made of the transmission capacity. The latter aspect is very important in local broadcast networks, such as mobile radio, where capacity is a scarce resource. This research has examined two types of local broadcast network: the Ethernet-type bus local area network and a mobile radio network with a central base station. With such contention networks, medium access control (MAC) protocols are required to gain access to the channel. MAC protocols must provide efficient scheduling of the channel among the distributed population of stations that want to transmit. No access scheme can exceed the performance of a single server queue, because of the spatial distribution of the stations: stations cannot in general form a queue without using part of the channel capacity to exchange protocol information. In this research, several medium access protocols have been examined and developed in order to increase the channel throughput compared to existing protocols. However, the established performance measures of average packet time delay and throughput cannot adequately characterise protocol performance for packet voice; rather, the percentage of bits delivered within a given time bound becomes the relevant performance measure. Performance evaluation of the protocols has been carried out using discrete event simulation and, in some cases, also by mathematical modelling. All the protocols use either implicit or explicit reservation schemes, and their efficiency depends on the fact that many voice packets are generated periodically within a talkspurt. Two of the protocols are based on the existing 'Reservation Virtual Time CSMA/CD' protocol, which forms a distributed queue through implicit reservations. 
This protocol has been improved first by utilising two channels, a packet transmission channel and a packet contention channel; packet contention is then performed in parallel with a packet transmission to increase throughput. The second protocol uses variable length packets to reduce the contention time between transmissions on a single channel. A third protocol, based on contention for explicit reservations, was also developed. Once a station has achieved a reservation, it maintains this effective queue position for the remainder of the talkspurt and transmits after it has sensed the transmission from the preceding station within the queue. In the mobile radio environment, adaptations to the protocols were necessary so that their operation was robust to signal fading. This was achieved through centralised control at a base station, unlike the local area network versions, where control was distributed among the stations. The results show an improvement in throughput compared to some previous protocols. Further work includes subjective testing to validate the protocols' effectiveness.
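The deadline-based performance measure the abstract argues for (the fraction of traffic delivered within a time bound) can be illustrated with a toy single-server FIFO queue, the benchmark that, as stated above, no access scheme can beat. All parameter values here are illustrative, not taken from the thesis:

```python
import random

def fraction_within_bound(arrival_rate, service_time, bound, n_packets=20000, seed=1):
    """Single-server FIFO queue with Poisson arrivals and a fixed service time:
    returns the fraction of packets whose total delay (wait + service) meets `bound`."""
    rng = random.Random(seed)
    t = 0.0        # arrival clock
    free_at = 0.0  # time at which the server next becomes idle
    ok = 0
    for _ in range(n_packets):
        t += rng.expovariate(arrival_rate)   # Poisson arrival process
        depart = max(t, free_at) + service_time
        free_at = depart
        if depart - t <= bound:
            ok += 1
    return ok / n_packets

# At 50% load, nearly all packets meet a generous bound, while a bound equal
# to the service time is met only by packets that never wait at all.
print(fraction_within_bound(0.5, 1.0, 5.0))
print(fraction_within_bound(0.5, 1.0, 1.0))
```

A MAC protocol's deadline performance can be compared against this single-queue reference in the same way.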
Abstract:
Spread spectrum systems make use of radio frequency bandwidths which far exceed the minimum bandwidth necessary to transmit the basic message information. These systems are designed to provide satisfactory communication of the message information under difficult transmission conditions. Frequency-hopped multilevel frequency shift keying (FH-MFSK) is one of the many techniques used in spread spectrum systems. It is a combination of frequency hopping and time hopping. In this system many users share a common frequency band using code division multiplexing. Each user is assigned an address, and the message is modulated onto the address. The receiver, knowing the address, decodes the received signal and extracts the message. This technique has been suggested for digital mobile telephony. This thesis is concerned with an investigation of the possibility of utilising FH-MFSK for data transmission corrupted by additive white Gaussian noise (AWGN). Work related to FH-MFSK has so far been mostly confined to its validity, and its performance in the presence of AWGN has not been reported before. An experimental system was therefore constructed which utilised combined hardware and software and operated under the supervision of a microprocessor system. The experimental system was used to develop an error-rate model for the system under investigation. The performance of FH-MFSK for data transmission was established in the presence of AWGN and with deleted and delayed sample effects. Its capability for multiuser applications was determined theoretically. The results show that FH-MFSK is a suitable technique for data transmission in the presence of AWGN.
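The address modulation described above can be sketched as follows, assuming the common FH-MFSK construction in which the message symbol is added modulo q to each chip of the user's hopping address, and the receiver subtracts the address and majority-votes across chips. The alphabet size, address, and chip count are invented for illustration:

```python
from collections import Counter

Q = 16                                  # number of frequency slots (MFSK alphabet size)
ADDRESS = [3, 7, 1, 12, 9, 0, 5, 14]    # one user's hopping address, L = 8 chips

def transmit(message_symbol):
    """Each chip's transmitted frequency slot is the address chip plus the message, mod Q."""
    return [(a + message_symbol) % Q for a in ADDRESS]

def receive(chips):
    """Remove the address from each chip and majority-vote to tolerate corrupted chips."""
    votes = Counter((y - a) % Q for y, a in zip(chips, ADDRESS))
    return votes.most_common(1)[0][0]
```

Because every uncorrupted chip votes for the true symbol, a few chips hit by noise or by other users' transmissions do not change the decoded message.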
Abstract:
Objective: This study aimed to explore methods of assessing interactions between neuronal sources using MEG beamformers. However, beamformer methodology is based on the assumption of no linear long-term source interdependencies [VanVeen BD, vanDrongelen W, Yuchtman M, Suzuki A. Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Trans Biomed Eng 1997;44:867-80; Robinson SE, Vrba J. Functional neuroimaging by synthetic aperture magnetometry (SAM). In: Recent advances in Biomagnetism. Sendai: Tohoku University Press; 1999. p. 302-5]. Although such long-term correlations are not efficient and should not be anticipated in a healthy brain [Friston KJ. The labile brain. I. Neuronal transients and nonlinear coupling. Philos Trans R Soc Lond B Biol Sci 2000;355:215-36], transient correlations seem to underlie functional cortical coordination [Singer W. Neuronal synchrony: a versatile code for the definition of relations? Neuron 1999;49-65; Rodriguez E, George N, Lachaux J, Martinerie J, Renault B, Varela F. Perception's shadow: long-distance synchronization of human brain activity. Nature 1999;397:430-3; Bressler SL, Kelso J. Cortical coordination dynamics and cognition. Trends Cogn Sci 2001;5:26-36]. Methods: Two periodic sources were simulated and the effects of transient source correlation on the spatial and temporal performance of the MEG beamformer were examined. Subsequently, the interdependencies of the reconstructed sources were investigated using coherence and phase synchronization analysis based on Mutual Information. Finally, two interacting nonlinear systems served as neuronal sources and their phase interdependencies were studied under realistic measurement conditions. Results: Both the spatial and the temporal beamformer source reconstructions were accurate as long as the transient source correlation did not exceed 30-40 percent of the duration of beamformer analysis. 
In addition, the interdependencies of periodic sources were preserved by the beamformer and phase synchronization of interacting nonlinear sources could be detected. Conclusions: MEG beamformer methods in conjunction with analysis of source interdependencies could provide accurate spatial and temporal descriptions of interactions between linear and nonlinear neuronal sources. Significance: The proposed methods can be used for the study of interactions between neuronal sources. © 2005 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
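Phase synchronization between two reconstructed source time courses, as studied above, is commonly quantified with a phase-locking value computed from Hilbert phases. The sketch below illustrates that class of measure with synthetic signals; it is not the paper's exact Mutual-Information-based estimator, and all signals are invented:

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (the same construction as scipy.signal.hilbert)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0          # n is assumed even here
    return np.fft.ifft(np.fft.fft(x) * h)

def phase_locking_value(x, y):
    """|mean(exp(i(phi_x - phi_y)))|: 1 = perfectly phase-locked, near 0 = unrelated."""
    dphi = np.angle(analytic(x)) - np.angle(analytic(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

t = np.linspace(0, 2, 2000, endpoint=False)
s1 = np.sin(2 * np.pi * 10 * t)             # 10 Hz source
s2 = np.sin(2 * np.pi * 10 * t + 0.8)       # same frequency, constant phase lag
noise = np.random.default_rng(0).standard_normal(t.size)
print(phase_locking_value(s1, s2))          # phase-locked pair
print(phase_locking_value(s1, noise))       # unrelated pair
```

Applied to beamformer outputs, a high value between two virtual sensors indicates transient phase coupling of the underlying sources.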
Abstract:
The work described in this thesis is the development of an ultrasonic tomogram to provide outlines of cross-sections of the ulna in vivo. This instrument, used in conjunction with X-ray densitometry previously developed in this department, would provide actual bone mineral density to a high resolution. It was hoped that the accuracy of the plot obtained from the tomogram would exceed that of existing ultrasonic techniques by about five times. Repeat measurements with these instruments to follow bone mineral changes would involve very low X-ray doses. A theoretical study has been made of acoustic diffraction, using a geometrical transform applicable to the integration of three different Green's functions, for axisymmetric systems. This has involved the derivation of one of these in a form amenable to computation. It is considered that this function fits the boundary conditions occurring in medical ultrasonography more closely than those used previously. A three dimensional plot of the pressure field using this function has been made for a ring transducer, in addition to that for disc transducers using all three functions. It has been shown how the theory may be extended to investigate the nature and magnitude of the particle velocity, at any point in the field, for the three functions mentioned. From this study, a concept of diffraction fronts has been developed, which has made it possible to determine energy flow also in a diffracting system. Intensity has been displayed in a manner similar to that used for pressure. Plots have been made of diffraction fronts and energy flow direction lines.
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable, or independent, variables in the cost expressions which are minimised. The four systems considered are referred to as (Q, R), (nQ, R, T), (M, T) and (M, R, T). With (Q, R), a fixed quantity Q is ordered each time the order cover (i.e., stock in hand plus on order) equals or falls below R, the re-order level. With the other three systems, reviews are made only at intervals of T. With (nQ, R, T), an order for nQ is placed if on review the order cover is less than or equal to R, where n, an integer, is chosen at the time so that the new order cover just exceeds R. In (M, T), each order increases the order cover to M. Finally, in (M, R, T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q, R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models could be compared with the increases in computational costs. Since the exact model was preferable for the (Q, R) system, only exact models were derived for the other three systems. Several methods of optimization were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. The shortage cost is a function of three items, one of which, the backorder cost, may be a linear, quadratic or exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma distributed. Lastly, the actual supply quantity is allowed to be distributed. 
All the sets of equations were programmed for a KDF 9 computer and the computed performances of the four inventory control procedures are compared under each assumption.
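The ordering rules of the four systems can be sketched directly from the definitions above. Variable names are invented, and "just exceeds R" is read as a strict inequality:

```python
import math

# `cover` is the order cover: stock in hand plus stock on order.
# R = re-order level, Q = fixed lot size, M = order-up-to level.
# Each function returns the quantity to order at a decision epoch.

def order_QR(cover, Q, R):
    """(Q, R): continuous review; order Q whenever cover <= R."""
    return Q if cover <= R else 0

def order_nQRT(cover, Q, R):
    """(nQ, R, T): periodic review; order the smallest integer multiple of Q
    that lifts cover strictly above R, if cover <= R."""
    if cover > R:
        return 0
    n = math.floor((R - cover) / Q) + 1
    return n * Q

def order_MT(cover, M):
    """(M, T): periodic review; every review raises cover to M."""
    return max(M - cover, 0)

def order_MRT(cover, M, R):
    """(M, R, T): periodic review; raise cover to M only if cover <= R."""
    return M - cover if cover <= R else 0
```

For example, under (nQ, R, T) with Q = 20 and R = 50, a review finding cover = 10 orders 3Q = 60, bringing the cover to 70, the smallest multiple of Q that exceeds R.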
Abstract:
The civil engineering industry generally regards new methods and technology with considerable scepticism, preferring to use traditional and trusted methods. During the 1980s competition for civil engineering consultancy work in the world became fierce. Halcrow recognised the need to maintain and improve their competitive edge over other consultants, and the use of new technology in the form of microcomputers was seen as one method of maintaining and improving their reputation in the world. This thesis examines the role of microcomputers in civil engineering consultancy, with particular reference to overseas projects. The involvement of civil engineers with computers, both past and present, has been investigated, and a survey of the use of microcomputers by consultancies was carried out; the results are presented and analysed. A résumé of the state of the art of microcomputer technology was made. Various case studies were carried out in order to examine the feasibility of using microcomputers on overseas projects. One case study involved the examination of two projects in Bangladesh and is used to illustrate the requirements and problems encountered in such situations. Two programming applications were undertaken: a dynamic programming model of a single-site reservoir and a simulation of the Bangladesh gas grid system. A cost-benefit analysis of a water resources project using microcomputers in the Aguan Valley, Honduras was carried out. Although the initial cost of microcomputers is often small, the overall costs can prove to be very high and are likely to exceed the costs of traditional computer methods. A planned approach to the use of microcomputers is essential in order to reap the expected benefits, and recommendations for the implementation of such an approach are presented.
Abstract:
The occurrence of spalling is a major factor in determining the fire resistance of concrete constructions. The apparently random occurrence of spalling has limited the development and application of fire resistance modelling for concrete structures. This thesis describes an experimental investigation into the spalling of concrete on exposure to elevated temperatures. It has been shown that spalling may be categorised into four distinct types: aggregate spalling, corner spalling, surface spalling and explosive spalling. Aggregate spalling has been found to be a form of shear failure of aggregates local to the heated surface. The susceptibility of any particular concrete to aggregate spalling can be quantified from parameters which include the coefficients of thermal expansion of both the aggregate and the surrounding mortar, the size and thermal diffusivity of the aggregate and the rate of heating. Corner spalling, which is particularly significant for the fire resistance of concrete columns, is a result of concrete losing its tensile strength at elevated temperatures. Surface spalling is the result of excessive pore pressures within heated concrete. An empirical model has been developed to allow quantification of the pore pressures, and a material failure model is proposed. The dominant parameters are rate of heating, pore saturation and concrete permeability. Surface spalling may be alleviated by limiting pore pressure development, and a number of methods to this end have been evaluated. Explosive spalling involves the catastrophic failure of a concrete element and may be caused by either of two distinct mechanisms. In the first instance, excessive pore pressures can cause explosive spalling, although the effect is limited principally to unloaded or relatively small specimens. A second cause of explosive spalling is where the superimposition of thermally induced stresses on applied load stresses exceeds the concrete's strength.
Abstract:
This thesis first considers the calibration and signal processing requirements of a neuromagnetometer for the measurement of human visual function. Gradiometer calibration using straight wire grids is examined and optimal grid configurations determined, given realistic constructional tolerances. Simulations show that for gradiometer balance of 1:10⁴ and wire spacing error of 0.25 mm the achievable calibration accuracy of gain is 0.3%, of position is 0.3 mm and of orientation is 0.6°. Practical results with a 19-channel 2nd-order gradiometer-based system exceed this performance. The real-time application of adaptive reference noise cancellation filtering to running-average evoked response data is examined. In the steady state, the filter can be assumed to be driven by a non-stationary step input arising at epoch boundaries. Based on empirical measures of this driving step an optimal progression for the filter time constant is proposed which improves upon fixed time constant filter performance. The incorporation of the time-derivatives of the reference channels was found to improve the performance of the adaptive filtering algorithm by 15-20% for unaveraged data, falling to 5% with averaging. The thesis concludes with a neuromagnetic investigation of evoked cortical responses to chromatic and luminance grating stimuli. The global magnetic field power of evoked responses to the onset of sinusoidal gratings was shown to have distinct chromatic and luminance sensitive components. Analysis of the results, using a single equivalent current dipole model, shows that these components arise from activity within two distinct cortical locations. Co-registration of the resulting current source localisations with MRI shows a chromatically responsive area lying along the midline within the calcarine fissure, possibly extending onto the lingual and cuneal gyri. It is postulated that this area is the human homologue of the primate cortical area V4.
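Adaptive reference noise cancellation of the kind discussed above is typically implemented with an LMS filter that predicts the noise in the signal channel from a reference channel and subtracts the prediction. The following is a generic single-reference sketch with invented signals, not the thesis's multichannel implementation, and it omits the reference-channel time-derivative terms:

```python
import numpy as np

def lms_cancel(primary, reference, mu=0.01, n_taps=4):
    """Adaptive noise cancellation with the LMS rule: an FIR filter learns the
    path from the reference channel to the noise embedded in `primary`, and the
    prediction error is the cleaned signal."""
    w = np.zeros(n_taps)
    padded = np.concatenate([np.zeros(n_taps - 1), reference])
    cleaned = np.empty(len(primary))
    for i in range(len(primary)):
        x = padded[i:i + n_taps][::-1]   # newest reference sample first
        e = primary[i] - w @ x           # error = estimate of the clean signal
        w += 2.0 * mu * e * x            # LMS weight update
        cleaned[i] = e
    return cleaned

rng = np.random.default_rng(0)
n = 5000
signal = np.sin(2 * np.pi * np.arange(n) / 80)       # stand-in for an evoked response
reference = rng.standard_normal(n)                   # reference noise channel
noise = np.convolve(reference, [0.5, -0.3])[:n]      # noise leaking into the signal channel
primary = signal + noise
cleaned = lms_cancel(primary, reference)
```

After the weights converge, the residual approximates the evoked signal; the time constant of this convergence is what the thesis's optimal progression schedule controls.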
Abstract:
Cadmium has been widely used in various industries for the past fifty years, with current world production standing at around 16,755 tonnes per year. Very little cadmium is ever recycled and the ultimate fate of all cadmium is the environment. In view of reports that cadmium in the environment is increasing, this thesis aims to identify population groups 'at risk' of receiving dietary intakes of cadmium up to or above the current Food and Agricultural Organisation/World Health Organisation maximum tolerable intake of 70 µg/day. The study involves the investigation of one hundred households (260 individuals) who grow a large proportion of their vegetable diet in garden soils in the Borough of Walsall, part of an urban/industrial area in the United Kingdom. Measurements were made of the cadmium levels in atmospheric deposition, soil, house dust, diet and urine from the participants. Atmospheric deposition of cadmium was found to be comparable with other urban/industrial areas in the European Community, with deposition rates as high as 209 g ha⁻¹ yr⁻¹. The garden soils of the study households were found to contain up to 33 mg kg⁻¹ total cadmium, eleven times the highest level usually found in agricultural soils. Dietary intakes of cadmium by the residents from food were calculated to be as high as 68 µg/day. It is suggested that with intakes from other sources, such as air, adventitious ingestion, smoking and occupational exposure, total intakes of cadmium may reach or exceed the FAO/WHO limit. Urinary excretion of cadmium amongst a non-smoking, non-occupationally exposed sub-group of the study population was found to be significantly higher than that of a similar urban population who did not rely on home-produced vegetables. The results from this research indicate that present levels of cadmium in urban/industrial areas can increase dietary intakes and body burdens of cadmium. 
As cadmium serves no useful biological function and has been found to be highly toxic, it is recommended that policy measures to reduce human exposure on the European scale be considered.
Abstract:
The conventional design of forming rolls depends heavily on the individual skill of roll designers, which is based on intuition and knowledge gained from previous work. Roll design is normally a trial-and-error procedure; however, with the progress of computer technology, CAD/CAM systems for the cold roll-forming industry have been developed. Generally, however, these CAD systems can only provide a flower pattern based on the knowledge obtained from previously successful flower patterns. In the production of ERW (Electric Resistance Welded) tube and pipe, the need for a theoretical simulation of the roll-forming process, which can not only predict the occurrence of edge buckling but also determine the optimum forming condition, has been recognised. A new simulation system named "CADFORM" has been devised that can carry out a consistent forming simulation for this tube-making process. The CADFORM system applies an elastic-plastic stress-strain analysis and evaluates edge buckling by using a simplified model of the forming process. The results can also be visualised graphically. The calculated longitudinal strain is obtained by considering the deformation of lateral elements and takes into account the reduction in strains due to the fin-pass roll. These calculated strains correspond quite well with the experimental results. Using the calculated strains, the stresses in the strip can be estimated. The addition of the fin-pass roll reduction significantly reduces the longitudinal compressive stress and therefore effectively suppresses edge buckling. If the calculated longitudinal stress is controlled, by altering the forming flower pattern so that it does not exceed the buckling stress of the material, then the occurrence of edge buckling can be avoided. 
CADFORM predicts the occurrence of edge buckling of the strip in tube-making and uses this information to suggest an appropriate flower pattern and forming conditions which will suppress the occurrence of the edge buckling.
Abstract:
Huge advertising budgets are invested by firms to reach and convince potential consumers to buy their products. To optimize these investments, it is fundamental not only to ensure that appropriate consumers will be reached, but also that they will be in appropriate reception conditions. Marketing research has focused on the way consumers react to advertising, as well as on some individual and contextual factors that could mediate or moderate the ad impact on consumers (e.g. motivation and ability to process information, or attitudes toward advertising). Nevertheless, a factor that potentially influences consumers' advertising reactions has not yet been studied in marketing research: fatigue. Yet fatigue can impact key variables of advertising processing, such as the availability of cognitive resources (Lieury 2004). Fatigue is felt when the body warns us to stop an activity (or inactivity) and rest, allowing the individual to compensate for fatigue effects. Dittner et al. (2004) define it as "the state of weariness following a period of exertion, mental or physical, characterized by a decreased capacity for work and reduced efficiency to respond to stimuli." It signals that resources will run short if the ongoing activity continues. According to Schmidtke (1969), fatigue leads to impairments in information reception, perception, coordination, attention, concentration and thinking. In addition, for Markle (1984), fatigue produces decreases in memory and communication ability, whereas it increases reaction time and the number of errors. Thus, fatigue may have large effects on advertising processing. We suggest that fatigue determines the level of available resources. Some research on consumer responses to advertising claims that complexity is a fundamental element to take into consideration. Complexity determines the cognitive effort the consumer must expend to understand the message (Putrevu et al. 2004). 
Thus, we suggest that complexity determines the level of required resources. To study this question of the need for and provision of cognitive resources, we draw upon Resource Matching Theory. Anand and Sternthal (1989, 1990) were the first to state the resource matching principle, which holds that an ad is most persuasive when the resources required to process it match the resources the viewer is willing and able to provide. They show that when the required resources exceed those available, the message is not entirely processed by the consumer, and that when available resources far exceed those required, the viewer elaborates critical or unrelated thoughts. According to Resource Matching Theory, the level of resources demanded by an ad can be high or low, and is mostly determined by the ad's layout (Peracchio and Meyers-Levy, 1997). We manipulate the level of required resources using three levels of ad complexity (low, high, extremely high). On the other hand, the resource availability of an ad viewer is determined by many contextual and individual variables. We manipulate the level of available resources using two levels of fatigue (low, high). Tired viewers want to limit processing effort to minimal resource requirements by relying on heuristics, forming an overall impression at first glance; it will be easier for them to decode the message when ads are very simple. On the contrary, the most effective ads for viewers who are not tired are complex enough to draw their attention and fully use their resources; such viewers use more analytical strategies, looking at the details of the ad. However, if ads are too complex, they will be too difficult to understand: the viewer will be discouraged from processing the information and will overlook the ad. The objective of our research is to study fatigue as a moderating variable of advertising information processing. 
We ran two experimental studies to assess the effect of fatigue on visual strategies, comprehension, persuasion and memorization. In study 1, thirty-five undergraduate students enrolled in a marketing research course participated in the experiment. The experimental design is 2 (tiredness level: between subjects) x 3 (ad complexity level: within subjects). Participants were randomly assigned a schedule time (morning: 8-10 am or evening: 10-12 pm) to perform the experiment. We chose to test subjects at various moments of the day to obtain maximum variance in their fatigue level, and we use the Morningness/Eveningness tendency of participants (Horne & Ostberg, 1976) as a control variable. We assess fatigue level using subjective measures (a questionnaire with fatigue scales) and objective measures (reaction time and number of errors). Regarding complexity levels, we designed our own ads in order to keep aspects other than complexity equal. We ran a pretest using the Resource Demands scale (Keller and Bloch 1997) and rated the ads on complexity following Morrison and Dainoff (1972) to check our complexity manipulation; we found three significantly different levels. After completing the fatigue scales, participants are asked to view the ads on a screen while their eye movements are recorded by an eye-tracker. Eye-tracking allows us to identify patterns of visual attention (Pieters and Warlop 1999), so we can infer respondents' specific visual strategies according to their level of fatigue. Comprehension is assessed with a comprehension test. We collect measures of attitude change for persuasion, and measures of recall and recognition at various points in time for memorization. Once the effect of fatigue has been determined across the student population, it is worth accounting for individual differences in fatigue severity and perception. 
Therefore, we ran study 2, which is similar to the first except for the design: time of day is now within-subjects and complexity becomes between-subjects.
Abstract:
When two solutions differing in solute concentration are separated by a porous membrane, the osmotic pressure will generate a net volume flux of the suspending fluid across the membrane; this is termed osmotic flow. We consider the osmotic flow across a membrane with circular cylindrical pores when the solute and the pore walls are electrically charged, and the suspending fluid is an electrolytic solution containing small cations and anions. Under the condition in which the radius of the pores and that of the solute molecules greatly exceed those of the solvent as well as the ions, a fluid mechanical and electrostatic theory is introduced to describe the osmotic flow in the presence of electric charge. The interaction energy, including the electrostatic interaction between the solute and the pore wall, plays a key role in determining the osmotic flow. We examine the electrostatic effect on the osmotic flow and discuss the difference in the interaction energy determined from the nonlinear Poisson-Boltzmann equation and from its linearized equation (the Debye-Hückel equation).
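The linearization contrasted at the end of the abstract can be made explicit. For a symmetric z:z electrolyte of bulk ion number density n₀ in a solvent of permittivity ε, the electrostatic potential ψ satisfies the nonlinear Poisson-Boltzmann equation, and the Debye-Hückel equation follows by assuming zeψ ≪ k_BT. This is standard electrolyte theory, not notation taken from the paper itself:

```latex
\nabla^{2}\psi \;=\; \frac{2 z e n_{0}}{\varepsilon}\,
  \sinh\!\left(\frac{z e \psi}{k_{B} T}\right)
\qquad\xrightarrow{\;z e \psi \,\ll\, k_{B} T\;}\qquad
\nabla^{2}\psi \;=\; \kappa^{2}\psi,
\qquad
\kappa^{2} \;=\; \frac{2 z^{2} e^{2} n_{0}}{\varepsilon k_{B} T}
```

Here κ⁻¹ is the Debye screening length; the difference between the two forms of the solute-wall interaction energy discussed above comes from keeping or truncating the sinh term.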
Abstract:
Biorefineries are expected to play a major role in a future low carbon economy and substantial investments are being made to support this vision. However, it is important to consider the wider socio-economic impacts of such a transition. This paper quantifies the potential trade, employment and land impacts of economically viable European biorefinery options based on indigenous straw and wood feedstocks. It illustrates how there could be potential for 70-80 European biorefineries, but not hundreds. A single facility could generate tens of thousands of man-years of employment, and employment creation per unit of feedstock is higher than for biomass power plants. However, the contribution to national GDP is unlikely to exceed 1% in European member states, although contributions to national agricultural productivity may be more significant, particularly with straw feedstocks. There is also a risk that biorefinery development could result in reduced rates of straw incorporation into soil, raising concerns that economically rational decisions to sell rather than reincorporate straw could result in increased agricultural land use or greenhouse gas emissions. © 2013.