854 results for Simulator of Performance in Error
Abstract:
Impairment of cognitive performance during and after high-altitude climbing has been described in numerous studies and has mostly been attributed to cerebral hypoxia and the resulting functional and structural cerebral alterations. To investigate the hypothesis that high-altitude climbing leads to cognitive impairment, we used neuropsychological tests and measurements of eye movement (EM) performance under different stimulus conditions. The study was conducted in 32 mountaineers participating in an expedition to Muztagh Ata (7,546 m). The neuropsychological tests comprised figural fluency, line bisection, letter and number cancellation, and a modified pegboard task. Saccadic performance was evaluated under three stimulus conditions with varying degrees of cortical involvement: visually guided pro- and anti-saccades, and visuo-visual interaction. Typical saccade parameters (latency, mean sequence, post-saccadic stability, and error rate) were computed off-line. Measurements were taken at a baseline altitude of 440 m, at altitudes of 4,497 m, 5,533 m, and 6,265 m, and again at 440 m. All subjects reached 5,533 m, and 28 reached 6,265 m. The neuropsychological test results did not reveal any cognitive impairment. Complete eye movement recordings for all stimulus conditions were obtained in 24 subjects at baseline and at least two altitudes, and in 10 subjects at baseline and all altitudes. Measurements of saccade performance showed no dependence on any altitude-related parameter and were well within normal limits. Our data indicate that acclimatized climbers do not seem to suffer significant cognitive deficits during or after climbs to altitudes above 7,500 m. We also demonstrated that investigation of EMs is feasible during high-altitude expeditions.
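The saccade parameters above were computed off-line; a common way to extract latency is a velocity-threshold criterion applied to the eye-position trace. The sketch below is a minimal illustration under assumed parameters (1 kHz sampling, 30 deg/s onset threshold, a synthetic trace); the study's actual algorithm and thresholds are not given in the abstract.

```python
# Minimal sketch of off-line saccade latency extraction via a velocity
# threshold. Sampling rate, threshold, and the synthetic trace are
# illustrative assumptions, not the study's actual parameters.

FS = 1000.0        # sampling rate in Hz (assumed)
THRESHOLD = 30.0   # saccade onset threshold in deg/s (assumed)

def saccade_latency(position, stimulus_index):
    """Return latency in seconds from stimulus onset to saccade onset."""
    for i in range(stimulus_index, len(position) - 1):
        velocity = (position[i + 1] - position[i]) * FS  # deg/s
        if abs(velocity) > THRESHOLD:
            return (i - stimulus_index) / FS
    return None  # no saccade detected

# Synthetic trace: eye still for 200 ms after the stimulus, then a
# 100 deg/s movement toward the target.
trace = [0.0] * 200 + [0.1 * k for k in range(1, 101)]
print(saccade_latency(trace, 0))  # ~0.2 s for this synthetic trace
```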
Abstract:
OBJECTIVE To test whether sleep-deprived, healthy subjects who do not always signal spontaneously perceived sleepiness (SPS) before falling asleep during the Maintenance of Wakefulness Test (MWT) would do so in a driving simulator. METHODS Twenty-four healthy subjects (20-26 years old) underwent an MWT for 40 min and a driving simulator test for 1 h, before and after one night of sleep deprivation. Standard electroencephalography, electrooculography, submental electromyography, and face videography were recorded simultaneously to score wakefulness and sleep. Subjects were instructed to signal SPS as soon as they subjectively felt sleepy and to try to stay awake for as long as possible in every test. They were rewarded both for "appropriate" perception of SPS and for staying awake for as long as possible. RESULTS After sleep deprivation, seven subjects (29%) did not signal SPS before falling asleep in the MWT, but all subjects signalled SPS before falling asleep in the driving simulator (p < 0.004). CONCLUSIONS The previous finding of an "inaccurate" SPS in the MWT was confirmed, and a perfect SPS was shown in the driving simulator. It is hypothesised that SPS is more accurate for tasks involving continuous feedback of performance, such as driving, than for the less active situation of the MWT. Spontaneously perceived sleepiness in the MWT cannot be used to judge sleepiness perception while driving. Further studies are needed to define the accuracy of SPS in working tasks or occupations with minimal or no performance feedback.
Abstract:
Because the metro network market is very cost sensitive, directly modulated schemes appear attractive. In this paper, a CWDM (Coarse Wavelength Division Multiplexing) system is studied in detail by means of optical communication system design software; a detailed study of the modulated current shape (exponential, sine, and Gaussian) for 2.5 Gb/s CWDM Metropolitan Area Networks is performed to evaluate its tolerance to linear impairments such as signal-to-noise-ratio degradation and dispersion. Point-to-point links are investigated and optimum design parameters are obtained. Through extensive sets of simulation results, it is shown that some of these pulse shapes are more tolerant to dispersion than conventional Gaussian pulses. In order to achieve a low Bit Error Rate (BER), different types of optical transmitters are considered, including strongly adiabatic and transient chirp dominated Directly Modulated Lasers (DMLs). We have used fibers with different dispersion characteristics, showing that system performance depends strongly on the chosen DML/fiber pair.
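Tolerance to impairments in such links is commonly summarized by the Q-factor, from which the BER can be estimated under a Gaussian noise assumption. The snippet below is a generic sketch of that standard relation, BER = 0.5 * erfc(Q / sqrt(2)); it is not taken from the paper's simulation software.

```python
import math

def ber_from_q(q):
    """Estimate BER from the Q-factor under a Gaussian noise assumption."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# Q = 6 corresponds to the classic BER ~ 1e-9 benchmark:
print(ber_from_q(6.0))  # ~1e-9
```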
Abstract:
The performance of Gallager's error-correcting code is investigated via methods of statistical physics. In this method, the transmitted codeword comprises products of the original message bits selected by two randomly constructed sparse matrices; the number of non-zero row/column elements in these matrices defines a family of codes. We show that Shannon's channel capacity is saturated for many of the codes, while slightly lower performance is obtained for others, which may be of higher practical relevance. Decoding aspects are considered by employing the TAP approach, which is identical to the commonly used belief-propagation-based decoding.
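Gallager codes are decoded from the syndrome of the received word; belief propagation does this iteratively on large sparse matrices. As a minimal classical illustration of the syndrome principle only (not the paper's construction), the sketch below corrects a single bit error using a tiny Hamming(7,4) parity-check matrix.

```python
# Minimal illustration of syndrome decoding for a linear block code.
# Gallager's codes use large sparse H matrices and iterative belief
# propagation; here a small Hamming(7,4) parity-check matrix shows the
# same syndrome principle on a single-error channel.

H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(r):
    """Compute H.r mod 2 for received word r."""
    return tuple(sum(h * b for h, b in zip(row, r)) % 2 for row in H)

def decode(r):
    """Correct at most one flipped bit by matching the syndrome to a column of H."""
    s = syndrome(r)
    if s == (0, 0, 0):
        return list(r)          # already a valid codeword
    for j in range(7):
        if tuple(row[j] for row in H) == s:
            corrected = list(r)
            corrected[j] ^= 1   # flip the erroneous bit
            return corrected
    return list(r)              # uncorrectable under this simple scheme

# A valid (all-zero) codeword with bit 2 flipped by the channel:
received = [0, 0, 1, 0, 0, 0, 0]
print(decode(received))  # recovers the all-zero codeword
```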
Abstract:
Manufacturing firms are driven by competitive pressures to continually improve the effectiveness and efficiency of their organisations. For this reason, manufacturing engineers often implement changes to existing processes, or design new production facilities, with the expectation of making further gains in manufacturing system performance. This thesis relates to how the likely outcome of this type of decision should be predicted prior to its implementation. The thesis argues that since manufacturing systems must also interact with many other parts of an organisation, the expected performance improvements can often be significantly hampered by constraints that arise elsewhere in the business. As a result, decision-makers should attempt to predict just how well a proposed design will perform when these other factors, or 'support departments', are taken into consideration. However, the thesis also demonstrates that, in practice, where quantitative analysis is used to evaluate design decisions, the analysis model invariably ignores the potential impact of support functions on a system's overall performance. A more comprehensive modelling approach is therefore required. A study of how various business functions interact establishes that to properly represent the kind of delays that give rise to support department constraints, a model should actually portray the dynamic and stochastic behaviour of entities in both the manufacturing and non-manufacturing aspects of a business. This implies that computer simulation should be used to model design decisions, but current simulation software does not provide a sufficient range of functionality to enable the behaviour of all of these entities to be represented in this way. The main objective of the research has therefore been the development of a new simulator that will overcome limitations of existing software and so enable decision-makers to conduct a more holistic evaluation of design decisions.
It is argued that the application of object-oriented techniques offers a potentially better way of fulfilling both the functional and ease-of-use requirements relating to development of the new simulator. An object-oriented analysis and design of the system, called WBS/Office, is therefore presented that extends to modelling a firm's administrative and other support activities in the context of the manufacturing system design process. A particularly novel feature of the design is the ability for decision-makers to model how a firm's specific information and document processing requirements might hamper shop-floor performance. The simulator is primarily intended for modelling make-to-order batch manufacturing systems, and the thesis presents example models, created using a working version of WBS/Office, that demonstrate the feasibility of using the system to analyse manufacturing system designs in this way.
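The thesis's central claim, that support-department delays constrain shop-floor performance, can be illustrated with a toy tandem-queue simulation. The sketch below is a deliberately simplified stand-in (deterministic times, a single clerk and a single machine), not the WBS/Office design: each job must clear an order-processing (support) step before machining.

```python
def simulate(n_jobs, interarrival, support_time, machine_time):
    """Jobs pass through a support step, then a machine; both single-server.
    Returns the completion time of each job (deterministic service times)."""
    support_free = 0.0
    machine_free = 0.0
    completions = []
    for i in range(n_jobs):
        arrival = i * interarrival
        start_support = max(arrival, support_free)
        support_free = start_support + support_time
        start_machine = max(support_free, machine_free)
        machine_free = start_machine + machine_time
        completions.append(machine_free)
    return completions

# The machine alone could keep pace with arrivals (0.8 < 1.0), but a slow
# support step (1.2 > 1.0) becomes the real bottleneck:
with_support = simulate(100, 1.0, 1.2, 0.8)[-1]
no_support = simulate(100, 1.0, 0.0, 0.8)[-1]
print(with_support, no_support)  # ~120.8 vs ~99.8 time units
```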
Abstract:
We demonstrate an accurate BER estimation method for QPSK CO-OFDM transmission based on the probability density function of the received QPSK symbols. Using a 112 Gb/s QPSK CO-OFDM transmission as an example, we show that this method offers the most accurate estimate of the system's performance in comparison with other known approaches.
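The idea of estimating BER from the statistics of the received symbols rather than from direct error counting can be sketched as follows. This is a simplified stand-in, assuming a plain QPSK constellation in AWGN and a per-quadrature Gaussian fit; the paper's CO-OFDM receiver model is more involved.

```python
import math
import random

random.seed(1)

# Transmit QPSK: independent +/-1 levels on the I and Q rails, AWGN added.
N = 20000
SIGMA = 0.6  # assumed noise standard deviation
tx = [(random.choice((-1, 1)), random.choice((-1, 1))) for _ in range(N)]
rx = [(i + random.gauss(0, SIGMA), q + random.gauss(0, SIGMA)) for i, q in tx]

# Fold every received quadrature onto the positive rail using the known
# transmitted sign, then fit a Gaussian (sample mean and std).
folded = []
for (ti, tq), (ri, rq) in zip(tx, rx):
    folded.append(ri * ti)
    folded.append(rq * tq)
mu = sum(folded) / len(folded)
var = sum((x - mu) ** 2 for x in folded) / (len(folded) - 1)

# BER estimate from the fitted PDF: probability mass below the decision
# threshold at zero, compared with the closed-form value.
ber_est = 0.5 * math.erfc(mu / math.sqrt(2.0 * var))
ber_theory = 0.5 * math.erfc(1.0 / (SIGMA * math.sqrt(2.0)))
print(ber_est, ber_theory)
```

With enough symbols the PDF-based estimate tracks the closed-form BER closely, which is the appeal of the approach at BERs too low to count errors directly.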
Abstract:
In the last few years there has been great development of technologies such as quantum computers and quantum communication systems, owing to their huge potential and the growing number of applications. However, physical qubits suffer from many nonidealities, such as measurement errors and decoherence, that generate failures in quantum computation. This work shows how concepts from classical information theory can be exploited to realize quantum error-correcting codes by adding redundancy qubits. In particular, the threshold theorem states that the percentage of decoding failures can be lowered at will, provided the physical error rate is below a given accuracy threshold. The focus is on codes belonging to the family of topological codes, such as the toric, planar, and XZZX surface codes. First, they are compared from a theoretical point of view to show their advantages and disadvantages. The algorithms behind the minimum-weight perfect matching decoder, the most popular for such codes, are presented. The last section is dedicated to analyzing the performance of these topological codes under different error channel models, showing interesting results. In particular, while the error correction capability of surface codes decreases in the presence of biased errors, XZZX codes possess intrinsic symmetries that allow them to improve their performance when one kind of error occurs more frequently than the others.
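The threshold behaviour described above has a simple classical analogue: for a repetition code with majority-vote decoding, the logical error rate falls below the physical rate exactly when p < 1/2, and adding redundancy then helps rather than hurts. The sketch below computes that logical rate; it illustrates the threshold idea only and is not a surface-code decoder.

```python
from math import comb

def logical_error_rate(n, p):
    """Probability that a majority of n independent copies are flipped,
    i.e. the logical error rate of an n-bit repetition code under
    majority-vote decoding with physical error rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Below threshold (p = 0.1 < 0.5): more redundancy suppresses errors.
print(logical_error_rate(3, 0.1))  # 0.028
print(logical_error_rate(5, 0.1))  # ~0.0086
# Above threshold (p = 0.6): redundancy makes things worse.
print(logical_error_rate(3, 0.6))  # 0.648
```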
Abstract:
Universidade Estadual de Campinas . Faculdade de Educação Física
Abstract:
Nickel-based super alloys are used in a variety of applications in which high-temperature strength and resistance to creep, corrosion, and oxidation are required, such as in aircraft gas turbines, combustion chambers, and automotive engine valves. The properties that make these materials suitable for these applications also make them difficult to grind. Grinding systems for such materials are often built around vitrified cBN (cubic boron nitride) wheels to realize maximum productivity and minimum cost per part. Conditions that yield the most economical combination of stock removal rate and wheel wear are key to the successful implementation of the grinding system. Identifying the transition point for excessive wheel wear is important. The aim of this study is to compare the performance of different cBN wheels when grinding difficult-to-grind (DTG) materials by determining the 'wheel wear characteristic curve', which correlates the G-ratio to the calculated tangential force per abrasive grain. With the proposed methodology, a threshold force per grit above which the wheel wear rate increases rapidly can be quickly identified. A comparison of performance for two abrasive product formulations in the grinding of three materials is presented. The obtained results can be applied for the development of grinding applications for DTG materials.
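The 'wheel wear characteristic curve' correlates the G-ratio with the tangential force per grain. As a minimal sketch of the two quantities (with hypothetical numbers; the paper's measurement procedure and values are not given in the abstract):

```python
def g_ratio(volume_removed, wheel_wear_volume):
    """G-ratio: volume of work material removed per unit volume of wheel wear."""
    return volume_removed / wheel_wear_volume

def force_per_grain(tangential_force, active_grains_in_contact):
    """Calculated tangential force carried by each active abrasive grain."""
    return tangential_force / active_grains_in_contact

# Hypothetical trial: 600 mm^3 of alloy removed for 2 mm^3 of cBN wheel
# wear, with 45 N tangential force spread over ~1500 active grains.
print(g_ratio(600.0, 2.0))          # 300.0
print(force_per_grain(45.0, 1500))  # 0.03 N per grain
```

Plotting G-ratio against force per grain over a series of such trials reveals the threshold force above which wheel wear accelerates.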
Abstract:
In 2003-2004, several food items were purchased from large commercial outlets in Coimbra, Portugal. The items included meats (chicken, pork, beef), eggs, rice, beans and vegetables (tomato, carrot, potato, cabbage, broccoli, lettuce). Elemental analysis was carried out through INAA at the Technological and Nuclear Institute (ITN, Portugal), the Nuclear Energy Centre for Agriculture (CENA, Brazil), and the Nuclear Engineering Teaching Lab of the University of Texas at Austin (NETL, USA). At the latter two, INAA was also combined with Compton suppression. It can be concluded that by applying Compton suppression (1) the detection limits for arsenic, copper and potassium improved; (2) the counting-statistics error for molybdenum diminished; and (3) the long-lived zinc had its 1115-keV photopeak better defined. In general, however, the improvement sought by introducing Compton suppression into foodstuff analysis was not significant. Lettuce, cabbage and chicken (liver, stomach, heart) were the foodstuffs richest in human nutrients.
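The detection-limit improvement from Compton suppression comes from the lower continuum background under the photopeak. One common way to quantify this (an assumption here; the abstract does not state which formula was used) is Currie's expression L_D = 2.71 + 4.65 * sqrt(B) for the detection limit in counts, given a background of B counts:

```python
import math

def currie_detection_limit(background_counts):
    """Currie detection limit (in counts) at 5% false-positive/negative risk."""
    return 2.71 + 4.65 * math.sqrt(background_counts)

# Suppressing the Compton continuum from 400 to 100 background counts
# roughly halves the detection limit:
print(currie_detection_limit(400.0))  # 95.71
print(currie_detection_limit(100.0))  # 49.21
```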
Abstract:
Self-controlled practice implies a process of decision making, which suggests that the options available in a self-controlled practice condition could affect learners. The number of task components with no fixed position in a movement sequence may affect the way learners self-control their practice. A 200 cm coincident timing track with 90 light-emitting diodes (LEDs), the first and the last LEDs being the warning and the target lights, respectively, was set so that the apparent speed of the light along the track was 1.33 m/s. Participants were required to touch six sensors sequentially, the last one coincidently with the lighting of the target light (timing task). Group 1 (n = 55) had only one constraint and was instructed to touch the sensors in any order except for the last sensor, which had to be the one positioned close to the target light. Group 2 (n = 53) had three constraints: the first two and the last sensor to be touched. Both groups practiced the task until timing error was less than 30 ms on three consecutive trials. There were no statistically significant differences between groups in the number of trials needed to reach the performance criterion, but (a) participants in Group 2 created fewer sequences compared to Group 1 and (b) were more likely to use the same sequence throughout the learning process. The number of options for a movement sequence affected the way learners self-controlled their practice but had no effect on the amount of practice needed to reach criterion performance.
Abstract:
The objective of the present study was to verify whether active recovery (AR) applied after a judo match resulted in better performance than passive recovery (PR) in three tasks varying in specificity to judo and in the measurement of work performed: four upper-body Wingate tests (WT); the Special Judo Fitness Test (SJFT); and another match. For this purpose, three studies were conducted. Sixteen highly trained judo athletes took part in study 1, 9 in study 2, and 12 in study 3. During AR, judokas ran for 15 min at the velocity corresponding to 70% of the 4 mmol·l(-1) blood lactate intensity (approximately 50% of VO(2) peak), while during PR they remained seated in the competition area. The results indicated that the minimal recovery time reported in judo competitions (15 min) is long enough for sufficient recovery of WT performance and of performance in a specific high-intensity test (SJFT). However, the odds ratio of winning a match increased ten times when a judoka performed AR and his opponent performed PR, although the cause of this phenomenon cannot be explained by changes in the number of actions performed or by changes in the match's time structure.
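The reported tenfold increase refers to an odds ratio computed from the match outcomes. With hypothetical counts (the abstract does not give the underlying 2x2 table), the calculation is:

```python
def odds_ratio(wins_a, losses_a, wins_b, losses_b):
    """Odds ratio of winning for condition A (e.g. AR) vs condition B (e.g. PR)."""
    return (wins_a / losses_a) / (wins_b / losses_b)

# Hypothetical table consistent with a tenfold odds ratio:
# AR judokas: 20 wins, 8 losses; PR judokas: 8 wins, 32 losses.
print(odds_ratio(20, 8, 8, 32))  # 10.0
```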
Abstract:
This article presents a tool for the allocation analysis of complex water resource systems, called AcquaNetXL, developed as a spreadsheet into which a linear optimization model and a nonlinear one were incorporated. AcquaNetXL keeps the concepts and attributes of a decision support system: it facilitates communication between the user and the computer, eases the understanding and formulation of the problem and the interpretation of the results, and supports the decision-making process, making it clear and organized. The performance of the algorithms used for solving the water allocation problems was satisfactory, especially for the linear model.
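Water-allocation models of this kind distribute a limited supply among competing demands; for a single source, the linear case behaves like allocation in priority order. The sketch below is a deliberately simplified stand-in (one reservoir, strict priorities), not AcquaNetXL's actual network algorithm:

```python
def allocate(supply, demands):
    """Allocate a single supply among demands in priority order.

    demands: list of (name, requested_volume, priority); lower number
    means higher priority. Returns {name: allocated_volume}.
    """
    allocation = {}
    remaining = supply
    for name, requested, _ in sorted(demands, key=lambda d: d[2]):
        given = min(requested, remaining)
        allocation[name] = given
        remaining -= given
    return allocation

# 100 units of water: the city (priority 1) is served fully, irrigation
# (priority 2) receives what is left.
print(allocate(100.0, [("irrigation", 80.0, 2), ("city", 50.0, 1)]))
# {'city': 50.0, 'irrigation': 50.0}
```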
Abstract:
High-density polyethylene resins have increasingly been used in the production of pipes for pressurized water and gas distribution systems and are expected to remain in service for many years, but they eventually fail prematurely by creep fracture. The usual standard methods for ranking resins in terms of their resistance to fracture are expensive and impractical for quality control purposes, justifying the search for alternative methods. The essential work of fracture (EWF) method provides a relatively simple procedure to characterize the fracture behavior of ductile polymers such as polyethylene resins. In the present work, six resins were analyzed using the EWF methodology. The results show that the plastic work dissipation factor, beta*w(p), is the most reliable parameter for evaluating performance. Attention must be given to specimen preparation, which might otherwise result in excessive dispersion in the results, especially for the essential work of fracture, w(e).
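In the EWF method, the total specific work of fracture measured on specimens of varying ligament length l follows w(f) = w(e) + beta*w(p)*l, so w(e) is the intercept and beta*w(p) the slope of a linear fit. A minimal sketch with synthetic, exactly linear data (the paper's measured values are not given in the abstract):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic EWF data: ligament lengths (mm) and total specific work
# (kJ/m^2), generated from w(e) = 20 kJ/m^2 and slope beta*w(p) = 8
# (kJ/m^2 per mm of ligament).
ligaments = [5.0, 8.0, 11.0, 14.0]
w_f = [20.0 + 8.0 * l for l in ligaments]

beta_wp, w_e = linear_fit(ligaments, w_f)
print(w_e, beta_wp)  # intercept ~20 (w_e), slope ~8 (beta*w_p)
```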
Abstract:
Inulin was used as a prebiotic to improve the quality and consistency of skim milk fermented by co-cultures and pure cultures of Lactobacillus acidophilus, Lactobacillus rhamnosus, Lactobacillus bulgaricus and Bifidobacterium lactis with Streptococcus thermophilus. We compared, either in the presence or absence of 4 g inulin/100 g, the main kinetic parameters, specifically the generation time (t(g)), the maximum acidification rate (V(max)), and the times to reach V(max) (t(max)), to attain pH 5.0 (t(pH5.0)), and to complete the fermentation (t(pH4.5)). Post-acidification, lactic acid formation and cell counts were also determined and compared, either 1 day after the fermentation was complete or after 7 days of storage at 4 degrees C. In general, inulin addition to the milk increased V(max) in co-cultures, decreased t(max), t(g) and t(pH4.5), favored post-acidification, exerted a bifidogenic effect, and preserved cell viability almost intact during storage. In addition, S. thermophilus was shown to stimulate the metabolism of the other lactic bacteria. Contrary to the co-cultures, most of the effects in pure cultures were not statistically significant. The most important aspect of this paper is the use of the generation time as a tool to investigate the microbial response to inulin addition. (c) 2009 Elsevier Ltd. All rights reserved.
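The generation time used as a kinetic tool above follows from exponential growth: the specific growth rate is mu = ln(N2/N1)/(t2 - t1), and t(g) = ln 2 / mu. A minimal sketch (the counts below are hypothetical, not the study's data):

```python
import math

def generation_time(n1, n2, t1, t2):
    """Generation (doubling) time from two counts taken during exponential growth."""
    mu = math.log(n2 / n1) / (t2 - t1)  # specific growth rate, 1/time
    return math.log(2.0) / mu

# Hypothetical counts: a culture growing from 1e6 to 4e6 CFU/ml in 60 min
# has doubled twice, so the generation time is 30 min.
print(generation_time(1e6, 4e6, 0.0, 60.0))  # 30.0
```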