935 results for Neuropathy - Experimental studies
Abstract:
The effect of low-energy nitrogen molecular ion beam bombardment on metals and compound semiconductors has been studied, with the aim of investigating the effects of ion and target properties. For this purpose, nitrogen ion implantation in aluminium, iron, copper, gold, GaAs and AlGaAs was studied using XPS and angle-resolved XPS (ARXPS). A series of experimental studies of N2+ bombardment-induced compositional changes, especially the amount of nitrogen retained in the target, was carried out. Both monoenergetic and non-monoenergetic ion implantation were investigated, using the VG Scientific ESCALAB 200D system and a d.c. plasma cell, respectively. When the samples, with the exception of gold, are exposed to air, native oxide layers form on the surfaces. In the case of monoenergetic implantation, the surfaces were therefore cleaned by Ar+ beam bombardment prior to implantation. The materials were then bombarded with an N2+ beam, and eight sets of successful experiments were performed on each sample, using a rastered N2+ ion beam at energies of 2, 3, 4 and 5 keV with current densities of 1 μA/cm2 and 5 μA/cm2 at each energy. The bombarded samples were examined by ARXPS. After each complete implantation, XPS depth profiles were obtained using an Ar+ beam at an energy of 2 keV and a current density of 2 μA/cm2. As current density was chosen as one of the parameters, its accurate determination was very important. In the case of the glow discharge, two sets of successful experiments were performed for each material, by exposing the samples to a nitrogen plasma under two conditions: low pressure with high voltage, and high pressure with low voltage. These samples were then examined by ARXPS. On the theoretical side, the major problem was predicting the number of ions of an element that can be implanted in a given matrix. Although the programme is essentially an experimental study, an attempt was made to understand current theoretical models such as SATVAL, SUSPRE and TRIM. The experimental results were compared with theoretical predictions in order to gain a better understanding of the mechanisms responsible. From the experimental results, and allowing for possible experimental uncertainties, there is no evidence of significant variation in nitrogen saturation concentration with ion energy or ion current density in the range 2-5 keV; however, the retention characteristics of the implanted species seem to depend strongly on the chemical reactivity between the ion species and the target material. The experimental data suggest the presence of at least one thermal process. The discrepancy between the theoretical and experimental results could be due to the inability of the codes to account for molecular ion impact and thermal processes.
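As context for the saturation behaviour discussed above, the following is a minimal back-of-the-envelope sketch (not the thesis's analysis, and not SATVAL, SUSPRE or TRIM) of the common simple balance argument: if each incident particle sputters Y target atoms on average and the sputtered flux has the surface composition, the retained-implant atomic fraction saturates near 1/Y, capped by any chemical limit such as stoichiometric nitride formation. All numbers below are placeholders.

```python
def saturation_fraction(sputter_yield, chemical_limit=0.5):
    """Crude steady-state estimate of the retained implant atomic fraction.

    Balance argument: implantation supplies one implanted atom per incident
    atom, while sputtering removes sputter_yield atoms per incident atom with
    the surface composition, giving a saturation fraction of roughly
    1/sputter_yield, capped by a chemical limit (e.g. a stoichiometric
    nitride). Illustrative only; ignores diffusion, reflection and
    molecular-ion effects.
    """
    return min(1.0 / sputter_yield, chemical_limit)

# Placeholder sputter yields (atoms per incident atom) for a few targets
for target, Y in {"Al": 1.5, "Fe": 2.0, "Cu": 3.0, "Au": 4.5}.items():
    print(f"{target}: estimated N saturation fraction ~ {saturation_fraction(Y):.2f}")
```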
Abstract:
Some critical aspects of a new kind of on-line measurement technique for micro- and nanoscale surface measurements are described. The technique attempts to use spatial light-wave scanning in place of mechanical stylus scanning, and an optical fibre interferometer in place of optically bulky interferometers, for measuring the surfaces. The basic principle is to measure the phase shift of a reflected optical signal. Wavelength-division-multiplexing and fibre Bragg grating techniques are used to carry out wavelength-to-field transformation and phase-to-depth detection, allowing a large dynamic measurement ratio (range/resolution) and a high signal-to-noise ratio with remote access. In effect, the paper consists of two parts: multiplexed fibre interferometry and a remote on-machine surface detection sensor (an optical dispersive probe). This paper aims to investigate the metrology properties of a multiplexed fibre interferometer and to verify its feasibility by both theoretical and experimental studies. Two types of optical probe, using a dispersive prism and a blazed grating, respectively, are introduced to realize wavelength-to-spatial scanning.
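As an illustration of the phase-to-depth detection principle mentioned above, here is a minimal sketch (not the authors' implementation) of how a measured interferometric phase shift maps to surface height for light reflected at near-normal incidence; the 4π factor accounts for the double pass, and the unwrapping step and wavelength value are assumptions made for the example.

```python
import numpy as np

def phase_to_depth(delta_phi_rad, wavelength_m):
    """Convert an interferometric phase shift to surface height.

    Assumes near-normal incidence and a double-pass (reflective) geometry,
    so h = delta_phi * lambda / (4 * pi). Illustrative only.
    """
    return delta_phi_rad * wavelength_m / (4.0 * np.pi)

# Example: a phase profile along one scan line (values are invented)
wavelength = 1550e-9                                   # metres, assumed source wavelength
phase_profile = np.array([0.0, 0.8, 2.9, 7.1, 6.5])    # radians, wrapped
unwrapped = np.unwrap(phase_profile)                   # remove 2*pi ambiguities
heights = phase_to_depth(unwrapped, wavelength)
print(heights * 1e9, "nm")
```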
Abstract:
Despite the multiplicity of approaches and techniques so far applied to identifying the pathophysiological mechanisms of photosensitive epilepsy, a generally agreed explanation of the phenomenon is still lacking. The present thesis reports on three interlinked original experimental studies conducted to explore the neurophysiological correlates and the pathophysiological mechanism of photosensitive epilepsy. In the first study I assessed the role of habituation of the Visual Evoked Response as a possible biomarker of epileptic visual sensitivity. The two subsequent studies were designed to address specific research questions emerging from the results of the first study. The findings of the three intertwined studies provide experimental evidence that photosensitivity is associated with changes in a number of electrophysiological measures suggestive of an altered balance between excitatory and inhibitory cortical processes. Although a strong clinical association does exist between specific epileptic syndromes and visual sensitivity, results from this research indicate that the photosensitivity trait seems to be the expression of specific pathophysiological mechanisms quite distinct from the “epileptic” phenotype. The habituation of the Pattern Reversal Visual Evoked Potential (PR-VEP) appears to be a reliable candidate endophenotype of visual sensitivity. Interpreting the findings of this study in the context of the broader literature on visual habituation, we can hypothesise the existence of a shared neurophysiological background between photosensitive epilepsy and migraine. Future studies to elucidate the relationship between the proposed indices of cortical excitability and specific polymorphisms of excitatory and inhibitory neurotransmission will need to be conducted to assess their potential role as biomarkers of photosensitivity.
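For readers unfamiliar with how VEP habituation is commonly quantified, here is a minimal, hedged sketch: habituation is often expressed as the slope of a linear fit of response amplitude across sequential stimulation blocks, with a flat or positive slope read as a habituation deficit. The block values below are invented for illustration; this is not the thesis's analysis pipeline.

```python
import numpy as np

def habituation_slope(block_amplitudes):
    """Linear-regression slope of VEP amplitude across sequential blocks.

    A clearly negative slope is conventionally read as normal habituation;
    a flat or positive slope as a habituation deficit. Illustration only.
    """
    blocks = np.arange(1, len(block_amplitudes) + 1)
    slope, _intercept = np.polyfit(blocks, block_amplitudes, deg=1)
    return slope

# Hypothetical N1-P1 amplitudes (microvolts) over six blocks of sweeps
amplitudes = [8.2, 7.9, 7.1, 6.8, 6.4, 6.1]
print(f"habituation slope: {habituation_slope(amplitudes):.2f} uV/block")
```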
Abstract:
This thesis attempts a psychological investigation of hemispheric functioning in developmental dyslexia. Previous work using neuropsychological methods with developmental dyslexics is reviewed, and original work is presented, both of a conventional psychometric nature and also utilising a new means of intervention. At the inception of inquiry into dyslexia, comparisons were drawn between developmental dyslexia and acquired alexia, promoting a model of brain damage as the common cause. Subsequent investigators found developmental dyslexics to be neurologically intact, and so an alternative hypothesis was offered, namely that language is abnormally localized (not in the left hemisphere). Research in the last decade, using the advanced techniques of modern neuropsychology, has indicated that developmental dyslexics are probably left-hemisphere dominant for language. The development of a new type of pharmaceutical preparation (which appears to have a left-hemisphere effect) offers an opportunity to test the experimental hypothesis. This hypothesis propounds that most dyslexics are left-hemisphere language dominant, but that some of these language-related operations are dysfunctional. The methods utilised are those of psychological assessment of cognitive function, both in a traditional psychometric situation and with a new form of intervention (Piracetam). The information resulting from intervention is judged on its therapeutic validity and its contribution to the understanding of hemispheric functioning in dyslexics. The experimental studies using conventional psychometric evaluation revealed a dyslexic profile of poor sequencing and name-coding ability, with adequate spatial and verbal reasoning skills. Neuropsychological evidence would tend to suggest that this profile is indicative of adequate right-hemisphere abilities and deficits in some left-hemisphere abilities. When an intervention agent (Piracetam) was used with young adult dyslexics, there were improvements in both the rate of acquisition and the conservation of verbal learning. An experimental study with dyslexic children revealed that Piracetam appeared to improve reading, writing and sequencing, but did not influence spatial abilities. This would seem to accord with other recent findings that developmental dyslexics may have left-hemisphere language localisation, although some of these language-related abilities are dysfunctional.
Abstract:
The adsorption of nonionic surface-active agents (polyoxyethylene glycol monoethers of n-hexadecanol) on polystyrene latex, and of the nonionic cellulose polymers hydroxyethyl cellulose, hydroxypropyl cellulose and hydroxypropyl methylcellulose on polystyrene latex and ibuprofen drug particles, has been studied. The adsorbed layer thicknesses were determined by microelectrophoretic and viscometric methods. The conformation of the adsorbed molecules at the solid-liquid interface was deduced from the molecular areas and the adsorbed layer thicknesses. The adsorption results obtained from polystyrene latex and ibuprofen particles were compared to explain the difference in conformation between these two adsorbates. Sedimentation volumes and redispersibility values were the main criteria used to evaluate suspension stability. At low concentrations of surface-active agents, hard-caked suspensions were found, probably due to attraction between the uncoated areas or to mutual adsorption of the adsorbed molecules onto the bare surface of the particles in the sediment. At high concentrations of hydroxypropyl cellulose and hydroxypropyl methylcellulose, heavily caked sediments were attributed to network structure formation by the adsorbed molecules. An attempt was made to relate the characteristics of the suspensions to the potential energy of interaction curves. Generally, the agreement between theory and experiment was good, but for hydroxyethyl cellulose-ibuprofen systems discrepancies were found. Experimental studies showed that hydroxyethyl cellulose flocculated polystyrene latex over a rather wide range of concentrations; similarly, hydroxyethyl cellulose-ibuprofen suspensions were also flocculated. Therefore, it is suggested that a term accounting for the flocculation energy of the polymer should be added to the total energy of interaction. A rheometric method was employed to study the flocculation energy of the polymer.
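To make the "potential energy of interaction curves" mentioned above concrete, here is a minimal DLVO-style sketch for two equal spheres (van der Waals attraction plus low-potential electrostatic repulsion in the Derjaguin approximation). It deliberately omits the steric and flocculation contributions the abstract argues are needed, and all parameter values are placeholders rather than data from the study.

```python
import numpy as np

kT = 1.381e-23 * 298          # thermal energy at 25 C (J)
eps = 78.5 * 8.854e-12        # permittivity of water (F/m)
A_H = 1.0e-20                 # Hamaker constant (J), placeholder
a = 1.0e-6                    # particle radius (m), placeholder
psi0 = 25e-3                  # surface potential (V), placeholder
kappa = 1.0e8                 # inverse Debye length (1/m), placeholder

H = np.linspace(0.5e-9, 50e-9, 200)   # surface-to-surface separation (m)

V_vdw = -A_H * a / (12.0 * H)                                           # van der Waals attraction
V_el = 2.0 * np.pi * eps * a * psi0**2 * np.log1p(np.exp(-kappa * H))   # electrostatic repulsion
V_total = (V_vdw + V_el) / kT                                           # total, in units of kT

print(f"maximum of the curve ~ {V_total.max():.1f} kT at H = {H[V_total.argmax()]*1e9:.1f} nm")
```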
Abstract:
This thesis proposes that, despite many experimental studies of thinking and the development of models of thinking, such as Bruner's (1966) enactive, iconic and symbolic developmental modes, the imagery and inner verbal strategies used by children need further investigation to establish a coherent theoretical basis from which to create experimental curricula for direct improvement of those strategies. Five hundred and twenty-three first-, second- and third-year comprehensive school children were tested on 'recall' imagery, using a modified Betts Imagery Test, and on dual-coding processes (Paivio, 1971, p.179), using the P/W Visual/Verbal Questionnaire, which measures 'applied imagery' and inner verbalising. Three lines of investigation were pursued: 1. an investigation of (a) hypothetical representational strategy differences between boys and girls, and (b) the extent to which strategies change with increasing age; 2. the second- and third-year children's use of representational processes was examined separately and compared with performance measures of perception, field independence, creativity, self-sufficiency and self-concept; 3. the second- and third-year children were categorised into four dual-coding strategy groups: (a) High Visual/High Verbal, (b) Low Visual/High Verbal, (c) High Visual/Low Verbal, and (d) Low Visual/Low Verbal; these groups were compared on the same performance measures. The main result indicates that: 1. a hierarchy of dual-coding strategy use can be identified that is significantly related (.01, Binomial Test) to success or failure on the performance measures, the High Visual/High Verbal group registering the highest scores, the Low Visual/High Verbal and High Visual/Low Verbal groups registering intermediate scores, and the Low Visual/Low Verbal group registering the lowest scores. Subsidiary results indicate that: 2. boys' use of visual strategies declines, and of verbal strategies increases, with age; girls' recall imagery strategy increases with age. Educational implications of the main result are discussed, the establishment of experimental curricula proposed, and further research suggested.
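As a hedged illustration of the kind of grouping and test described above (not the thesis's actual procedure or data), the sketch below median-splits hypothetical visual and verbal strategy scores to form the four dual-coding groups and runs a simple binomial test on how often the predicted ordering (High/High above Low/Low) holds across a set of performance measures.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)

# Hypothetical strategy scores for 200 children (invented, not the thesis's data)
visual = rng.normal(50, 10, 200)
verbal = rng.normal(50, 10, 200)

# Median splits give the four dual-coding groups
hi_vis = visual >= np.median(visual)
hi_verb = verbal >= np.median(verbal)
groups = np.where(hi_vis & hi_verb, "HiVis/HiVerb",
          np.where(~hi_vis & hi_verb, "LoVis/HiVerb",
           np.where(hi_vis & ~hi_verb, "HiVis/LoVerb", "LoVis/LoVerb")))
names, counts = np.unique(groups, return_counts=True)
print(dict(zip(names, counts)))

# Suppose the predicted ordering held on 9 of 10 performance measures;
# test that count against chance (p = 0.5)
result = binomtest(k=9, n=10, p=0.5, alternative="greater")
print(f"binomial test p-value: {result.pvalue:.3f}")
```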
Abstract:
We present experimental studies and numerical modeling, based on a combination of the Bidirectional Beam Propagation Method and Finite Element Modeling, that completely describe the wavelength spectra of point-by-point femtosecond-laser-inscribed fiber Bragg gratings, showing excellent agreement with experiment. We have investigated the dependence of different spectral parameters (insertion loss, all dominant cladding and ghost modes, and their shapes) on the position of the fiber Bragg grating within the core of the fiber. Our model is validated by comparing model predictions with experimental data and allows for predictive modeling of the gratings. We extend our analysis to more complicated structures, where we introduce symmetry breaking; this highlights the importance of centered gratings and how maintaining symmetry contributes to the overall spectral quality of the inscribed Bragg gratings. Finally, the numerical modeling is applied to superstructure gratings, and a comparison with experimental results reveals a capability for dealing with complex grating structures that can be designed with particular wavelength characteristics.
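As background to the grating spectra being modeled, a minimal coupled-mode-theory sketch is given below: for a uniform FBG the Bragg wavelength is 2·n_eff·Λ and the peak reflectivity is tanh²(κL). This is the standard textbook approximation, not the paper's bidirectional-BPM/FEM model, and the parameter values are illustrative only.

```python
import numpy as np

def uniform_fbg(n_eff, period_m, dn_ac, length_m, fringe_visibility=1.0):
    """Standard coupled-mode-theory estimates for a uniform fiber Bragg grating.

    Returns the Bragg wavelength and the peak reflectivity. Textbook
    approximation; not a bidirectional-BPM/FEM model.
    """
    lambda_b = 2.0 * n_eff * period_m                       # Bragg condition
    kappa = np.pi * fringe_visibility * dn_ac / lambda_b    # coupling coefficient (1/m)
    r_peak = np.tanh(kappa * length_m) ** 2                 # peak reflectivity
    return lambda_b, r_peak

# Illustrative values for a telecom-band grating
lam, r = uniform_fbg(n_eff=1.447, period_m=535.6e-9, dn_ac=1e-4, length_m=5e-3)
print(f"Bragg wavelength ~ {lam*1e9:.1f} nm, peak reflectivity ~ {r*100:.1f} %")
```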
Abstract:
Huge advertising budgets are invested by firms to reach and convince potential consumers to buy their products. To optimize these investments, it is fundamental not only to ensure that the appropriate consumers are reached, but also that they are in appropriate reception conditions. Marketing research has focused on the way consumers react to advertising, as well as on individual and contextual factors that could mediate or moderate the ad's impact on consumers (e.g. motivation and ability to process information, or attitudes toward advertising). Nevertheless, one factor that potentially influences consumers' reactions to advertising has not yet been studied in marketing research: fatigue. Yet fatigue can affect key variables of advertising processing, such as the availability of cognitive resources (Lieury 2004). Fatigue is felt when the body warns that an activity (or inactivity) should be stopped in order to rest, allowing the individual to compensate for the effects of fatigue. Dittner et al. (2004) define it as "the state of weariness following a period of exertion, mental or physical, characterized by a decreased capacity for work and reduced efficiency to respond to stimuli." It signals that resources will run short if the ongoing activity continues. According to Schmidtke (1969), fatigue leads to problems in information reception, perception, coordination, attention, concentration and thinking. In addition, for Markle (1984) fatigue produces decreases in memory and in communication ability, while increasing reaction time and the number of errors. Thus, fatigue may have large effects on advertising processing. We suggest that fatigue determines the level of available resources. Some research on consumer responses to advertising claims that complexity is a fundamental element to take into consideration. Complexity determines the cognitive effort the consumer must expend to understand the message (Putrevu et al. 2004). Thus, we suggest that complexity determines the level of required resources. To study this question of the supply of and demand for cognitive resources, we draw upon Resource Matching Theory. Anand and Sternthal (1989, 1990) were the first to state the Resource Matching principle, which holds that an ad is most persuasive when the resources required to process it match the resources the viewer is willing and able to provide. They show that when the required resources exceed those available, the message is not entirely processed by the consumer, and when available resources greatly exceed those required, the viewer elaborates critical or unrelated thoughts. According to Resource Matching Theory, the level of resources demanded by an ad can be high or low and is mostly determined by the ad's layout (Peracchio and Meyers-Levy, 1997). We manipulate the level of required resources using three levels of ad complexity (low, high, extremely high). On the other hand, the resource availability of an ad viewer is determined by many contextual and individual variables. We manipulate the level of available resources using two levels of fatigue (low, high). Tired viewers want to limit processing effort to minimal resource requirements, relying on heuristics and forming an overall impression at first glance. It will be easier for them to decode the message when ads are very simple. On the contrary, the most effective ads for viewers who are not tired are complex enough to draw their attention and fully use their resources.
Such viewers will use more analytical strategies, looking at the details of the ad. However, if ads are too complex, they will be too difficult to understand; the viewer will be discouraged from processing the information and will overlook the ad. The objective of our research is to study fatigue as a moderating variable of advertising information processing. We ran two experimental studies to assess the effect of fatigue on visual strategies, comprehension, persuasion and memorization. In study 1, thirty-five undergraduate students enrolled in a marketing research course participated in the experiment. The experimental design is 2 (tiredness level: between subjects) x 3 (ad complexity level: within subjects). Participants were randomly assigned a time slot (morning: 8-10 am or evening: 10-12 pm) to perform the experiment. We chose to test subjects at different times of day to obtain maximum variance in their fatigue level. We use the morningness/eveningness tendency of participants (Horne & Ostberg, 1976) as a control variable. We assess fatigue level using subjective measures (a questionnaire with fatigue scales) and objective measures (reaction time and number of errors). Regarding complexity levels, we designed our own ads in order to keep all aspects other than complexity equal. We ran a pretest using the Resource Demands scale (Keller and Bloch 1997), and by rating the ads on complexity as in Morrison and Dainoff (1972), to check our complexity manipulation; we found three significantly different levels. After completing the fatigue scales, participants are asked to view the ads on a screen while their eye movements are recorded by an eye-tracker. Eye-tracking allows us to identify patterns of visual attention (Pieters and Warlop 1999), so we can infer respondents' visual strategies according to their level of fatigue. Comprehension is assessed with a comprehension test. We collect measures of attitude change for persuasion, and measures of recall and recognition at various points in time for memorization. Once the effect of fatigue has been determined across the student population, it is of interest to account for individual differences in fatigue severity and perception. We therefore run study 2, which is similar to the first except for the design: time of day is now within-subjects and complexity becomes between-subjects.
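Since the studies rely on eye-tracking to characterize visual strategies, the following is a minimal, generic sketch of how gaze samples might be aggregated into dwell time per area of interest (AOI). The AOI names, sampling rate and gaze trace are invented for illustration and do not reflect the authors' apparatus or coding scheme.

```python
from collections import defaultdict

SAMPLE_PERIOD_MS = 1000 / 60.0   # assumed 60 Hz eye-tracker

# Hypothetical rectangular areas of interest on the ad: (x0, y0, x1, y1) in pixels
AOIS = {
    "headline": (100, 50, 700, 150),
    "product_image": (150, 200, 650, 600),
    "body_text": (100, 620, 700, 820),
}

def dwell_time_per_aoi(gaze_samples):
    """Sum the time that gaze samples spend inside each AOI (milliseconds)."""
    dwell = defaultdict(float)
    for x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += SAMPLE_PERIOD_MS
                break
    return dict(dwell)

# Invented gaze trace: a viewer looking at the image, then the headline, then off-ad
trace = [(400, 400)] * 90 + [(300, 100)] * 30 + [(900, 900)] * 10
print(dwell_time_per_aoi(trace))   # roughly {'product_image': 1500.0, 'headline': 500.0}
```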
Abstract:
We review our recent progress on the study of new nonlinear mechanisms of pulse shaping in passively mode-locked fibre lasers. These include a mode-locking regime featuring pulses with a triangular distribution of the intensity, and spectral compression arising from nonlinear pulse propagation. We also report on our recent experimental studies unveiling new families of vector solitons with precessing states of polarization for multipulsing and bound-state soliton operations in a carbon nanotube mode-locked fibre laser with anomalous dispersion cavity. © 2013 IEEE.
Abstract:
We review our recent progress on the study of new nonlinear mechanisms of pulse shaping in passively mode-locked fiber lasers. These include a mode-locking regime featuring pulses with a triangular distribution of the intensity, and spectral compression arising from nonlinear pulse propagation. We also report on our recent experimental studies unveiling new types of vector solitons with precessing states of polarization for multi-pulse and tightly bound-state soliton (soliton molecule) operations in a carbon nanotube (CNT) mode-locked fiber laser with anomalous dispersion cavity. © 2014 World Scientific Publishing Company.
Abstract:
The concept of random lasers, which exploit multiple scattering of photons in an amplifying disordered medium to generate coherent light without a traditional laser resonator, has attracted a great deal of attention in recent years. This research area lies at the interface of the fundamental theory of disordered systems and laser science. The idea was originally proposed in the context of astrophysics in the 1960s by V.S. Letokhov, who studied scattering with "negative absorption" in interstellar molecular clouds. Research on random lasers has since developed into a mature experimental and theoretical field. A simple design of such lasers would be promising for potential applications. However, in traditional random lasers the properties of the output radiation are typically characterized by complex features in the spatial, spectral and time domains, making them less attractive than standard laser systems for practical applications. Recently, an interesting and novel type of one-dimensional random laser that operates in a conventional telecommunication fibre without any pre-designed resonator mirrors, the random distributed feedback fibre laser, was demonstrated. The positive feedback required for laser generation in random fibre lasers is provided by Rayleigh scattering from the inhomogeneities of the refractive index that are naturally present in silica glass. In the proposed laser concept, the randomly backscattered light is amplified through the Raman effect, providing distributed gain over distances of up to 100 km. Although the effective reflection due to Rayleigh scattering is extremely small (~0.1%), the lasing threshold can be exceeded when a sufficiently large distributed Raman gain is provided. Such a random distributed feedback fibre laser has a number of interesting and attractive features. The fibre waveguide geometry provides transverse confinement, and the effectively one-dimensional random distributed feedback leads to the generation of a stationary near-Gaussian beam with a narrow spectrum. A random distributed feedback fibre laser has efficiency and performance that are comparable to, and can even exceed, those of similar conventional fibre lasers. The key features of the generated radiation of random distributed feedback fibre lasers include a stationary narrow-band continuous modeless spectrum that is free of mode competition, nonlinear power broadening, and an output beam with a Gaussian profile in the fundamental transverse mode (generated in both single-mode and multi-mode fibres). This review presents the current status of research in the field of random fibre lasers and shows their potential and prospects. We start with an introductory overview of conventional distributed feedback lasers and traditional random lasers to set the stage for the discussion of random fibre lasers. We then present a theoretical analysis and experimental studies of various random fibre laser configurations, including widely tunable, multi-wavelength, narrow-band generation, and random fibre lasers operating in different spectral bands in the 1-1.6 μm range. We then discuss existing and future applications of random fibre lasers, including telecommunications and long-reach distributed sensor systems. A theoretical description of random lasers is very challenging and is strongly linked with the theory of disordered systems and kinetic theory.
We outline two key models governing the generation of random fibre lasers: the average power balance model and the model based on the nonlinear Schrödinger equation. The recently invented random distributed feedback fibre lasers represent a new and exciting field of research that brings together such diverse areas of science as laser physics, the theory of disordered systems, fibre optics and nonlinear science. Stable random generation in optical fibre opens up new possibilities for research on wave transport and localization in disordered media. We hope that this review will provide background information for research in various fields and will stimulate cross-disciplinary collaborations on random fibre lasers. © 2014 Elsevier B.V.
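To make the second of the two models concrete, here is a minimal split-step Fourier sketch of pulse propagation under the nonlinear Schrödinger equation with group-velocity dispersion and Kerr nonlinearity. It is a generic textbook integrator with illustrative parameter values, not the specific generation model (with Raman gain, Rayleigh feedback and cavity boundary conditions) used for random fibre lasers.

```python
import numpy as np

# Illustrative fibre and pulse parameters (not taken from the review)
beta2 = -20e-27      # GVD of -20 ps^2/km in SI units (anomalous dispersion), s^2/m
gamma = 1.3e-3       # Kerr nonlinear coefficient, 1/(W m)
length = 1000.0      # propagation distance, m
nz = 2000            # number of split steps
dz = length / nz

# Time grid and an initial 1 ps Gaussian pulse with 1 W peak power
nt = 4096
window = 100e-12
t = np.linspace(-window / 2, window / 2, nt, endpoint=False)
omega = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])
A = np.sqrt(1.0) * np.exp(-0.5 * (t / 1e-12) ** 2)

# Symmetrized split-step loop: half dispersion, full nonlinearity, half dispersion
half_disp = np.exp(0.5j * (beta2 / 2) * omega**2 * dz)
for _ in range(nz):
    A = np.fft.ifft(half_disp * np.fft.fft(A))
    A *= np.exp(1j * gamma * np.abs(A) ** 2 * dz)
    A = np.fft.ifft(half_disp * np.fft.fft(A))

print(f"output peak power: {np.abs(A).max()**2:.3f} W")
```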
Abstract:
Transportation service operators are witnessing a growing demand for bi-directional movement of goods. Given this, the following thesis considers an extension of the vehicle routing problem (VRP) known as the delivery and pickup transportation problem (DPP), where delivery and pickup demands may occupy the same route. The problem is formulated here as the vehicle routing problem with simultaneous delivery and pickup (VRPSDP), which requires concurrent service of both demands at the customer location. This formulation provides the greatest opportunity for cost savings for both the service provider and the recipient. The aims of this research are to propose a new theoretical design to solve the multi-objective VRPSDP, to provide software support for the suggested design, and to validate the method through a set of experiments. A new, real-life based multi-objective VRPSDP is studied here, which requires the minimisation of three often conflicting objectives: operated vehicle fleet size, total routing distance, and the maximum variation between route distances (workload variation). The first two objectives are commonly encountered in the domain; the third is introduced here because it is essential for real-life routing problems. The VRPSDP is a hard combinatorial optimisation problem, so an approximation method, the Simultaneous Delivery and Pickup method (SDPmethod), is proposed to solve it. The SDPmethod consists of three phases. The first phase constructs a set of diverse partial solutions, one of which is expected to form part of the near-optimal solution. The second phase determines assignment possibilities for each sub-problem. The third phase solves the sub-problems using a parallel genetic algorithm. The suggested genetic algorithm is improved by the introduction of a set of tools: a genetic operator switching mechanism driven by diversity thresholds, an accuracy analysis tool, and a new fitness evaluation mechanism. This three-phase method is proposed to address a shortcoming in the domain, where an initial solution is built only to be completely dismantled and redesigned in the optimisation phase. In addition, a new routing heuristic, RouteAlg, is proposed to solve the VRPSDP sub-problem, the travelling salesman problem with simultaneous delivery and pickup (TSPSDP). The experimental studies are conducted using the well-known benchmark test problems of Salhi and Nagy (1999), and the SDPmethod and RouteAlg solutions are compared with prominent works in the VRPSDP domain. The SDPmethod is shown to be an effective method for solving the multi-objective VRPSDP, and RouteAlg for the TSPSDP.
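To illustrate what makes the simultaneous delivery-and-pickup variant distinctive, the sketch below checks capacity feasibility of a single route: the vehicle leaves the depot loaded with all deliveries, and at each customer the delivery is unloaded and the pickup loaded, so the on-board load must stay within capacity at every stop. It is a generic illustration with invented data, not part of the SDPmethod or RouteAlg.

```python
def route_is_feasible(route, delivery, pickup, capacity):
    """Capacity check for one VRPSDP route (customers served in the given order).

    The vehicle starts with the sum of all deliveries on the route; at each
    customer the delivery quantity is removed and the pickup quantity added.
    """
    load = sum(delivery[c] for c in route)
    if load > capacity:
        return False                      # cannot even leave the depot
    for c in route:
        load = load - delivery[c] + pickup[c]
        if load > capacity:
            return False                  # overloaded after serving customer c
    return True

# Invented example: 4 customers, vehicle capacity 100
delivery = {1: 30, 2: 20, 3: 25, 4: 10}
pickup   = {1: 5,  2: 40, 3: 35, 4: 20}
print(route_is_feasible([1, 2, 3, 4], delivery, pickup, capacity=100))  # True
print(route_is_feasible([4, 3, 2, 1], delivery, pickup, capacity=100))  # False: infeasible in this order
```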
Abstract:
Dynamic Optimization Problems (DOPs) have been widely studied using Evolutionary Algorithms (EAs). Yet a clear and rigorous definition of DOPs is lacking in the Evolutionary Dynamic Optimization (EDO) community. In this paper, we propose a unified definition of DOPs based on the idea of multiple-decision-making discussed in the Reinforcement Learning (RL) community. We draw a connection between EDO and RL by arguing that both of them study DOPs according to our definition. We point out that existing EDO and RL research has mainly focused on certain types of DOPs. A conceptualized benchmark problem, aimed at the systematic study of various DOPs, is then developed. Some interesting experimental studies on the benchmark reveal that EDO and RL methods are specialized in certain types of DOPs and, more importantly, that new algorithms for DOPs can be developed by combining the strengths of both EDO and RL methods.
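As a hedged illustration of what "dynamic" means in a DOP (not the paper's conceptualized benchmark), the sketch below defines a fitness function whose optimum drifts every few evaluations and lets a simple (1+1) evolutionary strategy try to track it; the parent is re-evaluated each generation because its stored fitness may be stale after an environment change. All names and parameters here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

class MovingOptimum:
    """Toy dynamic fitness: a sphere function whose optimum drifts over time."""

    def __init__(self, dim=5, change_every=50, drift=0.5):
        self.optimum = np.zeros(dim)
        self.change_every = change_every
        self.drift = drift
        self.evals = 0

    def __call__(self, x):
        self.evals += 1
        if self.evals % self.change_every == 0:          # environment change
            self.optimum += rng.normal(0, self.drift, self.optimum.size)
        return -np.sum((x - self.optimum) ** 2)          # higher is better

problem = MovingOptimum()
x = np.zeros(5)
for _ in range(1000):                                    # a (1+1) ES tracking the optimum
    parent_fitness = problem(x)                          # re-evaluate: the landscape may have changed
    candidate = x + rng.normal(0, 0.2, x.size)
    if problem(candidate) >= parent_fitness:
        x = candidate
print("final distance to the optimum:", np.linalg.norm(x - problem.optimum))
```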