850 results for Drama, theatre and performance studies


Relevance: 100.00%

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies

The thesis is devoted to the analysis, modelling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with developing techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional geo-feature spaces composed of geographical coordinates and additional relevant spatially referenced variables ("geo-features"). They are well suited to being implemented as predictive engines in decision support systems for environmental data mining, including pattern recognition, modelling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, and they are indispensable in high-dimensional geo-feature spaces.

The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to their software implementation. The main algorithms and models considered are the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation.

Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, exploratory spatial data analysis (ESDA) is treated both with the traditional geostatistical approach, experimental variography, and with machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect spatial patterns that can be described by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualisation properties.

An important part of the thesis deals with a topical problem: the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions.

The thesis consists of four chapters: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with a user-friendly and easy-to-use interface.
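As a hedged illustration of the kind of model the thesis proposes for automatic mapping, the sketch below implements a minimal general regression neural network (equivalently, Nadaraya-Watson kernel regression) for 2-D spatial interpolation in plain NumPy. The coordinates, measured values and bandwidth sigma are invented placeholders; this is not the thesis's Machine Learning Office implementation.

import numpy as np

def grnn_predict(train_xy, train_z, query_xy, sigma=1.0):
    """Minimal GRNN / Nadaraya-Watson estimate of z at the query locations.

    train_xy : (n, 2) array of training coordinates
    train_z  : (n,) array of measured values
    query_xy : (m, 2) array of prediction locations
    sigma    : kernel bandwidth (the single free parameter of a GRNN)
    """
    # Squared Euclidean distances between every query point and every training point.
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))              # Gaussian kernel weights
    return (w @ train_z) / np.clip(w.sum(axis=1), 1e-12, None)

# Toy example with made-up monitoring data (not from the thesis).
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(50, 2))
z = np.sin(xy[:, 0]) + 0.1 * rng.normal(size=50)
grid = np.array([[5.0, 5.0], [1.0, 9.0]])
print(grnn_predict(xy, z, grid, sigma=0.8))

In practice the bandwidth sigma would typically be tuned by cross-validation, which is how GRNN-type models are usually calibrated for automatic mapping tasks.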

Relevance: 100.00%

Abstract:

Under the circumstances of increasing market pressure, enterprises try to improve their competitive position through development efforts, and a business development project is one tool for that. There are few answers to the question of how development projects launched to improve business performance in SMEs have succeeded. Academic interest in business development project success has mainly focused on projects implemented in larger organisations rather than in SMEs, and previous studies on the business success of SMEs have mainly focused on new business ventures rather than on existing SMEs. However, a large number of business development projects are nowadays undertaken in existing SMEs, where they can pose a great challenge. This study focuses on business development success in SMEs that have already established their business. The objective of the present study is to gain a deep understanding of business development project success in the SME context and to identify the dimensions and factors affecting project success. A further aim is to clarify how business development projects implemented in SMEs have affected their performance. The empirical evidence is based on a multiple case study. The study builds a framework for a generic theory of business development success in the SME context, based on literature from the areas of project and change management, entrepreneurship and small business management, and performance measurement, as well as on empirical evidence from SMEs. The framework consists of five success dimensions: entrepreneurial, project preparation, change management, project management and project success. It provides a systematic way of analysing a business development project and its impact on performance and on the performing company. The case evidence indicates that successful business development projects perform well, and in a balanced way, across all the dimensions. Good performance in one dimension is not enough for project success, but it gives a good foundation for the other dimensions; conversely, poor performance in one success dimension affects the others, leading to poor performance of the project. In the SME context, business development project success thus seems to depend on several interrelated dimensions and factors. Success in one area leads to success in other areas, creating an upward success spiral, while failure in one area tends to lead to failure in other areas, creating a downward failure spiral. The study indicates that internal business development projects have affected the SMEs' performance widely, also in areas and functions not initially targeted. The implications cover all the success categories: project efficiency, impact on the customer, business success and future potential. In successful cases the success tends to spread to areas and functions not mentioned in the project goals, and in unsuccessful cases the failure seems to spread widely to the SMEs' other functions. The study also indicates that the most important factors for successful business development project implementation are the strength of intention, business ability, knowledge, motivation and participation of the employees, as well as adequate and well-timed training provided to the employees.

Relevance: 100.00%

Abstract:

The aim of the study was to examine the stages of building a company's web operations and the measurement of their success. The building process was studied with a five-step model whose steps are assessment, strategy formulation, planning, blueprint and implementation. To complement the assessment and implementation phases, and in particular to support the measurement of the success of internet operations, the benefits of internet operations (CRM, communication, sales and distribution channel benefits from a marketing perspective) were discussed. To support the evaluation of success, a stair model of internet operations was also presented; it defines the storefront, dynamic, transaction and e-business stairs. The study identified success factors for internet operations: high-quality content, attractiveness, entertainment value, informativeness, timeliness, personalisation, trust, interactivity, usability, convenience, loyalty, performance, responsiveness and the collection of user data. The metrics were divided into activity, behaviour and conversion metrics, and further metrics and success indicators were presented. These elements of success and the metrics were brought together in a new model for evaluating the success of internet operations. In the empirical part of the thesis, the presented theories were mirrored against the web operations of ABB (within ABB, especially ABB Stotz-Kontakt), using document analysis and interviews. The empirical part illustrated the theories in practice and revealed an opportunity to extend them. The model for building internet operations can also be used for developing existing web operations, and the stair model is also suitable for evaluating existing internet operations. Applying the metrics in practice, however, revealed a need to develop them further and to study the topic more; they should also be tied more closely to measuring the success of the business as a whole.
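As a hedged illustration of the three metric categories mentioned above (activity, behaviour and conversion metrics), the following sketch computes one example metric of each kind from hypothetical web-log aggregates; the variable names and figures are invented for illustration and do not come from the ABB case.

# Minimal sketch: one example metric per category discussed in the abstract.
# All numbers are hypothetical placeholders.
visits = 12_000           # activity: number of site visits in the period
page_views = 54_000       # activity: total page views
returning_visits = 4_800  # behaviour: visits by returning users
orders = 360              # conversion: completed transactions or leads

activity_pages_per_visit = page_views / visits
behaviour_return_rate = returning_visits / visits
conversion_rate = orders / visits

print(f"pages per visit:   {activity_pages_per_visit:.2f}")
print(f"return-visit rate: {behaviour_return_rate:.1%}")
print(f"conversion rate:   {conversion_rate:.1%}")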

Relevance: 100.00%

Abstract:

In this paper we describe a taxonomy of task demands which distinguishes between Task Complexity, Task Condition and Task Difficulty. We then describe three theoretical claims and predictions of the Cognition Hypothesis (Robinson 2001, 2003b, 2005a) concerning the effects of task complexity on: (a) language production; (b) interaction and the uptake of information available in the input to tasks; and (c) individual differences-task interactions. Finally, we summarize the findings of the empirical studies in this special issue, all of which address one or more of these predictions, and point to some directions for future research into the effects of task complexity on learning and performance.

Relevance: 100.00%

Abstract:

The dissertation is based on four articles dealing with the purification of recalcitrant lignin-containing waters. Lignin, a complicated substance that is recalcitrant to most treatment technologies, seriously hinders waste management in the pulp and paper industry. Therefore, lignin degradation is studied using wet oxidation (WO) as the process method. Special attention is paid to the improvement in biodegradability and the reduction of lignin content, since these are of special importance for any subsequent biological treatment. In most cases wet oxidation is used not as a complete mineralization method but as a pre-treatment to eliminate toxic components and to reduce the high load of organics. The combination of wet oxidation with a biological treatment can be a good option because of its effectiveness and relatively low cost.

The literature part gives an overview of advanced oxidation processes (AOPs). A hot oxidation process, wet oxidation, is examined in detail as the AOP used in the research: its background and main principles, industrial applications, combination with other water treatment technologies, principal reactions, and key aspects of modelling and reaction kinetics. Wood composition and lignin characterisation (chemical composition, structure and origin), lignin-containing waters, lignin degradation and reuse possibilities, and purification practices for lignin-containing waters are also covered.

The aim of the research was to investigate the effect of the WO operating conditions, such as temperature, oxygen partial pressure, pH and initial wastewater concentration, on the efficiency of the process, and to enhance the process and estimate optimal conditions for WO of recalcitrant lignin waters. Two different waters were studied (a lignin-water model solution and debarking water from the paper industry) to establish conditions that are as appropriate as possible. Because the reuse and minimisation of industrial residues is of great importance, further research was carried out using residual ash from an Estonian power plant as a catalyst in wet oxidation of lignin-containing water. Developing a kinetic model that includes parameters such as TOC in the prediction makes it possible to estimate the amount of emerging inorganic substances (the degradation rate of the waste) and not only the decrease of COD and BOD. The target compound, lignin, is included in the model through its COD value (COD_lignin). Such a kinetic model can be valuable in developing WO treatment processes for lignin-containing waters or other wastewaters containing one or more target compounds.

In the first article, wet oxidation of "pure" lignin water was investigated as a model case with the aim of degrading lignin and enhancing the biodegradability of the water. The experiments were performed at various temperatures (110-190 °C), oxygen partial pressures (0.5-1.5 MPa) and pH values (5, 9 and 12). The experiments showed that increasing the temperature notably improved the efficiency of the process: 75% lignin reduction was detected at the lowest temperature tested, and lignin removal improved to 100% at 190 °C. The effect of temperature on the COD removal rate was smaller but clearly detectable, with 53% of the organics oxidized at 190 °C. The effect of pH was seen mostly in lignin removal; increasing the pH enhanced the lignin removal efficiency from 60% to nearly 100%. A good biodegradability ratio (over 0.5) was generally achieved.

The aim of the second article was to develop a mathematical model for wet oxidation of "pure" lignin water using lumped water characteristics (COD, BOD, TOC) and the lignin concentration. The model agreed well with the experimental data (R² = 0.93 at pH 5 and 12), the predicted concentration changes during wet oxidation followed the experimental results adequately, and the model also reproduced the trend of the biodegradability (BOD/COD) changes.

In the third article, the purpose of the research was to estimate optimal conditions for wet oxidation of debarking water from the paper industry. The WO experiments were performed at various temperatures, oxygen partial pressures and pH values. The experiments showed that lignin degradation and organics removal are affected markedly by temperature and pH: 78-97% lignin reduction was detected under the different WO conditions. An initial pH of 12 caused faster removal of the tannin/lignin content, but an initial pH of 5 was more effective for the removal of total organics, represented by COD and TOC. Most of the decrease in organic substance concentrations occurred in the first 60 minutes.

The aim of the fourth article was to compare the behaviour of two reaction kinetic models on experiments of wet oxidation of industrial debarking water under different conditions. The simpler model took into account only the changes in COD, BOD and TOC; the advanced model was similar to the model used in the second article. Comparing the results, the second model was found more suitable for describing the kinetics of wet oxidation of debarking water. The significance of the reactions involved was compared on the basis of the model: for instance, lignin degraded first to other chemically oxidizable compounds rather than directly to biodegradable products.

Catalytic wet oxidation (CWO) of lignin-containing waters is briefly presented at the end of the dissertation. Two completely different catalysts were used: a commercial Pt catalyst and waste power plant ash. CWO performed well: using 1 g/L of residual ash gave 86% lignin removal and 39% COD removal at 150 °C (a lower temperature and pressure than with WO). It was noted that the ash catalyst already caused a remarkable lignin degradation rate during pre-heating; at 'zero' time, 58% of the lignin was degraded. In general, wet oxidation is recommended not as a complete mineralization method but as a pre-treatment to eliminate toxic or poorly biodegradable components and to reduce the high load of organics. Biological treatment is an appropriate post-treatment, since easily biodegradable organic matter remains after the WO process. The combination of wet oxidation with subsequent biological treatment can thus be an effective option for treating lignin-containing waters.
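As a hedged illustration of what a lumped kinetic model of this general kind can look like, the sketch below integrates a simple two-step first-order scheme in which oxidizable lignin COD is converted first to intermediate (biodegradable) COD and then to end products. The rate constants, initial values and the crude BOD/COD proxy are invented placeholders, not the fitted parameters reported in the articles.

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical first-order rate constants (1/min) -- placeholders only.
k1 = 0.030   # lignin COD -> intermediate (biodegradable) COD
k2 = 0.010   # intermediate COD -> CO2 + H2O (mineralization)

def rates(t, y):
    cod_lignin, cod_inter = y
    return [-k1 * cod_lignin,
            k1 * cod_lignin - k2 * cod_inter]

y0 = [600.0, 200.0]                        # initial COD fractions, mg O2/L (made up)
sol = solve_ivp(rates, (0.0, 120.0), y0, t_eval=np.linspace(0, 120, 7))

for t, cl, ci in zip(sol.t, sol.y[0], sol.y[1]):
    total_cod = cl + ci
    bod_cod = ci / total_cod               # crude biodegradability proxy (BOD ~ intermediate COD)
    print(f"t={t:5.0f} min  COD={total_cod:6.1f}  BOD/COD~{bod_cod:.2f}")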

Relevance: 100.00%

Abstract:

The @450 wireless broadband service is Digita's mobile wireless broadband network service. In the @450 network, Digita acts as the network operator, offering network capacity to service operators. For Digita it is important to know what kinds of services its network can support and what the network's service parameters are. Knowledge of the network parameters and behaviour can be used in advance in the development of new service products. Before a new service product can be offered to service operators, a lot of work has to be done: basic testing is necessary to gain an understanding of the basic functionality, a requirement specification has to be written, and the new product has to be created and tested. The test results have to be analysed to find out whether the new product is suitable for real use and with which limitations. This thesis covers the development of wireless technologies, the @450 service and network, FLASH-OFDM technology, FLASH-OFDM performance testing and the development of a new service product.

Relevance: 100.00%

Abstract:

The strength and nature of the video game practice effect on tests of visual and perceptual skills were examined using high-functioning Grade Four and Five students who had been tested with the WISC-R for the purpose of gifted identification and placement. The control group, who did not own and play video games on a sustained basis, and the experimental group, who did own a video game system and had some mastery of video games, including the Nintendo game "Tetris", were each composed of 18 junior-grade students and were chosen from pre-existing conditions. The experimental group corresponded to the control group in terms of age, sex and community. Data on the Verbal and Performance I.Q. scores were collected for both groups, and the author was interested in the difference between the Verbal and Performance scores within each group, anticipating a P > V outcome for the experimental group. The results showed a significant P > V difference in the experimental, video-game-playing group, as expected, but no significant difference between the Performance scores of the control and experimental groups. The results thus indicated lower Verbal I.Q. scores in the experimental group relative to the control group. The study concluded that information about a subject's video game experience and learning style preference is important for a clear interpretation of the Verbal and Performance I.Q. scores of the WISC-R. Although the time spent on video game play may indeed increase Performance scores relative to Verbal scores for an individual, the possibilities exist that the time borrowed and spent away from language-based activities may retard verbal growth, and/or that the cognitive style associated with some Performance I.Q. subtests may have a negative effect on the approach to the tasks on the Verbal I.Q. scale. The study also discussed the possibility that exposure to the video game experience in pre-puberty can provide spatial instruction that results in improved spatial skills. Strong spatial skills have been linked to improved performance and preference in mathematics, science and engineering, and it was suggested that appropriate video game play might be a way to involve girls more in the fields of mathematics and science.
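As a hedged illustration of the kind of comparison described above, the sketch below runs a paired t-test of Performance versus Verbal IQ within a group and an independent t-test of the P-V discrepancy between groups, using SciPy on invented scores; the numbers are placeholders, not the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical WISC-R scores for two groups of 18 students (placeholders).
verbal_exp = rng.normal(105, 10, 18)
perf_exp = verbal_exp + rng.normal(8, 6, 18)       # experimental: P tends to exceed V
verbal_ctl = rng.normal(112, 10, 18)
perf_ctl = verbal_ctl + rng.normal(0, 6, 18)       # control: no systematic P-V gap

# Within-group test of the P > V difference (paired, one-sided).
t_within, p_within = stats.ttest_rel(perf_exp, verbal_exp, alternative="greater")

# Between-group test of the P-V discrepancy.
t_between, p_between = stats.ttest_ind(perf_exp - verbal_exp, perf_ctl - verbal_ctl)

print(f"experimental P>V: t={t_within:.2f}, p={p_within:.3f}")
print(f"group difference in P-V: t={t_between:.2f}, p={p_between:.3f}")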

Relevance: 100.00%

Abstract:

This study examined the effectiveness of motor-encoding activities on the memory and performance of students in a Grade One reading program. There were two experiments. Experiment 1 replicated a study by Eli Saltz and David Dixon (1982): the effect of motoric enactment (i.e., pretend play) of sentences on memory for the sentences was investigated. Forty Grade One students performed a "memory-for-sentences" technique devised by Saltz and Dixon; only the experimental group used motoric enactment of the sentences. Although the quantitative findings revealed no significant difference between the mean scores of the experimental and control groups, aspects of the experimental design could have affected the results. It was suggested that Saltz and Dixon's study could be replicated again, with more attention given to variables such as population size, the nature of the test sentences, the subjects' previous educational experience and conditions related to the testing environment. The second experiment was an application of Saltz and Dixon's theory that motoric imagery should facilitate memory for sentences. The intent was to apply this theory to Grade One students' ability to remember words from their reading program. An experimental gym program was developed using kinesthetic activities to reinforce the skills of the classroom reading program; the same subject group was used in Experiment 2. It was hypothesized that the subjects who experienced the experimental gym program would show greater progress in reading ability, as evidenced by their scores on Form G of the Woodcock Reading Mastery Test-Revised (WRMT-R). The WRMT-R data were analysed with a three-way split-plot analysis of variance in which group (experimental vs. control) and sex were the between-subjects variables and test time (pre-test vs. post-test) was the within-subjects variable. The findings revealed the following: (a) both groups made substantial gains over time on the visual-auditory learning subtest, and the three-way interaction of group x sex x time was also significant; (b) children in the experimental and control groups performed similarly on both the pre- and post-test of the letter identification test; (c) time was the only significant effect on the subjects' performance on the word identification task; (d) word attack scores showed marked improvement over time for both the experimental and control groups; (e) passage comprehension scores indicated an improvement for both groups over time. As in Experiment 1, it is suggested that several modifications to the experimental design could produce significant results. These factors are addressed with suggestions for further research in the area of active learning, more specifically the effect of motor-encoding activities on the memory and academic performance of children.
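A minimal sketch of one common way to fit a design like the split-plot ANOVA described above: a linear mixed model in statsmodels with a random intercept per subject, group and sex as between-subjects factors and time as the within-subjects factor. The data frame, column names and scores are hypothetical, not the study's data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Hypothetical long-format data: 40 subjects x 2 test times (placeholders).
n = 40
subjects = np.repeat(np.arange(n), 2)
time = np.tile(["pre", "post"], n)
group = np.repeat(rng.choice(["experimental", "control"], n), 2)
sex = np.repeat(rng.choice(["F", "M"], n), 2)
score = rng.normal(100, 10, 2 * n) + (time == "post") * 5   # small made-up gain over time

df = pd.DataFrame({"subject": subjects, "group": group, "sex": sex,
                   "time": time, "score": score})

# The random intercept per subject handles the repeated (within-subject) factor.
model = smf.mixedlm("score ~ group * sex * time", df, groups=df["subject"])
result = model.fit()
print(result.summary())

A dedicated split-plot ANOVA routine would report the same fixed effects; the mixed-model formulation is simply one widely available way to fit this kind of design.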

Relevance: 100.00%

Abstract:

Imaging studies have shown reduced frontal lobe resources following total sleep deprivation (TSD). The anterior cingulate cortex (ACC) in the frontal region plays a role in performance monitoring and cognitive control; both error detection and response inhibition are impaired following sleep loss. Event-related potentials (ERPs) are an electrophysiological tool used to index the brain's response to stimuli and information processing. In the Flanker task, the error-related negativity (ERN) and error positivity (Pe) ERPs are elicited after erroneous button presses. In a Go/NoGo task, NoGo-N2 and NoGo-P3 ERPs are elicited during high-conflict stimulus processing. Research investigating the impact of sleep loss on ERPs during performance monitoring is equivocal, possibly due to task differences, sample size differences and varying degrees of sleep loss. Based on the effects of sleep loss on frontal function and prior research, it was expected that the sleep-deprived group would have lower accuracy, slower reaction times and impaired remediation on performance monitoring tasks, along with attenuated and delayed stimulus- and response-locked ERPs. In the current study, 49 young adults (24 male) were screened to be healthy good sleepers and then randomly assigned to a sleep-deprived (n = 24) or rested control (n = 25) group. Participants slept in the laboratory on a baseline night, followed by a second night of sleep or wake. Flanker and Go/NoGo tasks were administered in a battery at 10:30 am (i.e., 27 hours awake for the sleep deprivation group) to measure performance monitoring. On the Flanker task, the sleep-deprived group was significantly slower than controls (ps < .05), but the groups did not differ on accuracy. No group differences were observed in post-error slowing, but a trend was observed for less remedial accuracy in the sleep-deprived group compared to controls (p = .09), suggesting impairment in the ability to take remedial action following TSD. Delayed P300s were observed in the sleep-deprived group on congruent and incongruent Flanker trials combined (p = .001). On the Go/NoGo task, the hit rate (i.e., Go accuracy) was significantly lower in the sleep-deprived group compared to controls (p < .001), but no differences were found in false alarm rates (i.e., NoGo accuracy). For the sleep-deprived group, the Go-P3 was significantly smaller (p = .045) and there was a trend toward a smaller NoGo-N2 compared to controls (p = .08). The ERN amplitude was reduced in the TSD group compared to controls in both the Flanker and Go/NoGo tasks. Error rate was significantly correlated with the amplitude of response-locked ERNs in the control (r = -.55, p = .005) and sleep-deprived groups (r = -.46, p = .021); error rate was also correlated with Pe amplitude in controls (r = .46, p = .022), and a trend was found in the sleep-deprived participants (r = .39, p = .052). An exploratory analysis showed significantly larger mean Pe amplitudes (p = .025) in the sleep-deprived group compared to controls for participants who made more than 40 errors on the Flanker task. Altered stimulus processing, as indexed by delayed P3 latency during the Flanker task and smaller-amplitude Go-P3s during the Go/NoGo task, indicates impairment in stimulus evaluation and/or context updating during frontal lobe tasks. ERN and NoGo-N2 reductions in the sleep-deprived group confirm impairments in the monitoring system. These data add to a body of evidence showing that the frontal brain region is particularly vulnerable to sleep loss. Understanding the neural basis of these deficits in performance monitoring abilities is particularly important for our increasingly sleep-deprived society and for safety and productivity in situations such as driving and sustained operations.
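As a hedged illustration of how a response-locked component such as the ERN is typically quantified, the sketch below averages simulated EEG epochs time-locked to error responses, measures the mean amplitude in a post-response window, and correlates it with error rate across participants. The sampling rate, window, waveform and trial counts are invented, and this is not the study's analysis pipeline.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
fs = 250                                   # sampling rate, Hz (assumed)
t = np.arange(-0.2, 0.6, 1 / fs)           # epoch: -200 ms to +600 ms around the response

def ern_amplitude(epochs):
    """Mean amplitude of the response-locked average in the 0-100 ms window (a common ERN measure)."""
    erp = epochs.mean(axis=0)              # average across error trials
    win = (t >= 0.0) & (t <= 0.1)
    return erp[win].mean()

# Simulate 20 participants, each with error epochs containing a negative post-response deflection.
amplitudes, error_rates = [], []
for _ in range(20):
    n_err = rng.integers(10, 60)
    noise = rng.normal(0, 2, (n_err, t.size))
    ern_wave = -6 * np.exp(-((t - 0.05) ** 2) / (2 * 0.02 ** 2))   # toy ERN shape
    amplitudes.append(ern_amplitude(noise + ern_wave))
    error_rates.append(n_err / 400)        # assume 400 trials per participant

r, p = pearsonr(error_rates, amplitudes)
print(f"error rate vs ERN amplitude: r = {r:.2f}, p = {p:.3f}")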

Relevance: 100.00%

Abstract:

Laser-induced damage is the principal limiting constraint in the design and operation of high-power laser systems used in fusion and other high-energy laser applications. Therefore, an understanding of the mechanisms that cause radiation damage to the components employed in building a laser, and a knowledge of the damage thresholds of these materials, are of great importance in designing a laser system and operating it without appreciable degradation in performance. Although this thesis covers three distinct problems investigated using a dye Q-switched multimode Nd:glass laser operating at 1062 nm and emitting 25 ns (FWHM) pulses, its main thrust lies in damage threshold studies on thin films. Using the same glass laser, two-photon excited fluorescence in rhodamine 6G and the generation and characterisation of a carbon plasma have also been studied. The thesis is presented in seven chapters.


Relevance: 100.00%

Abstract:

Multicultural leadership is a topic of great interest in today's globalized work environment. Colombia emerges as an attractive marketplace with appealing business opportunities, especially for German enterprises. After presenting Colombia's current political, social and economic situation, the thesis elaborates the complex subject of cultural differences, focusing on the peculiarities of German and Colombian national cultures. The resulting implications for a team's collaboration and leader effectiveness are theoretically supported with reference to the landmark studies of Hofstede and GLOBE. Using semi-structured interview techniques, a qualitative study enriches the previous findings and gives an all-encompassing insight into German-Colombian teamwork. The investigation identifies distinctive behavioural patterns and relations, which imply challenges and success factors for multicultural team leaders. Finally, a categorical analysis examines the influence of cultural traits on team performance and evaluates the effectiveness of the applied leadership style.

Relevance: 100.00%

Abstract:

The objective of this thesis is to predict the performance of doctoral students at the University of Girona from the students' personal (background), attitudinal and social network characteristics. The population studied consists of third- and fourth-year doctoral students and their thesis supervisors. To obtain the data, a web questionnaire was designed, specifying its advantages and taking into account some traditional problems of non-coverage and non-response. A web questionnaire was chosen because of the complexity of the social network questions: the electronic questionnaire allows, through a series of instructions, the response time to be reduced and the task to be made less burdensome. Moreover, the web questionnaire is self-administered, which, according to the literature, yields more honest answers than an interviewer-administered questionnaire. The quality of social network questions in web questionnaires for egocentric data is analysed. For this purpose, the reliability and validity of this type of question are computed, for the first time, through the Multitrait-Multimethod model. Since the data are egocentric, they can be considered hierarchical, and for the first time a multilevel Multitrait-Multimethod model is used. Reliability and validity can be obtained at the individual level (within-group component) or at the group level (between-group component), and they are used to carry out a meta-analysis with other European universities to analyse certain questionnaire design characteristics: whether social network questions asked in web questionnaires are more reliable and valid when asked "by questions" or "by alters", whether all the frequency labels for the items are shown or only the first and the last, and whether a colour or a black-and-white questionnaire design is better. The quality of the social network as a whole is also analysed; in this specific case the networks are the university's research groups. The problems of missing data in complete networks are addressed, and a new alternative to the typical solutions of the egocentric network or proxy respondents is proposed. This alternative, named the "Nosduocentered Network", is based on two central actors in a network. When regression models are estimated, the Nosduocentered network has more predictive power for the performance of doctoral students than the egocentric network. In addition, the correlations of the attitudinal variables are corrected for attenuation due to the small sample size. Finally, regressions are run on the three types of variables (background, attitudinal and social network) and then combined to analyse which best predicts the performance (in terms of academic publications) of doctoral students. The results indicate that the academic performance of doctoral students depends on personal (background) and attitudinal variables. The results obtained are also compared with other published studies.
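As a hedged illustration of the classical correction for attenuation mentioned above (Spearman's formula, which disattenuates an observed correlation using the reliabilities of the two measures), here is a minimal sketch; the reliability and correlation values are invented placeholders, not the thesis's estimates.

def disattenuate(r_observed, reliability_x, reliability_y):
    """Spearman's correction for attenuation: r_true = r_xy / sqrt(rel_x * rel_y)."""
    return r_observed / (reliability_x * reliability_y) ** 0.5

# Hypothetical values: observed correlation between an attitudinal scale and
# publication count, with MTMM-style reliability estimates for each measure.
r_obs = 0.32
rel_attitude = 0.70
rel_performance = 0.85

print(f"corrected correlation: {disattenuate(r_obs, rel_attitude, rel_performance):.2f}")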

Relevance: 100.00%

Abstract:

Howard Barker is a writer who has made several notable excursions into what he calls 'the charnel house…of European drama.' David Ian Rabey has observed that a compelling property of these classical works lies in what he calls 'the incompleteness of [their] prescriptions', and Barker's Women Beware Women (1986), Seven Lears (1990) and Gertrude: The Cry (2002) are in turn built around the gaps and interstices found in Thomas Middleton's Women Beware Women (c1627), Shakespeare's King Lear (c1604) and Hamlet (c1601) respectively. This extends from representing the missing queen of King Lear, who, Barker observes, 'is barely quoted even in the depths of rage or pity', to his new ending for Middleton's Jacobean tragedy and the erotic revivification of Hamlet's mother. This paper will argue that each modern reappropriation accentuates a hidden but powerful feature of these Elizabethan and Jacobean plays, namely the clash of obsessive desire, sexual transgression and death against the imposed restitution of a prescribed morality. This contradiction acts as the basis for Barker's own explorations of eroticism, death and tragedy. The paper will also discuss Barker's project for these 'antique texts', one that goes beyond what he derisively calls 'relevance' and attempts instead to recover 'smothered genius', whereby the transgressive is 'concealed within structures that lend an artificial elegance.' Together with Barker's own rediscovery of tragedy, the paper will assert that these rewritings of Elizabethan and Jacobean drama expose their hidden yet unsettling and provocative ideologies concerning the relationship between political corruption/justice and the power of sexuality (notably the allure and danger of the mature woman), and an erotics of death that produces tragedy for the contemporary age.