814 results for Optical detection systems


Relevance:

80.00%

Publisher:

Abstract:

The Differential Phase Shift Keying (DPSK) modulation format has been shown to be a robust solution for next-generation optical transmission systems. One key device enabling such systems is the delay interferometer, which converts the phase-modulated signal into an intensity-modulated signal that can be detected by photodiodes. Usually, a standard Mach-Zehnder interferometer (MZI) is used to demodulate a DPSK signal. In this paper, we develop an MZI based on an all-fiber Multimode Interference (MI) structure: a multimode fiber (MMF) with a central dip, located between two single-mode fibers (SMFs) without any transition zones. The MI-based MZI (MI-MZI) is more stable than the standard MZI because the two arms share the same MMF, which reduces the impact of external effects such as temperature. The performance of this MI-MZI is analyzed theoretically and experimentally from its transmission spectrum. Experimental results show that a high interference extinction ratio is obtained, far higher than that of a normal graded-index based MI-MZI. Finally, by software simulation, we demonstrate that the proposed MI-MZI can be used to demodulate a 40 Gbps DPSK signal. The performance of the MI-MZI based DPSK receiver is analyzed in terms of sensitivity. Simulation results show that the sensitivity of the proposed receiver is about -22.3 dBm for a BER of 10⁻¹⁵ and about -23.8 dBm for a BER of 10⁻⁹.
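
The one-bit delay-interferometer demodulation described above can be illustrated with a short numerical sketch (Python/NumPy, illustrative parameters only, not the authors' MI-MZI model): the DPSK field is interfered with a copy of itself delayed by one symbol period, so the constructive and destructive ports directly yield the differentially decoded bits.

    # Illustrative sketch of one-bit delay-interferometer DPSK demodulation.
    # Hypothetical parameters; not the MI-MZI model from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, 32)                    # data bits
    diff = np.cumsum(bits) % 2                       # differential precoding
    field = np.exp(1j * np.pi * diff)                # DPSK: phase 0 or pi per symbol

    delayed = np.roll(field, 1)                      # one-symbol delay arm of the MZI
    constructive = np.abs(field + delayed) ** 2 / 4  # intensity at constructive port
    destructive = np.abs(field - delayed) ** 2 / 4   # intensity at destructive port

    decoded = (destructive > constructive).astype(int)  # balanced-detection decision
    print(np.array_equal(decoded[1:], bits[1:]))     # True: bits recovered after the first symbol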

Relevance:

80.00%

Publisher:

Abstract:

Illumination with light-emitting diodes (LEDs) is increasingly replacing traditional light sources. LED lighting offers advantages in efficiency, energy consumption, design, size and light quality. For more than 50 years, researchers have been working on LED improvements, and their relevance for illumination is growing rapidly. This thesis focuses on one important field of application: spotlights. They are used to focus light on defined areas and outstanding objects under professional conditions. Such high-performance illumination requires a defined light quality, including tunable correlated color temperatures (CCT), a high color rendering index (CRI), high efficiency, and bright, vivid colors. Several differently colored chips (red, blue, phosphor-converted) are combined in the LED package to meet a spectral power distribution with high CRI, tunable white and several light colors, and secondary optics are used to collimate the light into the desired narrow spots with a defined angle of emission. The combination of a multi-color LED source with optical elements may cause chromatic inhomogeneities in the spatial and angular light distribution, which need to be solved in the optical design. However, there is no need for perfect uniformity in the light spot, due to thresholds in the visual perception of the human eye. Therefore, a mathematical description of the level of color uniformity with regard to visual perception is required. This thesis is organized in seven chapters. After an initial chapter presenting the motivation that has guided this research, Chapter 2 introduces the scientific basics of color uniformity in spotlights: the applied color space CIELAB, visual color perception, spotlight design fundamentals with regard to light engines and nonimaging optics, and the state of the art in the evaluation of color uniformity in the far field of spotlights. Chapter 3 develops different methods for the mathematical description of the spatial color distribution in a defined area: the maximum color difference, the average color deviation, the gradient of the spatial color distribution, and the radial and axial smoothness. Each function refers to different visual influencing factors and requires a different handling of the data to be taken into account, along with weighting functions that pre- and post-process the simulated or measured data: noise reduction, luminance cutoff, luminance weighting, the contrast sensitivity function, and the cumulative distribution function. In Chapter 4, the merit function Usl for the estimation of the perceived color uniformity in spotlights is derived. It is based on the results of two sets of human factor experiments performed to evaluate the subjects' visual perception of typical spotlight patterns. The first human factor experiment yielded the perceived rank order of the spotlights. This rank order was used to correlate the mathematical descriptions of the basic functions and the weighted function of the spatial color distribution, which led to the Usl function. The second human factor experiment tested the perception of spotlights under varied environmental conditions, with the objective of providing an absolute scale for Usl, so that the subjective personal opinion of individuals could be replaced by a standardized merit function. The validation of the Usl function, concerning its application range and conditions as well as its limitations and restrictions, is carried out in Chapter 5. Measured and simulated data of several optical systems were compared, and fields of application are discussed together with validations and restrictions of the function. Chapter 6 presents spotlight system design and optimization. An evaluation shows the analysis of reflector-based and TIR-lens systems. The simulated optical systems are compared in terms of color uniformity Usl, sensitivity to colored shadows, efficiency, and peak luminous intensity. It was found that no single system performed best in all categories, and that excellent color uniformity could be reached by two different system assemblies. Finally, Chapter 7 summarizes the conclusions of this thesis and gives an outlook on further research topics.
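
A minimal sketch (Python/NumPy, with assumed data and names, not the thesis code) of two of the basic descriptors mentioned for Chapter 3, the maximum color difference and the average color deviation of a spot, computed from a spatial CIELAB map:

    # Sketch: basic color-uniformity descriptors over a spatial CIELAB map.
    # 'lab' is a hypothetical H x W x 3 array of CIELAB values sampled across the spot.
    import numpy as np

    rng = np.random.default_rng(1)
    lab = np.dstack([
        70 + rng.normal(0, 1.0, (64, 64)),   # L*
        rng.normal(0, 0.8, (64, 64)),        # a*
        rng.normal(0, 0.8, (64, 64)),        # b*
    ])

    mean_lab = lab.reshape(-1, 3).mean(axis=0)
    delta_e = np.linalg.norm(lab - mean_lab, axis=2)   # CIE76 color difference to the spot average

    max_color_difference = delta_e.max()               # worst-case deviation in the area
    average_color_deviation = delta_e.mean()           # average deviation over the area
    print(max_color_difference, average_color_deviation)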

Relevance:

80.00%

Publisher:

Abstract:

Histograms of Oriented Gradients (HOG) provide excellent results in object detection and verification. However, their demanding processing requirements limit their applicability in some critical real-time scenarios, such as video-based on-board vehicle detection systems. In this work, an efficient HOG configuration for pose-based on-board vehicle verification is proposed, which reduces both the processing requirements and the required feature vector length without degrading classification performance. The impact of some critical configuration and processing parameters on classification is analyzed in depth in order to propose a baseline efficient descriptor. Based on the analysis of the contribution of its cells to classification, new view-dependent cell-configuration patterns are proposed, resulting in reduced descriptors which provide an excellent balance between performance and computational requirements, yielding higher verification rates than other works in the literature.
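
As a rough illustration of the kind of cell selection discussed above (not the authors' configuration), the sketch below computes a HOG descriptor with scikit-image and then keeps only a hypothetical subset of cell rows, shortening the feature vector:

    # Sketch: HOG descriptor with a reduced, view-dependent cell selection.
    # The kept rows are purely illustrative, not the configuration proposed in the paper.
    import numpy as np
    from skimage.feature import hog
    from skimage import data, transform

    image = transform.resize(data.camera(), (64, 64))          # stand-in for a vehicle patch
    cells = hog(image, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(1, 1), feature_vector=False)  # shape: (8, 8, 1, 1, 9)

    full_descriptor = cells.ravel()                            # 8 x 8 x 9 = 576 features
    kept_rows = [4, 5, 6, 7]                                   # e.g. lower half of the patch
    reduced_descriptor = cells[kept_rows].ravel()              # 4 x 8 x 9 = 288 features
    print(full_descriptor.size, reduced_descriptor.size)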

Relevance:

80.00%

Publisher:

Abstract:

Attacks on information networks are increasingly sophisticated and demand constant evolution and improvement of detection techniques. To this end, this project designs and implements a cooperative platform for network-based intrusion detection. First, a theoretical study of the related technological framework was carried out, describing and characterizing the software used to attack systems (malware) as well as the methods used to deliver that software (attack vectors). The document also describes APTs, targeted attacks backed by a large investment of money and time, which may combine any existing malware and attack vectors. To counter such attacks, intrusion detection and prevention systems are studied, briefly describing the algorithms most commonly used today. Secondly, a networked platform dedicated to the analysis of packets and connections has been proposed and developed to detect possible intrusions. The system is oriented towards SCADA (Supervisory Control And Data Acquisition) systems, although it works on any IPv4/IPv6 network; what a SCADA system is, and its main parts, are defined beforehand. The system is implemented on low-power devices (Raspberry Pi) placed between the network and the end device to be analyzed. They run two client-server applications developed for this project (the central Raspberry Pi runs the server application and the slave nodes run the client application), which cooperate using the Hadoop distributed framework, explained earlier in the document; this makes the system fully scalable. The server application provides a graphical interface for managing the analysis platform centrally, showing the alarms of each device and rating each packet according to how dangerous it is. The algorithm developed in the application computes the ratio of packets per unit time entering and leaving the end device, processing the packets and analyzing their signaling information, and builds databases that progressively improve the robustness of the system, thereby reducing the possibility of external attacks. Finally, the initial project included running the main application in the cloud, so that several infrastructures could be managed concurrently; due to the extra work required, the system has been left prepared for this functionality to be implemented later. In the current experimental setup, the server application runs on the main Raspberry Pi, resulting in a scalable, fast and fault-tolerant system.
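
The packets-per-time metric described above can be illustrated with a minimal sketch (Python, hypothetical counters and thresholds, not the project's code): inbound and outbound packets are counted over a sliding window and the in/out ratio is compared against a configurable threshold to raise an alarm.

    # Sketch: sliding-window packets/time ratio for a monitored host.
    # Thresholds and window length are illustrative, not the project's values.
    from collections import deque
    import time

    WINDOW_S = 10.0          # sliding window length in seconds
    RATIO_ALARM = 5.0        # inbound/outbound ratio considered suspicious

    inbound, outbound = deque(), deque()

    def record(direction, now=None):
        """Record one packet ('in' or 'out'); return True if an alarm should be raised."""
        now = time.monotonic() if now is None else now
        (inbound if direction == "in" else outbound).append(now)
        for q in (inbound, outbound):                 # drop packets older than the window
            while q and now - q[0] > WINDOW_S:
                q.popleft()
        ratio = len(inbound) / max(len(outbound), 1)  # packets in vs. packets out in the window
        return ratio > RATIO_ALARM

    # Example: a burst of inbound packets with no replies triggers the alarm.
    alarm = [record("in", t * 0.01) for t in range(100)][-1]
    print(alarm)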

Relevance:

80.00%

Publisher:

Abstract:

The stream-mining approach is defined as a set of cutting-edge techniques designed to process streams of data in real time in order to extract knowledge. In the particular case of classification, stream mining has to adapt its behaviour to volatile underlying data distributions, which has been called concept drift. Moreover, concept drift may lead to situations where predictive models become invalid and therefore have to be updated to represent the actual concepts in the data. In this context, there is a specific type of concept drift, known as recurrent concept drift, where the concepts represented by the data have already appeared in the past. In those cases the learning process could be saved, or at least minimized, by applying a previously trained model. This could be extremely useful in ubiquitous environments characterized by resource-constrained devices. To deal with this scenario, meta-models can be used to enhance the drift detection mechanisms of data stream algorithms, by representing and predicting when a change will occur. There are real-world situations where a concept reappears, as in the case of intrusion detection systems (IDS), where the same incidents, or adaptations of them, usually reappear over time. In these environments the early prediction of drift, by means of better knowledge of past models, can help to anticipate the change, thus improving the efficiency of the model in terms of the training instances needed. Using meta-models as a recurrent drift detection mechanism opens up the ability to share concept representations among different data mining processes. Such exchanges could improve the accuracy of the resulting local model, as it may benefit from patterns similar to the local concept that were observed in other scenarios but not yet locally. They would also improve the efficiency of the training instances used during the classification process, since the exchange of models would aid in applying already trained recurrent models previously seen by any of the collaborating devices; that is to say, the scope of recurrence detection and representation is broadened. In fact, the detection, representation and exchange of concept drift patterns would be extremely useful for law enforcement activities fighting cyber crime. Information exchange being one of the main pillars of cooperation, national units would benefit from the experience and knowledge gained by third parties. Moreover, in the specific scope of critical infrastructure protection it is crucial to have information exchange mechanisms, both at the strategic and the technical level. The exchange of concept drift detection schemes in cyber security environments would aid in preventing, detecting and effectively responding to threats in cyberspace. Furthermore, as a complement to meta-models, a mechanism to assess the similarity between classification models is also needed when dealing with recurrent concepts. In this context, when reusing a previously trained model a rough comparison between concepts is usually made, applying boolean logic. The introduction of fuzzy logic comparisons between models could lead to a more efficient reuse of previously seen concepts, by applying not just equal models but also similar ones. 
This work faces the aforementioned open issues by means of: the MMPRec system, which integrates a meta-model mechanism and a fuzzy similarity function; a collaborative environment to share meta-models between different devices; and a recurrent drift generator that allows the usefulness of recurrent drift systems, such as MMPRec, to be tested. Moreover, this thesis presents an experimental validation of the proposed contributions using synthetic and real datasets.
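
As an illustration of the fuzzy model-similarity idea (a sketch with assumed names and data, not the MMPRec implementation), two classifiers can be compared by their agreement on a recent reference window, giving a similarity degree in [0, 1] rather than a boolean equal/not-equal decision; a stored model is reused when the degree exceeds a threshold.

    # Sketch: fuzzy similarity between a current and a stored classifier,
    # measured as prediction agreement on a recent reference window.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(2)
    X_old = rng.normal(size=(300, 4)); y_old = (X_old[:, 0] > 0).astype(int)
    X_new = rng.normal(size=(300, 4)); y_new = (X_new[:, 0] > 0.1).astype(int)  # slightly drifted concept

    stored = DecisionTreeClassifier(max_depth=3).fit(X_old, y_old)   # model kept in the repository
    current = DecisionTreeClassifier(max_depth=3).fit(X_new, y_new)  # model for the concept just detected

    def fuzzy_similarity(m1, m2, window):
        """Degree in [0, 1]: fraction of the reference window on which both models agree."""
        return float(np.mean(m1.predict(window) == m2.predict(window)))

    REUSE_THRESHOLD = 0.8        # illustrative membership cut-off
    sim = fuzzy_similarity(stored, current, X_new)
    print(sim, "reuse stored model" if sim >= REUSE_THRESHOLD else "train a new model")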

Relevance:

80.00%

Publisher:

Abstract:

There is a significant clinical need to identify novel ligands with high selectivity and potency for GABA(A), GABA(C) and glycine receptor Cl⁻ channels. Two recently developed yellow fluorescent protein variants (YFP-I152L and YFP-V163S) are highly sensitive to quenching by small anions and are thus suited to reporting anion influx into cells. The aim of this study was to establish the optimal conditions for using these constructs for high-throughput screening of GABA(A), GABA(C) and glycine receptors transiently expressed in HEK293 cells. We found that a 70% fluorescence reduction was achieved by quenching YFP-I152L with a 10 s influx of I⁻ ions, driven by an external I⁻ concentration of at least 50 mM. The fluorescence quench was rapid, with a mean time constant of 3 s. These responses were similar for all anion receptor types studied. We also show that the assay is sufficiently sensitive to measure agonist and antagonist concentration-responses using either imaging- or photomultiplier-based detection systems. The robustness, sensitivity and low cost of this assay render it suited to high-throughput screening of transiently expressed anionic ligand-gated channels. (c) 2005 Elsevier Ireland Ltd. All rights reserved.
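
The quoted time constant can be read as the τ of a single-exponential quench; a minimal fitting sketch (Python/SciPy, simulated data, not the study's analysis pipeline) is:

    # Sketch: extracting a quench time constant by fitting a single exponential
    # F(t) = F_inf + (F0 - F_inf) * exp(-t / tau) to a simulated fluorescence trace.
    import numpy as np
    from scipy.optimize import curve_fit

    def quench(t, f0, f_inf, tau):
        return f_inf + (f0 - f_inf) * np.exp(-t / tau)

    t = np.linspace(0, 10, 200)                                     # 10 s recording
    rng = np.random.default_rng(3)
    trace = quench(t, 1.0, 0.3, 3.0) + rng.normal(0, 0.01, t.size)  # ~70% quench, tau = 3 s

    popt, _ = curve_fit(quench, t, trace, p0=(1.0, 0.3, 1.0))
    print("fitted tau (s):", round(popt[2], 2))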

Relevance:

80.00%

Publisher:

Abstract:

A novel direct integration technique of the Manakov-PMD equation for the simulation of polarisation mode dispersion (PMD) in optical communication systems is demonstrated and shown to be numerically as efficient as the commonly used coarse-step method. The main advantage of using a direct integration of the Manakov-PMD equation over the coarse-step method is a higher accuracy of the PMD model. The new algorithm uses precomputed M(ω) matrices to increase the computational speed compared to a full integration without loss of accuracy. The simulation results for the probability distribution function (PDF) of the differential group delay (DGD) and the autocorrelation function (ACF) of the polarisation dispersion vector for varying numbers of precomputed M(ω) matrices are compared to analytical models and results from the coarse-step method. It is shown that the coarse-step method achieves a significantly inferior reproduction of the statistical properties of PMD in optical fibres compared to a direct integration of the Manakov-PMD equation.
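
For reference, the analytical model against which simulated DGD statistics are usually compared in such PMD studies (a standard result quoted here on that assumption, since the abstract does not spell it out) is the Maxwellian distribution of the differential group delay:

    % Maxwellian PDF of the DGD \Delta\tau, with scale parameter \sigma;
    % the mean DGD is \langle\Delta\tau\rangle = 2\sigma\sqrt{2/\pi}.
    p(\Delta\tau) = \sqrt{\frac{2}{\pi}}\,
                    \frac{\Delta\tau^{2}}{\sigma^{3}}\,
                    \exp\!\left(-\frac{\Delta\tau^{2}}{2\sigma^{2}}\right),
    \qquad \Delta\tau \ge 0 .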

Relevance:

80.00%

Publisher:

Abstract:

We present a theory of coherent propagation and energy or power transfer in a low-dimensional array of coupled nonlinear waveguides. It is demonstrated that in an array with nonequal cores (e.g., with a central core) stable steady-state coherent multicore propagation is possible only in the nonlinear regime, with a power-controlled phase matching. The developed theory of energy or power transfer in nonlinear discrete systems is rather generic and has a range of potential applications, including both high-power fiber lasers and ultrahigh-capacity optical communication systems. © 2012 American Physical Society.
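
The kind of discrete model involved can be written, in a generic form not taken from the paper, as a discrete nonlinear Schrödinger system for the amplitudes A_n in a ring of identical cores coupled to a distinct central core B:

    % Generic discrete NLSE for a ring of identical cores (amplitudes A_n)
    % coupled with strength C to their neighbours and with strength C_0
    % to a central core B; gamma, gamma_0 are Kerr nonlinearity coefficients.
    i\frac{dA_{n}}{dz} + C\,(A_{n+1} + A_{n-1}) + C_{0}\,B + \gamma\,|A_{n}|^{2}A_{n} = 0,
    \qquad
    i\frac{dB}{dz} + C_{0}\sum_{n} A_{n} + \gamma_{0}\,|B|^{2}B = 0 .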

Relevance:

80.00%

Publisher:

Abstract:

We propose and demonstrate novel virtual Gires–Tournois (GT) etalons based on fiber gratings. By introducing an additional phase modulation in wideband linearly chirped fiber Bragg gratings, we have successfully generated GT resonance with only one grating. This technique can simplify the fabrication procedure while retaining the normal advantages of distributed etalons, including their full compatibility with optical fiber, low insertion loss, and low cost. Such etalons can be used as dispersion compensation devices in optical transmission systems.

Relevance:

80.00%

Publisher:

Abstract:

This thesis presents the results of numerical modelling of ultra high-speed transmission using dispersion managed (DM) solitons. The theory of propagation in optical fibres is presented with specific reference to optical communication systems. This theory is then expanded to incorporate dispersion-managed transmission and the dispersion managed soliton. The first part of this work focuses on ultra high-speed dispersion managed soliton propagation in short period dispersion maps. Initially, the characteristics of dispersion managed soliton propagation in short period dispersion maps are contrasted with those of the more conventional dispersion managed regime. These properties are then utilised to investigate transmission at single channel data rates of 80 Gbit/s, 160 Gbit/s and 320 Gbit/s. For all three data rates, the tolerable limits for transmission over 1000 km, 3000 km and transoceanic distances are defined. A major limitation of these higher bit rate systems arises from the problem of noise-induced interactions, where the accumulation of timing jitter causes neighbouring dispersion managed solitons to interact. In addition, the systems become more sensitive to initial conditions as the data rate increases. The second part of the work focuses on contrasting the performance of a range of propagation regimes, from quasi-linear through to soliton-like propagation, at 40 Gbit/s for both single channel and WDM dispersion managed transmission. The results indicate that whilst the optimal single channel performance was achieved for soliton-like propagation, the optimal WDM performance was achieved for a propagation regime lying between quasi-linear and soliton-like.

Relevance:

80.00%

Publisher:

Abstract:

This thesis presents a theoretical investigation of applications of the Raman effect in optical fibre communication, as well as the design and optimisation of various Raman-based devices and transmission schemes. The techniques used are mainly based on numerical modelling. The results presented in this thesis are divided into three main parts. First, novel designs of Raman fibre lasers (RFLs) based on Phosphosilicate core fibre are analysed and optimised for efficiency using a discrete power balance model. The designs include a two-stage RFL based on Phosphosilicate core fibre for telecommunication applications, a composite RFL for the 1.6 μm spectral window, and a multiple-output-wavelength RFL intended as a compact pump source for flat-gain Raman amplifiers. The use of Phosphosilicate core fibre is proven to effectively reduce the design complexity and hence leads to better efficiency, stability and potentially lower cost. Second, a generalised Raman amplified gain model approach based on the power balance analysis and direct numerical simulation is developed. The approach can be used to effectively simulate optical transmission systems with distributed Raman amplification. Last, the potential employment of a hybrid amplification scheme, a combination of a distributed Raman amplifier and an Erbium doped amplifier, is investigated using the generalised Raman amplified gain model. The analysis focuses on the use of the scheme to upgrade a standard fibre network to a 40 Gb/s system.
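
The power balance modelling mentioned above rests on the standard steady-state coupled equations for the pump and Stokes powers, written here in a generic textbook form rather than the exact set used in the thesis:

    % Steady-state Raman power balance between a pump at frequency \omega_p
    % and a Stokes wave at \omega_s; g_R is the Raman gain coefficient,
    % A_{eff} the effective area and \alpha_{p,s} the fibre losses.
    \frac{dP_{s}}{dz} = \frac{g_{R}}{A_{\mathrm{eff}}}\,P_{p}P_{s} - \alpha_{s}P_{s},
    \qquad
    \frac{dP_{p}}{dz} = -\frac{\omega_{p}}{\omega_{s}}\,\frac{g_{R}}{A_{\mathrm{eff}}}\,P_{p}P_{s} - \alpha_{p}P_{p}.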

Relevance:

80.00%

Publisher:

Abstract:

This thesis contains the results of experimental and numerical simulations of optical transmission systems using dispersion managed transmission techniques. Theoretical background is given on the propagation of pulses in optical fibres before extending the arguments to optical solitons and their applications and uses in communications. Dispersion management for transmission systems is introduced, followed by a brief explanation of quasi-linear pulse propagation. Techniques for performing laboratory transmission experiments are described, focusing on the construction and operation of a recirculating loop. Laser sources and modulators for 40 Gbit/s transmission rates are discussed, and techniques for acquiring information from the resultant eye diagram are explained. The operation of optical time division demultiplexing with a nonlinear electro-absorption modulator is considered and then replaced by the use of a linear electro-optic modulator and a dispersion unbalanced loop mirror (DILM). The use of nonlinearity as a positive effect for processing and regenerating optical data is approached with an insight into the operation of interferometers. Successful experimental results are given for the characterisation of the DILM, and 40 Gbit/s to 10 Gbit/s demultiplexing is demonstrated. Modelling of a terrestrial-style system is performed and the methods for computer simulation are discussed. The simulations model single-channel 40 Gbit/s transmission, 16 x 40 Gbit/s WDM transmission and WDM transmission with varying channel separation. Three modulation formats are examined over the single mode fibre span. It is found that the dispersion managed soliton is not suitable for terrestrial-style systems and that return-to-zero was the optimum format for the considered system.

Relevance:

80.00%

Publisher:

Abstract:

Optical fiber materials exhibit a nonlinear response to strong electric fields, such as those of optical signals confined within the small fiber core. Fiber nonlinearity is an essential component in the design of the next generation of advanced optical communication systems, but its use is often avoided by engineers because of its intractability. The application of nonlinear technologies in fiber optics offers new opportunities for the design of photonic systems and devices. In this chapter, we give an overview of recent progress in the mathematical theory and practical applications of temporal dissipative solitons and self-similar nonlinear structures in optical fiber systems. The design of all-optical high-speed signal processing devices based on nonlinear dissipative structures is discussed.

Relevance:

80.00%

Publisher:

Abstract:

The concept of plagiarism is not uncommonly associated with the concept of intellectual property, for both historical and legal reasons: the approach to the ownership of 'moral', non-material goods has evolved into the right to individual property, and consequently a need arose to establish a legal framework to cope with the infringement of those rights. The solution to plagiarism therefore falls most often under two categories: ethical and legal. On the ethical side, education and intercultural studies have addressed plagiarism critically, not only as a means to improve academic ethics policies (PlagiarismAdvice.org, 2008), but mainly to demonstrate that, if anything, the concept of plagiarism is far from universal (Howard & Robillard, 2008). Even if differently, Howard (1995) and Scollon (1994, 1995) argued, and Angèlil-Carter (2000) and Pecorari (2008) later emphasised, that the concept of plagiarism cannot be studied on the assumption that one definition is clearly understood by everyone. Scollon (1994, 1995), for example, claimed that authorship attribution is particularly a problem in non-native writing in English, and so did Pecorari (2008) in her comprehensive analysis of academic plagiarism. If among higher education students plagiarism is often a problem of literacy, with prior, conflicting social discourses that may interfere with academic discourse, as Angèlil-Carter (2000) demonstrates, we then have to aver that a distinction should be made between intentional and inadvertent plagiarism: plagiarism should be prosecuted when intentional, but if it is part of the learning process and results from the plagiarist's unfamiliarity with the text or topic it should be considered 'positive plagiarism' (Howard, 1995: 796) and hence not an offense. Determining the intention behind the instances of plagiarism therefore determines the nature of the disciplinary action adopted. Unfortunately, in order to demonstrate the intention to deceive and charge students with accusations of plagiarism, teachers necessarily have to position themselves as 'plagiarism police', although it has been argued otherwise (Robillard, 2008). Practice demonstrates that in their daily activities teachers find themselves required to command investigative skills and tools that they most often lack. We thus claim that the 'intention to deceive' cannot inevitably be dissociated from plagiarism as a legal issue, even if Garner (2009) asserts that plagiarism is generally immoral but not illegal, and Goldstein (2003) makes the same distinction. However, these claims, and the claim that only cases of copyright infringement tend to go to court, have recently been challenged, mainly by forensic linguists, who have been actively involved in cases of plagiarism. Turell (2008), for instance, demonstrated that plagiarism often connotes an illegal appropriation of ideas. Previously, she (Turell, 2004) had demonstrated, by comparing four translations of Shakespeare's Julius Caesar into Spanish, that linguistic evidence is able to demonstrate instances of plagiarism. This challenge is also reinforced by the practice of international organisations, such as the IEEE, for whom plagiarism potentially has 'severe ethical and legal consequences' (IEEE, 2006: 57). What the plagiarism definitions used by publishers and organisations have in common – and which academia usually lacks – is their focus on the legal nature of plagiarism. 
We speculate that this is due to the relation they intentionally establish with copyright laws, whereas in education the focus tends to shift from the legal to the ethical aspects. However, the number of plagiarism cases taken to court is very small, and jurisprudence on the topic is still being developed. In countries within the Civil Law tradition, Turell (2008) claims, (forensic) linguists are seldom called upon as expert witnesses in cases of plagiarism, either because plagiarists are rarely taken to court or because there is little tradition of accepting linguistic evidence. In spite of the investigative and evidential potential of forensic linguistics to demonstrate the plagiarist's intention or otherwise, this potential is restricted by the ability to identify a text as being suspect of plagiarism. In an era of such massive textual production, 'policing' plagiarism thus becomes an extraordinarily difficult task without the assistance of plagiarism detection systems. Although plagiarism detection has attracted the attention of computer engineers and software developers for years, a lot of research is still needed. Given the investigative nature of academic plagiarism, plagiarism detection necessarily has to consider not only concepts of education and computational linguistics, but also forensic linguistics, especially if it is intended to counter claims of being a 'simplistic response' (Robillard & Howard, 2008). In this paper, we use a corpus of essays written by university students who were accused of plagiarism to demonstrate that a forensic linguistic analysis of improper paraphrasing in suspect texts has the potential to identify and provide evidence of intention. A linguistic analysis of the corpus texts shows that the plagiarist acts on the paradigmatic axis to replace relevant lexical items with a related word from the same semantic field, i.e. a synonym, a subordinate, a superordinate, etc. In other words, relevant lexical items were replaced with related, but not identical, ones. Additionally, the analysis demonstrates that the word order is often changed intentionally to disguise the borrowing. On the other hand, the linguistic analysis of linking and explanatory verbs (i.e. referencing verbs) and prepositions shows that these have the potential to discriminate instances of 'patchwriting' from instances of plagiarism. This research demonstrates that the referencing verbs are borrowed from the original in an attempt to construct the new text cohesively when the plagiarism is inadvertent, and that the plagiarist has made an effort to prevent the reader from identifying the text as plagiarism when it is intentional. In some of these cases, the referencing elements prove able to identify direct quotations and thus 'betray' and denounce the plagiarism. Finally, we demonstrate that a forensic linguistic analysis of these verbs is critical to allow detection software to identify them as proper paraphrasing and not – mistakenly and simplistically – as plagiarism.
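
As an illustration of the lexical substitution pattern described above (a rough sketch using WordNet via NLTK, with hypothetical word pairs, not the forensic procedure followed in the paper), aligned words from an original and a suspect sentence can be checked for synonym or hypernym/hyponym relations:

    # Sketch: flagging words in a suspect text that were replaced by a related
    # lexical item (synonym, hypernym or hyponym) from the original text.
    from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

    def related(word_a, word_b):
        """True if the two words share a synset or one is a (transitive) hypernym of the other."""
        syns_a, syns_b = set(wn.synsets(word_a)), set(wn.synsets(word_b))
        if syns_a & syns_b:                                   # share a sense: synonyms
            return True
        for sa in syns_a:
            if syns_b & set(sa.closure(lambda s: s.hypernyms())):
                return True
        for sb in syns_b:
            if syns_a & set(sb.closure(lambda s: s.hypernyms())):
                return True
        return False

    # Hypothetical aligned word pairs (original, suspect) extracted beforehand.
    pairs = [("error", "mistake"), ("car", "vehicle"), ("dog", "table")]
    for original, suspect in pairs:
        print(original, "->", suspect, ":", "related" if related(original, suspect) else "unrelated")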

Relevance:

80.00%

Publisher:

Abstract:

In this thesis, I present the studies on fabrication, spectral and polarisation characterisation of fibre gratings with tilted structures at 45º and > 45º (namely 45º- TFGs and ex 45º-TFGs throughout this thesis) and a range of novel applications with these two types of grating. One of the major contributions made in this thesis is the systematic investigation of the grating structures, inscription analysis and spectral and polarisation properties of both types of TFGs. I have inscribed 45º-TFGs in standard telecom and polarisation maintaining (PM) fibres. Two wavelength regions of interest have been explored including 1.55 µm and 1.06 µm. Detailed analysis on fabrication and characterisation of 45º-TFGs on PM fibres have also been carried out for the first time. For ex 45º- TFGs, fabrication has been investigated only on low-cost standard telecom fibre. Furthermore, thermal responses have been measured and analysed showing that both types of TFG have low responsivity to temperature change. More importantly, their refractive index (RI) responses have been characterised to verify the high responsivity to surrounding medium. Based on the unique polarisation properties, both types of TFG have been applied in fibre laser systems to improve the laser performance, which forms another major contribution of the research presented in this thesis. The integration of a 45º-TFG to the Erbium doped fibre laser (EDFL) enables single polarisation laser output at a single wavelength. When combing with ex 45º-TFGs, the EDFL can be transformed to a multi-wavelength switchable laser with single polarisation output. Furthermore, by utilising the polarisation property of the TFGs, a 45º-TFG based mode locked fibre laser is implemented. This laser can produce laser pulses at femtosecond scale and is the first application of TFG in the field of nonlinear optics. Another important contribution from the studies is the development of TFG based passive and active optical sensor systems. An ex 45º-TFG has been successfully developed into a liquid level sensor showing high sensitivity to water based solvents. Strain and twist sensors have been demonstrated via a fibre laser system using both 45°- and ex 45º-TFG with capability identifying not just the twist rate but also the direction. The sensor systems have shown the added advantage of low cost signal demodulation. In addition, load sensor applications have been demonstrated using the 45º-TFG based single polarisation EDFL and the experimental results show good agreement with the theoretical simulation.