192 results for Fuzzy multiobjective linear programming
Abstract:
A major challenge in modern photonics and nano-optics is the diffraction limit of light, which prevents the localisation of fields into regions with dimensions smaller than half the wavelength. Localisation of light into nanoscale regions (beyond the diffraction limit) has applications ranging from optical sensors and measurement techniques with resolutions as fine as a few nanometres, to the effective delivery of optical energy into targeted nanoscale regions such as quantum dots, nano-electronic devices, and nano-optical devices. This field has become a major research direction over the last decade. The use of strongly localised surface plasmons in metallic nanostructures is one of the most promising approaches to overcoming this limit. The aim of this thesis is therefore to investigate the linear and non-linear propagation of surface plasmons in metallic nanostructures, focusing on two main areas of plasmonic research: plasmon nanofocusing and plasmon nanoguiding.

Plasmon nanofocusing: The main aim of plasmon nanofocusing research is to focus plasmon energy into nanoscale regions using metallic nanostructures while simultaneously achieving strong local field enhancement. Various structures have been proposed and analysed for nanofocusing, such as sharp metal wedges, tapered metal films on dielectric substrates, tapered metal rods, and dielectric V-grooves in metals. However, a number of important practical issues related to nanofocusing in these structures remain unclear. One of the main aims of this thesis is therefore to address two of the most important of these issues: the coupling efficiency and the heating effects of surface plasmons in metallic nanostructures. The method of analysis developed throughout this thesis is a general treatment that can be applied to a diversity of nanofocusing structures; results are shown here for the specific case of sharp metal wedges. Based on the geometrical optics approximation, it is demonstrated that the coupling efficiency from plasmons generated with a metal grating into the nanofocused symmetric or quasi-symmetric modes may vary between ~50% and ~100%, depending on the structural parameters. Optimal conditions for nanofocusing, with a view to minimising coupling and dissipative losses, are also determined and discussed. It is shown that the temperature near the tip of a metal wedge heated by nanosecond plasmonic pulses can increase by several hundred degrees Celsius. This temperature increase is expected to lead to nonlinear effects, self-influence of the focused plasmon, and ultimately self-destruction of the metal tip. The thesis also investigates a different type of nanofocusing structure, consisting of a tapered high-index dielectric layer resting on a metal surface. The nanofocusing mechanism in this structure is shown to differ from those of the structures considered thus far: for example, the surface plasmon experiences significant back-reflection and mode transformation at a cut-off thickness, and the reflected plasmon shows negative refraction properties that have not been observed in other nanofocusing structures considered to date.

Plasmon nanoguiding: Guiding surface plasmons using metallic nanostructures is important for the development of highly integrated optical components and circuits, which are expected to have superior performance compared to their electronic-based counterparts.
A number of different plasmonic waveguides have been considered over the last decade, including the recently proposed gap and trench plasmon waveguides, which have proven difficult to fabricate. This thesis therefore proposes and analyses four modified gap and trench plasmon waveguides that are expected to be easier to fabricate while achieving improved propagation characteristics for the guided mode. In particular, it is demonstrated that the guided modes are significantly screened by the extended metal at the bottom of the structure. This is important for the design of highly integrated optics, as it allows two waveguides to be placed close together without significant cross-talk. This thesis also investigates the use of plasmonic nanowires to construct a Fabry-Pérot resonator/interferometer. It is shown that the resonance effect can be achieved with an appropriate resonator length and gap width. Typical quality factors of the Fabry-Pérot cavity are determined and explained in terms of radiative and dissipative losses. The possibility of using a nanowire resonator to design plasmonic filters with close to 100% transmission is also demonstrated. The results obtained in this thesis are expected to play a vital role in the development of high-resolution near-field microscopy and spectroscopy, new measurement techniques and devices for single-molecule detection, highly integrated optical devices, and nanobiotechnology devices for the diagnostics of living cells.
Abstract:
This paper presents a new approach to the design of a rough fuzzy controller for the control loop of an SVC (static VAR compensator) in a two-area power system, for stability enhancement with particular emphasis on providing effective damping of oscillatory instabilities. The performance of the rough fuzzy and conventional fuzzy controllers is compared with that of a conventional PI controller for a variety of transient disturbances, highlighting the effectiveness of the rough fuzzy controller in damping the inter-area oscillations. The effect of the rough fuzzy controller in improving the CCT (critical clearing time) of the two-area system is also elaborated.
Abstract:
Numerous different and sometimes conflicting interests can be affected, both positively and negatively, over the course of a major infrastructure and construction (MIC) project. Failing to address and meet the concerns and expectations of the stakeholders involved has led to many project failures. One way to address this issue is through a participatory approach to project decision making, and whether the participation mechanism is effective depends largely on the client/owner. This paper provides a means of systematically evaluating the effectiveness of the public participation exercise, or even the whole project, through the measurement of stakeholder satisfaction. Since satisfaction measurement is complicated and uncertain, requiring approximate reasoning that involves human intuition, a fuzzy approach is adopted. On this basis, a multi-factor hierarchical fuzzy comprehensive evaluation model is established to facilitate the evaluation of satisfaction both for individual stakeholder groups and for the MIC project's stakeholders overall.
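To illustrate the kind of computation a multi-factor fuzzy comprehensive evaluation involves, a minimal single-level sketch in Python follows. The factors, weights, and membership grades are invented for illustration and are not taken from the paper; the paper's hierarchical model would feed each group-level result vector into a higher-level evaluation of the same form.

```python
import numpy as np

# Evaluation grades for stakeholder satisfaction (illustrative labels).
grades = ["very dissatisfied", "dissatisfied", "neutral",
          "satisfied", "very satisfied"]

# Membership matrix R: one row per factor, giving the degree to which
# that factor is rated in each grade (rows sum to 1; invented data).
R = np.array([
    [0.05, 0.10, 0.25, 0.40, 0.20],   # information transparency
    [0.10, 0.20, 0.30, 0.30, 0.10],   # opportunity to influence decisions
    [0.00, 0.10, 0.20, 0.45, 0.25],   # responsiveness to feedback
])

# Factor weights W (e.g. elicited from stakeholders; must sum to 1).
W = np.array([0.3, 0.4, 0.3])

# Weighted-average fuzzy composition B = W . R; a max-min composition
# max_i min(W_i, R_ij) is a common alternative operator.
B = W @ R
print(dict(zip(grades, B.round(3))))
print("overall grade:", grades[int(np.argmax(B))])
```

In the hierarchical version, the result vector B computed for each stakeholder group becomes one row of a higher-level membership matrix, weighted by the relative importance of that group.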
Abstract:
In this paper, we present the outcomes of a project exploring the use of Field Programmable Gate Arrays (FPGAs) as co-processors for scientific computation. We designed a custom circuit for the pipelined solving of multiple tri-diagonal linear systems. The design is well suited to applications that require many independent tri-diagonal system solves, such as finite difference methods for solving PDEs or applications using cubic spline interpolation. The selected solver algorithm was the Tri-Diagonal Matrix Algorithm (TDMA, or Thomas algorithm). Our solver supports user-specified precision through the use of a custom floating-point VHDL library supporting addition, subtraction, multiplication, and division. The variable-precision TDMA solver was tested for correctness in simulation mode, and the TDMA pipeline was tested successfully in hardware using a simplified solver model. The details of the implementation, its limitations, and future work are also discussed.
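For reference, the Thomas algorithm named above is the standard forward-elimination/back-substitution solver for tri-diagonal systems. The following is a minimal software sketch in Python, not the VHDL of the actual pipelined design; the function name and interface are illustrative.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve the tri-diagonal system A x = d, where A has sub-diagonal a,
    main diagonal b, and super-diagonal c (a[0] and c[-1] are unused)."""
    n = len(b)
    cp = np.empty(n)   # modified super-diagonal
    dp = np.empty(n)   # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: a small diagonally dominant system.
a = np.array([0.0, -1.0, -1.0, -1.0])
b = np.array([4.0, 4.0, 4.0, 4.0])
c = np.array([-1.0, -1.0, -1.0, 0.0])
d = np.array([5.0, 5.0, 5.0, 5.0])
print(thomas_solve(a, b, c, d))
```

Each solve is a strictly sequential recurrence, which is why the hardware design pipelines many independent systems rather than parallelising within a single solve.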
Abstract:
The act of computer programming is generally considered to be temporally removed from a computer program's execution. In this paper we discuss the idea of programming as an activity that takes place within the temporal bounds of a real-time computational process and its interactions with the physical world. We ground these ideas in the context of livecoding, a live audiovisual performance practice. We then describe how the development of the programming environment "Impromptu" has addressed our ideas of programming with time and the notion of the programmer as an agent in a cyber-physical system.
Abstract:
It is acknowledged around the world that many university students struggle with learning to program (McCracken et al., 2001; McGettrick et al., 2005). In this paper, we describe how we have developed a research programme to systematically study and incrementally improve our teaching. We have adopted a research programme with three elements: (1) a theory that provides an organising framework for defining the type of phenomena and data of interest, (2) data on how the class as a whole performs on formative assessment tasks that are framed from within the organising framework, and (3) data from one-on-one think aloud sessions, to establish why students struggle with some of those in-class formative assessment tasks. We teach introductory computer programming, but this three-element structure of our research is applicable to many areas of engineering education research.
Abstract:
In 1991, McNabb introduced the concept of mean action time (MAT) as a finite measure of the time required for a diffusive process to effectively reach steady state. Although this concept was initially adopted by others within the Australian and New Zealand applied mathematics community, it appears to have had little use outside this region until very recently, when in 2010 Berezhkovskii and coworkers rediscovered the concept of MAT in their study of morphogen gradient formation. All previous work in this area has been limited to studying single-species differential equations, such as the linear advection–diffusion–reaction equation. Here we generalise the concept of MAT by showing how the theory can be applied to coupled linear processes. We begin by studying coupled ordinary differential equations and extend our approach to coupled partial differential equations. Our new results have broad applications including the analysis of models describing coupled chemical decay and cell differentiation processes, amongst others.
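For readers unfamiliar with the quantity, the standard single-species definition of MAT can be sketched as follows; the notation here is generic rather than taken from this paper, whose contribution is the extension to coupled systems. The normalised transient is treated as a cumulative distribution function in time, and MAT is the mean of that distribution:

```latex
F(x,t) = 1 - \frac{c(x,t) - c_{\infty}(x)}{c_{0}(x) - c_{\infty}(x)},
\qquad
T(x) = \int_{0}^{\infty} t \, \frac{\partial F}{\partial t} \, \mathrm{d}t
     = \int_{0}^{\infty} \bigl[ 1 - F(x,t) \bigr] \, \mathrm{d}t ,
```

where c_0 and c_∞ are the initial and steady-state profiles and the second form follows by integration by parts, provided the transient decays sufficiently fast. T(x) is finite even though the process only reaches steady state as t → ∞, which is what makes MAT a useful finite measure of that time.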
Abstract:
The Cross-Entropy (CE) method is efficient for the estimation of rare-event probabilities and for combinatorial optimization. This work presents a novel application of the CE method to the optimization of a soft-computing controller: a fuzzy controller designed to command an unmanned aerial system (UAS) in a collision-avoidance task, with a forward-facing camera as the only sensor. The CE method is used to reach a near-optimal controller by modifying the scaling factors of the controller inputs. The optimization was carried out using the ROS-Gazebo simulation system, and to evaluate it a large number of tests was carried out with a real quadcopter.
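As background on the optimization machinery, a minimal cross-entropy sketch for tuning controller scaling factors is given below. This is not the authors' code: the cost function is a toy stand-in for a scored ROS-Gazebo collision-avoidance run, and all parameter values are invented.

```python
import numpy as np

def cross_entropy_minimize(cost, dim, n_samples=50, n_elite=10,
                           n_iter=30, seed=0):
    """Minimise `cost` over R^dim with the cross-entropy method:
    sample candidates from a Gaussian, keep the elite fraction,
    and refit the Gaussian to the elites."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.ones(dim), np.ones(dim)     # initial sampling distribution
    for _ in range(n_iter):
        samples = rng.normal(mu, sigma, size=(n_samples, dim))
        costs = np.array([cost(s) for s in samples])
        elites = samples[np.argsort(costs)[:n_elite]]
        mu = elites.mean(axis=0)               # distribution update
        sigma = elites.std(axis=0) + 1e-6      # floor avoids premature collapse
    return mu

# Toy stand-in cost: in the paper's setting this would be an entire
# simulated flight; optimum at scaling factors (0.8, 1.5).
toy_cost = lambda k: (k[0] - 0.8) ** 2 + (k[1] - 1.5) ** 2
print(cross_entropy_minimize(toy_cost, dim=2))
```

Because each cost evaluation is a full simulated flight in such a setting, the sample budget per iteration, rather than the update rule, dominates the run time of the optimization.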
Abstract:
Linear adaptive channel equalization using the least mean square (LMS) algorithm and the recursive least-squares (RLS) algorithm is proposed for an innovative multi-user (MU) MIMO-OFDM wireless broadband communications system. The proposed equalization method adaptively compensates for the channel impairments caused by frequency selectivity in the propagation environment. Simulations of the proposed adaptive equalizer are conducted using a training-sequence method to determine optimal performance through a comparative analysis. Results show an improvement of 0.15 in BER (at an SNR of 16 dB) when using adaptive equalization with the RLS algorithm compared to the case in which no equalization is employed. In general, adaptive equalization using the LMS and RLS algorithms proved to be significantly beneficial for MU-MIMO-OFDM systems.
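To make the training-sequence equalization step concrete, here is a generic LMS sketch in Python. It is not the paper's MU-MIMO-OFDM implementation: the channel taps, step size, and BPSK training signal are invented, and the RLS variant would replace the gradient update with a recursive least-squares gain computation.

```python
import numpy as np

def lms_equalize(rx, training, n_taps=11, mu=0.005, delay=5):
    """Train a linear FIR equalizer with the LMS update w <- w + mu*e*x,
    against a known training sequence and a fixed decision delay."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(training)):
        x = rx[n - n_taps + 1:n + 1][::-1]     # tap-delay-line input
        e = training[n - delay] - w @ x        # error vs delayed known symbol
        w += mu * e * x                        # stochastic-gradient update
    return w

rng = np.random.default_rng(1)
sym = rng.choice([-1.0, 1.0], size=5000)       # BPSK training symbols
h = np.array([0.5, 1.0, -0.6])                 # toy frequency-selective channel
rx = np.convolve(sym, h)[:len(sym)] + 0.05 * rng.normal(size=len(sym))

w = lms_equalize(rx, sym)
eq = np.convolve(rx, w)[:len(sym)]             # apply the trained equalizer
errors = np.mean(np.sign(eq[5:]) != sym[:-5])  # account for the decision delay
print("post-equalization symbol error rate:", errors)
```

The decision delay gives the finite-length equalizer a non-causal window over the channel's intersymbol interference, which generally lowers the achievable error compared with forcing a zero-delay inverse.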
Abstract:
Significant wheel-rail dynamic forces occur because of imperfections in the wheels and/or rail. One of the key responses to the transmission of these forces down through the track is the impact force on the sleepers. Dynamic analysis of nonlinear systems is very complicated and does not lend itself easily to a classical solution of multiple equations, and deducing the behaviour of track components from experimental data is very difficult, because such data are hard to obtain and apply only to the particular conditions of the track being tested. The finite element method can be the best solution to this dilemma. This paper describes a finite element model, built with the software package ANSYS, of various sized flat defects in the tread of a wheel rolling at a typical speed on heavy-haul track, and explores the dynamic response of a prestressed concrete sleeper to these defects.
Abstract:
Student performance on examinations is influenced by the level of difficulty of the questions. It seems reasonable to propose therefore that assessment of the difficulty of exam questions could be used to gauge the level of skills and knowledge expected at the end of a course. This paper reports the results of a study investigating the difficulty of exam questions using a subjective assessment of difficulty and a purpose-built exam question complexity classification scheme. The scheme, devised for exams in introductory programming courses, assesses the complexity of each question using six measures: external domain references, explicitness, linguistic complexity, conceptual complexity, length of code involved in the question and/or answer, and intellectual complexity (Bloom level). We apply the scheme to 20 introductory programming exam papers from five countries, and find substantial variation across the exams for all measures. Most exams include a mix of questions of low, medium, and high difficulty, although seven of the 20 have no questions of high difficulty. All of the complexity measures correlate with assessment of difficulty, indicating that the difficulty of an exam question relates to each of these more specific measures. We discuss the implications of these findings for the development of measures to assess learning standards in programming courses.
Abstract:
Recent research has proposed Neo-Piagetian theory as a useful way of describing the cognitive development of novice programmers. Neo-Piagetian theory may also be a useful way to classify materials used in learning and assessment. If Neo-Piagetian coding of learning resources is to be useful, it is important that practitioners can learn it and apply it reliably. We describe the design of an interactive web-based tutorial for Neo-Piagetian categorization of assessment tasks, and report an evaluation of the tutorial's effectiveness in which twenty computer science educators participated. The average classification accuracies of the participants on the three Neo-Piagetian stages were 85%, 71%, and 78%. Participants also rated their agreement with the expert classifications, indicating high agreement (91%, 83%, and 91% across the three stages). Self-rated confidence in applying Neo-Piagetian theory to classifying programming questions was 29% before the tutorial and 75% after it. Our key contribution is demonstrating the feasibility of the Neo-Piagetian approach to classifying assessment materials, by showing that it is learnable and can be applied reliably by a group of educators. Our tutorial is freely available as a community resource.