975 results for stars: fundamental parameters
Abstract:
The response of high-speed railway bridges at resonance, particularly under flexural vibrations, is currently an active subject of research for many scientists and engineers. The topic is of great interest because such behaviour can indeed occur at the elevated operating speeds of modern trains, which in many cases equal or even exceed 300 km/h [1,2]. The present paper addresses the evolution of the wheel-rail contact forces during resonance in simply supported bridges. Based on a dimensionless formulation of the equations of motion presented in [4], very similar to the one introduced by Klasztorny and Langer in [3], a parametric study is conducted and the contact forces in realistic situations are analysed in detail. The effects of rail and wheel irregularities are not included in the model. The bridge is idealised as an Euler-Bernoulli beam, while the train is simulated by a system of rigid bodies, springs and dampers. The situations in which a severe reduction of the contact force could take place are identified and compared with typical situations in actual bridges. To this end, the simply supported bridge is excited at resonance by a theoretical train consisting of 15 equidistant axles. The mechanical characteristics of all axles (unsprung mass, semi-sprung mass and primary suspension system) are identical. This theoretical train permits the identification of the key parameters influencing the wheel-rail contact forces. In addition, a real case of a 17.5 m bridge traversed by the Eurostar train is analysed and checked against the theoretical results.
The influence of three fundamental parameters is investigated in detail: (a) the ratio of the fundamental frequency of the bridge to the natural frequency of the primary suspension of the vehicle; (b) the ratio of the total mass of the bridge to the semi-sprung mass of the vehicle; and (c) the ratio of the length of the bridge to the characteristic distance between consecutive axles. The main conclusions derived from the investigation are as follows. The wheel-rail contact forces undergo oscillations during the passage of the axles over the bridge; during resonance, these oscillations are more severe for the rear wheels than for the front ones. If L denotes the span of a simply supported bridge and d the characteristic distance between consecutive groups of loads, the lower the value of L/d, the greater the oscillations of the contact forces at resonance; above a certain value of L/d, no likelihood of loss of wheel-rail contact has been detected. The ratio of the frequency of the primary suspension of the vehicle to the fundamental frequency of the bridge is denoted the frequency ratio, and the ratio of the semi-sprung mass of the vehicle (mass of the bogie) to the total mass of the bridge is denoted the mass ratio. For any given frequency ratio, the greater the mass ratio, the greater the oscillations of the contact forces at resonance. The oscillations of the contact forces at resonance, and therefore the likelihood of loss of wheel-rail contact, present a minimum for frequency ratios approximately between 0.5 and 1; for lower or higher values of the frequency ratio the oscillations of the contact forces increase. Neglecting the possible effects of torsional vibrations, metal or composite bridges with a low linear mass have been found to be the ones in which the contact forces may suffer the most severe oscillations.
If single-track, simply supported, composite or metal bridges were used in high-speed lines, and damping ratios below 1% were expected, the minimum contact forces at resonance could drop to dangerous values. Nevertheless, structures of this kind are very unusual in modern high-speed railway lines.
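A minimal sketch of the three dimensionless ratios studied above, computed for invented bridge and vehicle values (none of the numbers below are taken from the paper):

```python
# Sketch: the three dimensionless ratios that govern the contact-force
# oscillations, computed for illustrative (assumed) bridge/vehicle values.

def frequency_ratio(f_suspension_hz, f_bridge_hz):
    """Primary-suspension frequency over the bridge fundamental frequency."""
    return f_suspension_hz / f_bridge_hz

def mass_ratio(semi_sprung_mass_kg, bridge_mass_kg):
    """Semi-sprung (bogie) mass over the total bridge mass."""
    return semi_sprung_mass_kg / bridge_mass_kg

def span_to_axle_distance(span_m, axle_spacing_m):
    """Span L over the characteristic distance d between axle groups."""
    return span_m / axle_spacing_m

# Illustrative values only:
eta = frequency_ratio(4.0, 8.0)               # 0.5: near the reported minimum-oscillation zone
kappa = mass_ratio(3000.0, 120000.0)          # a light composite deck gives a larger mass ratio
ratio_Ld = span_to_axle_distance(17.5, 18.7)  # e.g. a 17.5 m span and coach-length spacing
```

A light deck raises the mass ratio, which per the conclusions above increases the contact-force oscillations at resonance.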
Abstract:
This paper deals with the assessment of the contribution of the second flexural mode to the dynamic behaviour of simply supported railway bridges. Alluding to the works of other authors, some references suggest that the dynamic behaviour of simply supported bridges can be adequately represented by taking into account only the contribution of the fundamental flexural mode. On the other hand, the European Rail Research Institute (ERRI) proposes that the second mode should also be included whenever the associated natural frequency is lower than 30 Hz. This investigation endeavours to clarify the question as far as possible by establishing whether the maximum response of the bridge, in terms of displacements, accelerations and bending moments, can be computed accurately without taking account of the contribution of the second mode. To this end, a dimensionless formulation of the equations of motion of a simply supported beam traversed by a series of equally spaced moving loads is presented. This formulation brings to light the fundamental parameters governing the behaviour of the beam: the damping ratio, the dimensionless speed $\alpha = VT/L$, and the ratio L/d (L stands for the span of the beam, V for the speed of the train, T for the fundamental period of the bridge and d for the distance between consecutive loads). Assuming a damping ratio equal to 1%, which is a usual value for prestressed high-speed bridges, a parametric analysis is conducted over realistic ranges of values of $\alpha$ and L/d. The results can be extended to any simply supported bridge subjected to a train of equally spaced loads by virtue of the so-called Similarity Formulae, whose validity can be derived from the dimensionless formulation mentioned above. In the parametric analysis the maximum response of the bridge is obtained for one thousand values of speed covering the range from the fourth resonance of the first mode to the first resonance of the second mode.
The response at twenty-one different locations along the span of the beam is compared in order to decide if the maximum can be accurately computed with the sole contribution of the fundamental mode.
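As a quick illustration of the dimensionless formulation, the sketch below computes the dimensionless speed $\alpha = VT/L$ and the classical resonance speeds $V_i = f_1 d / i$ of a simply supported beam under equally spaced loads; the numerical values are illustrative assumptions, not taken from the paper:

```python
# Sketch: dimensionless speed and moving-load resonance speeds for a simply
# supported beam. All numeric values below are assumed for illustration.

def alpha(speed_mps, period_s, span_m):
    """Dimensionless speed alpha = V*T/L."""
    return speed_mps * period_s / span_m

def resonance_speeds(f1_hz, axle_spacing_m, orders=(1, 2, 3, 4)):
    """Speeds (m/s) at which the load-passage frequency V/d equals f1/i."""
    return [f1_hz * axle_spacing_m / i for i in orders]

f1 = 5.0    # fundamental frequency of the bridge (Hz), assumed
T = 1.0 / f1
L = 17.5    # span (m), assumed
d = 18.7    # distance between consecutive loads (m), assumed

speeds = resonance_speeds(f1, d)         # first- to fourth-order resonances
alphas = [alpha(v, T, L) for v in speeds]
```

Note that at the i-th resonance $\alpha$ reduces to $d/(iL)$, which is why the response can be organised purely in terms of $\alpha$ and L/d.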
Abstract:
This final-year project describes and analyses a comprehensive study of the vibrations produced by the surface blasting carried out in the "Third Set of Locks" project executed for the expansion of the Panama Canal. A total of 53 records were collected, generated by the monitoring of 7 seismographs during 10 production blasts carried out in 2010. The vibratory phenomenon has two fundamental parameters, the peak particle velocity (PPV) and the dominant frequency, which characterise how damaging it can be to civil structures; the aim is therefore to characterise and, above all, to predict them, which will allow their proper control. Accordingly, the study consists of two parts: the first describes the behaviour of the ground by estimating the attenuation law of the peak particle velocity using ordinary least-squares linear regression; the second details a verifiable procedure for the prediction of the dominant frequency and of the pseudo-velocity response spectrum (PVRS) based on the theory of Newmark & Hall. The following have been obtained: (i) the attenuation law of the ground for different degrees of reliability; (ii) blast design tools based on the charge-distance relation; (iii) the demonstration that the PPV values follow a log-normal distribution; (iv) the map of PPV isolines for the study area; (v) a detailed and valid technique for predicting the dominant frequency and the response spectrum; (vi) mathematical formulations of the amplification factors for displacement, velocity and acceleration; and (vii) the map of amplification isolines for the study area. The results obtained provide useful information for the design and control of subsequent blasting in the project.
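The first part of the study, an attenuation law fitted by least squares, can be sketched with the common scaled-distance form PPV = K·(R/√Q)^(−β); the functional form is a standard choice and the records below are synthetic, not the project's 53 measurements:

```python
import math

# Sketch: fitting the scaled-distance attenuation law PPV = K * SD**(-beta),
# with SD = R / sqrt(Q), by ordinary least squares on log-transformed data.
# The data below are synthetic (generated from K=1000, beta=1.6), purely
# for illustration.

def fit_attenuation(distances_m, charges_kg, ppv_mms):
    xs = [math.log(r / math.sqrt(q)) for r, q in zip(distances_m, charges_kg)]
    ys = [math.log(v) for v in ppv_mms]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    return math.exp(intercept), -slope   # K, beta

# Synthetic noise-free records:
R = [100.0, 200.0, 300.0, 400.0]       # distances (m)
Q = [50.0, 50.0, 100.0, 100.0]         # charges per delay (kg)
PPV = [1000.0 * (r / math.sqrt(q)) ** -1.6 for r, q in zip(R, Q)]

K, beta = fit_attenuation(R, Q, PPV)
```

With real records the residual scatter around the fitted line is what yields the different degrees of reliability (confidence envelopes) mentioned above.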
Abstract:
Fission product yields are fundamental parameters for several nuclear engineering calculations and in particular for burn-up/activation problems. The impact of their uncertainties was widely studied in the past and evaluations were released, although these are still incomplete. Recently, the nuclear community expressed the need for full fission yield covariance matrices to produce inventory calculation results that take into account the complete uncertainty data. In this work, we studied and applied a Bayesian/generalised least-squares method for covariance generation, and compared the generated uncertainties with the original data stored in the JEFF-3.1.2 library. We then focused on the effect of fission yield covariance information on fission pulse decay heat results for thermal fission of 235U. Calculations were carried out using different codes (ACAB and ALEPH-2) after introducing the new covariance values, and the results were compared with those obtained with the uncertainty data currently provided by the library. The uncertainty quantification was performed with the Monte Carlo sampling technique. Indeed, correlations between fission yields strongly affect the statistics of decay heat.
Introduction
Nowadays, any engineering calculation performed in the nuclear field should be accompanied by an uncertainty analysis. In such an analysis, different sources of uncertainty are taken into account. Works such as those performed under the UAM project (Ivanov, et al., 2013) treat nuclear data as a source of uncertainty, in particular cross-section data, for which uncertainties given in the form of covariance matrices are already provided in the major nuclear data libraries. Meanwhile, fission yield uncertainties were often neglected or treated only superficially, because their effects were considered of second order compared to cross-sections (Garcia-Herranz, et al., 2010). However, the Working Party on International Nuclear Data Evaluation Co-operation (WPEC)
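As a rough illustration of the covariance-aware Monte Carlo step, the sketch below draws correlated samples of two fission yields from an assumed 2×2 covariance matrix via Cholesky factorisation; the yield values and covariances are invented for illustration and are not JEFF-3.1.2 data:

```python
import math
import random

# Sketch: Monte Carlo sampling of two correlated fission yields from a 2x2
# covariance matrix via Cholesky factorisation. All numbers are illustrative
# assumptions, not evaluated library data.

def cholesky2(c11, c12, c22):
    """Lower-triangular Cholesky factor of [[c11, c12], [c12, c22]]."""
    l11 = math.sqrt(c11)
    l21 = c12 / l11
    l22 = math.sqrt(c22 - l21 ** 2)
    return l11, l21, l22

def sample_yields(mean1, mean2, cov, rng):
    """One correlated sample (y1, y2) of the two yields."""
    l11, l21, l22 = cholesky2(*cov)
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    return mean1 + l11 * z1, mean2 + l21 * z1 + l22 * z2

rng = random.Random(0)
cov = (1e-6, -8e-7, 1e-6)   # strong negative correlation between the yields
samples = [sample_yields(0.06, 0.03, cov, rng) for _ in range(20000)]
```

Ignoring the off-diagonal term (the usual uncorrelated treatment) would change the spread of any quantity, such as decay heat, that sums contributions from both yields — which is the effect the paper quantifies.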
Abstract:
This project stems from SABOR, a program used in the laboratory sessions of the sixth-semester course Antennas and Electromagnetic Compatibility, which is to be updated so that it runs on the current Windows operating systems (Windows 7 and later). The main objectives are to design and implement new functionalities, to develop improvements, and to correct errors present in earlier versions. For this purpose, a tool has been created in the MATLAB environment to analyse one of the most common types of aperture antenna in current use: the horn. The tool is a graphical interface whose inputs are the elementary design variables of the aperture, such as the dimensions of the horn itself and the general parameters common to all horns. In turn, the software generates some of the fundamental output parameters of the antennas: directivity, beamwidth, phase centre and spillover. Numerous tests were performed during development in order to debug and correct errors detected in the previous version of SABOR, and particular attention was paid to making the program more intuitive and to avoiding unnecessary complexity. The horn antenna consists of a waveguide whose cross-sectional area increases progressively towards an open end that behaves as an aperture. Horns are used extensively in commercial satellites for global coverage from geostationary orbits, but their most common use is as radiating elements for reflector antennas. The types of horn examined in the tool are: H-plane sectoral, E-plane sectoral, pyramidal, conical, corrugated conical and corrugated pyramidal. The project is written so that it can serve as theoretical and practical documentation for the whole SABOR software. Accordingly, in addition to reviewing the theory of the horns analysed, the document presents information on object-oriented programming in the MATLAB environment, the aim of which is to acquire a new way of thinking about the decomposition of problems and the development of programming solutions. Finally, a help manual has been created to support the software, and the results of various tests are included so that all the details of its operation can be observed, together with the conclusions and future lines of work.
Abstract:
A microtiter-based assay system is described in which DNA hairpin probes with dangling ends and single-stranded, linear DNA probes were immobilized and compared on the basis of their ability to capture single-stranded target DNA. Hairpin probes consisted of a 16 bp duplex stem linked by a T2-biotin·dT-T2 loop; the third base was a biotinylated uracil (UB) necessary for coupling to avidin-coated microtiter wells. The capture region of the hairpin was a 3′ dangling end composed of either 16 or 32 bases. Fundamental parameters of the system, such as probe density and the avidin adsorption capacity of the plates, were characterized. The target DNA consisted of 65 bases whose 3′ end was complementary to the dangling end of the hairpin or to the linear probe sequence. The assay system was employed to measure the time dependence and thermodynamic stability of target hybridization with hairpin and linear probes. Target molecules were labeled with a 5′-FITC, or radiolabeled with [γ-33P]ATP, and captured by either linear or hairpin probes affixed to the solid support. Over the range of target concentrations from 10 to 640 pmol, hybridization rates increased with increasing target concentration but varied for the different probes examined. Hairpin probes displayed higher rates of hybridization and larger equilibrium amounts of captured targets than linear probes; at 25 and 45°C, rates of hybridization were more than twice as great for the hairpin probes as for the linear capture probes. Hairpin–target complexes were also more thermodynamically stable. Binding free energies were evaluated from the observed equilibrium constants for complex formation. The results showed the order of stability of the probes to be: hairpins with 32 base dangling ends > hairpins with 16 base dangling ends > 16 base linear probes > 32 base linear probes.
The physical characteristics of hairpins could offer substantial advantages as nucleic acid capture moieties in solid support based hybridization systems.
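The binding free energies mentioned above follow from the observed equilibrium constants through the standard relation ΔG° = −RT ln K; a minimal sketch with placeholder K values (not the paper's measurements):

```python
import math

# Sketch: standard binding free energy from an equilibrium constant,
# Delta G = -R*T*ln(K). The K values below are placeholders chosen only to
# show that a larger K gives a more negative (more stable) Delta G.

R_GAS = 8.314  # gas constant, J/(mol*K)

def binding_free_energy_kj(k_eq, temp_c):
    """Standard free energy of complex formation, in kJ/mol."""
    temp_k = temp_c + 273.15
    return -R_GAS * temp_k * math.log(k_eq) / 1000.0

# Hypothetical hairpin vs linear equilibrium constants at 25 C:
dg_hairpin = binding_free_energy_kj(1e8, 25.0)
dg_linear = binding_free_energy_kj(1e6, 25.0)
```

The probe ranking quoted above is exactly a ranking of these ΔG° values, most negative first.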
Abstract:
The determination of the three-dimensional layout of galaxies is critical to our understanding of the evolution of galaxies and the structures in which they lie, to our determination of the fundamental parameters of cosmology, and to our understanding of both the past and future histories of the universe at large. The mapping of the large scale structure in the universe via the determination of galaxy red shifts (Doppler shifts) is a rapidly growing industry thanks to technological developments in detectors and spectrometers at radio and optical wavelengths. First-order application of the red shift-distance relation (Hubble’s law) allows the analysis of the large-scale distribution of galaxies on scales of hundreds of megaparsecs. Locally, the large-scale structure is very complex but the overall topology is not yet clear. Comparison of the observed red shifts with ones expected on the basis of other distance estimates allows mapping of the gravitational field and the underlying total density distribution. The next decade holds great promise for our understanding of the character of large-scale structure and its origin.
Abstract:
Context. VISTA Variables in the Vía Láctea (VVV) is one of six ESO Public Surveys using the 4 meter Visible and Infrared Survey Telescope for Astronomy (VISTA). The VVV survey covers the Milky Way bulge and an adjacent section of the disk, and one of the principal objectives is to search for new star clusters within previously unreachable obscured parts of the Galaxy. Aims. The primary motivation behind this work is to discover and analyze obscured star clusters in the direction of the inner Galactic disk and bulge. Methods. Regions of the inner disk and bulge covered by the VVV survey were visually inspected using composite JHKS color images to select new cluster candidates on the basis of apparent overdensities. DR1, DR2, CASU, and point spread function photometry of 10 × 10 arcmin fields centered on each candidate cluster were used to construct color–magnitude and color–color diagrams. Follow-up spectroscopy of the brightest members of several cluster candidates was obtained in order to clarify their nature. Results. We report the discovery of 58 new infrared cluster candidates. Fundamental parameters such as age, distance, and metallicity were determined for 20 of the most populous clusters.
Abstract:
The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle. These cause the rotational axes of the two layers to diverge slightly from each other, resulting in a wobble of the Earth's rotation axis comparable to the nutations. In this paper we focus on estimating empirical FCN models using the observed nutations derived from the VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding-window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of the models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and the step size of its shift, is sought through a thorough experimental analysis using real data. These analyses lead to a model with a temporal resolution higher than that of the models currently available, with the sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modelling of the observed Earth rotation. Moreover, according to our computations, empirical models determined using USNO Finals as a priori ERP present a slightly lower Weighted Root Mean Square (WRMS) of residuals than those using IERS 08 C04 over the whole period of VLBI observations. The model is also validated through comparisons with other recognised models, with a satisfactory level of agreement among them; our estimates give rise to the lowest residuals and appear to reproduce the FCN signal in more detail.
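The sliding-window estimation described above can be sketched as a least-squares fit of a fixed-period harmonic in each window; the ~431-day period and the synthetic series below are assumptions for illustration, not the paper's VLBI data:

```python
import math

# Sketch: sliding-window least-squares estimation of time-variable amplitude
# and phase of a fixed-period oscillation (as in empirical FCN modelling).
# Period value and input series are illustrative assumptions.

PERIOD = 431.0  # assumed FCN period in days, held fixed a priori

def fit_window(times, values, period=PERIOD):
    """Least-squares fit of a*cos(wt) + b*sin(wt); returns (amplitude, phase)."""
    w = 2.0 * math.pi / period
    sc = sum(math.cos(w * t) * v for t, v in zip(times, values))
    ss = sum(math.sin(w * t) * v for t, v in zip(times, values))
    scc = sum(math.cos(w * t) ** 2 for t in times)
    sss = sum(math.sin(w * t) ** 2 for t in times)
    scs = sum(math.sin(w * t) * math.cos(w * t) for t in times)
    det = scc * sss - scs ** 2               # normal-equations determinant
    a = (sc * sss - ss * scs) / det
    b = (ss * scc - sc * scs) / det
    return math.hypot(a, b), math.atan2(-b, a)

def sliding_estimates(times, values, width=400.0, step=1.0):
    """Amplitude/phase at the centre of each window of the given width."""
    out, t0 = [], times[0]
    while t0 + width <= times[-1]:
        sel = [(t, v) for t, v in zip(times, values) if t0 <= t < t0 + width]
        ts, vs = zip(*sel)
        amp, phase = fit_window(ts, vs)
        out.append((t0 + width / 2.0, amp, phase))
        t0 += step
    return out

# Synthetic daily series with constant 0.1 mas amplitude:
ts = list(range(0, 1200))
vs = [0.1 * math.cos(2.0 * math.pi * t / PERIOD) for t in ts]
est = sliding_estimates(ts, vs, width=400.0, step=50.0)
```

Shrinking the window to 400 days with a one-day shift, as in the paper, trades noise robustness for exactly this kind of higher temporal resolution in the recovered amplitude and phase.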
Abstract:
We discuss the construction of a photometric redshift catalogue of luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS), emphasizing the principal steps necessary for constructing such a catalogue: (i) photometrically selecting the sample, (ii) measuring photometric redshifts and their error distributions, and (iii) estimating the true redshift distribution. We compare two photometric redshift algorithms for these data and find that they give comparable results. Calibrating against the SDSS and SDSS-2dF (Two Degree Field) spectroscopic surveys, we find that the photometric redshift accuracy is $\sigma \sim 0.03$ for redshifts less than 0.55 and worsens at higher redshift ($\sim 0.06$ for $z < 0.7$). These errors are caused by photometric scatter, as well as by systematic errors in the templates, filter curves and photometric zero-points. We also parametrize the photometric redshift error distribution with a sum of Gaussians and use this model to deconvolve the errors from the measured photometric redshift distribution to estimate the true redshift distribution. We pay special attention to the stability of this deconvolution, regularizing the method with a prior on the smoothness of the true redshift distribution. The methods that we develop are applicable to general photometric redshift surveys.
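The sum-of-Gaussians error model can be illustrated through the forward problem: smearing an assumed true redshift distribution with the error distribution (the paper solves the regularised inverse of this). The component weights and widths below are invented, not SDSS-calibrated values:

```python
import math

# Sketch: a sum-of-Gaussians photo-z error model and the forward convolution
# of a "true" N(z) into a predicted observed N(z). All parameters assumed.

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def error_pdf(dz, components=((0.8, 0.0, 0.03), (0.2, 0.0, 0.06))):
    """Sum of Gaussians; each component is (weight, mean, sigma)."""
    return sum(w * gauss(dz, mu, s) for w, mu, s in components)

def smear(true_hist, z_grid, dz_step):
    """Forward-convolve a binned true N(z) with the error model."""
    return [sum(n * error_pdf(z - zt) * dz_step for zt, n in zip(z_grid, true_hist))
            for z in z_grid]

z = [0.3 + 0.01 * i for i in range(41)]              # grid z = 0.30 ... 0.70
true_nz = [1.0 if 15 <= i <= 25 else 0.0 for i in range(41)]  # top-hat around z=0.5
obs_nz = smear(true_nz, z, 0.01)
```

The smeared histogram is broader than the top-hat and leaks outside it, which is precisely why the inverse (deconvolution) step needs a smoothness prior to stay stable.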
Abstract:
We present new measurements of the luminosity function (LF) of luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS) and the 2dF SDSS LRG and Quasar (2SLAQ) survey. We have carefully quantified, and corrected for, uncertainties in the K and evolutionary corrections, differences in the colour selection methods, and the effects of photometric errors, thus ensuring we are studying the same galaxy population in both surveys. Using a limited subset of 6326 SDSS LRGs (with 0.17 < z < 0.24) and 1725 2SLAQ LRGs (with 0.5 < z < 0.6), for which the matching colour selection is most reliable, we find no evidence for any additional evolution in the LRG LF, over this redshift range, beyond that expected from a simple passive evolution model. This lack of additional evolution is quantified using the comoving luminosity density of SDSS and 2SLAQ LRGs brighter than $M_{0.2r} - 5\log h_{0.7} = -22.5$, which are $2.51 \pm 0.03 \times 10^{-7}\,L_{\odot}\,\mathrm{Mpc}^{-3}$ and $2.44 \pm 0.15 \times 10^{-7}\,L_{\odot}\,\mathrm{Mpc}^{-3}$, respectively (< 10 per cent uncertainty). We compare our LFs to the COMBO-17 data and find excellent agreement over the same redshift range. Together, these surveys show no evidence for additional evolution (beyond passive) in the LF of LRGs brighter than $M_{0.2r} - 5\log h_{0.7} = -21$ (or brighter than $\sim L_{*}$). We test our SDSS and 2SLAQ LFs against a simple 'dry merger' model for the evolution of massive red galaxies and find that at least half of the LRGs at $z \simeq 0.2$ must already have been well assembled (with more than half their stellar mass) by $z \simeq 0.6$. This limit is barely consistent with recent results from semi-analytical models of galaxy evolution.
Abstract:
The Private Finance Initiative (PFI) has become one of the UK's most contentious public policies. Despite New Labour's advocacy of PFI as a means of achieving better value for money, criticisms of PFI have centred on key issues such as a lack of cost effectiveness, exaggerated pricing of risk transfers, excessive private sector profits, inflexibility and cumbersome administrative arrangements. Nevertheless, PFI has persisted as a key infrastructure procurement method in the UK and has been supported as such by successive governments, as well as influencing policy in the Republic of Ireland and other European nations. This paper explores this paradoxical outcome in relation to the role played in the UK by the National Audit Office (NAO). Under pressure to justify its support for PFI, the Blair government sought support for its policies by encouraging the NAO to investigate issues relating to PFI as well as specific PFI projects. It would have been expected that, in fulfilling its role as independent auditor, the NAO would have examined whether PFI projects could have been delivered more efficiently, effectively or economically through other means. Yet, in line with earlier research, we find evidence that the NAO failed to comprehensively assess key issues such as the value for money of PFI projects, and in so doing effectively acted as a legitimator of PFI policy. Using concepts relating to legitimacy theory and the idea of framing, our paper examines 67 NAO private finance reports published between 1997 and 2011, with the goal of identifying the preferences, values and ideology underpinning the NAO's view of PFI during this period. Our analysis suggests that the NAO sought to legitimise existing PFI practices via a selective framing of problems and questions. Utilising a longitudinal approach, our analysis further suggests that this pattern of selective framing persisted over an extended period during which fundamental parameters of the policy (such as contract length, to name one of the most important issues) were rarely addressed. Overall, the NAO's supportive stance toward PFI seems to have relied on (1) a focus on positive aspects of PFI, such as on-time delivery or lessons learned, and (2) positive comments on aspects of PFI that were criticised elsewhere, such as the lack of flexibility of the underlying contractual arrangements. Our paper highlights the possibility that, rather than providing a critical assessment of existing policies, national auditing bodies can contribute to the creation of legitimatory environments. In terms of accounting research, we would suggest that the objectivity and independence of accounting watchdogs should not be taken for granted, and that a critical investigation of the biases that can characterise these bodies can contribute to a deeper understanding of the nature of lobbying networks in the modern state.
Abstract:
This dissertation investigates indoor localisation for mobile robots through visible light communication, based on the LEDs fixed in buildings, with particular attention to the simulation and design of the sensor, with a view to developing a localisation sensor. The growth of LED technology and the constant need for indoor localisation are explained, some characteristics of LEDs and of existing photodetectors are presented, and brief reference is made to some low-rate visible light communication schemes that could be implemented. The development of the sensor prototype begins with the simulation of some essential devices and their characteristics: the LED emitter, through control of its half-power angle (HPA) and mounting height, and the photodiode receiver, through its field-of-view (FOV) restriction. The intended sensor is then simulated with the required number of photodiodes, optimising the available physical space and refining not only the FOV but also the spatial distribution of the photodiodes, with predefined functions for reducing the uncertainty in the robot's localisation decisions. These results enabled the physical construction of the sensor, from the photodiode mount, taking into account all the measurements made during the simulations, through to the development of the sensors and their complete integration. The readings of the signals received from the sensor are processed by a microcontroller, allowing fundamental parameters for the position calculation to be computed. Finally, the theoretical and practical results obtained throughout the development are presented, together with possible proposals for future work that could benefit from this research.
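The HPA/FOV trade-off studied in the simulations can be sketched with the standard line-of-sight Lambertian channel model for an LED and a FOV-restricted photodiode; all numeric values below are illustrative assumptions:

```python
import math

# Sketch: line-of-sight received power of a Lambertian LED at a photodiode
# with a restricted field of view (FOV) -- the usual model when tuning the
# emitter HPA and receiver FOV in simulation. Values are assumed.

def lambertian_order(hpa_deg):
    """Lambertian mode number m from the emitter half-power angle."""
    return -math.log(2.0) / math.log(math.cos(math.radians(hpa_deg)))

def received_power(pt_w, dist_m, phi_deg, psi_deg, hpa_deg, fov_deg, area_m2):
    """LOS channel: P_r = P_t * (m+1)/(2*pi*d^2) * cos^m(phi) * A * cos(psi),
    and zero whenever the incidence angle psi exceeds the receiver FOV."""
    if psi_deg > fov_deg:
        return 0.0
    m = lambertian_order(hpa_deg)
    return (pt_w * (m + 1.0) / (2.0 * math.pi * dist_m ** 2)
            * math.cos(math.radians(phi_deg)) ** m
            * area_m2 * math.cos(math.radians(psi_deg)))

# 1 W LED at 2 m, HPA = 60 deg, receiver FOV = 30 deg, 1 cm^2 photodiode:
p_on_axis = received_power(1.0, 2.0, 0.0, 0.0, 60.0, 30.0, 1e-4)
p_half_power = received_power(1.0, 2.0, 60.0, 0.0, 60.0, 30.0, 1e-4)
p_outside_fov = received_power(1.0, 2.0, 0.0, 45.0, 60.0, 30.0, 1e-4)
```

Narrowing the FOV of each photodiode sharpens the angular discrimination between LEDs, which is the mechanism the dissertation exploits to reduce localisation uncertainty.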
Abstract:
In this work, three different metallic metamaterial (MM) structures that support plasmonic resonances have been developed: asymmetric split ring resonators (A-SRRs), dipoles and split H-shaped (ASH) structures. The aim of the work is the optimisation of photonic sensors based on plasmonic resonances and surface-enhanced infrared absorption (SEIRA) from the MM structures. The MM structures were designed to tune their plasmonic resonance peaks in the mid-infrared region. The plasmonic resonance peaks produced are highly dependent on the structural dimensions and on the polarisation of the electromagnetic (EM) source; the ASH structure in particular can produce the plasmonic resonance peak under dual polarisation of the EM source. The double resonance peaks produced by the asymmetric nature of the structures were optimised by varying the fundamental parameters of the design. These peaks occur owing to hybridization of the individual elements of the MM structure. The presence of a dip known as a trapped mode between the double plasmonic peaks helps to narrow the resonances. A periodicity greater than twice the length and diameter of the metallic structure was applied to produce narrow resonances for the designed MMs, and a nanoscale gap in each structure that broadens the trapped mode to narrow the plasmonic resonances was also used. A gold thickness of 100 nm was used to experimentally produce a high quality factor of 18 in the mid-infrared region. The optimised plasmonic resonance peaks were used for the detection of an analyte, 17β-estradiol, which is largely responsible for the development of human sex organs and can be found naturally in the environment through human excreta. SEIRA was the method applied to the analysis of the analyte. The work is important for the monitoring of human biology and for water treatment.
Applying this method to the developed nano-engineered structures, enhancement factors of 10^5 and a sensitivity of 2791 nm/RIU were obtained. With this high sensitivity, a figure of merit (FOM) of 9 was also achieved for the sensors. The experiments were verified using numerical simulations in which the vibrational resonances of the C-H stretch of 17β-estradiol were modelled. Lastly, A-SRRs and ASH structures on waveguides were also designed and evaluated; these patterns are to be used as a basis for future work.
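The quoted sensitivity and figure of merit are related by the standard definitions S = Δλ/Δn and FOM = S/FWHM; a minimal sketch, with an assumed resonance linewidth chosen only to be consistent with the reported numbers:

```python
# Sketch: refractive-index sensitivity and figure of merit of a resonance
# sensor. The peak shift and the ~310 nm linewidth are assumed values picked
# to be consistent with the reported S = 2791 nm/RIU and FOM = 9; they are
# not measurements from the thesis.

def sensitivity_nm_per_riu(peak_shift_nm, index_change_riu):
    """Bulk sensitivity S = d(lambda_res)/dn in nm per refractive index unit."""
    return peak_shift_nm / index_change_riu

def figure_of_merit(sensitivity, fwhm_nm):
    """FOM = S / FWHM of the resonance peak."""
    return sensitivity / fwhm_nm

S = sensitivity_nm_per_riu(279.1, 0.1)   # assumed 279.1 nm shift for dn = 0.1
fom = figure_of_merit(S, 310.0)          # ~9 for a ~310 nm-wide resonance
```

The FOM captures the trade-off discussed above: trapped modes narrow the FWHM, raising the FOM for a given sensitivity.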