1000 results for Sensing Enterprise
Abstract:
In this paper, the radio frequency (RF) environment of the Environmental Research Institute (ERI), located in Cork city, is monitored and analyzed in both the Zigbee (2.44 GHz) and the industrial, scientific and medical (ISM, 433 MHz) bands. The main objective of this survey is to establish what noise and interfering signals exist in these bands. It was agreed that the surveys would be carried out in five different rooms and areas that are candidates for wireless sensor deployments. Based on this study, a Zigbee-standard wireless sensor network (WSN) will be developed, employing a number of motes to sense quantities such as temperature, light and humidity, in addition to monitoring RSSI and battery voltage. Such a system will later be used to control and improve indoor building climate at reduced cost: it removes the need for cabling, so both installation and operational costs are significantly reduced.
Abstract:
Buildings consume 40% of Ireland's total annual energy, translating to 3.5 billion (2004). The EPBD directive (effective January 2003) places an onus on all member states to rate the energy performance of all buildings in excess of 50 m². Energy and environmental performance management systems for residential buildings do not exist, while those for non-residential buildings consist of an ad-hoc integration of wired building management systems and Monitoring & Targeting systems. These systems are unsophisticated and do not easily lend themselves to cost-effective retrofit or integration with other enterprise management systems. It is commonly agreed that a 15-40% reduction of building energy consumption is achievable by operating buildings efficiently, when compared with typical practice. Existing research has identified that the level of information available to building managers from existing Building Management Systems and Environmental Monitoring Systems (BMS/EMS) is insufficient to perform the required performance-based building assessment. The cost of installing additional sensors and meters is extremely high, primarily due to the cost of wiring and the associated labour. From this perspective, wireless sensor technology provides the capability to deliver reliable sensor data at the temporal and spatial granularity required for building energy management. In this paper, a wireless sensor network mote hardware design and implementation is presented for a building energy management application. Appropriate sensors were selected and interfaced with the developed system based on user requirements, to meet both the building monitoring and metering requirements. Besides the sensing capability, actuation and interfacing to external meters/sensors are provided to perform the management control and data recording tasks associated with minimising energy consumption in the built environment and with developing appropriate Building Information Models (BIM) to enable the design and development of energy-efficient spaces.
Abstract:
Buried heat sources can be investigated by examining thermal infrared images and comparing these with the results of theoretical models which predict the thermal anomaly a given heat source may generate. Key factors influencing surface temperature include the geometry and temperature of the heat source, the surface meteorological environment, and the thermal conductivity and anisotropy of the rock. In general, a geothermal heat flux of greater than 2% of solar insolation is required to produce a detectable thermal anomaly in a thermal infrared image. For typical terrestrial conditions, a heat source that is, for example, 2-300 K hotter than the average surface temperature must lie at a depth shallower than 50 m for the anomaly to be detectable in a thermal infrared image. Atmospheric factors are of critical importance. While the mean atmospheric temperature has little significance, convection is a dominant factor and can act to swamp the thermal signature entirely. Given a steady-state heat source that produces a detectable thermal anomaly, it is possible to loosely constrain the physical properties of the heat source and surrounding rock, using the surface thermal anomaly as a basis. The success of this technique is highly dependent on the degree to which the physical properties of the host rock are known; important parameters include the surface thermal properties and the thermal conductivity of the rock. Modelling of transient thermal situations was carried out to assess the effect of time-dependent thermal fluxes. One-dimensional finite element models can be readily and accurately applied to the investigation of diurnal heat flow, as with thermal inertia models. Diurnal thermal models of environments on Earth, the Moon and Mars were constructed using finite elements and found to be consistent with published measurements. The heat flow from an injection of hot lava into a near-surface lava tube was also considered. While this approach is useful for study and long-term monitoring in inhospitable areas, it was found to have little hazard-warning utility, as the time taken for the thermal energy to propagate to the surface in dry rock (several months) is very long. The resolution of the thermal infrared imaging system is an important factor. Presently available satellite-based systems such as Landsat (resolution of 120 m) are inadequate for detailed study of geothermal anomalies. Airborne systems, such as TIMS (variable resolution of 3-6 m), are much more useful for discriminating small buried heat sources. Planned improvements in the resolution of satellite-based systems will broaden the potential for application of the techniques developed in this thesis. It is important to note, however, that adequate spatial resolution is a necessary but not sufficient condition for successful application of these techniques.
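As background to the diurnal and finite element modelling described in this abstract, the following is a minimal statement of the one-dimensional conduction problem such models solve; the symbols (thermal conductivity k, density ρ, specific heat c, diurnal period P) are generic textbook quantities and are not values taken from the thesis.

```latex
\frac{\partial T}{\partial t} = \kappa\,\frac{\partial^{2} T}{\partial z^{2}},
\qquad \kappa = \frac{k}{\rho c},
\qquad T(0,t) = \bar{T} + \Delta T\,\sin\!\left(\frac{2\pi t}{P}\right)
```

For a homogeneous half-space, the resulting subsurface temperature oscillation decays with depth as

```latex
T(z,t) - \bar{T} = \Delta T\, e^{-z/d}\,\sin\!\left(\frac{2\pi t}{P} - \frac{z}{d}\right),
\qquad d = \sqrt{\frac{\kappa P}{\pi}},
\qquad I = \sqrt{k\rho c}
```

where d is the diurnal skin depth and I is the thermal inertia controlling the surface temperature amplitude; a buried source is detectable only if its flux survives this attenuation and the meteorological variability at the surface.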
Abstract:
In this thesis, the evanescent field sensing techniques of tapered optical nanofibres and microspherical resonators are investigated. This includes evanescent field spectroscopy of a silica nanofibre in a rubidium vapour; thermo-optical tuning of Er:Yb co-doped phosphate glass microspheres; optomechanical properties of microspherical pendulums; and the fabrication and characterisation of borosilicate microbubble resonators. Doppler-broadened and sub-Doppler absorption spectroscopic techniques are performed around the D2 transition (780.24 nm) of rubidium using the evanescent field produced at the waist of a tapered nanofibre, with input probe powers as low as 55 nW. Doppler-broadened Zeeman shifts and a preliminary dichroic atomic vapour laser lock (DAVLL) line shape are also observed via the nanofibre waist with an applied magnetic field of 60 G. This device has potential for laser frequency stabilisation while also allowing the effects of atom-surface interactions to be studied. A non-invasive thermo-optical technique for tuning Er:Yb co-doped microspheres to specific arbitrary wavelengths is demonstrated, in particular to 1294 nm and to the 5S1/2 F=3 to 5P3/2 F′=4 laser cooling transition of 85Rb. Reversible tuning ranges of up to 474 GHz and on-resonance cavity timescales on the order of 100 s are reported. This procedure has prospective applications for sensing a variety of atomic or molecular species in cavity quantum electrodynamics (QED) experiments. The mechanical characteristics of a silica microsphere pendulum with a relatively low spring constant of 10⁻⁴ N m⁻¹ are explored. A novel method of frequency sweeping the motion of the pendulum to determine its natural resonance frequencies, while overriding its sensitivity to environmental noise, is proposed. An estimated force of 0.25 N is required to actuate the pendulum by a displacement of (1-2) μm. It is suggested that this is of sufficient magnitude to be experienced between two evanescently coupled microspheres (a photonic molecule) and to enable spatial trapping of the micropendulum. Finally, single-input borosilicate microbubble resonators with diameters <100 μm are fabricated using a CO2 laser. Optical whispering gallery mode spectra are observed via evanescent coupling with a tapered fibre. A red-shift of (4-22) GHz of the resonance modes is detected when the hollow cavity is filled with nano-filtered water. A polarisation conversion effect, with an efficiency of 10%, is observed when the diameter of the coupling tapered fibre waist is varied. This effect is also achieved by simply varying the polarisation of the input light in the tapered fibre, where the efficiency is optimised to 92%. Thus, the microbubble device acts as a reversible band-pass to band-stop optical filter for cavity-QED, integrated solid-state and semiconductor circuit applications.
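For context on the resonance shifts and thermo-optical tuning reported in this abstract, the standard first-order perturbation relation for a whispering gallery mode is given below; it is the generic textbook expression, not a formula taken from the thesis.

```latex
\frac{\Delta\lambda}{\lambda} \;\approx\; \frac{\Delta n_{\mathrm{eff}}}{n_{\mathrm{eff}}} \;+\; \frac{\Delta R}{R}
```

Here n_eff is the effective refractive index seen by the mode and R is the cavity radius: filling a microbubble with water raises n_eff and red-shifts the modes, while thermal tuning acts through both the thermo-optic coefficient dn/dT and the thermal expansion of R.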
Abstract:
The desire to obtain competitive advantage is a motivator for implementing Enterprise Resource Planning (ERP) systems (Adam & O’Doherty, 2000). However, while it is accepted that Information Technology (IT) in general may contribute to the improvement of organisational performance (Melville, Kraemer, & Gurbaxani, 2004), the nature and extent of that contribution is poorly understood (Jacobs & Bendoly, 2003; Ravichandran & Lertwongsatien, 2005). Accordingly, Henderson and Venkatraman (1993) assert that it is the application of business and IT capabilities to develop and leverage a firm’s IT resources for organisational transformation, rather than the acquired technological functionality, that secures competitive advantage for firms. Application of the Resource Based View of the firm (Wernerfelt, 1984) and Dynamic Capabilities Theory (DCT) (in particular Teece and Pisano, 1998) may yield insights into whether or not the use of Enterprise Systems enhances organisations’ core capabilities and thereby secures competitive advantage, sustainable or otherwise (Melville et al., 2004). An operational definition of Core Capabilities that is independent of the construct of Sustained Competitive Advantage is formulated. This study proposes and utilises an applied Dynamic Capabilities framework to facilitate the investigation of the role of Enterprise Systems. The objective of this research study is to investigate the role of Enterprise Systems in the Core Dynamic Capabilities of Asset Lifecycle Management. The study explores the activities of Asset Lifecycle Management, the Core Dynamic Capabilities inherent in Asset Lifecycle Management, and the footprint of Enterprise Systems on those Dynamic Capabilities. Additionally, the study explains the mechanisms by which Enterprise Systems sustain the Exploitability and the Renewability of those Core Dynamic Capabilities. The study finds that Enterprise Systems contribute directly to the Value, Exploitability and Renewability of Core Dynamic Capabilities, and indirectly to their Inimitability and Non-substitutability. The study concludes by presenting an applied Dynamic Capabilities framework, which integrates Alter’s (1992) definition of Information Systems with Teece and Pisano’s (1998) model of Dynamic Capabilities to provide a robust diagnostic for determining the sustained value-generating contributions of Enterprise Systems. These frameworks are used in the conclusions to frame the findings of the study, and the conclusions go on to assert that the frameworks are free-standing and analytically generalisable, per Siggelkow (2007) and Yin (2003).
Abstract:
The analysis of energy detector systems is a well-studied topic in the literature: numerous models have been derived describing the behaviour of single and multiple antenna architectures operating in a variety of radio environments. However, in many cases of interest, these models are not in a closed form and so their evaluation requires the use of numerical methods. In general, these are computationally expensive, which can cause difficulties in certain scenarios, such as in the optimisation of device parameters on low-cost hardware. The problem becomes acute in situations where the signal-to-noise ratio is small and reliable detection is to be ensured, or where the number of samples of the received signal is large. Furthermore, due to the analytic complexity of the models, further insight into the behaviour of various system parameters of interest is not readily apparent. In this thesis, an approximation-based approach is taken towards the analysis of such systems. By focusing on the situations where exact analyses become complicated, and making a small number of astute simplifications to the underlying mathematical models, it is possible to derive novel, accurate and compact descriptions of system behaviour. Approximations are derived for the analysis of energy detectors with single and multiple antennae operating on additive white Gaussian noise (AWGN) and independent and identically distributed Rayleigh, Nakagami-m and Rice channels; in the multiple antenna case, approximations are derived for systems with maximal ratio combiner (MRC), equal gain combiner (EGC) and square law combiner (SLC) diversity. In each case, error bounds are derived describing the maximum error resulting from the use of the approximations. In addition, it is demonstrated that the derived approximations require fewer computations of simple functions than any of the exact models available in the literature. Consequently, the regions of applicability of the approximations directly complement the regions of applicability of the available exact models. Further novel approximations for other system parameters of interest, such as sample complexity, minimum detectable signal-to-noise ratio and diversity gain, are also derived. In the course of the analysis, a novel theorem describing the convergence of the chi-square, noncentral chi-square and gamma distributions towards the normal distribution is derived. The theorem describes a tight upper bound on the error resulting from the application of the central limit theorem to random variables of the aforementioned distributions and gives a much better description of the resulting error than existing Berry-Esseen type bounds. A second novel theorem, providing an upper bound on the maximum error resulting from the use of the central limit theorem to approximate the noncentral chi-square distribution where the noncentrality parameter is a multiple of the number of degrees of freedom, is also derived.
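The following is a minimal sketch of the kind of central-limit-theorem (Gaussian) approximation discussed in this abstract, for a single-antenna energy detector operating on real AWGN samples; the expressions are the standard Gaussian approximations rather than the specific approximations and error bounds derived in the thesis, and the sample count, SNR and target false-alarm rate are illustrative.

```python
import numpy as np
from scipy.stats import norm

def pfa_approx(threshold, n):
    """Gaussian (CLT) approximation to the false-alarm probability of an energy
    detector over n real samples of AWGN; threshold is normalised by the noise
    power (i.e. lambda / sigma^2)."""
    return norm.sf((threshold - n) / np.sqrt(2.0 * n))

def pd_approx(threshold, n, snr):
    """Gaussian approximation to the detection probability for a deterministic
    signal with the given linear SNR in AWGN."""
    mean = n * (1.0 + snr)
    var = 2.0 * n * (1.0 + 2.0 * snr)
    return norm.sf((threshold - mean) / np.sqrt(var))

def threshold_for_pfa(pfa, n):
    """Invert the false-alarm approximation to choose a detection threshold."""
    return n + np.sqrt(2.0 * n) * norm.isf(pfa)

if __name__ == "__main__":
    n, snr = 1000, 0.05                    # 1000 samples, roughly -13 dB SNR
    thr = threshold_for_pfa(0.01, n)       # target Pfa of 1%
    print(f"threshold = {thr:.1f}, approximate Pd = {pd_approx(thr, n, snr):.3f}")
```

The appeal of such closed forms is that threshold selection and sensitivity analysis reduce to a handful of Q-function evaluations, whereas the exact noncentral chi-square expressions require series evaluation or numerical integration.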
Abstract:
This thesis investigates the optimisation of Coarse-Fine (CF) spectrum sensing architectures under a distribution of SNRs for Dynamic Spectrum Access (DSA). Three different detector architectures are investigated: the Coarse-Sorting Fine Detector (CSFD), the Coarse-Deciding Fine Detector (CDFD) and the Hybrid Coarse-Fine Detector (HCFD). To date, the majority of the work on coarse-fine spectrum sensing for cognitive radio has focused on a single value of the SNR. This approach overlooks the key advantage that CF sensing has to offer, namely that high-powered signals can be easily detected without extra signal processing. By considering a range of SNR values, the detector can be optimised more effectively and greater performance gains realised. This work considers the optimisation of CF spectrum sensing schemes in which security and performance are treated separately. Instead of optimising system performance at a single, constant, low SNR value, the system is instead optimised for the average operating conditions. Security is still provided, in that the safety specifications are met at low SNR values. By decoupling security and performance, the system’s average performance increases whilst the protection of licensed users from harmful interference is maintained. The different architectures considered in this thesis are investigated in theory, simulation and physical implementation to provide a complete overview of the performance of each system. This thesis provides a method for estimating SNR distributions which is quick, accurate and relatively low cost. The CSFD is modelled and the characteristic equations are found for the CDFD scheme. The HCFD is introduced and optimisation schemes for all three architectures are proposed. Finally, using the Implementing Radio In Software (IRIS) test-bed to confirm the simulation results, CF spectrum sensing is shown to be significantly quicker than naive methods, whilst still meeting the required interference probability rates and without requiring a substantial increase in receiver complexity.
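To illustrate the central idea of optimising over a distribution of SNRs rather than at a single low value, the sketch below averages the Gaussian-approximated detection probability of a simple energy detector over an assumed log-normal SNR distribution; the detector model, the distribution and all parameter values are illustrative assumptions, not the CSFD/CDFD/HCFD models analysed in the thesis.

```python
import numpy as np
from scipy.stats import norm

def pd_energy_detector(threshold, n, snr):
    """Gaussian approximation to an energy detector's detection probability."""
    mean = n * (1.0 + snr)
    var = 2.0 * n * (1.0 + 2.0 * snr)
    return norm.sf((threshold - mean) / np.sqrt(var))

def average_pd(threshold, n, snr_db_mean, snr_db_std, trials=100_000, seed=0):
    """Average detection probability over a log-normal SNR distribution
    (i.e. the SNR in dB is drawn from a normal distribution)."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (rng.normal(snr_db_mean, snr_db_std, trials) / 10.0)
    return pd_energy_detector(threshold, n, snr).mean()

if __name__ == "__main__":
    n = 2000
    threshold = n + np.sqrt(2.0 * n) * norm.isf(0.01)   # fixes Pfa at roughly 1%
    # Designing for a single worst-case SNR vs. assessing the average conditions:
    print("Pd at a fixed -15 dB:", pd_energy_detector(threshold, n, 10 ** (-1.5)))
    print("Pd averaged over SNR ~ N(-10 dB, 5 dB):", average_pd(threshold, n, -10.0, 5.0))
```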
Abstract:
The work presented in this thesis describes the development of low-cost sensing and separation devices with electrochemical detection for health applications. This research employs macro-, micro- and nanotechnology. The first sensing device developed was a toner-based micro-device. The initial development of microfluidic devices was based on glass or quartz devices that are often expensive to fabricate; however, the introduction of new types of materials, such as plastics, offered a new route to fast prototyping and the development of disposable devices. One such microfluidic device is based on the lamination of laser-printed polyester films using a computer, printer and laminator. The resulting toner-based microchips demonstrated potential viability for chemical assays coupled with several detection methods, particularly chip-electrophoresis-chemiluminescence (CE-CL) detection, which had not previously been reported in the literature. Following on from the toner-based microchip, a three-electrode micro-configuration was developed on an acetate substrate. This is the first time that a micro-electrode configuration made from gold, silver and platinum has been fabricated onto acetate by means of patterning and deposition techniques, using the central fabrication facilities in Tyndall National Institute. These electrodes have been designed to facilitate the integration of a three-electrode configuration as part of the fabrication process. Since the electrodes are on acetate, the dicing step can be eliminated. The stability of these sensors has been investigated using electrochemical techniques, with excellent outcomes. Following on from the generalised testing of the electrodes, these sensors were coupled with capillary electrophoresis. The final sensing devices were on a macro scale and involved the modification of screen-printed electrodes. Screen-printed electrodes (SPE) are generally far less sensitive than more expensive electrodes, including gold, boron-doped diamond and glassy carbon electrodes. To enhance the sensitivity of these electrodes they were treated with metal nanoparticles of gold and palladium. Following on from this, another modification was introduced: the carbonaceous material carbon monolith was drop-cast onto the SPE and the metal nanoparticles were then electrodeposited onto the monolith material.
Abstract:
Our research follows a design science approach to develop a method that supports the initialization of Enterprise System (ES) implementation projects – the chartering phase. This project phase is highly relevant for implementation success, but is understudied in IS research. In this paper, we derive design principles for a chartering method based on a systematic review of the ES implementation literature and semi-structured expert interviews. Our analysis identifies differences in the importance of certain success factors depending on the system type. The proposed design principles are built on these factors and are linked to key chartering activities. We specifically consider system-type-specific chartering aspects for process-centric Business Intelligence & Analytics (BI&A) systems, which are an emerging class of systems at the intersection of BI&A and business process management. In summary, this paper proposes design principles for a chartering method that considers the specifics of process-centric BI&A.
Abstract:
New compensation methods are presented that can greatly reduce the slit errors (i.e. transition location errors) and interval errors induced by non-idealities in square-wave optical incremental encoders. An M/T-type, constant sample-time digital tachometer (CSDT) is selected for measuring the velocity of the sensor drives. Using this data, three encoder compensation techniques (two pseudoinverse-based methods and an iterative method) are presented that improve velocity measurement accuracy. The methods do not require precise knowledge of shaft velocity. During the initial learning stage of the compensation algorithm (possibly performed in situ), slit errors/interval errors are calculated either through pseudoinverse-based solutions of simple approximate linear equations, which provide fast solutions, or through an iterative method that requires very little memory storage. Subsequent operation of the motion system utilizes the adjusted slit positions for more accurate velocity calculation. In the theoretical analysis of the compensation of encoder errors, error sources such as random electrical noise and error in the estimated reference velocity are considered. Initially, the proposed learning compensation techniques are validated by implementing the algorithms in MATLAB, showing a 95% to 99% improvement in velocity measurement accuracy. However, it is also observed that the efficiency of the algorithm decreases in the presence of greater non-repetitive random noise and/or errors in the reference velocity calculation. The performance improvement in velocity measurement is also demonstrated experimentally using motor-drive systems, each of which includes a field-programmable gate array (FPGA) for CSDT counting/timing purposes and a digital signal processor (DSP). Results from open-loop velocity measurement and closed-loop servo-control applications, on three optical incremental square-wave encoders and two motor drives, are compiled. While implementing these algorithms experimentally on different drives (with and without a flywheel) and on encoders of different resolutions, slit error reductions of 60% to 86% are obtained (typically approximately 80%).
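As a simplified illustration of the learning stage described in this abstract, the sketch below estimates per-slit spacing corrections from transition intervals recorded while the shaft turns at a nominally constant (but unknown) speed, and then uses the corrections for velocity calculation; it relies on plain per-revolution averaging rather than the pseudoinverse or iterative solutions developed in the thesis, and the encoder parameters, noise levels and speed are synthetic.

```python
import numpy as np

def learn_slit_fractions(intervals, slits_per_rev):
    """Estimate the fraction of a revolution spanned by each slit interval.

    `intervals` holds measured times between successive encoder transitions over
    an integer number of revolutions at nominally constant speed.  At constant
    speed each interval's share of a revolution equals its share of the revolution
    time, so averaging over revolutions suppresses random timing noise."""
    intervals = np.asarray(intervals, dtype=float)
    n_revs = intervals.size // slits_per_rev
    per_rev = intervals[: n_revs * slits_per_rev].reshape(n_revs, slits_per_rev)
    fractions = per_rev / per_rev.sum(axis=1, keepdims=True)   # normalise each revolution
    return fractions.mean(axis=0)                              # average over revolutions

def velocity(interval, fractions, slit_index):
    """Angular velocity (rev/s) over one interval, using the learned fraction
    for that slit instead of the ideal 1/slits_per_rev spacing."""
    return fractions[slit_index] / interval

# Demo: 1024-line encoder with simulated slit-position errors and timing noise.
rng = np.random.default_rng(1)
N = 1024
true_frac = 1.0 + 0.2 * rng.standard_normal(N)       # +/-20% spacing non-uniformity
true_frac /= true_frac.sum()
speed = 50.0                                         # rev/s during learning
meas = np.tile(true_frac / speed, 20)                # 20 revolutions of intervals
meas *= 1.0 + 1e-4 * rng.standard_normal(meas.size)  # random timing noise
est = learn_slit_fractions(meas, N)
print("max residual slit error:", np.abs(est - true_frac).max())
print("velocity from one interval (rev/s):", velocity(meas[0], est, 0))
```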
Abstract:
The authors explore nanoscale sensor processor (nSP) architectures. Their design includes a simple accumulator-based instruction-set architecture, sensors, limited memory, and instruction-fused sensing. Using nSP technology based on optical resonance energy transfer logic helps them decrease the design's size; their smallest design is about the size of the largest known virus. © 2006 IEEE.
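As a flavour of what an accumulator-based instruction set with instruction-fused sensing can look like, here is a toy interpreter; the mnemonics, semantics and program are hypothetical illustrations and are not the nSP instruction set described in the paper.

```python
# Toy accumulator-machine interpreter illustrating "instruction-fused sensing":
# a SENSE instruction loads a sensor reading directly into the accumulator.
def run(program, memory, read_sensor):
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "SENSE":   acc = read_sensor(arg)           # sensing fused into the ISA
        elif op == "LOAD":  acc = memory[arg]
        elif op == "STORE": memory[arg] = acc
        elif op == "ADD":   acc += memory[arg]
        elif op == "SUBI":  acc -= arg                        # subtract immediate
        elif op == "JNEG":  pc = arg - 1 if acc < 0 else pc   # branch if accumulator negative
        elif op == "HALT":  break
        pc += 1
    return acc, memory

# Sum four sensor samples; store any excess over a threshold of 100 at address 1.
program = [("SENSE", 0), ("STORE", 0),
           ("SENSE", 0), ("ADD", 0), ("STORE", 0),
           ("SENSE", 0), ("ADD", 0), ("STORE", 0),
           ("SENSE", 0), ("ADD", 0),
           ("SUBI", 100), ("JNEG", 13), ("STORE", 1), ("HALT", 0)]
acc, mem = run(program, {0: 0, 1: 0}, read_sensor=lambda channel: 30)
print(acc, mem)   # four samples of 30 sum to 120, so the excess 20 is stored at address 1
```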
Abstract:
This study involves two aspects of our investigations of plasmonics-active systems: (i) theoretical and simulation studies and (ii) experimental fabrication of plasmonics-active nanostructures. Two types of nanostructures are selected as the model systems for their unique plasmonic properties: (1) nanoparticles and (2) nanowires on a substrate. Special focus is devoted to regions where the electromagnetic field is strongly concentrated by the metallic nanostructures or between nanostructures. The theoretical investigations deal with dimers of nanoparticles and nanoshells, using a semi-analytical method based on a multipole expansion (ME) and the finite-element method (FEM) in order to determine the electromagnetic enhancement, especially at the interface areas of two adjacent nanoparticles. The experimental study involves the design of plasmonics-active nanowire arrays on substrates that can provide efficient electromagnetic enhancement in the regions around and between the nanostructures. Fabrication of these nanowire structures over large chip-scale areas (from a few millimeters to a few centimeters), as well as finite-difference time-domain (FDTD) simulations to estimate the electromagnetic fields between the nanowires, are described. The application of these nanowire chips using surface-enhanced Raman scattering (SERS) for detection of chemicals and labeled DNA molecules is described to illustrate the potential of the plasmonics chips for sensing.
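The electromagnetic enhancement sought in these gap and hot-spot regions is commonly quantified by the standard surface-enhanced Raman scattering enhancement factor; the expression below is the usual |E|⁴ approximation, given for context rather than taken from the study.

```latex
G_{\mathrm{SERS}}(\mathbf{r}) \;\approx\;
\frac{\left|E(\mathbf{r},\omega_{0})\right|^{2}\left|E(\mathbf{r},\omega_{R})\right|^{2}}{\left|E_{0}\right|^{4}}
\;\approx\; \left|\frac{E(\mathbf{r},\omega_{0})}{E_{0}}\right|^{4}
```

Here ω0 and ωR are the excitation and Raman-scattered frequencies, E0 is the incident field amplitude, and the final step assumes the Raman shift is small compared with the plasmon resonance linewidth; the local fields computed by the ME, FEM and FDTD calculations determine this factor in the gaps between nanoparticles and nanowires.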
Abstract:
Light is a universal signal perceived by organisms, including fungi, in which light regulates common and unique biological processes depending on the species. Previous research has established that conserved proteins, originally called White collar 1 and 2 from the ascomycete Neurospora crassa, regulate UV/blue light sensing. Homologous proteins function in distant relatives of N. crassa, including the basidiomycetes and zygomycetes, which diverged as long as a billion years ago. Here we conducted microarray experiments on the basidiomycete fungus Cryptococcus neoformans to identify light-regulated genes. Surprisingly, only a single gene was induced by light above the commonly used twofold threshold. This gene, HEM15, is predicted to encode a ferrochelatase that catalyses the final step in haem biosynthesis from highly photoreactive porphyrins. The C. neoformans gene complements a Saccharomyces cerevisiae hem15Δ strain and is essential for viability, and the Hem15 protein localizes to mitochondria, three lines of evidence that the gene encodes ferrochelatase. Regulation of HEM15 by light suggests a mechanism by which bwc1/bwc2 mutants are photosensitive and exhibit reduced virulence. We show that ferrochelatase is also light-regulated in a white collar-dependent fashion in N. crassa and the zygomycete Phycomyces blakesleeanus, indicating that ferrochelatase is an ancient target of photoregulation in the fungal kingdom.
Abstract:
Previous studies have shown that the isoplanatic distortion due to turbulence and the image of a remote object may be jointly estimated from the 4D mutual intensity across an aperture. This Letter shows that decompressive inference on a 2D slice of the 4D mutual intensity, as measured by a rotational shear interferometer, is sufficient for estimation of sparse objects imaged through turbulence. The 2D slice is processed using an iterative algorithm that alternates between estimating the sparse objects and estimating the turbulence-induced phase screen. This approach may enable new systems that infer object properties through turbulence without exhaustive sampling of coherence functions.
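As an illustration of the alternating estimation strategy described in this abstract, the sketch below alternates a closed-form phase update with a proximal-gradient (sparsity-promoting) update of the object for a generic model y ≈ exp(iφ)·(A x); the measurement operator, dimensions and regularisation are illustrative assumptions, and the rotational shear interferometer model of the 4D mutual intensity is not reproduced here.

```python
import numpy as np

def soft_threshold(v, t):
    """Complex soft-thresholding (proximal operator of the l1 norm)."""
    mag = np.abs(v)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * v, 0.0)

def alternating_estimate(y, A, lam=0.02, iters=200):
    """Alternate between (i) a closed-form update of the per-measurement phases phi
    and (ii) an ISTA step on the sparse object x, for y ~ exp(1j*phi) * (A @ x)."""
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the data-fit gradient
    x = np.zeros(n, dtype=complex)
    phi = np.zeros(m)
    for _ in range(iters):
        Ax = A @ x
        phi = np.angle(y * np.conj(Ax) + 1e-12)        # best phases given the current object
        r = np.exp(-1j * phi) * y - Ax                 # phase-corrected residual
        x = soft_threshold(x + (A.conj().T @ r) / L, lam / L)   # sparse object update
    return x, phi

# Toy demo: recover a 3-sparse object observed through a random mild "phase screen".
rng = np.random.default_rng(0)
m, n = 256, 64
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
x_true = np.zeros(n, dtype=complex)
x_true[[5, 20, 41]] = [2.0, -1.5, 1.0]
phi_true = rng.uniform(-0.5, 0.5, m)
y = np.exp(1j * phi_true) * (A @ x_true)
x_hat, _ = alternating_estimate(y, A)
print("largest recovered entries at indices:", sorted(np.argsort(-np.abs(x_hat))[:3]))
```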
Abstract:
Hydrologic research is a very demanding application of fiber-optic distributed temperature sensing (DTS) in terms of precision, accuracy and calibration. The physics behind the most frequently used DTS instruments is considered as it applies to four calibration methods for single-ended DTS installations. The new methods presented are more accurate than the instrument-calibrated data, achieving accuracies on the order of tenths of a degree root mean square error (RMSE) and mean bias. Effects of localized non-uniformities that violate the assumptions of single-ended calibration data are explored and quantified. Experimental design considerations, such as the selection of integration times or the length of the reference sections, are discussed, and the impacts of these considerations on calibrated temperatures are explored in two case studies.
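As context for the single-ended calibration methods compared in this paper, the sketch below solves the commonly used single-ended DTS equation, T(z) = γ / (ln(Ps/Pas) + C − Δα·z), for its three parameters given three reference measurements at known temperatures; the equation form is the standard one for single-ended installations, while the reference locations, temperatures and instrument constants in the example are synthetic.

```python
import numpy as np

def calibrate_single_ended(z_ref, T_ref, lnratio_ref):
    """Solve for (gamma, C, dalpha) in T(z) = gamma / (ln(Ps/Pas) + C - dalpha*z)
    from three reference measurements at known temperatures (kelvin).

    Multiplying through by the denominator makes the system linear in the
    unknowns:  T_i*C - T_i*z_i*dalpha - gamma = -T_i * lnratio_i."""
    z, T, R = (np.asarray(a, dtype=float) for a in (z_ref, T_ref, lnratio_ref))
    M = np.column_stack([T, -T * z, -np.ones_like(T)])
    C, dalpha, gamma = np.linalg.solve(M, -T * R)
    return gamma, C, dalpha

def temperature(z, lnratio, gamma, C, dalpha):
    """Apply the calibrated single-ended DTS equation along the fibre."""
    return gamma / (lnratio + C - dalpha * z)

# Synthetic demo: two baths at 5 degC plus a warm section far along the fibre.
gamma_t, C_t, dalpha_t = 480.0, 1.60, 2.0e-5            # synthetic instrument constants
z_ref = np.array([20.0, 60.0, 900.0])                   # reference locations (m)
T_ref = np.array([278.15, 278.15, 318.15])              # reference temperatures (K)
R_ref = gamma_t / T_ref - C_t + dalpha_t * z_ref        # synthetic ln(Ps/Pas) values
gamma, C, dalpha = calibrate_single_ended(z_ref, T_ref, R_ref)
R_probe = gamma_t / 300.0 - C_t + dalpha_t * 500.0      # a point whose true temperature is 300 K
print(gamma, C, dalpha, temperature(500.0, R_probe, gamma, C, dalpha))
```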