9 results for Speed-accuracy tradeoff

in CORA - Cork Open Research Archive - University College Cork - Ireland


Relevance:

30.00%

Publisher:

Abstract:

Objectives: To measure the step-count accuracy of an ankle-worn accelerometer, a thigh-worn accelerometer and one pedometer in older and frail inpatients. Design: Cross-sectional design study. Setting: Research room within a hospital. Participants: Convenience sample of inpatients aged ≥65 years, able to walk 20 metres unassisted, with or without a walking-aid. Intervention: Patients completed a 40-minute programme of predetermined tasks while wearing the three motion sensors simultaneously. Video-recording of the procedure provided the criterion measurement of step-count. Main Outcome Measures: Mean percentage (%) errors were calculated for all tasks, slow versus fast walkers, independent versus walking-aid-users, and over shorter versus longer distances. The intra-class correlation was calculated and accuracy was visually displayed by Bland-Altman plots. Results: Thirty-two patients (78.1 ± 7.8 years) completed the study. Fifteen were female and 17 used walking-aids. Their median speed was 0.46 m/sec (interquartile range, IQR 0.36-0.66). The ankle-worn accelerometer overestimated steps (median 1% error, IQR -3 to 13). The other motion sensors underestimated steps (median -40% error, IQR -51 to -35, and -38%, IQR -93 to -27, respectively). The ankle-worn accelerometer proved more accurate over longer distances (3% error, IQR 0 to 9) than shorter distances (10%, IQR -23 to 9). Conclusions: The ankle-worn accelerometer gave the most accurate step-count measurement and was most accurate over longer distances. Neither of the other motion sensors had acceptable margins of error.
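The two error metrics the abstract relies on (signed percentage error against the video criterion, and Bland-Altman bias with limits of agreement) can be sketched in a few lines. The step counts below are illustrative values, not the study's measurements:

```python
# Sketch of the step-count accuracy metrics: signed % error per task,
# plus Bland-Altman bias and 95% limits of agreement (illustrative data).
import statistics

def percentage_error(device_steps, criterion_steps):
    """Signed % error; negative values indicate undercounting."""
    return 100.0 * (device_steps - criterion_steps) / criterion_steps

def bland_altman(device, criterion):
    """Return (bias, lower LoA, upper LoA) for paired step counts."""
    diffs = [d - c for d, c in zip(device, criterion)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical accelerometer vs. video counts for four walking tasks:
device = [101, 98, 205, 410]
criterion = [100, 100, 200, 400]
errors = [percentage_error(d, c) for d, c in zip(device, criterion)]
print([round(e, 1) for e in errors])    # per-task % error
print(bland_altman(device, criterion))  # (bias, lower LoA, upper LoA)
```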

Relevance:

20.00%

Publisher:

Abstract:

Wind energy is the energy source that contributes most to the renewable energy mix of European countries. While there are good wind resources throughout Europe, the intermittency of the wind represents a major problem for the deployment of wind energy into electricity networks. To ensure grid security, a Transmission System Operator today needs, for each kilowatt of wind energy, either an equal amount of spinning reserve or a forecasting system that can predict the amount of energy that will be produced from wind over a period of 1 to 48 hours. In the range from 5 m/s to 15 m/s, a wind turbine's production increases with the cube of the wind speed. For this reason, a Transmission System Operator requires an accuracy of 1 m/s for wind speed forecasts in this range. Forecasting wind energy with a numerical weather prediction model in this context forms the background of this work. The author's goal was to present a pragmatic solution to this specific problem in the "real world". This work therefore has to be seen in a technical context and hence does not provide, nor intends to provide, a general overview of the benefits and drawbacks of wind energy as a renewable energy source. In the first part of this work, the accuracy requirements of the energy sector for wind speed predictions from numerical weather prediction models are described and analysed. A unique set of numerical experiments was carried out in collaboration with the Danish Meteorological Institute to investigate the forecast quality of an operational numerical weather prediction model for this purpose. The results of this investigation revealed that the accuracy requirements for wind speed and wind power forecasts from today's numerical weather prediction models can only be met at certain times. This means that the uncertainty of the forecast quality becomes a parameter that is as important as the wind speed and wind power itself.
Quantifying the uncertainty of a forecast valid for tomorrow requires an ensemble of forecasts. In the second part of this work, such an ensemble of forecasts was designed and verified for its ability to quantify the forecast error. This was accomplished by correlating the measured error and the forecasted uncertainty of area-integrated wind speed and wind power in Denmark and Ireland. A correlation of 93% was achieved in these areas. This method cannot by itself meet the accuracy requirements of the energy sector. By knowing the uncertainty of the forecasts, however, the focus can be put on the accuracy requirements at times when it is possible to accurately predict the weather. Thus, this result presents a major step forward in making wind energy a compatible energy source in the future.
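The cubic relationship the abstract mentions is what makes the 1 m/s requirement so strict: a small speed error becomes a large power error. A minimal sketch using the standard P = ½·ρ·A·Cp·v³ formula, with assumed turbine parameters (not values from the thesis):

```python
# Illustrative cubic wind-power curve; rho/area/Cp are assumptions.
RHO = 1.225    # air density, kg/m^3
AREA = 5027.0  # rotor swept area for an ~80 m rotor diameter, m^2 (assumed)
CP = 0.45      # power coefficient (assumed)

def power_kw(v):
    """Turbine power output in kW at wind speed v (m/s), cubic regime."""
    return 0.5 * RHO * AREA * CP * v**3 / 1000.0

# A 1 m/s forecast error at 10 m/s changes predicted power by ~33%:
p10, p11 = power_kw(10.0), power_kw(11.0)
print(round(100.0 * (p11 - p10) / p10, 1))  # relative power error, %: 33.1
```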

Relevance:

20.00%

Publisher:

Abstract:

A novel miniaturised system for measurement of the in-flight characteristics of an arrow is introduced in this paper. The system allows the user to measure in-flight parameters such as the arrow's speed, kinetic energy and momentum, arrow drag and vibrations of the arrow shaft. The system consists of electronics, namely a three-axis accelerometer, shock switch, microcontroller and EEPROM memory embedded in the arrow tip. The system also includes a docking station for download and processing of in-flight ballistic data from the tip to provide the measured values. With this system, a user can evaluate and optimize their archery equipment setup based on measured ballistic values. Recent test results taken at NIST show the accuracy of the launch velocities to be within ±0.59%, when compared with NIST's most accurate ballistic chronograph.
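The kinetic energy and momentum the tip reports follow directly from the arrow's mass and measured launch speed. A minimal sketch with assumed values (not NIST test data):

```python
# Ballistic quantities derived from mass and measured launch speed.
def kinetic_energy_j(mass_kg, speed_ms):
    """Kinetic energy in joules: KE = 0.5 * m * v^2."""
    return 0.5 * mass_kg * speed_ms**2

def momentum_kgms(mass_kg, speed_ms):
    """Linear momentum in kg·m/s: p = m * v."""
    return mass_kg * speed_ms

# A typical 25 g arrow launched at 60 m/s (assumed values):
mass, speed = 0.025, 60.0
print(round(kinetic_energy_j(mass, speed), 2))  # 45.0 (joules)
print(round(momentum_kgms(mass, speed), 2))     # 1.5 (kg·m/s)
```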

Relevance:

20.00%

Publisher:

Abstract:

Evaluation of temperature distribution in cold rooms is an important consideration in the design of food storage solutions. Two common approaches used in both industry and academia to address this question are the deployment of wireless sensors, and modelling with Computational Fluid Dynamics (CFD). However, for a real-world evaluation of temperature distribution in a cold room, both approaches have their limitations. For wireless sensors, it is economically unfeasible to carry out a large-scale deployment (to obtain a high resolution of temperature distribution), while CFD modelling alone is usually not accurate enough to give reliable results. In this paper, we propose a model-based framework which combines wireless sensing with CFD modelling to achieve a satisfactory trade-off between the number of wireless sensors required and the accuracy of the temperature profile in cold rooms. A case study is presented to demonstrate the usability of the framework.
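One simple way to combine the two techniques, in the spirit of the framework described above (though not the paper's actual method), is to correct the coarse model field by interpolating the residuals (sensor reading minus model value) measured at the few sensor locations:

```python
# Illustrative inverse-distance-weighted residual correction: a CFD-style
# model prediction is adjusted using sparse sensor measurements.
def idw_correction(point, sensors, power=2.0):
    """sensors: list of ((x, y), residual). Returns interpolated residual."""
    num = den = 0.0
    for (sx, sy), r in sensors:
        d2 = (point[0] - sx) ** 2 + (point[1] - sy) ** 2
        if d2 == 0.0:
            return r  # exactly at a sensor: use its residual directly
        w = 1.0 / d2 ** (power / 2.0)
        num += w * r
        den += w
    return num / den

# Model says 4.0 °C everywhere; two sensors disagree by +0.5 and -0.5 °C:
sensors = [((0.0, 0.0), 0.5), ((10.0, 0.0), -0.5)]
corrected = 4.0 + idw_correction((5.0, 0.0), sensors)
print(corrected)  # at the midpoint the two residuals cancel: 4.0
```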

Relevance:

20.00%

Publisher:

Abstract:

Electronic signal processing systems currently employed at core internet routers require huge amounts of power to operate, and they may be unable to continue to satisfy consumer demand for more bandwidth without an inordinate increase in cost, size and/or energy consumption. Optical signal processing techniques may be deployed in next-generation optical networks for simple tasks such as wavelength conversion, demultiplexing and format conversion at high speed (≥100 Gb/s) to alleviate the pressure on existing core router infrastructure. To implement optical signal processing functionalities, it is necessary to exploit the nonlinear optical properties of suitable materials such as III-V semiconductor compounds, silicon, periodically-poled lithium niobate (PPLN), highly nonlinear fibre (HNLF) or chalcogenide glasses. However, nonlinear optical (NLO) components such as semiconductor optical amplifiers (SOAs), electroabsorption modulators (EAMs) and silicon nanowires are the most promising candidates as all-optical switching elements vis-à-vis ease of integration, device footprint and energy consumption. This PhD thesis presents the amplitude and phase dynamics in a range of device configurations containing SOAs, EAMs and/or silicon nanowires to support the design of all-optical switching elements for deployment in next-generation optical networks. Time-resolved pump-probe spectroscopy using 3 ps pulses from mode-locked laser sources was utilized to accurately measure the carrier dynamics in the device(s) under test. The research work falls into four main topics: (a) a long SOA, (b) the concatenated SOA-EAM-SOA (CSES) configuration, (c) silicon nanowires embedded in SU8 polymer and (d) a custom epitaxy design EAM with fast carrier sweep-out dynamics.
The principal aim was to identify the optimum operation conditions for each of these NLO device configurations to enhance their switching capability and to assess their potential for various optical signal processing functionalities. All of the NLO device configurations investigated in this thesis are compact and suitable for monolithic and/or hybrid integration.

Relevance:

20.00%

Publisher:

Abstract:

A wireless sensor network can become partitioned due to node failure, requiring the deployment of additional relay nodes in order to restore network connectivity. This introduces an optimisation problem involving a tradeoff between the number of additional nodes that are required and the costs of moving through the sensor field for the purpose of node placement. This tradeoff is application-dependent, influenced for example by the relative urgency of network restoration. In addition, minimising the number of relay nodes might lead to long routing paths to the sink, which may cause problems of data latency. This data latency is extremely important in wireless sensor network applications such as battlefield surveillance, intrusion detection, disaster rescue, highway traffic coordination, etc., where real-time constraints must not be violated. Therefore, we also consider the problem of deploying multiple sinks in order to improve the network performance. Previous research has only considered parts of this problem in isolation, and has not properly considered the problems of moving through a constrained environment, of discovering changes to that environment during the repair, or of network quality after the restoration. In this thesis, we first consider a base problem in which we assume the exploration tasks have already been completed, and so our aim is to optimise our use of resources in the static, fully observed problem. In the real world, we would not know the radio and physical environments after damage, and this creates a dynamic problem where damage must be discovered. Therefore, we extend to the dynamic problem, in which the network repair problem considers both exploration and restoration. We then add a hop-count constraint for network quality, in which the desired locations must be able to talk to a sink within a hop-count limit after the network is restored.
For each new problem of the network repair, we have proposed different solutions (heuristics and/or complete algorithms) which prioritise different objectives. We evaluate our solutions based on simulation, assessing the quality of solutions (node cost, movement cost, computation time, and total restoration time) by varying the problem types and the capability of the agent that makes the repair. We show that the relative importance of the objectives influences the choice of algorithm, and different speeds of movement for the repairing agent have a significant impact on performance, and must be taken into account when selecting the algorithm. In particular, the node-based approaches are the best in the node cost, and the path-based approaches are the best in the mobility cost. For the total restoration time, the node-based approaches are the best with a fast moving agent while the path-based approaches are the best with a slow moving agent. For a medium speed moving agent, the total restoration time of the node-based approaches and that of the path-based approaches are almost balanced.
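The hop-count constraint described above can be checked with a multi-source breadth-first search from all sinks, which gives each node's minimum hop distance. A minimal sketch (the topology and limit are illustrative, not from the thesis):

```python
# Verify that every target location can reach some sink within a hop limit,
# using BFS started from all sinks simultaneously.
from collections import deque

def within_hop_limit(adj, sinks, targets, limit):
    """adj: node -> neighbour list. True if every target is <= limit hops
    from the nearest sink."""
    dist = {s: 0 for s in sinks}
    q = deque(sinks)
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return all(t in dist and dist[t] <= limit for t in targets)

# A small repaired topology: a chain of relays between two sinks.
adj = {"s1": ["a"], "a": ["s1", "b"], "b": ["a", "c"],
       "c": ["b", "s2"], "s2": ["c"]}
print(within_hop_limit(adj, ["s1", "s2"], ["b"], 2))  # True: b is 2 hops away
```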

Relevance:

20.00%

Publisher:

Abstract:

In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media, e-government, etc. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred metres) optical interconnect. Today, transceivers for these applications achieve up to 100Gb/s by multiplexing 10x 10Gb/s or 4x 25Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400Gb/s up to 1Tb/s. The crucial challenge is to achieve this in the same footprint (same transceiver module) and with similar power consumption as today's technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: indeed, a 1Tb/s transceiver would require integration of 40 VCSELs (vertical cavity surface emitting laser diodes, widely used for short-reach optical interconnect), 40 photodiodes and the electronics operating at 25Gb/s in the same module as today's 100Gb/s transceiver. Pushing the bit rate on such links beyond today's commercially available 100Gb/s per fibre will require new generations of VCSELs and their driver and receiver electronics. This work looks into a number of state-of-the-art technologies, investigates their performance constraints and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep submicron (65nm and 28nm) CMOS technology are explored in this work, while also maintaining a focus upon reducing power consumption and chip area. The techniques used were pre-emphasis of the rising and falling edges of the signal, and bandwidth extension by inductive peaking and different local feedback techniques.
These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4 level pulse amplitude modulation). Such modulation format can increase the throughput per individual channel, which helps to overcome the challenges mentioned above to realize 400Gb/s to 1Tb/s transceivers.
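PAM-4's throughput gain comes from carrying two bits per symbol across four amplitude levels. A minimal mapper sketch, with Gray coding so adjacent levels differ by one bit (the normalised level values are assumptions, not the chip's actual voltage swing):

```python
# Illustrative Gray-coded PAM-4 mapper: two bits per symbol, four levels,
# doubling throughput per channel relative to binary NRZ signalling.
GRAY_MAP = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map a flat bit list (even length) to PAM-4 symbol levels."""
    assert len(bits) % 2 == 0, "PAM-4 consumes bits in pairs"
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(pam4_encode([0, 0, 0, 1, 1, 1, 1, 0]))  # [-3, -1, 1, 3]
```

Gray coding matters here because an amplitude error to a neighbouring level corrupts only a single bit, keeping the bit error rate close to the symbol error rate.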

Relevance:

20.00%

Publisher:

Abstract:

High volumes of data traffic, along with bandwidth-hungry applications such as cloud computing and video on demand, are driving core optical communication links closer and closer to their maximum capacity. The research community has clearly identified the coming approach of the nonlinear Shannon limit for standard single-mode fibre [1,2]. It is in this context that the work on modulation formats, contained in Chapter 3 of this thesis, was undertaken. The work investigates the proposed energy-efficient four-dimensional modulation formats. The work begins by studying a new visualisation technique for four-dimensional modulation formats, akin to constellation diagrams. The work then carries out one of the first implementations of one such modulation format, polarisation-switched quadrature phase-shift keying (PS-QPSK). This thesis also studies two potential next-generation fibres: few-mode and hollow-core photonic band-gap fibre. Chapter 4 studies ways to experimentally quantify the nonlinearities in few-mode fibre and assess the potential benefits and limitations of such fibres. It carries out detailed experiments to measure the effects of stimulated Brillouin scattering, self-phase modulation and four-wave mixing, and compares the results to numerical models, along with capacity limit calculations. Chapter 5 investigates hollow-core photonic band-gap fibre, which is predicted to have a low-loss minimum at a wavelength of 2μm. To benefit from this potential low-loss window requires the development of telecoms-grade subsystems and components. The chapter outlines some of the development and characterisation of these components. The world's first wavelength division multiplexed (WDM) subsystem directly implemented at 2μm is presented, along with WDM transmission over hollow-core photonic band-gap fibre at 2μm. References: [1] P. P. Mitra and J. B. Stark, Nature, 411, 1027-1030, 2001. [2] A. D. Ellis et al., JLT, 28, 423-433, 2010.

Relevance:

20.00%

Publisher:

Abstract:

This longitudinal study tracked third-level French (n=10) and Chinese (n=7) learners of English as a second language (L2) during an eight-month study abroad (SA) period at an Irish university. The investigation sought to determine whether there was a significant relationship between length of stay (LoS) abroad and gains in the learners' oral complexity, accuracy and fluency (CAF), what the relationship was between these three language constructs, and whether the two learner groups would experience similar paths to development. Additionally, the study also investigated whether specific reported out-of-class contact with the L2 was implicated in oral CAF gains. Oral data were collected at three equidistant time points: at the beginning of SA (T1), midway through the SA sojourn (T2) and at the end (T3), allowing for a comparison of CAF gains arising during one semester abroad to those arising during a subsequent semester. Data were collected using Sociolinguistic Interviews (Labov, 1984) and adapted versions of the Language Contact Profile (Freed et al., 2004). Overall, the results point to LoS abroad as a highly influential variable in gains to be expected in oral CAF during SA. While one semester in the target-language (TL) country was not enough to foster statistically significant improvement in any of the CAF measures employed, significant improvement was found during the second semester of SA. Significant differences were also revealed between the two learner groups. Finally, significant correlations, some positive, some negative, were found between gains in CAF and specific usage of the L2. All in all, the disaggregation of the group data clearly illustrates, in line with other recent enquiries (e.g. Wright and Cong, 2014), that each individual learner's path to CAF development was unique and highly individualised, thus providing strong evidence for the recent claim that SLA is "an individualized nonlinear endeavor" (Polat and Kim, 2014: 186).