834 results for Input-Output analysis


Relevance: 30.00%

Abstract:

Unmanned Aerial Vehicles (UAVs) have become a significant and growing segment of the global aviation industry. These vehicles are developed with the intention of operating in regions where the presence of onboard human pilots is either too risky or unnecessary. Their popularity with both the military and civilian sectors has seen the use of UAVs in a diverse range of applications, from reconnaissance and surveillance tasks for the military, to civilian uses such as aid relief and monitoring tasks. Efficient energy utilisation on a UAV is essential to its functioning, often to achieve the operational goals of range, endurance and other specific mission requirements. Due to the limitations of the space available and the mass budget on the UAV, there is often a delicate balance between the onboard energy available (i.e. fuel) and achieving the operational goals. This paper presents the development of a parallel Hybrid Electric Propulsion System (HEPS) on a small fixed-wing UAV incorporating an Ideal Operating Line (IOL) control strategy. A simulation model of a UAV was developed in the MATLAB Simulink environment, utilising the AeroSim Blockset and the in-built Aerosonde UAV block and its parameters. An IOL analysis of an Aerosonde engine was performed, and the most efficient points of operation for this engine (i.e. those providing the greatest torque output at the least fuel consumption) were determined. Simulation models of the components in a HEPS were designed and constructed in the MATLAB Simulink environment. It was demonstrated through simulation that a UAV with the current HEPS configuration was capable of achieving a fuel saving of 6.5% compared to the internal combustion engine (ICE)-only configuration. These components form the basis for the development of a complete simulation model of a Hybrid-Electric UAV (HEUAV).
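
At the heart of the IOL analysis described above is a simple search over the engine map: for each engine speed, pick the operating point that delivers the most torque per unit of fuel burned, then join those points into a line. The following minimal sketch illustrates the idea on a made-up engine map (the numbers are placeholders, not Aerosonde engine data):

```python
# Minimal sketch of an Ideal Operating Line (IOL) search on a hypothetical
# engine map; all map values are illustrative, not Aerosonde engine data.
import numpy as np

speeds = np.array([3000, 4000, 5000, 6000])                    # engine speeds, rpm
throttle = np.linspace(0.2, 1.0, 9)                            # throttle settings
torque = np.outer(speeds / 6000, 4.0 * throttle)               # N*m (made up)
fuel_flow = np.outer(speeds / 6000, 0.4 + 1.2 * throttle**2)   # g/s (made up)

# For each speed, keep the throttle setting giving the most torque per unit fuel;
# joined across speeds, these points trace out the ideal operating line.
for i, rpm in enumerate(speeds):
    efficiency = torque[i] / fuel_flow[i]
    j = int(np.argmax(efficiency))
    print(f"{rpm} rpm: throttle {throttle[j]:.1f}, "
          f"torque {torque[i, j]:.2f} N*m, fuel {fuel_flow[i, j]:.2f} g/s")
```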

Relevance: 30.00%

Abstract:

This note examines the productive efficiency of 62 starting guards during the 2011/12 National Basketball Association (NBA) season. This period coincides with the phenomenal and largely unanticipated performance of New York Knicks’ starting point guard Jeremy Lin and the attendant public and media hype known as Linsanity. We employ a data envelopment analysis (DEA) approach that includes allowance for an undesirable output, here turnovers per game, alongside the desirable outputs of points, rebounds, assists, steals and blocks per game and an input of minutes per game. The results indicate that, depending upon the specification, between 29% and 42% of NBA guards are fully efficient, including Jeremy Lin, with mean inefficiencies of 3.7% and 19.2%, respectively. However, while Jeremy Lin is technically efficient, he seldom serves as a benchmark for inefficient players, at least when compared with established players such as Chris Paul and Dwyane Wade. This suggests the uniqueness of Jeremy Lin's productive solution and may explain why his unique style of play, encompassing individual brilliance, unselfish play and team leadership, is of such broad public appeal.
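
For readers unfamiliar with DEA, each player's score comes from a small linear program asking whether a weighted combination of the other observations could match his outputs with proportionally fewer inputs. The sketch below, with made-up data for three players, treats the undesirable output (turnovers) as an extra input, which is one common workaround; the note's exact DEA specification may differ:

```python
# Minimal sketch of input-oriented DEA efficiency scores, treating the
# undesirable output (turnovers) as an extra input. Player data are made up.
import numpy as np
from scipy.optimize import linprog

# Columns: players. Inputs: minutes, turnovers. Outputs: points, assists, rebounds.
inputs  = np.array([[36.0, 30.0, 34.0],    # minutes per game
                    [ 3.5,  2.0,  2.8]])   # turnovers per game (undesirable)
outputs = np.array([[18.0, 20.0, 14.0],    # points per game
                    [ 7.5,  9.0,  6.0],    # assists per game
                    [ 3.5,  4.5,  3.0]])   # rebounds per game

def dea_efficiency(k):
    """CCR input-oriented score of player k: minimise theta such that a
    non-negative combination of players uses <= theta * player k's inputs
    while producing >= player k's outputs."""
    n = inputs.shape[1]
    c = np.r_[1.0, np.zeros(n)]                               # variables: [theta, lambdas]
    A_in = np.c_[-inputs[:, [k]], inputs]                     # sum(lam*x) <= theta * x_k
    A_out = np.c_[np.zeros((outputs.shape[0], 1)), -outputs]  # sum(lam*y) >= y_k
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(inputs.shape[0]), -outputs[:, k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for k in range(inputs.shape[1]):
    print(f"player {k}: efficiency = {dea_efficiency(k):.3f}")
```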

Relevance: 30.00%

Abstract:

Speaker diarization is the process of annotating an input audio recording with information that attributes temporal regions of the audio signal to their respective sources, which may include both speech and non-speech events. For speech regions, the diarization system also specifies the locations of speaker boundaries and assigns relative speaker labels to each homogeneous segment of speech. In short, speaker diarization systems effectively answer the question of ‘who spoke when’. There are several important applications for speaker diarization technology, such as facilitating speaker indexing systems to allow users to directly access the relevant segments of interest within a given recording, and assisting with other downstream processes such as summarizing and parsing. When combined with automatic speech recognition (ASR) systems, the metadata extracted from a speaker diarization system can provide complementary information for ASR transcripts including the location of speaker turns and relative speaker segment labels, making the transcripts more readable. Speaker diarization output can also be used to localize the instances of specific speakers to pool data for model adaptation, which in turn boosts transcription accuracies. Speaker diarization therefore plays an important role as a preliminary step in automatic transcription of audio data. The aim of this work is to improve the usefulness and practicality of speaker diarization technology, through the reduction of diarization error rates. In particular, this research is focused on the segmentation and clustering stages within a diarization system. Although particular emphasis is placed on the broadcast news audio domain and systems developed throughout this work are also trained and tested on broadcast news data, the techniques proposed in this dissertation are also applicable to other domains including telephone conversations and meeting audio. Three main research themes were pursued: heuristic rules for speaker segmentation, modelling uncertainty in speaker model estimates, and modelling uncertainty in eigenvoice speaker modelling. The use of heuristic approaches for the speaker segmentation task was first investigated, with emphasis placed on minimizing missed boundary detections. A set of heuristic rules was proposed, to govern the detection and heuristic selection of candidate speaker segment boundaries. A second pass, using the same heuristic algorithm with a smaller window, was also proposed with the aim of improving detection of boundaries around short speaker segments. Compared to single-threshold-based methods, the proposed heuristic approach was shown to provide improved segmentation performance, leading to a reduction in the overall diarization error rate. Methods to model the uncertainty in speaker model estimates were developed, to address the difficulties associated with making segmentation and clustering decisions with limited data in the speaker segments. The Bayes factor, derived specifically for multivariate Gaussian speaker modelling, was introduced to account for the uncertainty of the speaker model estimates. The use of the Bayes factor also enabled the incorporation of prior information regarding the audio to aid segmentation and clustering decisions. The idea of modelling uncertainty in speaker model estimates was also extended to the eigenvoice speaker modelling framework for the speaker clustering task.
Building on the application of Bayesian approaches to the speaker diarization problem, the proposed approach takes into account the uncertainty associated with the explicit estimation of the speaker factors. The proposed decision criteria, based on Bayesian theory, were shown to generally outperform their non-Bayesian counterparts.
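
As a concrete point of reference for the segmentation decisions discussed above, the sketch below implements the classic delta-BIC test for whether two adjacent windows of acoustic features come from the same speaker; the thesis's Bayes-factor criterion replaces this penalised-likelihood test, but the decision structure is similar:

```python
# Minimal sketch of a delta-BIC speaker-change test on two adjacent windows
# of feature vectors (synthetic data); a positive value suggests a boundary.
import numpy as np

def delta_bic(x, y, penalty_weight=1.0):
    """Compare modelling the two windows with one Gaussian vs. one each."""
    z = np.vstack([x, y])
    d = z.shape[1]
    def logdet_cov(a):
        cov = np.cov(a, rowvar=False) + 1e-6 * np.eye(d)
        return np.linalg.slogdet(cov)[1]
    n_x, n_y, n_z = len(x), len(y), len(z)
    # Log-likelihood gain of modelling the two windows separately ...
    gain = 0.5 * (n_z * logdet_cov(z) - n_x * logdet_cov(x) - n_y * logdet_cov(y))
    # ... minus a BIC penalty for the extra Gaussian's parameters.
    n_params = d + d * (d + 1) / 2
    return gain - penalty_weight * 0.5 * n_params * np.log(n_z)

rng = np.random.default_rng(0)
same = delta_bic(rng.normal(0, 1, (200, 12)), rng.normal(0, 1, (200, 12)))
diff = delta_bic(rng.normal(0, 1, (200, 12)), rng.normal(2, 1, (200, 12)))
print(f"same speaker: {same:.1f}, different speakers: {diff:.1f}")
```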

Relevance: 30.00%

Abstract:

Deterministic computer simulations of physical experiments are now common techniques in science and engineering, because physical experiments are often too time consuming, expensive or impossible to conduct. The use of complex computer models, or codes, in place of physical experiments has led to the study of computer experiments, which are used to investigate many scientific phenomena of this nature. A computer experiment consists of a number of runs of the computer code with different input choices. The design and analysis of computer experiments is a rapidly growing area of statistical experimental design. This thesis investigates some practical issues in the design and analysis of computer experiments and attempts to answer some of the questions faced by experimenters using computer experiments. In particular, the questions of how many runs a computer experiment should comprise and how an existing design should be augmented are studied, and attention is given to the case where the response is a function over time.
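
A typical workflow begins with a space-filling design over the input space and later augments it with additional runs. The sketch below generates a Latin hypercube design and augments it greedily under a maximin-distance criterion; this is only an illustrative workflow, not the thesis's own design or augmentation method:

```python
# Minimal sketch of a space-filling design for a computer experiment and a
# simple maximin augmentation step over the unit cube of inputs.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(1)
d, n_initial = 3, 10                                 # input dimension, initial runs

# Initial Latin hypercube design (one row per run of the computer code).
design = qmc.LatinHypercube(d=d, seed=1).random(n=n_initial)

def augment_maximin(design, n_new, n_candidates=2000):
    """Greedily add points that maximise the minimum distance to the design."""
    design = design.copy()
    for _ in range(n_new):
        candidates = rng.random((n_candidates, design.shape[1]))
        dists = np.linalg.norm(candidates[:, None, :] - design[None, :, :], axis=2)
        design = np.vstack([design, candidates[np.argmax(dists.min(axis=1))]])
    return design

augmented = augment_maximin(design, n_new=5)
print(augmented.shape)   # (15, 3): the original runs plus 5 augmentation runs
```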

Relevance: 30.00%

Abstract:

Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, yet non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a one-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training on both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility. The method is also designed to produce features better suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
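
To make the quantizer-training question concrete, the sketch below learns one threshold per feature dimension from a training set and binarises a feature vector against those thresholds; this is an illustrative median-threshold scheme, not the dissertation's quantizer:

```python
# Minimal sketch of the quantisation and binary-encoding stage of a robust
# hash: thresholds are learnt from training features (here, per-dimension
# medians), then features are binarised against them. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
train_features = rng.normal(size=(500, 64))    # e.g. randomised image features
query_features = rng.normal(size=64)

# Learn one quantisation threshold per feature dimension from training data.
thresholds = np.median(train_features, axis=0)

def binary_hash(features, thresholds):
    """1 bit per feature: whether the feature exceeds its learnt threshold."""
    return (features > thresholds).astype(np.uint8)

h1 = binary_hash(query_features, thresholds)
h2 = binary_hash(query_features + rng.normal(scale=0.05, size=64), thresholds)
print("hash bits:", h1[:16], "...")
print("Hamming distance under a small perturbation:", int(np.sum(h1 != h2)))
```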

Relevance: 30.00%

Abstract:

Operational modal analysis (OMA) is prevalent in modal identification of civil structures. It requires response measurements of the underlying structure under ambient loads, and a valid OMA method requires that the excitation be white noise in time and space. Although there are numerous applications of OMA in the literature, few have investigated the statistical distribution of a measurement and the influence of such randomness on modal identification. This research applies a modified kurtosis index to evaluate the statistical distribution of raw measurement data, and a windowing strategy employing this index is proposed to select quality datasets. To demonstrate how the data selection strategy works, the ambient vibration measurements of a laboratory bridge model and a real cable-stayed bridge were considered. The analysis employed frequency domain decomposition (FDD) as the target OMA approach for modal identification, and the modal identification results obtained from data segments with different randomness were compared. The discrepancy in the FDD spectra of the results indicates that, in order to fulfil the assumptions of an OMA method, special care should be taken in processing long vibration measurement records. The proposed data selection strategy is easy to apply and verified to be effective in modal analysis.
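
As an illustration of the windowing idea, the sketch below scores fixed-length windows of an ambient record by their sample excess kurtosis and keeps those closest to Gaussian; the paper's exact modified kurtosis index is not reproduced here, and the record is synthetic:

```python
# Minimal sketch of kurtosis-based window screening for an ambient vibration
# record: windows whose distribution is closest to Gaussian (excess kurtosis
# near zero) are kept as better-conditioned input for OMA.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
n_windows, win_len = 20, 1024
signal = rng.normal(size=n_windows * win_len)                   # white-noise-like record
signal[5 * win_len:6 * win_len] += 5 * rng.standard_t(df=3, size=win_len)  # one noisy patch

windows = signal.reshape(n_windows, win_len)
excess_kurtosis = kurtosis(windows, axis=1, fisher=True)        # 0 for an ideal Gaussian

# Keep the windows most consistent with the white-noise excitation assumption.
keep = np.argsort(np.abs(excess_kurtosis))[: n_windows // 2]
print("selected windows:", np.sort(keep))
```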

Relevance: 30.00%

Abstract:

Police in-vehicle systems include a visual-output mobile data terminal (MDT) with manual input via touch screen and keyboard. This study investigated the potential for voice-based input and output modalities to reduce the subjective workload of police officers while driving. Nineteen experienced drivers of police vehicles (one female) from New South Wales (NSW) Police completed four simulated urban drives. Three drives included a concurrent secondary task: an imitation licence number search using an emulated MDT. Three different interface output-input modalities were examined: Visual-Manual, Visual-Voice, and Audio-Voice. Following each drive, participants rated their subjective workload using the NASA Raw Task Load Index and answered questions on acceptability. A questionnaire on interface preferences was completed by participants at the end of their session. Engaging in secondary tasks while driving significantly increased subjective workload. The Visual-Manual interface resulted in higher time demand than either of the voice-based interfaces and greater physical demand than the Audio-Voice interface. The Visual-Voice and Audio-Voice interfaces were rated easier to use and more useful than the Visual-Manual interface, although they were not rated significantly differently from each other. Findings largely echoed those deriving from the analysis of the objective driving performance data. It is acknowledged that, under standard procedures, officers should not drive while performing tasks concurrently with certain in-vehicle policing systems; however, in practice this sometimes occurs. Taking action now to develop voice-based technology for police in-vehicle systems has the potential to realise safer and more efficient vehicle-based police work.

Relevance: 30.00%

Abstract:

Stations on Bus Rapid Transit (BRT) lines ordinarily control line capacity because they act as bottlenecks. At stations with passing lanes, congestion may occur when buses maneuvering into and out of the platform stopping lane interfere with bus flow, or when a queue of buses forms upstream of the station and blocks inflow. We contend that, as bus inflow to the station area approaches capacity, queuing will become excessive in a manner similar to the operation of a minor movement at an unsignalized intersection. This analogy is used to treat BRT station operation and to analyze the relationship between station queuing and capacity. In the first of three stages, we conducted microscopic simulation modeling to study and analyze the operating characteristics of the station under near-steady-state conditions through the output variables of capacity, degree of saturation and queuing. A mathematical model was then developed to estimate the relationship between average queue and degree of saturation, and was calibrated for a specified range of controlled scenarios of the mean and coefficient of variation of dwell time. Finally, the model was calibrated and validated against the simulation results.

Relevance: 30.00%

Abstract:

An important aspect of robotic path planning is ensuring that the vehicle is in the best location to collect the data necessary for the problem at hand. Given that features of interest are dynamic and move with oceanic currents, vehicle speed is an important factor in any planning exercise, to ensure vehicles are at the right place at the right time. Here, we examine different Gaussian process models to find a suitable predictive kinematic model that enables the speed of an underactuated, autonomous surface vehicle to be accurately predicted given a set of input environmental parameters.
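
The general shape of such a model is Gaussian process regression from environmental inputs to predicted speed, with predictive uncertainty attached. In the sketch below the features, kernel and data are placeholders rather than the paper's model:

```python
# Minimal sketch of Gaussian process regression predicting vehicle speed from
# environmental inputs (made-up wind and current speeds); illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
X = rng.uniform(0, 5, size=(80, 2))                       # [wind m/s, current m/s]
speed = 2.0 - 0.3 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.05, 80)   # synthetic speeds

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                              normalize_y=True).fit(X, speed)

X_new = np.array([[1.0, 0.5], [4.0, 3.0]])
mean, std = gp.predict(X_new, return_std=True)
for x, m, s in zip(X_new, mean, std):
    print(f"wind={x[0]:.1f}, current={x[1]:.1f} -> speed {m:.2f} +/- {2 * s:.2f} m/s")
```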

Relevance: 30.00%

Abstract:

The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronization error and data loss have prevented these systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to be able to overcome a large number of technical uncertainties. Nevertheless, there is limited research examining the effects of uncertainties in generic WSN platforms and verifying the capability of SHM-oriented WSNs, particularly in demanding SHM applications such as modal analysis and damage identification of real civil structures. This article first reviews the major technical uncertainties of both generic and SHM-oriented WSN platforms and the efforts of the SHM research community to cope with them. Then, the effects of the most inherent WSN uncertainty on the first level of a common Output-only Modal-based Damage Identification (OMDI) approach are intensively investigated. Experimental accelerations collected by a wired sensory system on a benchmark civil structure are initially used as clean data before being contaminated with different levels of data pollutants to simulate practical uncertainties in both WSN platforms. Statistical analyses are comprehensively employed in order to uncover the distribution pattern of the uncertainty influence on the OMDI approach. The results of this research show that uncertainties of generic WSNs can have a serious impact on Level 1 OMDI methods utilizing mode shapes. They also show that SHM-oriented WSNs can substantially lessen this impact and recover true structural information without resorting to costly computational solutions.
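
A common way to quantify how data pollution distorts mode-shape-based (Level 1) identification is the Modal Assurance Criterion (MAC) between shapes identified from clean and contaminated data. The sketch below is purely illustrative, using a synthetic mode shape and noise model rather than the benchmark structure's data:

```python
# Minimal sketch of checking mode-shape agreement between clean (wired) and
# polluted (WSN-like) data with the Modal Assurance Criterion.
import numpy as np

def mac(phi_1, phi_2):
    """Modal Assurance Criterion between two mode-shape vectors (1 = identical)."""
    return np.abs(phi_1 @ phi_2) ** 2 / ((phi_1 @ phi_1) * (phi_2 @ phi_2))

rng = np.random.default_rng(5)
n_sensors = 12
clean_shape = np.sin(np.pi * np.arange(1, n_sensors + 1) / (n_sensors + 1))  # 1st bending mode

for noise_level in (0.01, 0.05, 0.2):
    noisy_shape = clean_shape + rng.normal(scale=noise_level, size=n_sensors)
    print(f"noise {noise_level:>4}: MAC = {mac(clean_shape, noisy_shape):.4f}")
```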

Relevance: 30.00%

Abstract:

A critical requirement for safe autonomous navigation of a planetary rover is the ability to accurately estimate the traversability of the terrain. This work considers the problem of predicting the attitude and configuration angles of the platform from terrain representations that are often incomplete due to occlusions and sensor limitations. Using Gaussian Processes (GP) and exteroceptive data as training input, we can provide a continuous and complete representation of terrain traversability, with uncertainty in the output estimates. In this paper, we propose a novel method that focuses on exploiting the explicit correlation in vehicle attitude and configuration during operation by learning a kernel function from vehicle experience to perform GP regression. We provide an extensive experimental validation of the proposed method on a planetary rover. We show a significant improvement in the accuracy of our estimates compared with results obtained using standard kernels (Squared Exponential and Neural Network), and compared to traversability estimation made over terrain models built using state-of-the-art GP techniques.

Relevance: 30.00%

Abstract:

BACKGROUND: Diabetes in South Asia represents a different disease entity in terms of its onset, progression, and complications. In the present study, we systematically analyzed the medical research output on diabetes in South Asia. METHODS: The online SciVerse Scopus database was searched using the terms "diabetes" and "diabetes mellitus" in the article Title, Abstract or Keywords fields, in conjunction with the names of each regional country in the Author Affiliation field. RESULTS: In total, 8478 research articles were identified. Most were from India (85.1%) and Pakistan (9.6%), and the contribution to the global diabetes research output was 2.1%. Publications from South Asia increased markedly after 2007, with 58.7% of the papers published between 2000 and 2010 appearing after 2007. Most papers were Research Articles (75.9%) and Reviews (12.9%), with only 90 (1.1%) clinical trials. Publications predominantly appeared in local national journals. Indian authors and institutions had the largest number of articles and the highest h-index. There were 136 (1.6%) intraregional collaborative studies. Only 39 articles (0.46%) had >100 citations. CONCLUSIONS: Regional research output on diabetes mellitus is unsatisfactory, with only a minimal contribution to global diabetes research. Publications are not highly cited and only a few randomized controlled trials have been performed. In the coming decades, scientists in the region must collaborate and focus on practical and culturally acceptable interventional studies on diabetes mellitus.

Relevance: 30.00%

Abstract:

The acceptance of broadband ultrasound attenuation for the assessment of osteoporosis suffers from a limited understanding of ultrasound wave propagation through cancellous bone. It has recently been proposed that ultrasound wave propagation can be described by a concept of parallel sonic rays. This concept approximates the detected transmission signal as the superposition of all sonic rays that travel directly from the transmitting to the receiving transducer, with the transit time of each ray defined by the proportions of bone and marrow through which it propagates. An ultrasound transit time spectrum describes the proportion of sonic rays having a particular transit time, effectively describing the lateral inhomogeneity of transit times over the surface of the receiving ultrasound transducer. The aim of this study was to provide a proof of concept that a transit time spectrum may be derived from digital deconvolution of input and output ultrasound signals. We applied an active-set deconvolution algorithm to determine the ultrasound transit time spectra in the three orthogonal directions of four cancellous bone replica samples and compared the experimental data with the predictions from computer simulation. The agreement between experimental and predicted ultrasound transit time spectra, derived from Bland–Altman analysis, ranged from 92% to 99%, thereby supporting the concept of parallel sonic rays for ultrasound propagation in cancellous bone. In addition to further validating the parallel sonic ray concept, this technique offers the opportunity to consider quantitative characterisation of the material and structural properties of cancellous bone not previously available using ultrasound.
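
Conceptually, the received signal is the input pulse convolved with the transit time spectrum, so the spectrum can be estimated by constrained deconvolution. The sketch below poses this as a non-negative least-squares problem solved with SciPy's Lawson-Hanson active-set solver; the pulse and spectrum are synthetic, not measured bone data:

```python
# Minimal sketch of recovering a transit-time spectrum by deconvolving the
# received signal against the input pulse using non-negative least squares.
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

fs = 10e6                                              # sampling rate, Hz
t = np.arange(0, 20e-6, 1 / fs)
pulse = np.sin(2 * np.pi * 1e6 * t) * np.exp(-((t - 2e-6) / 1e-6) ** 2)   # input signal

true_spectrum = np.zeros(len(t))                       # proportion of rays per transit time
true_spectrum[[40, 55, 70]] = [0.5, 0.3, 0.2]          # three distinct ray delays
received = np.convolve(pulse, true_spectrum)[: len(t)] # output signal

# Convolution written as a matrix equation: received = H @ spectrum.
H = toeplitz(pulse, np.zeros(len(t)))
estimated_spectrum, _ = nnls(H, received)              # active-set NNLS solve

print("recovered peak indices:", np.argsort(estimated_spectrum)[-3:][::-1])
```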

Relevance: 30.00%

Abstract:

Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant approach to developing emulators has been to use priors in the form of Gaussian stochastic processes (GASP) conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there are a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept of developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, because it incorporates our knowledge about the dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by application to a simple hydrological model.
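
For contrast with the state-space emulator described above, the sketch below shows the baseline GASP-style workflow: condition a Gaussian process emulator on a small design data set of simulator runs and predict at new parameter values. The toy simulator and kernel are placeholders, not the hydrological model or the proposed Kalman-smoothing emulator:

```python
# Minimal sketch of a Gaussian process emulator conditioned on a small design
# data set of simulator runs; the simulator is a toy stand-in for an expensive
# deterministic model with a scalar output.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(theta):
    """Toy 'computationally expensive' model: one scalar output per parameter set."""
    return np.sin(3 * theta[:, 0]) + 0.5 * theta[:, 1] ** 2

rng = np.random.default_rng(6)
design = rng.uniform(0, 1, size=(20, 2))          # design data set of inputs
outputs = simulator(design)                       # corresponding model outputs

emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(design, outputs)

theta_new = rng.uniform(0, 1, size=(3, 2))
mean, std = emulator.predict(theta_new, return_std=True)
print("emulator:", np.round(mean, 3), " truth:", np.round(simulator(theta_new), 3))
```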

Relevance: 30.00%

Abstract:

In this paper we present a truncated differential analysis of reduced-round LBlock by computing the differential distribution of every nibble of the state. The LLR statistical test is used as a tool to apply the distinguishing and key-recovery attacks. To build the distinguisher, all possible differences are traced through the cipher and the truncated differential probability distribution is determined for every output nibble. We concatenate additional rounds to the beginning and end of the truncated differential distribution to apply the key-recovery attack. By exploiting properties of the key schedule, we obtain a large overlap of key bits used in the beginning and final rounds. This allows us to significantly increase the differential probabilities and hence reduce the attack complexity. We validate the analysis by implementing the attack on LBlock reduced to 12 rounds. Finally, we apply single-key and related-key attacks on 18- and 21-round LBlock, respectively.
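
To illustrate the role of the LLR test in the distinguisher, the sketch below computes the log-likelihood ratio statistic for observed counts of an output nibble difference under a hypothetical non-uniform distribution versus the uniform distribution expected of a random permutation; the distribution used is made up, not LBlock's actual truncated differential distribution:

```python
# Minimal sketch of an LLR distinguisher over a 4-bit (nibble) difference:
# positive values favour the cipher hypothesis, negative values the random
# permutation. The cipher distribution p is hypothetical, for illustration.
import numpy as np

rng = np.random.default_rng(7)
p = np.full(16, (1 - 0.10) / 15)       # hypothetical cipher distribution ...
p[0] = 0.10                            # ... with one difference value favoured
q = np.full(16, 1 / 16)                # uniform: expected of a random permutation

def llr_statistic(counts, p, q):
    """Sum over nibble values of N_x * log(p_x / q_x)."""
    return float(np.sum(counts * np.log(p / q)))

n_samples = 50_000
cipher_counts = np.bincount(rng.choice(16, size=n_samples, p=p), minlength=16)
random_counts = np.bincount(rng.choice(16, size=n_samples, p=q), minlength=16)

print("LLR with cipher data :", round(llr_statistic(cipher_counts, p, q), 1))
print("LLR with random data :", round(llr_statistic(random_counts, p, q), 1))
```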