33 results for Statistical modeling technique
Abstract:
The adulteration of extra virgin olive oil with other vegetable oils is a well-documented problem with economic and health consequences. Current official methods have proved insufficient to detect such adulterations. One of the most concerning and hardest-to-detect adulterations is the addition of hazelnut oil. The main objective of this work was to develop a novel dimensionality reduction technique able to model oil mixtures as part of an integrated pattern recognition solution. This solution aims to identify hazelnut oil adulterants in extra virgin olive oil at low percentages based on spectroscopic chemical fingerprints. The proposed Continuous Locality Preserving Projections (CLPP) technique allows the continuous nature of the in-house-produced admixtures to be modelled as data series instead of discrete points. This methodology has the potential to be extended to other mixtures and adulterations of food products. Maintaining the continuous structure of the data manifold allows better visualisation of the examined classification problem and facilitates more accurate use of the manifold for detecting the adulterants.
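CLPP itself is the paper's contribution, but it extends the classical Locality Preserving Projections (LPP) of He and Niyogi. As a reference point, here is a minimal LPP sketch on synthetic spectra-like data; the function name, neighbourhood size, and data are our own illustrative choices, not the paper's.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=2, n_neighbors=5, t=1.0):
    """Classical LPP: solve X^T L X a = lam X^T D X a and keep the
    eigenvectors with the smallest eigenvalues."""
    n = X.shape[0]
    dist = cdist(X, X)
    # symmetric kNN adjacency weighted by a heat kernel
    W = np.zeros((n, n))
    idx = np.argsort(dist, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):
        for j in idx[i]:
            w = np.exp(-dist[i, j] ** 2 / t)
            W[i, j] = W[j, i] = w
    D = np.diag(W.sum(axis=1))
    L = D - W                      # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X
    B += 1e-9 * np.eye(B.shape[0])  # small ridge keeps B positive definite
    vals, vecs = eigh(A, B)         # generalized symmetric eigenproblem
    return vecs[:, :n_components]

rng = np.random.default_rng(0)
# toy stand-in for spectra: 60 samples, 20 "wavelengths"
X = rng.normal(size=(60, 20))
P = lpp(X)
Y = X @ P  # low-dimensional embedding preserving local neighbourhoods
```

CLPP's departure from this baseline, per the abstract, is treating each admixture series as a continuous curve in the embedding rather than independent points.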
Abstract:
The paper is primarily concerned with the modelling of aircraft manufacturing cost. The aim is to establish an integrated life cycle balanced design process through a systems engineering approach to interdisciplinary analysis and control. The cost modelling is achieved using the genetic causal approach that enforces product family categorisation and the subsequent generation of causal relationships between deterministic cost components and their design source. This utilises causal parametric cost drivers and the definition of the physical architecture from the Work Breakdown Structure (WBS) to identify product families. The paper presents applications to the overall aircraft design with a particular focus on the fuselage as a subsystem of the aircraft, including fuselage panels and localised detail, as well as engine nacelles. The higher level application to aircraft requirements and functional analysis is investigated and verified relative to life cycle design issues for the relationship between acquisition cost and Direct Operational Cost (DOC), for a range of both metal and composite subsystems. Maintenance is considered in some detail as an important contributor to DOC and life cycle cost. The lower level application to aircraft physical architecture is investigated and verified for the WBS of an engine nacelle, including a sequential build stage investigation of the materials, fabrication and assembly costs. The studies are then extended by investigating the acquisition cost of aircraft fuselages, including the recurring unit cost and the non-recurring design cost of the airframe sub-system. The systems costing methodology is facilitated by the genetic causal cost modeling technique as the latter is highly generic, interdisciplinary, flexible, multilevel and recursive in nature, and can be applied at the various analysis levels required of systems engineering. 
Therefore, the main contribution of the paper is a methodology for applying systems engineering costing, supported by the genetic causal cost modelling approach, whether at a requirements, functional or physical level.
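The core of the causal parametric approach is regressing deterministic cost components on their design drivers within a product family. A minimal sketch, with entirely made-up drivers (panel weight, part count) and cost figures, illustrates the idea; the paper's actual drivers and data are not reproduced here.

```python
import numpy as np

# Hypothetical product family: fuselage panels with two causal cost
# drivers. All numbers below are illustrative, not from the paper.
weight = np.array([120.0, 150.0, 180.0, 210.0, 260.0])  # kg
parts  = np.array([40.0, 55.0, 60.0, 75.0, 90.0])       # part count
cost   = np.array([10.2, 12.9, 14.6, 17.4, 21.1])       # arbitrary units

# Causal parametric model: cost = a*weight + b*parts + c, fit by
# ordinary least squares over the family
A = np.column_stack([weight, parts, np.ones_like(weight)])
coef, *_ = np.linalg.lstsq(A, cost, rcond=None)
pred = A @ coef  # predicted recurring unit cost per panel
```

Because the relationship is estimated per product family, the same recursive fit can be applied at any level of the Work Breakdown Structure, which is what makes the approach multilevel in the sense the abstract describes.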
Abstract:
This paper presents an efficient modeling technique for the derivation of the dispersion characteristics of novel uniplanar metallodielectric periodic structures. The analysis is based on the method of moments and an interpolation scheme, which significantly accelerates the computations. Triangular basis functions are used that allow for the modeling of arbitrarily shaped metallic elements. Based on this method, novel uniplanar left-handed (LH) metamaterials are proposed. Variations of the split rectangular-loop element printed on a grounded dielectric substrate are demonstrated to possess LH propagation properties. Full-wave dispersion curves are presented. Based on the dual transmission-line concept, we study the distribution of the modal fields and the variation of series capacitance and shunt inductance for all the proposed elements. A verification of the left-handedness is presented by means of full-wave simulation of finite uniplanar arrays using commercial software (HFSS). The cell dimensions are a small fraction of the wavelength (approximately lambda/24), so the structures can be considered a homogeneous effective medium. The structures are simple, readily scalable to higher frequencies, and compatible with low-cost fabrication techniques.
Abstract:
This paper describes a study that used video materials and visits to an airport to prepare children on the autism spectrum for travel by plane. Twenty parents and carers took part in the study with children aged from 3 to 16 years. The authors explain that the methods they used were based on Applied Behaviour Analysis (ABA) research: a video modeling technique called Point-of-View Video Priming, and, during visits to an airport, procedures known as Natural Environment Teaching. The findings suggest that using video and preparing children by taking them through what is likely to happen in the real environment when they travel by plane is effective. The authors suggest these strategies could be used to support children with autism in other experiences they need or would like to engage in, such as visits to the dentist or hairdresser and access to leisure centres and other public spaces.
Abstract:
In many applications, and especially those where batch processes are involved, a target scalar output of interest is often dependent on one or more time series of data. With the exponential growth in data logging in modern industries, such time series are increasingly available for statistical modeling in soft sensing applications. In order to exploit time series data for predictive modelling, it is necessary to summarise the information they contain as a set of features to use as model regressors. Typically this is done in an unsupervised fashion using simple techniques such as computing statistical moments, principal components or wavelet decompositions, often leading to significant information loss and hence suboptimal predictive models. In this paper, a functional learning paradigm is exploited in a supervised fashion to derive continuous, smooth estimates of time series data (yielding aggregated local information), while simultaneously estimating a continuous shape function that yields optimal predictions. The proposed Supervised Aggregative Feature Extraction (SAFE) methodology can be extended to support nonlinear predictive models by embedding the functional learning framework in a Reproducing Kernel Hilbert Space (RKHS) setting. SAFE has a number of attractive features, including a closed-form solution and the ability to explicitly incorporate first- and second-order derivative information. Using simulation studies and a practical semiconductor manufacturing case study, we highlight the strengths of the new methodology with respect to standard unsupervised feature extraction approaches.
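The unsupervised baseline the abstract criticises is easy to make concrete: summarise each batch trace by its statistical moments, then regress the target on those features. The sketch below uses synthetic data of our own construction, chosen so that the target depends on a localised part of each trace, exactly the kind of information a global moment summary dilutes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy batch data set: 100 batches, each a length-50 process variable trace.
# The target depends only on the late part of the trace.
traces = rng.normal(size=(100, 50)).cumsum(axis=1)
y = traces[:, 40:].mean(axis=1) + 0.1 * rng.normal(size=100)

# Unsupervised summary: first four statistical moments of each trace
feats = np.column_stack([
    traces.mean(axis=1),
    traces.std(axis=1),
    stats.skew(traces, axis=1),
    stats.kurtosis(traces, axis=1),
])

# Ordinary least squares on the moment features
A = np.column_stack([feats, np.ones(len(y))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
```

SAFE's supervised alternative, as the abstract describes it, learns the aggregation weights jointly with the predictive model instead of fixing them in advance as moments do.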
Abstract:
The VLT-FLAMES Tarantula Survey (VFTS) has secured mid-resolution spectra of over 300 O-type stars in the 30 Doradus region of the Large Magellanic Cloud. A homogeneous analysis of such a large sample requires automated techniques, an approach that will also be needed for the upcoming analysis of the spectroscopic surveys of the Northern and Southern Hemisphere supplementing the Gaia measurements. We point out the importance of Gaia for the study of O stars, summarize the O star science case of VFTS, and present a test of the automated modeling technique using synthetically generated data. This method employs a genetic-algorithm-based optimization technique in combination with FASTWIND model atmospheres. The method is found to be robust and able to recover the main photospheric parameters accurately. Precise wind parameters can be obtained as well; however, as expected, for dwarf stars the rate of acceleration of the flow is poorly constrained.
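The test strategy described here, recovering known parameters from synthetic data with a genetic algorithm, can be sketched in miniature. The example below fits a toy Gaussian absorption line rather than a FASTWIND atmosphere, and all population sizes and operators are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "observation": a Gaussian absorption line with known parameters
x = np.linspace(-5, 5, 200)
true_depth, true_width = 0.6, 1.2
obs = 1.0 - true_depth * np.exp(-x**2 / (2 * true_width**2))
obs += 0.01 * rng.normal(size=x.size)

def model(p):
    depth, width = p
    return 1.0 - depth * np.exp(-x**2 / (2 * width**2))

def fitness(p):
    return -np.mean((model(p) - obs) ** 2)  # negative chi-square-like score

# Plain genetic algorithm: elitism, tournament selection, blend
# crossover, and Gaussian mutation
lo_b, hi_b = np.array([0.0, 0.1]), np.array([1.0, 3.0])
pop = rng.uniform(lo_b, hi_b, size=(40, 2))
for gen in range(60):
    fit = np.array([fitness(p) for p in pop])
    new = [pop[fit.argmax()]]                       # keep the best
    while len(new) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if fit[i] > fit[j] else pop[j]   # tournament winner 1
        k, l = rng.integers(0, len(pop), 2)
        b = pop[k] if fit[k] > fit[l] else pop[l]   # tournament winner 2
        child = 0.5 * (a + b) + rng.normal(0, 0.05, 2)
        new.append(np.clip(child, lo_b, hi_b))
    pop = np.array(new)

best = pop[np.array([fitness(p) for p in pop]).argmax()]
```

After convergence, `best` should sit near the true `(0.6, 1.2)`, mirroring the abstract's finding that the main parameters are recovered accurately from synthetic spectra.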
Abstract:
The work in this paper is of particular significance since it considers the problem of modelling cross- and auto-correlation in statistical process monitoring. The presence of both types of correlation can lead to fault insensitivity or false alarms, although in the published literature to date, only autocorrelation has been broadly considered. The proposed method, which uses a Kalman innovation model, effectively removes both correlations. The paper (and Part 2 [2]) has emerged from work supported by EPSRC grant GR/S84354/01 and is of direct relevance to problems in several application areas, including chemical, electrical, and mechanical process monitoring.
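The innovation idea can be shown in one dimension: for an autocorrelated process, the one-step prediction errors (innovations) of a fitted model are approximately white, so monitoring them avoids the false alarms raised by the raw data. This is a deliberately simplified AR(1) stand-in for the paper's full Kalman innovation model, with synthetic data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Autocorrelated process data: AR(1) with coefficient a = 0.9
n, a = 2000, 0.9
e = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + e[t]

def lag1_corr(z):
    z = z - z.mean()
    return (z[:-1] * z[1:]).sum() / (z * z).sum()

# Estimate the coefficient by least squares and form the innovations,
# the one-step prediction errors an innovation model would monitor
a_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
innov = x[1:] - a_hat * x[:-1]
```

The raw series has a lag-1 autocorrelation near 0.9 while the innovations are close to uncorrelated, which is why control limits computed on them behave as designed.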
Abstract:
This paper presents research for developing a virtual inspection system that evaluates the dimensional tolerance of forged aerofoil blades formed using the finite element (FE) method. Conventional algorithms adopted by modern coordinate measurement processes have been incorporated with the latest free-form surface evaluation techniques to provide a robust framework for the dimensional inspection of FE aerofoil models. The accuracy of the approach has been verified, with a strong correlation obtained between the virtual inspection data and coordinate measurement data from corresponding aerofoil components.
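A standard building block of such coordinate-measurement workflows is rigid least-squares registration of measured points to the nominal model before deviations are evaluated. The sketch below uses the well-known Kabsch/SVD method on made-up point clouds; it is a generic illustration, not the paper's specific algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up nominal point cloud and a "measured" copy that has been
# rotated and translated, as a fixture offset on a CMM would do
nominal = rng.normal(size=(50, 3))
theta = 0.2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
measured = nominal @ R.T + np.array([1.0, -0.5, 0.3])

# Kabsch/SVD rigid registration: align measured to nominal, then the
# residual deviations are the dimensional-tolerance quantities
mc = measured - measured.mean(axis=0)
nc = nominal - nominal.mean(axis=0)
U, S, Vt = np.linalg.svd(mc.T @ nc)
d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
R_hat = U @ np.diag([1.0, 1.0, d]) @ Vt
aligned = mc @ R_hat + nominal.mean(axis=0)
deviation = np.linalg.norm(aligned - nominal, axis=1)
```

With noise-free synthetic data the recovered alignment is exact, so `deviation` is numerically zero; on real measured components the residuals would be the inspection result.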
Abstract:
This letter reports the statistical characterization and modeling of the indoor radio channel for a mobile wireless personal area network operating at 868 MHz. Line of sight (LOS) and non-LOS conditions were considered for three environments: anechoic chamber, open office area and hallway. Overall, the Nakagami-m CDF best described fading for bodyworn operation in 60% of all measured channels in anechoic chamber and open office area environments. The Nakagami distribution was also found to provide a good description of Rician distributed channels, which predominated in the hallway. Multipath played an important role in channel statistics, with the mean recorded m value being reduced from 7.8 in the anechoic chamber to 1.3 in both the open office area and hallway.
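Fitting a Nakagami-m distribution to a measured fading envelope is routine with `scipy.stats.nakagami`, whose shape parameter `nu` is the m value. The sketch below fits simulated (not measured) envelope data generated at m = 1.3, the mean value the letter reports for the open office area and hallway.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Simulated fading envelope samples, Nakagami-m with m = 1.3
m_true = 1.3
r = stats.nakagami.rvs(m_true, scale=1.0, size=5000, random_state=rng)

# Maximum-likelihood fit with the location pinned at zero, as is
# usual for an envelope distribution
m_hat, loc, scale = stats.nakagami.fit(r, floc=0)
```

The fitted `m_hat` lands close to 1.3; applied per measured channel, this is how the 60% best-fit figure in the letter would be tabulated against competing distributions.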
Abstract:
This paper proposes a novel image denoising technique based on the normal inverse Gaussian (NIG) density model, using an extended non-negative sparse coding (NNSC) algorithm that we proposed previously. This algorithm converges to feature basis vectors that exhibit locality and orientation in the spatial and frequency domains. Here, we demonstrate that the NIG density provides a very good fit to the non-negative sparse data. In the denoising process, by exploiting a NIG-based maximum a posteriori (MAP) estimator of an image corrupted by additive Gaussian noise, the noise can be reduced successfully. This shrinkage technique, also referred to as the NNSC shrinkage technique, is self-adaptive to the statistical properties of the image data. The denoising method is evaluated using the normalized signal-to-noise ratio (SNR). Experimental results show that the NNSC shrinkage approach is indeed efficient and effective in denoising. Furthermore, we compare the effectiveness of the NNSC shrinkage method with standard sparse coding shrinkage, wavelet-based shrinkage and the Wiener filter. The simulation results show that our method outperforms the three denoising approaches mentioned above.
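The shrinkage principle is easiest to see with a simpler prior: under a Laplacian prior and Gaussian noise, the MAP estimate of a sparse coefficient is soft thresholding. The paper derives the analogous estimator for the heavier-tailed NIG prior; the sketch below uses the Laplacian stand-in on synthetic coefficients, with all numbers our own.

```python
import numpy as np

rng = np.random.default_rng(5)

# Sparse "coefficients" (stand-in for non-negative sparse-coding
# responses) observed under additive Gaussian noise
coef = np.zeros(1000)
coef[:50] = rng.exponential(2.0, size=50)
sigma = 0.5
noisy = coef + sigma * rng.normal(size=1000)

# MAP shrinkage under a Laplacian prior reduces to soft thresholding;
# threshold = noise variance x prior rate (rate assumed 1 here)
lam = sigma ** 2 * 1.0
denoised = np.sign(noisy) * np.maximum(np.abs(noisy) - lam, 0.0)

mse_noisy = np.mean((noisy - coef) ** 2)
mse_denoised = np.mean((denoised - coef) ** 2)
```

Because most coefficients are truly zero, shrinkage cuts the error sharply; the NIG prior refines this by matching the actual heavy-tailed statistics of sparse-coding responses, which is the self-adaptivity the abstract claims.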