909 results for Fourier Spectral Method
Abstract:
This paper reports the feasibility and methodological considerations of using the Short Message System Experience Sampling (SMS-ES) Method, an experience sampling research method developed to assist researchers in collecting repeated measures of consumers' affective experiences. The method combines SMS with web-based technology in a simple yet effective way. It is described using a practical implementation study that collected consumers' emotions in response to using mobile phones in everyday situations. The method is further evaluated in terms of the quality of the data collected in the study, as well as against the methodological considerations for experience sampling studies. These two evaluations suggest that the SMS-ES Method is both a valid and reliable approach for collecting consumers' affective experiences. Moreover, the method can be applied across a range of for-profit and not-for-profit contexts where researchers want to capture repeated measures of consumers' affective experiences occurring over a period of time. The benefits of the method are discussed to assist researchers who wish to apply the SMS-ES Method in their own research designs.
Abstract:
The stochastic simulation algorithm was introduced by Gillespie and, in a different form, by Kurtz. There have been many attempts at accelerating the algorithm without deviating from the behavior of the simulated system. The crux of the explicit τ-leaping procedure is the use of Poisson random variables to approximate the number of occurrences of each type of reaction event during a carefully selected time period, τ. This method is acceptable provided the leap condition (that no propensity function changes "significantly" during any time-step) is met. With this method, however, species numbers can artificially become negative. Several recent papers have demonstrated methods that avoid this situation. One such method classifies as critical those reactions in danger of sending species populations negative; at most one of these critical reactions is allowed to occur in the next time-step. We argue that the criticality of a reactant species and its dependent reaction channels should be related to the probability of the species number becoming negative. This way, only reactions that, if fired, produce a high probability of driving a reactant population negative are labeled critical. The number of firings of the remaining reaction channels can then be approximated using Poisson random variables, speeding up the simulation while maintaining accuracy. In implementing this revised method of criticality selection, we make use of the probability distribution from which the random variable describing the change in species number is drawn. We give several numerical examples to demonstrate the effectiveness of our new method.
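The basic explicit τ-leaping step described above can be sketched as follows. This is a minimal illustration with a fixed τ and a crude clamp against negative populations, not the authors' revised criticality-selection scheme; the birth-death model and all rate constants are invented for the example.

```python
import numpy as np

def tau_leap(x0, stoich, propensities, tau, n_steps, seed=0):
    """Explicit tau-leaping: approximate the number of firings of each
    reaction channel over a leap of length tau with a Poisson draw.
    A production scheme would select tau adaptively and handle critical
    reactions separately; here tau is fixed for brevity."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    history = [x.copy()]
    for _ in range(n_steps):
        a = propensities(x)          # propensity of each channel
        k = rng.poisson(a * tau)     # Poisson-approximated firing counts
        x = x + k @ stoich           # apply the state-change vectors
        x = np.maximum(x, 0.0)       # crude guard against negative counts
        history.append(x.copy())
    return np.array(history)

# Birth-death process: 0 -> S at rate 10, S -> 0 at rate 0.1 * S
stoich = np.array([[1.0], [-1.0]])   # one row per reaction
props = lambda x: np.array([10.0, 0.1 * x[0]])
traj = tau_leap([0.0], stoich, props, tau=0.1, n_steps=500)
```

The trajectory relaxes toward the steady-state mean of 100 molecules; the clamp stands in for the criticality handling that the paper refines.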
Abstract:
We consider a stochastic regularization method for solving the backward Cauchy problem in Banach spaces. An order of convergence is obtained on sourcewise representative elements.
Abstract:
Current knowledge about the relationship between transport disadvantage and activity space size is limited to urban areas; as a result, very little is known about this link in a rural context. In addition, although research has identified transport-disadvantaged groups based on the size of their activity spaces, these studies have not empirically explained such differences, and the result is often a poor identification of the problems facing disadvantaged groups. Research has shown that transport disadvantage varies over time, yet the static nature of the activity space analyses in previous studies has lacked the ability to identify transport disadvantage in time. Activity space is a dynamic concept and therefore has great potential for capturing temporal variations in behaviour and access to opportunities. This research derives measures of the size and fullness of activity spaces for 157 individuals for weekdays, weekends, and a full week, using weekly activity-travel diary data from three case study areas located in rural Northern Ireland. Four focus groups were also conducted in order to triangulate the quantitative findings and to explain the differences between socio-spatial groups. The findings show that despite having smaller activity spaces, individuals were not disadvantaged, because they were able to access their required activities locally. Car ownership was found to be an important lifeline in rural areas; temporal disaggregation of the data reveals that this is true only on weekends, due to a lack of public transport services. In addition, although activity spaces were of a similar size, the fullness of the activity spaces of low-income individuals was found to be significantly lower than that of their high-income counterparts. Focus group data show that financial constraints and poor connections, both between public transport services and between transport routes and opportunities, forced individuals to participate in activities located along the main transport corridors.
Abstract:
A new approach to pattern recognition using invariant parameters based on higher-order spectra is presented. In particular, invariant parameters derived from the bispectrum are used to classify one-dimensional shapes. The bispectrum, which is translation invariant, is integrated along straight lines passing through the origin in bifrequency space. The phase of the integrated bispectrum is shown to be scale and amplification invariant as well. A minimal set of these invariants is selected as the feature vector for pattern classification, and a minimum distance classifier using a statistical distance measure is used to classify test patterns. The classification technique is shown to distinguish two similar but distinct bolts, given their one-dimensional profiles. Pattern recognition using higher-order spectral invariants is fast, suited for parallel implementation, and has high immunity to additive Gaussian noise. Simulation results show very high classification accuracy, even for low signal-to-noise ratios.
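The translation invariance of the bispectrum that this approach relies on is easy to demonstrate numerically. The following is a minimal single-record sketch (real estimators average over many segments, and the paper's integrated-bispectrum invariants are not implemented here):

```python
import numpy as np

def bispectrum(x):
    """Direct bispectrum estimate B(f1, f2) = X(f1) X(f2) X*(f1 + f2)
    from a single record (practical estimators average over segments)."""
    X = np.fft.fft(x)
    n = len(x)
    f1, f2 = np.meshgrid(np.arange(n), np.arange(n))
    return X[f1] * X[f2] * np.conj(X[(f1 + f2) % n])

# Translation invariance: a circular shift multiplies X(f) by
# exp(-2j*pi*f*m/n), and the three linear phase factors cancel
# in the triple product, leaving B unchanged.
rng = np.random.default_rng(1)
x = rng.standard_normal(64)
b1 = bispectrum(x)
b2 = bispectrum(np.roll(x, 5))
print(np.allclose(b1, b2))  # True
```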
Abstract:
Higher-order spectral analysis is used to detect the presence of secondary and tertiary forced waves associated with the nonlinearity of energetic swell observed in 8- and 13-m water depths. Higher-order spectral analysis techniques are first described and then applied to the field data, followed by a summary of the results.
Abstract:
Higher-order spectral (bispectral and trispectral) analyses of numerical solutions of the Duffing equation with a cubic stiffness are used to isolate the coupling between the triads and quartets, respectively, of nonlinearly interacting Fourier components of the system. The Duffing oscillator follows a period-doubling intermittency catastrophic route to chaos. For period-doubled limit cycles, higher-order spectra indicate that both quadratic and cubic nonlinear interactions are important to the dynamics. However, when the Duffing oscillator becomes chaotic, the global behavior of the cubic nonlinearity becomes dominant: quadratic nonlinear interactions become weak, while cubic interactions remain strong. As the nonlinearity of the system is increased, the number of excited Fourier components increases, eventually leading to broad-band power spectra for chaos. The corresponding higher-order spectra indicate that although some individual nonlinear interactions weaken as nonlinearity increases, the number of nonlinearly interacting Fourier modes increases. Trispectra indicate that the cubic interactions gradually evolve from encompassing a few quartets of Fourier components for period-1 motion to encompassing many quartets for chaos. For chaos, all the components within the energetic part of the power spectrum are cubically (but not quadratically) coupled to each other.
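As a starting point for reproducing such an analysis, a Duffing time series can be generated and its power spectrum estimated as below. This is only a sketch: the twin-well parameter values are commonly used illustrations, not necessarily the cases studied in the paper, and the bi-/trispectral estimation itself is not shown.

```python
import numpy as np

def duffing_rk4(delta, alpha, beta, gamma, omega,
                x0=0.1, v0=0.0, dt=0.01, n=2**14):
    """Integrate the forced Duffing oscillator
        x'' + delta x' + alpha x + beta x^3 = gamma cos(omega t)
    with classical fourth-order Runge-Kutta; returns the displacement series."""
    def f(t, y):
        x, v = y
        return np.array([v,
                         gamma * np.cos(omega * t)
                         - delta * v - alpha * x - beta * x**3])
    y = np.array([x0, v0])
    xs = np.empty(n)
    t = 0.0
    for i in range(n):
        xs[i] = y[0]
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return xs

# Twin-well parameters (alpha < 0, beta > 0); the power spectrum is
# computed from the latter half of the record to skip the transient.
x = duffing_rk4(delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2)
power = np.abs(np.fft.rfft(x[x.size // 2:]))**2
```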
Abstract:
The phase of an analytic signal constructed from the autocorrelation function of a signal contains significant information about the shape of the signal. Using Bedrosian's (1963) theorem for the Hilbert transform, it is proved that this phase is robust to multiplicative noise if the signal is baseband and the spectra of the signal and the noise do not overlap. Higher-order spectral features are interpreted in this context and shown to extract nonlinear phase information while retaining robustness. The significance of the result is that prior knowledge of the spectra is not required.
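A minimal numpy sketch of the quantity in question, the phase of an analytic signal built from a signal's autocorrelation, might look as follows. The test signal and its parameters are invented for illustration; the higher-order spectral features discussed in the paper are not computed.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal z = x + i*H{x}, built in the frequency domain by
    zeroing negative frequencies (the standard discrete construction)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def autocorr_phase(x):
    """Phase of the analytic signal of the (normalised) autocorrelation."""
    x = x - x.mean()
    r = np.correlate(x, x, mode='full')[len(x) - 1:]  # lags 0 .. n-1
    r = r / r[0]
    return np.angle(analytic_signal(r))

t = np.linspace(0.0, 1.0, 512, endpoint=False)
sig = np.cos(2 * np.pi * 40 * t + 0.7)
phase = autocorr_phase(sig)
```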
Abstract:
Adaptive wing/aerofoil designs are considered promising techniques in aeronautics and aerospace, since they can reduce aircraft emissions and improve the aerodynamic performance of manned or unmanned aircraft. This paper investigates robust design and optimisation for one such adaptive technique: an Active Flow Control (AFC) bump at transonic flow conditions on a Natural Laminar Flow (NLF) aerofoil designed to increase aerodynamic efficiency (especially a high lift-to-drag ratio). The concept of the Shock Control Bump (SCB) is to control the supersonic flow on the suction/pressure side of the NLF aerofoil RAE 5243, delaying the occurrence of the shock or weakening its strength. Such an AFC technique reduces total drag at transonic speeds through a reduction of wave drag. The location of Boundary Layer Transition (BLT) can influence the position of the supersonic shock. The BLT position is an uncertainty in aerodynamic design due to many factors, such as surface contamination or erosion. The paper studies SCB shape design optimisation using robust Evolutionary Algorithms (EAs) with uncertainty in the BLT position. The optimisation method is based on a canonical evolution strategy and incorporates the concepts of hierarchical topology, parallel computing and asynchronous evaluation. Two test cases are conducted: the first assumes the BLT is at 45% of chord from the leading edge, and the second considers robust design optimisation for the SCB under variability in BLT position and lift coefficient. Numerical results show that the optimisation method, coupled with uncertainty design techniques, produces Pareto-optimal SCB shapes that have low sensitivity and high aerodynamic performance while achieving significant total drag reduction.
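The canonical evolution strategy underlying such optimisation methods can be illustrated in its simplest (1+1) form with Rechenberg's 1/5-success rule. The sphere function below is a stand-in objective (the paper evaluates aerodynamic performance with CFD), and none of the hierarchical, parallel or asynchronous machinery described above is reproduced.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=0.3, n_iter=2000, seed=0):
    """(1+1)-evolution strategy with Rechenberg's 1/5-success rule:
    enlarge the step size on success, shrink it by the fourth root of the
    same factor on failure, which balances at a ~1/5 success rate."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(n_iter):
        y = x + sigma * rng.standard_normal(x.size)  # Gaussian mutation
        fy = f(y)
        if fy <= fx:                 # offspring replaces parent on success
            x, fx = y, fy
            sigma *= 1.5
        else:
            sigma *= 1.5 ** -0.25
    return x, fx

# Sphere function as a stand-in for an expensive drag objective
x_best, f_best = one_plus_one_es(lambda v: float(np.sum(v**2)),
                                 [2.0, -1.5, 0.5])
```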
Abstract:
The World Health Organization recommends that data on mortality in its member countries be collected using the Medical Certificate of Cause of Death published in the instruction volume of the ICD-10. However, investment in the health information processes necessary to promote the use of this certificate and improve mortality information is lacking in many countries. An appeal for support to make improvements has been launched through the Health Metrics Network's MOVE-IT strategy (Monitoring of Vital Events – Information Technology) [World Health Organization, 2011]. Despite this international spotlight on the need to capture mortality data and to use the ICD-10 to code the data reported on such certificates, there is little cohesion in the way that certifiers of deaths receive instruction in how to complete the death certificate, which is the main source document for mortality statistics. Complete and accurate documentation of the immediate, underlying and contributory causes of death on the death certificate is a requirement for producing standardised statistical information and for the ability to produce cause-specific mortality statistics that can be compared between populations and across time. This paper reports on a research project conducted to determine the efficacy and accessibility of the certification module of the WHO's newly developed web-based training tool for coders and certifiers of deaths. Involving a population of medical students from the Fiji School of Medicine and a pre- and post-test research design, the study entailed completion of death certificates based on vignettes before and after access to the training tool. The ability of the participants to complete the death certificates, together with analysis of the completeness and specificity of the ICD-10 coding of the reported causes of death, was used to measure the effect of the students' learning from the training tool. The quality of death certificate completion was assessed using a Quality Index before and after the participants accessed the training tool. In addition, the views of the participants about the accessibility and use of the training tool were elicited using a supplementary questionnaire. The results of the study demonstrated improvement in the ability of the participants to complete death certificates completely and accurately according to best practice. The training tool was viewed very positively, and its implementation in the curriculum for medical students was encouraged. Participants also suggested that interactive discussions examining the certification exercises would be an advantage.
Abstract:
In this paper, a practical method based on graph theory and an improved genetic algorithm is employed to solve the optimal sectionalizer switch placement problem. The proposed method determines the best locations of sectionalizer switching devices in distribution networks, considering the effects of the presence of distributed generation (DG) in the fitness functions and other optimization constraints, so that the maximum number of customers can be supplied by distributed generation sources in islanded distribution systems after possible faults. The proposed method is simulated and tested on several distribution test systems, both with and without DG. The results of the simulations validate the proposed method for switch placement in distribution networks in the presence of distributed generation.
Abstract:
Recently, because of new developments in sustainable engineering and renewable energy, which are often governed by series of fractional partial differential equations (FPDEs), numerical modelling and simulation in fractional calculus are attracting more and more attention from researchers. The currently dominant numerical method for modelling FPDEs is the Finite Difference Method (FDM), which is based on a pre-defined grid, leading to inherent shortcomings, including difficulty in simulating problems with complex domains and in using irregularly distributed nodes. Because of its distinct advantages, the meshless method has good potential for the simulation of FPDEs. This paper aims to develop an implicit meshless collocation technique for FPDEs. The discrete system of FPDEs is obtained by using meshless shape functions and the meshless collocation formulation. The stability and convergence of this meshless approach are investigated theoretically and numerically. Numerical examples with regular and irregular nodal distributions are used to validate and investigate the accuracy and efficiency of the newly developed meshless formulation. It is concluded that the present meshless formulation is very effective for the modelling and simulation of fractional partial differential equations.
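For contrast with the meshless approach, the grid-based discretisation that FDM-type schemes rest on can be sketched with the Grünwald-Letnikov formula. This is a textbook construction verified against a known closed form, not the paper's meshless collocation formulation.

```python
import numpy as np
from math import gamma

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k C(alpha, k), computed by the
    recurrence w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative(f_vals, alpha, h):
    """Left-sided GL fractional derivative of order alpha on a uniform grid:
    D^alpha f(x_i) ~= h**(-alpha) * sum_k w_k * f(x_{i-k})."""
    n = len(f_vals)
    w = gl_weights(alpha, n)
    d = np.empty(n)
    for i in range(n):
        d[i] = np.dot(w[:i + 1], f_vals[i::-1]) / h**alpha
    return d

# Check against the exact Riemann-Liouville result
#   D^alpha x = x^(1 - alpha) / Gamma(2 - alpha)  (first-order accurate in h)
alpha, h = 0.5, 1e-3
x = np.arange(0.0, 1.0 + h / 2, h)
num = gl_derivative(x, alpha, h)
exact = x**(1 - alpha) / gamma(2 - alpha)
max_err = np.max(np.abs(num[1:] - exact[1:]))
```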
Abstract:
This paper formulates a node-based smoothed conforming point interpolation method (NS-CPIM) for solid mechanics. In the proposed NS-CPIM, higher-order conforming PIM shape functions (CPIM) are constructed to produce a continuous and piecewise-quadratic displacement field over the whole problem domain, and the smoothed strain field is obtained through a smoothing operation over each smoothing domain associated with the domain nodes. The smoothed Galerkin weak form is then used to create the discretized system equations. Numerical studies have demonstrated that NS-CPIM: (1) passes both the standard and quadratic patch tests; (2) provides an upper bound on the strain energy; (3) avoids volumetric locking; and (4) provides higher accuracy than the node-based smoothed schemes of the original PIMs.
Abstract:
Thin solid films are extensively used in the making of solar cells, cutting tools, magnetic recording devices, etc. As a result, accurate measurement of the mechanical properties of thin films, such as hardness and elastic modulus, is required. The thickness of thin films normally varies from tens of nanometers to several micrometers, which makes measuring their mechanical properties challenging. In this study, a nanoscratch method is proposed for hardness measurement. A three-dimensional finite element method (3-D FEM) model was developed to validate the nanoscratch method and to understand the substrate effect during nanoscratch testing. Nanoindentation was also used for comparison. The nanoscratch method was demonstrated to be valuable for measuring the hardness of thin solid films.
Abstract:
In this article, an enriched radial point interpolation method (e-RPIM) is developed for computational mechanics. The conventional radial basis function (RBF) interpolation is augmented with suitable basis functions to reflect the natural properties of deformation. The performance of the enriched meshless RBF shape functions is first investigated using surface fitting. The surface-fitting results prove that the enriched RBF interpolation fits a complex surface with much better accuracy than the conventional RBF interpolation. The enriched RBF shape functions not only possess all the advantages of conventional RBF interpolation, but also accurately reflect the deformation properties of the problem. The system of equations for two-dimensional solids is then derived based on the enriched RBF shape functions and both the meshless strong form and weak form. A numerical example of a bar is presented to study the effectiveness and efficiency of e-RPIM. As an important application, the newly developed e-RPIM, augmented with selected trigonometric basis functions, is applied to crack problems. It is demonstrated that the present e-RPIM is very accurate and stable for fracture mechanics problems.
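The conventional RBF interpolation that e-RPIM enriches can be sketched in one dimension as follows, using a multiquadric RBF augmented with the standard linear polynomial basis. The trigonometric enrichment described above is not implemented, and all parameter values (shape constant, node count, test function) are illustrative.

```python
import numpy as np

def rpim_fit(nodes, values, c=0.2):
    """Radial point interpolation: multiquadric RBF augmented with a linear
    polynomial basis via the standard constrained (bordered) system."""
    n = len(nodes)
    r = np.abs(nodes[:, None] - nodes[None, :])
    R = np.sqrt(r**2 + c**2)                         # multiquadric RBF matrix
    P = np.column_stack([np.ones(n), nodes])         # linear polynomial basis
    A = np.block([[R, P], [P.T, np.zeros((2, 2))]])  # bordered system matrix
    rhs = np.concatenate([values, np.zeros(2)])
    coef = np.linalg.solve(A, rhs)
    return coef[:n], coef[n:]                        # RBF / polynomial coeffs

def rpim_eval(x, nodes, a, b, c=0.2):
    """Evaluate the fitted interpolant at points x."""
    R = np.sqrt((x[:, None] - nodes[None, :])**2 + c**2)
    return R @ a + b[0] + b[1] * x

nodes = np.linspace(0.0, 1.0, 15)
vals = np.sin(2 * np.pi * nodes)
a, b = rpim_fit(nodes, vals)
xq = np.linspace(0.0, 1.0, 200)
err = np.max(np.abs(rpim_eval(xq, nodes, a, b) - np.sin(2 * np.pi * xq)))
```

The interpolant reproduces the nodal values exactly (up to conditioning) and remains accurate between nodes; an enriched variant would add problem-specific columns to `P`.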