103 results for Modelling lifetime data
Abstract:
Human occupants within indoor environments are not always stationary, and their movement leads to temporal channel variations that strongly affect the quality of indoor wireless communication systems. This paper describes a statistical channel characterization, based on experimental measurements, of human body effects on line-of-sight indoor narrowband propagation at 5.2 GHz. The analysis shows that, as the number of pedestrians within the measurement location increases, the Ricean K-factor that best fits the empirical data tends to decrease proportionally, ranging from K=7 with 1 pedestrian to K=0 with 4 pedestrians. Level crossing rate results were Rice distributed, while average fade duration results were significantly higher than the theoretically computed Rice and Rayleigh values, owing to the fades caused by pedestrians. A novel CDF that accurately characterizes the 5.2 GHz channel in the considered indoor environment is proposed. For the first time, the received envelope CDF is explicitly described in terms of a quantitative measurement of pedestrian traffic within the indoor environment.
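As a rough illustration (not the authors' estimator), a Ricean K-factor can be fitted to a measured envelope with SciPy; the `envelope` array below is a synthetic stand-in for the 5.2 GHz measurements, and K = b²/2 follows from SciPy's Rice shape parameter b = ν/σ:

```python
# Hedged sketch: fit a Rice distribution to a received-envelope series and
# report the K-factor. `envelope` is synthetic stand-in data, not the
# paper's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
envelope = stats.rice.rvs(b=np.sqrt(2 * 7.0), size=5000, random_state=rng)  # true K = 7

b, loc, scale = stats.rice.fit(envelope, floc=0)  # pin the location at zero
K = b**2 / 2                                      # scipy's b = nu/sigma, so K = b^2 / 2
print(f"estimated Ricean K-factor: {K:.2f}")
```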
Abstract:
The momentum term has long been used in machine learning algorithms, especially back-propagation, to improve their speed of convergence. In this paper, we derive an expression to prove the O(1/k^2) convergence rate of the online gradient method with momentum-type updates, when the individual gradients are constrained by a growth condition. We then apply this type of update to video background modelling by using it in the update equations of the Region-based Mixture of Gaussians algorithm. Extensive evaluations are performed on both simulated data and challenging real-world scenarios with dynamic backgrounds, showing that these regularised updates help the mixtures converge faster than the conventional approach and consequently improve the algorithm's performance.
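A minimal sketch of a momentum-type online gradient update on a toy quadratic objective (names and constants are illustrative, not taken from the paper):

```python
# Hedged sketch: heavy-ball style momentum update v <- beta*v - lr*g,
# theta <- theta + v, applied to a stream of per-step gradients.
import numpy as np

def momentum_step(theta, velocity, grad, lr=0.01, beta=0.9):
    velocity = beta * velocity - lr * grad
    return theta + velocity, velocity

theta, v = np.zeros(3), np.zeros(3)
target = np.array([1.0, -2.0, 0.5])   # minimiser of the toy objective
for k in range(200):
    g = 2 * (theta - target)          # gradient of ||theta - target||^2
    theta, v = momentum_step(theta, v, g)
print(theta)                          # converges towards [1, -2, 0.5]
```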
Abstract:
Multivariate classification techniques have proven to be powerful tools for distinguishing experimental conditions in single sessions of functional magnetic resonance imaging (fMRI) data, but they suffer a considerable penalty in classification accuracy when applied across sessions or participants, calling into question the degree to which fine-grained encodings are shared across subjects. Here, we introduce joint learning techniques, where feature selection is carried out using a held-out subset of a target dataset before training a linear classifier on a source dataset. Single trials of functional MRI data from a covert property generation task are classified with regularized regression techniques to predict the semantic class of stimuli. With our selection techniques (joint ranking feature selection (JRFS) and disjoint feature selection (DJFS)), classification performance during cross-session prediction improved greatly relative to feature selection on the source session data only. Compared with JRFS, DJFS showed significant improvements for cross-participant classification, and when groupwise training was used, DJFS approached the accuracies seen for prediction across different sessions from the same participant. Comparing several feature selection strategies, we found that a simple univariate ANOVA selection technique, or a minimal searchlight (one voxel in size), is sufficient compared with larger searchlights.
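A minimal sketch of the cross-dataset recipe described above: univariate ANOVA feature selection on a held-out slice of the target data, then a regularised linear classifier trained on the source data. All arrays here are random stand-ins with hypothetical shapes, not the authors' pipeline or data:

```python
# Hedged sketch of target-informed feature selection + source training.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_src, y_src = rng.normal(size=(80, 500)), rng.integers(0, 2, 80)             # source session
X_tgt_sel, y_tgt_sel = rng.normal(size=(20, 500)), rng.integers(0, 2, 20)     # held-out target slice
X_tgt_test, y_tgt_test = rng.normal(size=(40, 500)), rng.integers(0, 2, 40)   # target test trials

selector = SelectKBest(f_classif, k=100).fit(X_tgt_sel, y_tgt_sel)  # univariate ANOVA selection
clf = LogisticRegression(penalty="l2", max_iter=1000)               # regularised regression
clf.fit(selector.transform(X_src), y_src)
print("cross-dataset accuracy:", clf.score(selector.transform(X_tgt_test), y_tgt_test))
```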
Abstract:
Quantum yields of the photocatalytic degradation of methyl orange under controlled periodic illumination (CPI) have been modelled using existing models. A modified Langmuir-Hinshelwood (L-H) rate equation was used to predict the degradation reaction rates of methyl orange at various duty cycles and a simple photocatalytic model was applied in modelling quantum yield enhancement of the photocatalytic process due to the CPI effect. A good agreement between the modelled and experimental data was observed for quantum yield modelling. The modified L-H model, however, did not accurately predict the photocatalytic decomposition of the dye under periodic illumination.
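For reference, the plain Langmuir-Hinshelwood rate law and a duty-cycle-scaled variant of the kind alluded to above (notation assumed, not taken from the paper):

$$ r_{\mathrm{LH}} = \frac{k\,K\,C}{1 + K\,C}, \qquad r_{\mathrm{CPI}} \approx \gamma\, r_{\mathrm{LH}}, \qquad \gamma = \frac{t_{\mathrm{on}}}{t_{\mathrm{on}} + t_{\mathrm{off}}}, $$

where k is the rate constant, K the adsorption equilibrium constant, C the dye concentration and γ the illumination duty cycle.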
Abstract:
Aims: In this paper we aim to investigate the evolution of plasma properties and Stokes parameters in photospheric magnetic bright points using 3D magneto-hydrodynamical simulations and radiative diagnostics of solar granulation.
Methods: Simulated time-dependent radiation parameters and plasma properties were investigated throughout the evolution of a bright point. Synthetic Stokes profiles for the Fe I 630.25 nm line were calculated, which also allowed the evolution of the Stokes-I line strength and Stokes-V area and amplitude asymmetries to be investigated.
Results: Our results are consistent with theoretical predictions and published observations describing convective collapse, and confirm this as the bright point formation process. Through degradation of the simulated data to match the spatial resolution of SOT, we show that high spatial resolution is crucial for the detection of changing spectro-polarimetric signatures throughout a magnetic bright point's lifetime. We also show that the signature downflow associated with the convective collapse process tends towards zero as the radiation intensity in the bright point peaks, because of the magnetic forces present restricting the flow of material in the flux tube.
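The Stokes-V asymmetries mentioned above follow the standard convention (notation assumed here, not quoted from the paper):

$$ \delta a = \frac{a_b - a_r}{a_b + a_r}, \qquad \delta A = \frac{A_b - A_r}{A_b + A_r}, $$

where $a_b$, $a_r$ are the blue- and red-lobe amplitudes of the Stokes-V profile and $A_b$, $A_r$ the corresponding unsigned lobe areas.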
Abstract:
We have developed a model to predict the post-collision brightness increase of sub-catastrophic collisions between asteroids and to evaluate the likelihood of a survey detecting these events. It is based on the cratering scaling laws of Holsapple and Housen (2007) and models the ejecta expansion following an impact as occurring in discrete shells, each with its own velocity. We estimate the magnitude change between a series of target/impactor pairs, assuming it is given by the increase in reflecting surface area within a photometric aperture due to the resulting ejecta. As expected, the photometric signal increases with impactor size, but we find also that the photometric signature decreases rapidly as the target asteroid diameter increases, due to gravitational fallback. We have used the model results to make an estimate of the impactor diameter for the (596) Scheila collision of D = 49–65 m depending on the impactor taxonomy, which is broadly consistent with previous estimates. We varied both the strength regime (highly porous and sand/cohesive soil) and the taxonomic type (S-, C- and D-type) to examine the effect on the magnitude change, finding that it is significant at early stages but has only a small effect on the overall lifetime of the photometric signal. Combining the results of this model with the collision frequency estimates of Bottke et al. (2005), we find that low-cadence surveys of ~one visit per lunation will be insensitive to impacts on asteroids with D < 20 km if relying on photometric detections.
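The aperture-brightness argument above amounts to the usual relation between added reflecting area and magnitude change (assuming, as a simplification, equal albedo for ejecta and surface; notation ours, not the paper's):

$$ \Delta m = -2.5\,\log_{10}\!\left(1 + \frac{A_{\mathrm{ej}}}{A_{\mathrm{ast}}}\right), $$

where $A_{\mathrm{ast}}$ is the asteroid's reflecting cross-section within the photometric aperture and $A_{\mathrm{ej}}$ that of the ejecta.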
Abstract:
One of the main purposes of building a battery model is monitoring and control during battery charging/discharging, as well as estimating key factors of batteries such as the state of charge for electric vehicles. However, a model based on the electrochemical reactions within the batteries is highly complex and difficult to compute using conventional approaches. Radial basis function (RBF) neural networks have been widely used to model complex systems for estimation and control purposes, but the optimization of both the linear and non-linear parameters in the RBF model remains a key issue. A recently proposed meta-heuristic algorithm named Teaching-Learning-Based Optimization (TLBO) is free of preset algorithm parameters and performs well in non-linear optimization. In this paper, a novel self-learning TLBO-based RBF model is proposed for modelling electric vehicle batteries. The modelling approach has been applied to two battery testing data sets and compared with other RBF-based battery models; the training and validation results confirm the efficacy of the proposed method.
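A minimal sketch of a Gaussian RBF network of the kind described above (hypothetical input features and sizes; the TLBO loop that would tune the non-linear parameters is omitted):

```python
# Hedged sketch: forward pass of a Gaussian RBF network,
# y = sum_j w_j * exp(-||x - c_j||^2 / (2 * sigma_j^2)).
import numpy as np

def rbf_predict(X, centres, widths, weights):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)  # squared distances
    Phi = np.exp(-d2 / (2 * widths**2))                            # Gaussian basis activations
    return Phi @ weights                                           # linear output layer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))        # e.g. current, temperature, past state of charge
centres = rng.normal(size=(10, 3))   # non-linear parameters (TLBO would tune these)
widths = np.full(10, 1.0)
weights = rng.normal(size=10)        # linear parameters (e.g. fitted by least squares)
print(rbf_predict(X, centres, widths, weights).shape)  # (200,)
```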
Abstract:
The Organic Rankine Cycle (ORC) is the most commonly used method for recovering energy from small heat sources. Investigation of the ORC under supercritical conditions is a new research area, as it has the potential to deliver high power and thermal efficiency in a waste heat recovery system. This paper presents a steady-state supercritical ORC model and its simulations with a real engine's exhaust data. The key component of the ORC, the evaporator, is modelled using the finite volume method; the modelling of all other components of the waste heat recovery system, such as the pump, expander and condenser, is also presented. The aim of this paper is to investigate the effects of mass flow rate and evaporator outlet temperature on the efficiency of the waste heat recovery process. Additionally, the necessity of maintaining an optimum evaporator outlet temperature is investigated. Simulation results show that modifying the mass flow rate is the key to changing the operating temperature at the evaporator outlet.
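A minimal sketch of a 1-D finite-volume marching energy balance of the sort used for the evaporator. Single-phase flow, constant properties and a fixed exhaust-side temperature are simplifying assumptions here; a real model would use fluid property tables:

```python
# Hedged sketch: march cell by cell along the evaporator, adding the heat
# each cell picks up from the exhaust side to the working fluid.
import numpy as np

n_cells = 50
m_dot, cp_wf = 0.05, 2500.0   # working-fluid flow rate [kg/s], mean cp [J/(kg K)]
UA_cell = 5.0                 # heat-transfer conductance per cell [W/K]
T_exh = 450.0                 # exhaust-side temperature, held constant here [K]

T = np.empty(n_cells + 1)
T[0] = 320.0                  # working-fluid inlet temperature [K]
for i in range(n_cells):
    Q = UA_cell * (T_exh - T[i])        # heat picked up in this cell [W]
    T[i + 1] = T[i] + Q / (m_dot * cp_wf)
print(f"evaporator outlet temperature: {T[-1]:.1f} K")
```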
Abstract:
This paper details the theory and implementation of a composite damage model, addressing damage within a ply (intralaminar) and delamination (interlaminar), for the simulation of crushing of laminated composite structures. It includes a more accurate determination of the characteristic length to achieve mesh objectivity in capturing intralaminar damage consisting of matrix cracking and fibre failure, a load-history dependent material response, an isotropic hardening nonlinear matrix response, as well as a more physically-based interactive matrix-dominated damage mechanism. The developed damage model requires a set of material parameters obtained from a combination of standard and non-standard material characterisation tests. The fidelity of the model mitigates the need to manipulate, or "calibrate", the input data to achieve good agreement with experimental results. The intralaminar damage model was implemented as a VUMAT subroutine, and used in conjunction with an existing interlaminar damage model, in Abaqus/Explicit. This approach was validated through the simulation of the crushing of a cross-ply composite tube with a tulip-shaped trigger, loaded in uniaxial compression. Despite the complexity of the chosen geometry, excellent correlation was achieved with experimental results.
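The characteristic-length treatment mentioned above rests on the standard crack-band idea (notation ours, not the paper's): the softening law is scaled by the element characteristic length $l^{*}$ so that the energy dissipated per unit volume, $g_f$, reproduces the material fracture toughness $G_c$ per unit crack area,

$$ g_f = \frac{G_c}{l^{*}}, $$

which makes the total dissipated energy independent of mesh size.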
Predicting the crushing behaviour of composite material using high-fidelity finite element modelling
Abstract:
The capability to numerically model the crushing behaviour of composite structures will enable the efficient design of structures with high specific energy absorption capacity. This is particularly relevant to the aerospace and automotive industries, where cabin structures need to be shown to be crashworthy. In this paper, a three-dimensional damage model is presented which accurately represents the behaviour of composite laminates under crush loading. Both intralaminar and interlaminar failure mechanisms are taken into account. The crush damage model was implemented in ABAQUS/Explicit as a VUMAT subroutine. Numerical predictions are shown to agree well with experimental results, accurately capturing the intralaminar and interlaminar damage for a range of stacking sequences, triggers and composite materials. The use of measured material parameters required by the numerical models, without the need to 'calibrate' this input data, demonstrates this computational tool's predictive capabilities.
Abstract:
Diagnostic test sensitivity and specificity are probabilistic estimates with far-reaching implications for disease control, management and genetic studies. In the absence of 'gold standard' tests, traditional Bayesian latent class models may be used to assess diagnostic test accuracies through the comparison of two or more tests performed on the same groups of individuals. The aim of this study was to extend such models to estimate diagnostic test parameters and true cohort-specific prevalence, using disease surveillance data. The traditional Hui-Walter latent class methodology was extended to allow for features seen in such data, including (i) unrecorded data (i.e. data for a second test available only on a subset of the sampled population) and (ii) cohort-specific sensitivities and specificities. The model was applied with and without the modelling of conditional dependence between tests. The utility of the extended model was demonstrated through application to bovine tuberculosis surveillance data from Northern Ireland and the Republic of Ireland. Simulation coupled with re-sampling techniques demonstrated that the extended model has good predictive power to estimate the diagnostic parameters and true herd-level prevalence from surveillance data. Our methodology can aid in the interpretation of disease surveillance data, and the results can potentially refine disease control strategies.
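For context, the core of the two-test, conditionally independent Hui-Walter model on which the extension builds: with prevalence $\pi$ and test sensitivities/specificities $Se_i$, $Sp_i$, the cell probabilities of the cross-classified results are

$$
\begin{aligned}
P(+,+) &= \pi\,Se_1 Se_2 + (1-\pi)(1-Sp_1)(1-Sp_2),\\
P(+,-) &= \pi\,Se_1 (1-Se_2) + (1-\pi)(1-Sp_1)\,Sp_2,\\
P(-,+) &= \pi\,(1-Se_1)\,Se_2 + (1-\pi)\,Sp_1 (1-Sp_2),\\
P(-,-) &= \pi\,(1-Se_1)(1-Se_2) + (1-\pi)\,Sp_1 Sp_2.
\end{aligned}
$$

The extension described above adds cohort-specific sensitivities/specificities, partially observed second-test results and, optionally, conditional dependence terms.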
Abstract:
This paper presents a novel method of audio-visual fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new representation and a modified cosine similarity are introduced for combining and comparing bimodal features with limited training data as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal data set created from the SPIDRE and AR databases with variable noise corruption of speech and occlusion in the face images. The new method has demonstrated improved recognition accuracy.
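As a generic illustration only: score fusion via cosine similarity on per-modality normalised features. The paper's *modified* similarity is not reproduced here; this sketch just shows the plain baseline it builds on, with invented feature sizes:

```python
# Hedged sketch: normalise each modality separately so that vastly
# differing feature sizes/scales do not let one modality dominate,
# then compare concatenated bimodal features by cosine similarity.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def fuse(speech_feat, face_feat):
    s = speech_feat / np.linalg.norm(speech_feat)
    f = face_feat / np.linalg.norm(face_feat)
    return np.concatenate([s, f])

rng = np.random.default_rng(0)
probe = fuse(rng.random(64), rng.random(128))     # hypothetical feature sizes
gallery = fuse(rng.random(64), rng.random(128))
print(cosine(probe, gallery))
```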
Abstract:
Traditional experimental economics methods often consume enormous resources in the form of qualified human participants, and the inconsistency of a participant's decisions among repeated trials precludes sensitivity analyses. The problem can be solved if computer agents are capable of generating behaviours similar to those of the given participants in experiments. An experimental-economics-based analysis method is presented to extract deep information from questionnaire data and emulate any number of participants. Taking customers' willingness to purchase electric vehicles (EVs) as an example, multi-layer correlation information is extracted from a limited number of questionnaires. Multi-agents mimicking the inquired potential customers are modelled by matching the probabilistic distributions of their willingness embedded in the questionnaires. The authenticity of both the model and the algorithm is validated by comparing the agent-based Monte Carlo simulation results with the questionnaire-based deduction results. With the aid of agent models, the effects of minority agents with specific preferences on the results are also discussed.
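A minimal sketch of the agent-emulation idea: draw any number of synthetic respondents from the empirical distribution of questionnaire answers, then run a Monte Carlo experiment over them. The levels and frequencies below are invented, not the paper's data:

```python
# Hedged sketch: sample virtual participants whose willingness-to-purchase
# matches an empirical questionnaire distribution, then simulate outcomes.
import numpy as np

rng = np.random.default_rng(0)
levels = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # willingness-to-purchase levels
freqs = np.array([0.30, 0.25, 0.20, 0.15, 0.10])  # questionnaire frequencies (sum to 1)

def sample_agents(n):
    return rng.choice(levels, size=n, p=freqs)

agents = sample_agents(100_000)  # any number of virtual participants
# each agent purchases with probability equal to its willingness level
purchases = agents > rng.random(agents.size)
print("simulated EV purchase rate:", purchases.mean())
```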
Abstract:
Extrusion is one of the major methods for processing polymeric materials, and the thermal homogeneity of the process output is a major concern for the manufacture of high-quality extruded products. Therefore, accurate process thermal monitoring and control are important for product quality control. However, most industrial extruders use single-point thermocouples for temperature monitoring/control, although their measurements are highly affected by the barrel metal wall temperature. Currently, no industrially established thermal profile measurement technique is available. Furthermore, it has been shown that the melt temperature changes considerably with the die radial position, and hence point/bulk measurements are not sufficient for monitoring and control of the temperature across the melt flow. The majority of process thermal control methods are based on linear models, which are not capable of dealing with process nonlinearities. In this work, the die melt temperature profile of a single screw extruder was monitored by a thermocouple mesh technique. The data obtained were used to develop a novel approach to modelling the extruder die melt temperature profile under dynamic conditions (i.e. for predicting the die melt temperature profile in real time). These newly proposed models were in good agreement with the measured unseen data. They were then used to explore the effects of process settings, material and screw geometry on the die melt temperature profile. The results showed that the process thermal homogeneity was affected in a complex manner by changing the process settings, screw geometry and material.
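As a generic illustration of dynamic (real-time) temperature modelling, not the paper's model structure, a least-squares ARX fit of the kind often used for one radial position of a melt temperature profile:

```python
# Hedged sketch: fit y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j] by least
# squares, where u could be a process setting and y a melt temperature.
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    k0 = max(na, nb)
    rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(k0, len(y))]
    theta, *_ = np.linalg.lstsq(np.asarray(rows), y[k0:], rcond=None)
    return theta  # [a_1..a_na, b_1..b_nb]

rng = np.random.default_rng(0)
u = rng.normal(size=500)                 # e.g. screw-speed deviations
y = np.zeros(500)
for k in range(2, 500):                  # toy 'true' process for demonstration
    y[k] = 0.7 * y[k-1] + 0.2 * y[k-2] + 0.5 * u[k-1]
print(fit_arx(y, u))                     # recovers ~[0.7, 0.2, 0.5, 0.0]
```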
Abstract:
An evaluation of existing 1-D vaneless diffuser design tools, in the context of improving the off-design performance prediction of automotive turbocharger centrifugal compressors, is described. A combination of extensive gas stand test data and single-passage CFD simulations has been employed in order to permit evaluation of the different methods, allowing conclusions about the relative benefits and deficiencies of each approach to be drawn. The vaneless diffuser itself has been isolated from the limitations inherent in the accuracy of 1-D impeller modelling tools through the development of a method to fully specify impeller exit conditions (in terms of mean quantities) using only standard test stand data with additional interstage static pressure measurements at the entrance to the diffuser. This method allowed a direct comparison between the test data and 1-D methods through sharing common inputs, thus achieving the aim of diffuser isolation.
Crucial to the accuracy of determining the performance of each of the vaneless diffuser configurations was the ability to quantify the presence and extent of the spanwise aerodynamic blockage present at the diffuser inlet section. A method to evaluate this critical parameter using CFD data is described herein, along with a correlation for blockage in terms of a new diffuser inlet flow parameter, equal to the quotient of the local flow coefficient and the impeller tip speed Mach number. The resulting correlation permitted the variation of blockage with operating condition to be captured.
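Written out with an assumed symbol (the original glyph is not recoverable from the source), the stated definition of the inlet flow parameter is

$$ \chi = \frac{\phi}{M_{U2}}, $$

where $\phi$ is the local flow coefficient and $M_{U2}$ the impeller tip-speed Mach number; the symbol $\chi$ is ours, not the paper's.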