1000 results for HFR method
Abstract:
Background: Predicting protein subnuclear localization is a challenging problem. Previous approaches based on non-sequence information, including Gene Ontology annotations and kernel fusion, each have limitations. The aim of this work is twofold: first, to propose a novel individual feature extraction method; second, to develop an ensemble method that improves prediction performance using comprehensive information represented as a high-dimensional feature vector obtained from 11 feature extraction methods. Methodology/Principal Findings: A novel two-stage multiclass support vector machine is proposed to predict protein subnuclear localizations. It considers only those feature extraction methods based on amino acid classifications and physicochemical properties. To speed up the system, an automatic search method for the kernel parameter is used. The prediction performance of the method is evaluated on four datasets: the Lei dataset, a multi-localization dataset, the SNL9 dataset and a new independent dataset. In leave-one-out cross-validation, the overall prediction accuracy is 75.2% for the 6 localizations of the Lei dataset and 72.1% for the 9 localizations of the SNL9 dataset; it is 71.7% for the multi-localization dataset and 69.8% for the new independent dataset. Comparisons with existing methods show that our method performs better for both single-localization and multi-localization proteins and achieves more balanced sensitivities and specificities on large-size and small-size subcellular localizations. The overall accuracy improvements are 4.0% and 4.7% for single-localization proteins and 6.5% for multi-localization proteins. The reliability and stability of the classification model are further confirmed by permutation analysis. Conclusions: It can be concluded that our method is effective and valuable for predicting protein subnuclear localizations. A web server implementing the proposed method is freely available at http://bioinformatics.awowshop.com/snlpred_page.php.
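The abstract gives no implementation details; purely as a rough sketch of leave-one-out evaluation of a multiclass RBF SVM with an automated search over the kernel parameter (the parameter grid, the C value and the scikit-learn workflow below are illustrative assumptions, not the authors' configuration):

import numpy as np
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.svm import SVC

def loo_accuracy(X, y):
    # X: concatenated feature vectors produced by the individual extraction methods
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        # tune the RBF kernel width on the training fold only
        search = GridSearchCV(SVC(kernel="rbf", C=10.0),
                              {"gamma": np.logspace(-4, 1, 6)}, cv=3)
        search.fit(X[train_idx], y[train_idx])
        correct += int(search.predict(X[test_idx])[0] == y[test_idx][0])
    return correct / len(y)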
Abstract:
This paper presents a method for investigating ship emissions, the plume capture and analysis system (PCAS), and its application in measuring airborne pollutant emission factors (EFs) and particle size distributions. The current investigation was conducted in situ, aboard two dredgers (Amity: a cutter suction dredger, and Brisbane: a hopper suction dredger), but the PCAS is also capable of performing such measurements remotely at a distant point within the plume. EFs were measured relative to the fuel consumption using the fuel-combustion-derived plume CO2. All plume measurements were corrected by subtracting background concentrations sampled regularly from upwind of the stacks. Each measurement typically took 6 minutes to complete, and 40 to 50 measurements were possible during one day. The relationship between the EFs and plume sample dilution was examined to determine the plume dilution range over which the technique could deliver consistent results when measuring EFs for particle number (PN), NOx, SO2 and PM2.5, within a targeted dilution factor range of 50–1000 suitable for remote sampling. The EFs for NOx, SO2 and PM2.5 were found to be independent of dilution for dilution factors within that range. The EF measurement for PN was corrected for coagulation losses by applying a time-dependent particle loss correction to the particle number concentration data. For the Amity, the EF ranges were PN: 2.2–9.6 × 10^15 (kg-fuel)^-1; NOx: 35–72 g(NO2)·(kg-fuel)^-1; SO2: 0.6–1.1 g(SO2)·(kg-fuel)^-1; and PM2.5: 0.7–6.1 g(PM2.5)·(kg-fuel)^-1. For the Brisbane they were PN: 1.0–1.5 × 10^16 (kg-fuel)^-1; NOx: 3.4–8.0 g(NO2)·(kg-fuel)^-1; SO2: 1.3–1.7 g(SO2)·(kg-fuel)^-1; and PM2.5: 1.2–5.6 g(PM2.5)·(kg-fuel)^-1. The results are discussed in terms of the operating conditions of the vessels' engines. Particle number emission factors as a function of size, as well as the count median diameter (CMD) and geometric standard deviation of the size distributions, are provided. The size distributions were found to be consistently uni-modal in the range below 500 nm, and this mode was within the accumulation mode range for both vessels. The representative CMDs for the various activities performed by the dredgers ranged from 94–131 nm in the case of the Amity, and 58–80 nm for the Brisbane. A strong inverse relationship between CMD and EF(PN) was observed.
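A minimal sketch of the carbon-balance style of EF calculation described above (background subtraction, then scaling of the pollutant excess against the CO2 excess), assuming a nominal fuel carbon mass fraction and that essentially all fuel carbon is emitted as CO2; both assumptions and all names are illustrative, not values from the paper:

def emission_factor(plume_x, bg_x, plume_co2, bg_co2, fuel_carbon_fraction=0.87):
    # Grams of CO2 emitted per kg of fuel, assuming all fuel carbon leaves as CO2.
    co2_per_kg_fuel_g = fuel_carbon_fraction * (44.0 / 12.0) * 1000.0
    delta_x = plume_x - bg_x          # background-corrected pollutant concentration
    delta_co2 = plume_co2 - bg_co2    # background-corrected CO2 concentration
    # Both concentrations must share the same units (e.g. g m^-3); the result is
    # then grams of pollutant per kg of fuel burned.
    return (delta_x / delta_co2) * co2_per_kg_fuel_g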
Abstract:
An analytical method for the detection of carbonaceous gases by a non-dispersive infrared (NDIR) sensor has been developed. Calibration plots for six carbonaceous gases, including CO2, CH4, CO, C2H2, C2H4 and C2H6, were obtained and their reproducibility was determined to verify the feasibility of this gas monitoring method. The results show that the squared correlation coefficients for the six gas measurements are greater than 0.999. The reproducibility is excellent, indicating that this analytical method is useful for determining the concentrations of carbonaceous gases.
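A minimal sketch of the underlying calibration check, fitting a straight line of sensor response against concentration and reporting the squared correlation coefficient (variable names are placeholders):

import numpy as np

def calibration_r2(concentration, response):
    concentration = np.asarray(concentration, dtype=float)
    response = np.asarray(response, dtype=float)
    slope, intercept = np.polyfit(concentration, response, 1)   # linear calibration
    predicted = slope * concentration + intercept
    ss_res = np.sum((response - predicted) ** 2)
    ss_tot = np.sum((response - response.mean()) ** 2)
    return 1.0 - ss_res / ss_tot                                 # squared correlation coefficient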
Abstract:
Restoring a large-scale power system has always been a complicated and important issue, and a great deal of research has addressed different aspects of the overall power system restoration procedure. However, more time is required to complete the restoration process in an actual situation if accurate, real-time system data cannot be obtained. With the development of the wide area monitoring system (WAMS), power system operators can access more accurate data in the restoration stage after a major outage. The ultimate goal of system restoration is to restore as much load as possible in the shortest possible time after a blackout, and the restorable load can be estimated by employing WAMS. Moreover, discrete restorable loads are considered, reflecting the limited number of circuit-breaker operations and the practical topology of distribution systems. In this work, a restorable load estimation method employing WAMS data is proposed for use after the network frame has been re-energized, and WAMS is also employed to monitor the system parameters in case the newly recovered system becomes unstable again. The proposed method has been validated with the New England 39-Bus system and an actual power system in Guangzhou, China.
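The abstract does not specify the estimation procedure itself; purely as an illustrative sketch of picking discrete load blocks against an estimated restorable-load cap with a limited number of circuit-breaker operations (the greedy rule, names and units are assumptions, not the proposed method):

def select_load_blocks(block_sizes_mw, restorable_mw, max_breaker_ops):
    picked, total = [], 0.0
    # Favour larger blocks first so fewer breaker operations are consumed.
    for idx, size in sorted(enumerate(block_sizes_mw), key=lambda p: -p[1]):
        if len(picked) >= max_breaker_ops:
            break
        if total + size <= restorable_mw:
            picked.append(idx)
            total += size
    return picked, total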
Abstract:
This paper presents direct strength method (DSM) equations for cold-formed steel beams subject to shear. Light gauge cold-formed steel sections have been developed as more economical building solutions than the heavier hot-rolled alternatives in the commercial and residential markets. Cold-formed lipped channel beams (LCB), LiteSteel beams (LSB) and hollow flange beams (HFB) are commonly used as flexural members such as floor joists and bearers. However, their shear capacities are determined based on conservative design rules. For the shear design of cold-formed web panels, their elastic shear buckling strength must be determined accurately, including the potential post-buckling strength. Currently the elastic shear buckling coefficients of web panels are determined by conservatively assuming that the web panels are simply supported at the junction between the flange and web elements, and the post-buckling strength is ignored. Hence experimental and numerical studies were conducted to investigate the shear behaviour and strength of LSBs, LCBs and HFBs. New DSM-based design equations were proposed to determine the ultimate shear capacities of cold-formed steel beams. An improved equation for the higher elastic shear buckling coefficient of cold-formed steel beams was proposed based on finite element analysis results and included in the DSM design equations. A new post-buckling coefficient was also introduced in the DSM equation to account for the available post-buckling strength of cold-formed steel beams.
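The proposed equations themselves are not reproduced in the abstract. For orientation only, the generic DSM shear format with post-buckling strength can be sketched as below; the coefficients are the commonly published ones and are not the modified equations of this work:

import math

def dsm_shear_capacity(v_y, v_cr):
    """v_y: shear yield capacity of the web; v_cr: elastic shear buckling capacity."""
    slenderness = math.sqrt(v_y / v_cr)
    if slenderness <= 0.776:
        return v_y                              # yielding governs for stocky webs
    ratio = (v_cr / v_y) ** 0.4
    return (1.0 - 0.15 * ratio) * ratio * v_y   # inelastic/post-buckling range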
Abstract:
An online secondary path modelling method using white noise as a training signal is required in many applications of active noise control (ANC) to ensure convergence of the system. Avoiding the continual injection of white noise during system operation makes the system more desirable. The purpose of the proposed method is twofold: to control the white noise by preventing its continual injection, and to benefit from white noise with a larger variance. The modelling accuracy and the convergence rate increase when white noise with a larger variance is used; however, a larger variance also increases the residual noise, which degrades the performance of the system. This paper proposes a new approach for online secondary path modelling in feedforward ANC systems. The proposed algorithm exploits the advantages of white noise with a larger variance to model the secondary path, but the injection is stopped at the optimum point to increase the performance of the system. Comparative simulation results presented in this paper indicate the effectiveness of the proposed approach in controlling active noise.
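A minimal sketch of the idea, assuming a plain LMS identification of the secondary path driven by injected white noise and a simple smoothed-error rule for stopping the injection (the filter length, variance, step size and stopping threshold are illustrative choices, not the paper's design):

import numpy as np

def model_secondary_path(true_path_response, taps=64, variance=1.0, mu=1e-3,
                         n_iter=20000, stop_threshold=1e-4):
    s_hat = np.zeros(taps)       # estimate of the secondary path
    buf = np.zeros(taps)         # buffer of recent white-noise samples
    err_power = 1.0              # smoothed modelling-error power
    for _ in range(n_iter):
        v = np.sqrt(variance) * np.random.randn()   # larger variance -> faster modelling
        buf = np.roll(buf, 1)
        buf[0] = v
        d = true_path_response(buf)                 # measured response of the real path
        e = d - s_hat @ buf                         # modelling error
        s_hat += mu * e * buf                       # LMS update
        err_power = 0.99 * err_power + 0.01 * e * e
        if err_power < stop_threshold:              # stop injecting at the optimum point
            break
    return s_hat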
Abstract:
An investigation of the hydrogen and methane sensing performance of hydrothermally formed niobium tungsten oxide nanorods employed in a Schottky diode structure is presented herein. By incorporating tungsten into the surface of the niobium lattice, we create Nb5+ and W5+ oxide states and an abundance of surface traps, which can collect and hold the adsorbate charge to reinforce a greater bending of the energy bands at the metal/oxide interface. We show experimentally that extremely large voltage shifts can be achieved by these nanorods under exposure to gas at both room and high temperatures, and attribute this to the strong accumulation of dipolar charges at the interface via the surface traps. Thus, our results demonstrate that niobium tungsten oxide nanorods can be implemented for gas sensing applications, showing ultra-high sensitivities.
Abstract:
Damage assessment (damage detection, localization and quantification) in structures and appropriate retrofitting will enable the safe and efficient function of structures. In this context, many Vibration Based Damage Identification Techniques (VBDIT) have emerged with potential for accurate damage assessment. VBDITs have attracted significant research interest in recent years, mainly due to their non-destructive nature and ability to assess inaccessible and invisible damage locations. Damage Index (DI) methods are also vibration based, but they do not rely on a structural model. DI methods are fast and inexpensive compared to model-based methods and have the ability to automate the damage detection process. A DI method analyses the change in the vibration response of the structure between two states so that damage can be identified. Extensive research has been carried out to apply DI methods to assess damage in steel structures. Comparatively, there has been very little research interest in the use of DI methods to assess damage in Reinforced Concrete (RC) structures, due to the complexity of simulating the predominant damage type, the flexural crack. Flexural cracks in RC beams are distributed non-linearly and propagate in all directions. Secondary cracks extend more rapidly along the longitudinal and transverse directions of an RC structure than existing cracks propagate in the depth direction, due to the stress distribution caused by the tensile reinforcement. Simplified damage simulation techniques (such as reductions in the modulus or section depth, or the use of rotational spring elements) that have been used extensively in research on steel structures cannot be applied to simulate flexural cracks in RC elements. This highlights a significant gap in knowledge, and as a consequence VBDITs have not been successfully applied to damage assessment in RC structures. This research addresses the above gap in knowledge by developing and applying a modal strain energy based DI method to assess damage in RC flexural members. Firstly, this research evaluated different damage simulation techniques and recommended an appropriate technique to simulate the post-cracking behaviour of RC structures. The ABAQUS finite element package was used throughout the study with properly validated material models. The damaged plasticity model was recommended as the method that can correctly simulate the post-cracking behaviour of RC structures and was used in the rest of this study. Four different forms of Modal Strain Energy based Damage Indices (MSEDIs) were proposed to improve the damage assessment capability by minimising the number and intensity of false alarms. The developed MSEDIs were then used to automate the damage detection process by incorporating programmable algorithms. The developed algorithms have the ability to identify common issues associated with the vibration properties, such as mode shifting and phase change. To minimise the effect of noise on the DI calculation process, this research proposed a sequential-order curve fitting technique. Finally, a statistics-based damage assessment scheme was proposed to enhance the reliability of the damage assessment results. The proposed techniques were applied to locate damage in RC beams and in a slab-on-girder bridge model to demonstrate their accuracy and efficiency. The outcomes of this research make a significant contribution to the technical knowledge of VBDIT and enhance the accuracy of damage assessment in RC structures. The application of the research findings to RC flexural members will enable their safe and efficient performance.
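The specific MSEDI forms developed in this work are not given in the abstract; as a hedged sketch, a classical modal strain energy damage index (a Stubbs-type form computed from mode-shape curvatures of the healthy and damaged states) can be written as:

import numpy as np

def strain_energy_damage_index(phi_healthy, phi_damaged, dx=1.0):
    """phi_*: a mode shape sampled at equally spaced points along the member."""
    c_h = np.gradient(np.gradient(phi_healthy, dx), dx) ** 2   # curvature^2, healthy state
    c_d = np.gradient(np.gradient(phi_damaged, dx), dx) ** 2   # curvature^2, damaged state
    tot_h, tot_d = c_h.sum(), c_d.sum()
    beta = ((c_d + tot_d) * tot_h) / ((c_h + tot_h) * tot_d)   # element-level index
    return (beta - beta.mean()) / beta.std()                   # normalised damage index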
Abstract:
This paper proposes a new method for online secondary path modeling in feedback active noise control (ANC) systems. In practical cases, the secondary path is usually time varying, and online modeling of the secondary path is then required to ensure convergence of the system. In the literature, secondary path estimation is usually performed offline, prior to online modeling, whereas the proposed system has no need for an offline estimation stage. The proposed method consists of two parts: a noise controller based on the FxLMS algorithm, and a variable step size (VSS) LMS algorithm used to adapt the modeling filter to the secondary path. To obtain faster convergence and more accurate performance, the VSS-LMS algorithm is stopped at the optimum point. The computer simulation results presented in this paper indicate the effectiveness of the proposed method.
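As a hedged sketch of the modeling part only: a variable step size LMS update whose step shrinks with the smoothed modelling-error power and which is frozen once a floor is reached (the step-size schedule and threshold are illustrative assumptions, not necessarily the rule used in the paper):

import numpy as np

def vss_lms_update(s_hat, noise_buf, measured, err_power,
                   mu_min=1e-5, mu_max=1e-2, alpha=0.99):
    e = measured - s_hat @ noise_buf                    # secondary-path modelling error
    err_power = alpha * err_power + (1 - alpha) * e * e
    mu = float(np.clip(err_power, mu_min, mu_max))      # larger error -> larger step
    if mu > mu_min:                                     # freeze adaptation at the optimum point
        s_hat = s_hat + mu * e * noise_buf
    return s_hat, err_power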
Abstract:
We directly constructed reduced graphene oxide–titanium oxide nanotube (RGO–TNT) film using a single-step, combined electrophoretic deposition–anodization (CEPDA) method. This method, based on the simultaneous anodic growth of tubular TiO2 and the electrophoretic-driven motion of RGO, allowed the formation of an effective interface between the two components, thus improving the electron transfer kinetics. Composites of these graphitic carbons with different levels of oxygen-containing groups, electron conductivity and interface reaction time were investigated; a fine balance of these parameters was achieved.
Abstract:
In order to develop more inclusive products and services, designers need a means of assessing the inclusivity of existing products and new concepts. Following previous research on the development of scales for inclusive design at the University of Cambridge Engineering Design Centre (EDC) [1], this paper presents the latest version of the exclusion audit method. For a specific product interaction, this estimates the proportion of the Great British population who would be excluded from using a product or service due to the demands the product places on key user capabilities. A critical part of the method involves rating the level of demand placed by a task on a range of key user capabilities, so the procedure for performing this assessment was operationalised and its reliability was then tested with 31 participants. There was no evidence that participants rated the same demands consistently. The qualitative results from the experiment suggest that the consistency of participants' demand level ratings could be significantly improved if the audit materials and their instructions better guided the participant through the judgement process.
Abstract:
Recent empirical studies of gender discrimination point to the importance of accurately controlling for accumulated labour market experience. Unfortunately, in Australia most data sets do not include information on actual experience. The current paper, using data from the National Social Science Survey 1984, examines the efficacy of imputing female labour market experience via the Zabalza and Arrufat (1985) method. The results suggest that the method provides a more accurate measure of experience than the traditional Mincer proxy. However, the imputation method is sensitive to the choice of identification restrictions. We suggest a novel alternative to a choice between arbitrary restrictions.
Abstract:
The space and time fractional Bloch–Torrey equation (ST-FBTE) has been used to study anomalous diffusion in the human brain. Numerical methods for solving the ST-FBTE in three dimensions are computationally demanding. In this paper, we propose a computationally effective fractional alternating direction method (FADM) to overcome this problem. We consider the ST-FBTE on a finite domain, where the time and space derivatives are replaced by the Caputo–Djrbashian and the sequential Riesz fractional derivatives, respectively. The stability and convergence properties of the FADM are discussed. Finally, some numerical results for the ST-FBTE are given to confirm our theoretical findings.
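The equation itself is not written out in the abstract; a representative form, with a Caputo time derivative of order $\alpha$ and Riesz space derivatives of order $2\beta$ (the symbols and parameter ranges below are those commonly used in this literature and should be read as an assumption rather than the exact formulation of the paper), is

${}^{C}_{0}D_t^{\alpha} M(\mathbf{r},t) = \lambda\, M(\mathbf{r},t) + K_{\beta}\left(\frac{\partial^{2\beta}}{\partial |x|^{2\beta}} + \frac{\partial^{2\beta}}{\partial |y|^{2\beta}} + \frac{\partial^{2\beta}}{\partial |z|^{2\beta}}\right) M(\mathbf{r},t), \qquad 0<\alpha\le 1, \; \tfrac{1}{2}<\beta\le 1.$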
Abstract:
A quasi-maximum likelihood procedure for estimating the parameters of multi-dimensional diffusions is developed in which the transitional density is a multivariate Gaussian density with first and second moments approximating the true moments of the unknown density. For affine drift and diffusion functions, the moments are exactly those of the true transitional density and for nonlinear drift and diffusion functions the approximation is extremely good and is as effective as alternative methods based on likelihood approximations. The estimation procedure generalises to models with latent factors. A conditioning procedure is developed that allows parameter estimation in the absence of proxies.
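In outline (notation here is generic, not taken from the paper): for a diffusion $dX_t = \mu(X_t;\theta)\,dt + \sigma(X_t;\theta)\,dW_t$ observed at times $t_0,\dots,t_n$, the quasi-likelihood replaces the unknown transition density by a Gaussian whose mean and covariance match, or approximate, the first two conditional moments, giving

$\ell(\theta) = \sum_{i=1}^{n} \log \phi\big(X_{t_i};\, m(X_{t_{i-1}}, \Delta_i; \theta),\, V(X_{t_{i-1}}, \Delta_i; \theta)\big),$

where $\phi(\cdot; m, V)$ is the multivariate normal density, $\Delta_i = t_i - t_{i-1}$, and $m$ and $V$ solve or approximate the moment equations of the diffusion; $\ell(\theta)$ is then maximised over $\theta$.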
Abstract:
Biological systems involving proliferation, migration and death are observed across all scales. For example, they govern cellular processes such as wound healing, as well as the population dynamics of groups of organisms. In this paper, we provide a simplified method for correcting mean-field approximations of volume-excluding birth-death-movement processes on a regular lattice. An initially uniform distribution of agents on the lattice may give rise to spatial heterogeneity, depending on the relative rates of proliferation, migration and death. Many frameworks chosen to model these systems neglect spatial correlations, which can lead to inaccurate predictions of their behaviour. For example, the logistic model, which is the mean-field approximation in this case, is frequently chosen. This mean-field description can be corrected by including a system of ordinary differential equations for pair-wise correlations between lattice-site occupancies at various lattice distances. In this work we discuss difficulties with this method and provide a simplification, in the form of a partial differential equation description for the evolution of pair-wise spatial correlations over time. We test our simplified model against the more complex corrected mean-field model, finding excellent agreement. We show how our model successfully predicts system behaviour in regions where the mean-field approximation shows large discrepancies. Additionally, we investigate regions of parameter space where migration is reduced relative to proliferation, which has not been examined in detail before, and our method successfully corrects the deviations observed in the mean-field model in these parameter regimes.
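For context, a minimal sketch of the uncorrected logistic mean-field model referred to above, dC/dt = p C (1 - C) - d C, integrated with a forward Euler step (the rate symbols are generic; the corrected model, which also evolves the pair-wise correlation function, is not reproduced here):

def logistic_mean_field(c0, proliferation, death, dt=0.01, t_end=50.0):
    """Return a list of (time, average occupancy) pairs for the logistic mean-field ODE."""
    c, t, history = c0, 0.0, []
    while t <= t_end:
        history.append((t, c))
        c += dt * (proliferation * c * (1.0 - c) - death * c)
        t += dt
    return history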