20 results for Optimized using

in CentAUR: Central Archive, University of Reading - UK


Relevance: 70.00%

Abstract:

In this paper we present a compliant neural interface designed to record bladder afferent activity. We developed the implant's microfabrication process using multiple layers of silicone rubber and thin metal so that a gold microelectrode array is embedded within four parallel polydimethylsiloxane (PDMS) microchannels (5 mm long, 100 μm wide, 100 μm deep). Electrode impedance at 1 kHz was optimized using a reactive ion etching (RIE) step, which increased the porosity of the electrode surface. The electrodes did not deteriorate after a 3 month immersion in phosphate buffered saline (PBS) at 37 °C. Due to the unique microscopic topography of the metal film on PDMS, the electrodes are extremely compliant and can withstand handling during implantation (twisting and bending) without electrical failure. The device was implanted acutely in anaesthetized rats, and strands of the dorsal branch of roots L6 and S1 were surgically teased and inserted into three microchannels under saline immersion to allow simultaneous in vivo recordings. We utilized a tripole electrode configuration to keep background noise low and improve the signal-to-noise ratio. The device could distinguish two types of afferent nerve activity related to increasing bladder filling and contraction. To our knowledge, this is the first report of multichannel recordings of bladder afferent activity.
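The tripole arrangement referred to above can be summarised as subtracting the mean of the two outer electrodes from the centre electrode, so that interference appearing equally on all three contacts cancels while local nerve activity is retained. A minimal sketch of this idea (signal names and values are illustrative, not from the paper):

```python
import numpy as np

def tripole(center, outer_a, outer_b):
    """Tripole reference: common-mode noise appearing equally on all
    three electrodes cancels in the differential signal."""
    return center - 0.5 * (outer_a + outer_b)

# Synthetic demo: a local spike on the centre contact plus
# common-mode 50 Hz interference on all three contacts.
t = np.linspace(0.0, 1.0, 1000)
noise = 0.5 * np.sin(2 * np.pi * 50 * t)      # mains pickup, common to all
spike = np.exp(-((t - 0.5) ** 2) / 1e-4)      # local nerve activity
sig = tripole(spike + noise, noise, noise)    # common-mode part cancels
```

Because the interference is identical on all three contacts, the differential output recovers the local spike exactly in this idealized case.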

Relevance: 60.00%

Abstract:

In this paper we propose an efficient two-level model identification method for a large class of linear-in-the-parameters models from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularization parameters in the elastic net are optimized using a particle swarm optimization (PSO) algorithm at the upper level by minimizing the leave-one-out (LOO) mean square error (LOOMSE). Illustrative examples are included to demonstrate the effectiveness of the new approach.
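The ENOFR algorithm itself is not reproduced here, but the upper-level idea, a particle swarm tuning regularization strength by minimizing an analytically computed LOO mean square error, can be sketched for the simpler ridge case, where the LOO shortcut e_loo,i = e_i / (1 - h_ii) is exact. All data and swarm constants below are illustrative toys, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def loo_mse(X, y, lam):
    """Exact leave-one-out MSE for ridge regression via the
    hat-matrix shortcut: e_loo = residual / (1 - leverage)."""
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    r = y - H @ y
    return np.mean((r / (1 - np.diag(H))) ** 2)

def pso_min(f, lo, hi, n=12, iters=40):
    """Minimal particle swarm minimizing f over a scalar parameter."""
    x = rng.uniform(lo, hi, n)
    v = np.zeros(n)
    pbest, pval = x.copy(), np.array([f(xi) for xi in x])
    g = pbest[pval.argmin()]
    for _ in range(iters):
        v = (0.7 * v + 1.5 * rng.random(n) * (pbest - x)
                     + 1.5 * rng.random(n) * (g - x))
        x = np.clip(x + v, lo, hi)
        val = np.array([f(xi) for xi in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g

# Toy data: y depends on 3 of 10 candidate regressors, plus noise.
X = rng.standard_normal((60, 10))
y = X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 7] + 0.3 * rng.standard_normal(60)
best_log_lam = pso_min(lambda t: loo_mse(X, y, 10.0 ** t), -4.0, 2.0)
```

The swarm searches log10 of the penalty; the LOO score needs no data splitting because the leverage correction is analytic.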

Relevance: 60.00%

Abstract:

In this paper, we propose a novel online modeling algorithm for nonlinear and nonstationary systems using a radial basis function (RBF) neural network with a fixed number of hidden nodes. Each of the RBF basis functions has a tunable center vector and an adjustable diagonal covariance matrix. A multi-innovation recursive least squares (MRLS) algorithm is applied to update the RBF weights online while the modeling performance is monitored. When the modeling residual of the RBF network becomes large in spite of the weight adaptation, a node identified as insignificant is replaced with a new node, for which the tunable center vector and diagonal covariance matrix are optimized using the quantum particle swarm optimization (QPSO) algorithm. The major contribution is to combine the MRLS weight adaptation and QPSO node structure optimization in an innovative way so that the algorithm can track the local characteristics of a nonstationary system well with a very sparse model. Simulation results show that the proposed algorithm has significantly better performance than existing approaches.
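The QPSO node replacement is not sketched here, but the online weight-update half of the scheme can be illustrated with an ordinary recursive least squares (RLS) update of fixed Gaussian nodes (the paper uses a multi-innovation variant; this single-innovation toy is an assumption for brevity):

```python
import numpy as np

def rbf_features(x, centers, widths):
    """Gaussian RBF activations for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2 * widths ** 2))

def rls_update(w, P, phi, y, lam=0.99):
    """One recursive least squares step with forgetting factor lam."""
    k = P @ phi / (lam + phi @ P @ phi)   # gain vector
    w = w + k * (y - phi @ w)             # correct weights by innovation
    P = (P - np.outer(k, phi @ P)) / lam  # update inverse-covariance proxy
    return w, P

# Track y = sin(x) online with 7 fixed Gaussian nodes.
centers = np.linspace(-3.0, 3.0, 7)
widths = np.full(7, 1.0)
w, P = np.zeros(7), 100.0 * np.eye(7)
rng = np.random.default_rng(1)
for _ in range(500):
    x = rng.uniform(-3.0, 3.0)
    w, P = rls_update(w, P, rbf_features(x, centers, widths), np.sin(x))
```

The forgetting factor discounts old samples, which is what lets the weights follow a drifting (nonstationary) target.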

Relevance: 60.00%

Abstract:

The trans-[Cu₂L₂Cl₂] (1) and cis-[Cu₂L₂Cl₂]·H₂O (2) isomers of a diphenoxido-bridged Cu₂O₂ core have been synthesized using the tridentate reduced Schiff base ligand 2-[(2-dimethylamino-ethylamino)-methyl]-phenol. The geometry around Cu(II) is intermediate between square pyramidal and trigonal bipyramidal (Addison parameter τ = 0.463) in 1 but nearly square pyramidal (τ = 0.049) in 2. The chloride ions are coordinated to Cu(II) and are trans oriented in 1 but cis oriented in 2. Both isomers have been optimized using density functional theory (DFT) calculations, and the trans isomer is found to be 7.2 kcal mol⁻¹ more favorable than the cis isomer. However, hydrogen bonding between the water molecule of crystallization and the chloride ions compensates for the energy difference and stabilizes the cis isomer. Both complexes have been converted to a very rare phenoxido-azido bridged trinuclear species, [Cu₃L₂(μ₁,₁-N₃)₂(H₂O)₂(ClO₄)₂] (3), which has also been characterized structurally. All the complexes are antiferromagnetically coupled, but the magnitudes of the coupling constants are significantly different (J = -156.60, -652.31, and -31.54 cm⁻¹ for 1, 2, and 3, respectively). DFT calculations have also been performed to gain further insight into the qualitative theoretical interpretation of the overall magnetic behavior of the complexes.

Relevance: 60.00%

Abstract:

A two-stage linear-in-the-parameter model construction algorithm is proposed, aimed at noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage, which constructs a sparse linear-in-the-parameter classifier. The prefiltering stage is a two-level process aimed at maximizing a model's generalization capability, in which a new elastic-net model identification algorithm using singular value decomposition is employed at the lower level, and then two regularization parameters are optimized using a particle-swarm-optimization algorithm at the upper level by minimizing the leave-one-out (LOO) misclassification rate. It is shown that the LOO misclassification rate based on the resultant prefiltered signal can be analytically computed without splitting the data set, and the associated computational cost is minimal due to orthogonality. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive simulations on noisy data sets illustrate the competitiveness of this approach for noisy classification problems.
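The key computational point, that the LOO misclassification rate can be obtained analytically without splitting the data, can be illustrated for a plain least-squares classifier, where the held-out prediction follows from the hat matrix. The toy data and names below are illustrative:

```python
import numpy as np

def loo_misclass_rate(X, y):
    """Analytic LOO misclassification rate of a least-squares
    classifier trained on labels y in {-1, +1}.  The held-out
    prediction needs no refitting:
        yhat_loo_i = (yhat_i - h_ii * y_i) / (1 - h_ii).
    """
    H = X @ np.linalg.pinv(X)          # hat matrix of the LS fit
    yhat = H @ y
    h = np.diag(H)
    yhat_loo = (yhat - h * y) / (1 - h)
    return np.mean(y * yhat_loo <= 0)  # sign disagreement = error

# Sanity check on a separable toy problem (intercept + one feature).
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(80), rng.standard_normal(80)])
y = np.sign(X[:, 1] + 0.1)
rate = loo_misclass_rate(X, y)
```

The formula is exact for ordinary least squares, so it matches a brute-force leave-one-out refit while costing only one model fit.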

Relevance: 60.00%

Abstract:

Whole-genome transcriptome profiling is revealing how biological systems are regulated at the transcriptional level. This study reports the development of a robust method to profile and compare the transcriptomes of two nonmodel plant species, Thlaspi caerulescens, a zinc (Zn) hyperaccumulator, and Thlaspi arvense, a nonhyperaccumulator, using Affymetrix Arabidopsis thaliana ATH1-121501 GeneChip® arrays (Affymetrix, Santa Clara, CA, USA). Transcript abundance was quantified in the shoots of agar- and compost-grown plants of both species. Analyses were optimized using a genomic DNA (gDNA)-based probe-selection strategy based on the hybridization efficiency of Thlaspi gDNA with corresponding A. thaliana probes. In silico alignments of GeneChip® probes with Thlaspi gene sequences, and quantitative real-time PCR, confirmed the validity of this approach. Approximately 5000 genes were differentially expressed in the shoots of T. caerulescens compared with T. arvense, including genes involved in Zn transport and compartmentalization. Future functional analyses of genes identified as differentially expressed in the shoots of these closely related species will improve our understanding of the molecular mechanisms of Zn hyperaccumulation.

Relevance: 60.00%

Abstract:

This study has compared preliminary estimates of effective leaf area index (LAI) derived from fish-eye lens photographs to those estimated from airborne full-waveform small-footprint LiDAR data for a forest dataset in Australia. The full-waveform data were decomposed and optimized using a trust-region-reflective algorithm to extract denser point clouds. LAI estimates were derived from the LiDAR data in two ways: (1) from the probability of discrete pulses reaching the ground without being intercepted (point method), and (2) from raw-waveform canopy height profile processing adapted to small-footprint laser altimetry (waveform method), accounting for the reflectance ratio between vegetation and ground. The best results, which matched the hemispherical photography estimates, were achieved with the waveform method using a study-area-adjusted reflectance ratio of 0.4 (RMSE of 0.15 and 0.03 at plot and site level, respectively). The point method generally overestimated, whereas the waveform method with an arbitrary reflectance ratio of 0.5 underestimated, the fish-eye lens LAI estimates.
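Decomposing a full waveform into Gaussian pulses with a trust-region-reflective solver can be sketched with scipy.optimize.least_squares (method='trf'); the synthetic two-return waveform below is illustrative, not the Australian dataset:

```python
import numpy as np
from scipy.optimize import least_squares

def gaussians(p, t):
    """Sum of Gaussian pulses; p = [A1, mu1, s1, A2, mu2, s2, ...]."""
    return sum(A * np.exp(-((t - m) ** 2) / (2 * s ** 2))
               for A, m, s in np.reshape(p, (-1, 3)))

# Synthetic two-return waveform: a canopy echo and a ground echo.
t = np.linspace(0.0, 60.0, 300)
true = np.array([1.0, 20.0, 2.5, 0.6, 45.0, 1.8])
wave = gaussians(true, t) + 0.01 * np.random.default_rng(3).standard_normal(t.size)

# Trust-region-reflective fit with non-negativity bounds.
fit = least_squares(lambda p: gaussians(p, t) - wave,
                    x0=[0.8, 18.0, 3.0, 0.5, 47.0, 2.0],
                    bounds=(0.0, np.inf), method='trf')
```

Each recovered triple (amplitude, position, width) corresponds to one echo, from which a denser point cloud can be built.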

Relevance: 30.00%

Abstract:

A relatively simple, selective, precise and accurate high performance liquid chromatography (HPLC) method, based on a reaction of phenylisothiocyanate (PITC) with glucosamine (GL) in alkaline media, was developed and validated to determine glucosamine hydrochloride permeating through human skin in vitro. It is usually problematic to develop an accurate assay for chemicals traversing skin because the excellent barrier properties of the tissue ensure that only low amounts of the material pass through the membrane, and skin components may leach out of the tissue to interfere with the analysis. In addition, in the case of glucosamine hydrochloride, chemical instability adds further complexity to assay development. The assay, utilising the PITC-GL reaction, was refined by optimizing the reaction temperature, reaction time and PITC concentration. The reaction produces a phenylthiocarbamyl-glucosamine (PTC-GL) adduct, which was separated on a reverse-phase (RP) column packed with 5 μm ODS (C-18) Hypersil particles using a diode array detector (DAD) at 245 nm. The mobile phase was methanol-water-glacial acetic acid (10:89.96:0.04 v/v/v, pH 3.5) delivered to the column at 1 ml min⁻¹, and the column temperature was maintained at 30 °C. Using a saturated aqueous solution of glucosamine hydrochloride, in vitro permeation studies were performed at 32 ± 1 °C over 48 h using human epidermal membranes prepared by a heat separation method and mounted in Franz-type diffusion cells with a diffusional area of 2.15 ± 0.1 cm². The optimum derivatisation conditions for reaction temperature, reaction time and PITC concentration were found to be 80 °C, 30 min and 1% v/v, respectively. PTC-GL and GL eluted at 8.9 and 9.7 min, respectively. The detector response was linear in the concentration range 0-1000 μg ml⁻¹. The assay was robust, with intra- and inter-day precisions (expressed as percentage relative standard deviation, %R.S.D.) < 12. Intra- and inter-day accuracy (expressed as percentage relative error, %RE) was ≤ -5.60 and ≤ -8.00, respectively. Using this assay, it was found that GL-HCl permeates through human skin with a flux of 1.497 ± 0.42 μg cm⁻² h⁻¹, a permeability coefficient of 5.66 ± 1.6 × 10⁻⁶ cm h⁻¹, and a lag time of 10.9 ± 4.6 h.

Relevance: 30.00%

Abstract:

This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic: the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify a critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with accuracy comparable to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
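As a point of reference for the comparison above, the full-sample Parzen window estimate places one kernel on every sample, whereas the paper's algorithm selects a small subset of kernels. A minimal Parzen sketch (bandwidth and data are illustrative):

```python
import numpy as np

def parzen(x_eval, samples, h):
    """Full-sample Parzen window estimate with Gaussian kernels:
    every sample contributes one kernel of bandwidth h."""
    d = (x_eval[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * d ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
samples = rng.standard_normal(2000)          # true density: N(0, 1)
xs = np.linspace(-3.0, 3.0, 61)
est = parzen(xs, samples, h=0.3)
true = np.exp(-0.5 * xs ** 2) / np.sqrt(2 * np.pi)
```

The cost of evaluating this estimate grows with the number of samples, which is exactly what motivates a sparse construction with comparable accuracy.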

Relevance: 30.00%

Abstract:

A fundamental principle in practical nonlinear data modeling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross validation is often used to estimate generalization errors when choosing among different network architectures (M. Stone, "Cross-validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 111-147, 1974). Based upon minimization of an LOO criterion, either the mean square of the LOO errors or the LOO misclassification rate, we present two backward elimination algorithms as model post-processing procedures for regression and classification problems, respectively. The proposed backward elimination procedures exploit an orthogonalization procedure to maintain orthogonality between the subspace spanned by the pruned model and the deleted regressor. It is then shown that the LOO criteria used in both algorithms can be calculated via analytic recursive formulae, as derived in this contribution, without actually splitting the estimation data set, so as to reduce computational expense. Compared to most other model construction methods, the proposed algorithms are advantageous in several respects: (i) there are no tuning parameters to be optimized through an extra validation data set; (ii) the procedure is fully automatic, without an additional stopping criterion; and (iii) the model structure selection is directly based on model generalization performance. Illustrative examples on regression and classification demonstrate that the proposed algorithms are viable post-processing methods to prune a model for extra sparsity and improved generalization.
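The backward elimination idea can be sketched with the PRESS statistic (the sum of squared analytic LOO errors) for a linear model; unlike the paper's recursive formulae, this naive version recomputes PRESS from scratch for each candidate deletion. Data and names are illustrative:

```python
import numpy as np

def press(X, y):
    """Sum of squared LOO errors (PRESS), computed analytically
    from the hat matrix without splitting the data."""
    H = X @ np.linalg.pinv(X)
    r = y - H @ y
    return np.sum((r / (1 - np.diag(H))) ** 2)

def backward_eliminate(X, y):
    """Greedily delete the regressor whose removal most improves
    PRESS; stop when every remaining deletion makes PRESS worse."""
    keep = list(range(X.shape[1]))
    best = press(X[:, keep], y)
    while len(keep) > 1:
        scores = [press(X[:, [j for j in keep if j != c]], y) for c in keep]
        i = int(np.argmin(scores))
        if scores[i] >= best:
            break                    # no deletion improves generalization
        best = scores[i]
        keep.pop(i)
    return keep, best

# Toy problem: only columns 1 and 4 carry signal.
rng = np.random.default_rng(5)
X = rng.standard_normal((100, 8))
y = 2 * X[:, 1] - X[:, 4] + 0.2 * rng.standard_normal(100)
keep, score = backward_eliminate(X, y)
```

The stopping rule is automatic: elimination ends as soon as the LOO criterion stops improving, with no extra validation set.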

Relevance: 30.00%

Abstract:

The correlated k-distribution (CKD) method is widely used in the radiative transfer schemes of atmospheric models and involves dividing the spectrum into a number of bands and then reordering the gaseous absorption coefficients within each one. The fluxes and heating rates for each band may then be computed by discretizing the reordered spectrum into of order 10 quadrature points per major gas and performing a monochromatic radiation calculation for each point. In this presentation it is shown that for clear-sky longwave calculations, sufficient accuracy for most applications can be achieved without the need for bands: reordering may be performed on the entire longwave spectrum. The resulting full-spectrum correlated k (FSCK) method requires significantly fewer monochromatic calculations than standard CKD to achieve a given accuracy. The concept is first demonstrated by comparing with line-by-line calculations for an atmosphere containing only water vapor, in which it is shown that the accuracy of heating-rate calculations improves approximately in proportion to the square of the number of quadrature points. For more than around 20 points, the root-mean-squared error flattens out at around 0.015 K/day due to the imperfect rank correlation of absorption spectra at different pressures in the profile. The spectral overlap of m different gases is treated by considering an m-dimensional hypercube where each axis corresponds to the reordered spectrum of one of the gases. This hypercube is then divided up into a number of volumes, each approximated by a single quadrature point, such that the total number of quadrature points is slightly fewer than the sum of the number that would be required to treat each of the gases separately. 
The gaseous absorptions for each quadrature point are optimized such that they minimize a cost function expressing the deviation of the heating rates and fluxes calculated by the FSCK method from line-by-line calculations for a number of training profiles. This approach is validated for atmospheres containing water vapor, carbon dioxide, and ozone, in which it is found that in the troposphere and most of the stratosphere, heating-rate errors of less than 0.2 K/day can be achieved using a total of 23 quadrature points, decreasing to less than 0.1 K/day for 32 quadrature points. It would be relatively straightforward to extend the method to include other gases.
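The core reordering idea behind CKD/FSCK can be illustrated on a toy absorption spectrum: sorting the monochromatic absorption coefficients yields a smooth k(g) function that a handful of quadrature intervals integrates almost as accurately as the full line-by-line mean (the spectrum below is synthetic, not a real gas):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy "line-by-line" spectrum: 10,000 monochromatic absorption
# coefficients spanning four orders of magnitude.
k = 10.0 ** rng.uniform(-3, 1, 10000)
u = 1.0                                   # absorber amount along the path

# Exact band-mean transmission needs every monochromatic point.
t_lbl = np.mean(np.exp(-k * u))

# k-distribution: reorder (sort) the spectrum so k(g) is smooth in the
# cumulative coordinate g, then integrate with few quadrature intervals.
kg = np.sort(k)
edges = np.linspace(0, kg.size, 11).astype(int)   # 10 g-intervals
t_kd = sum((b - a) / kg.size * np.exp(-kg[a:b].mean() * u)
           for a, b in zip(edges[:-1], edges[1:]))
```

Ten quadrature points on the reordered spectrum reproduce the 10,000-point mean closely, which is the economy the CKD and FSCK methods exploit.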

Relevance: 30.00%

Abstract:

In this paper, a new model-based proportional-integral-derivative (PID) tuning and control approach is introduced for Hammerstein systems identified from observational input/output data. The nonlinear static function in the Hammerstein system is modelled using a B-spline neural network. The control signal is composed of a PID controller together with a correction term. Both the parameters of the PID controller and the correction term are optimized by minimizing the multistep-ahead prediction errors. In order to update the control signal, the multistep-ahead predictions of the Hammerstein system based on B-spline neural networks, and the associated Jacobian matrix, are calculated using the de Boor algorithms, including both the functional and derivative recursions. Numerical examples demonstrate the efficacy of the proposed approach.
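The functional de Boor (Cox-de Boor) recursion mentioned above evaluates all B-spline basis functions at a point; the derivative recursion and the PID loop are omitted here. A minimal sketch with an illustrative uniform knot vector:

```python
import numpy as np

def bspline_basis(x, knots, p):
    """All degree-p B-spline basis values at x via the Cox-de Boor
    (functional) recursion, with the usual 0/0 := 0 convention."""
    # Degree 0: indicator functions of the knot spans.
    N = np.array([float(knots[i] <= x < knots[i + 1])
                  for i in range(len(knots) - 1)])
    for d in range(1, p + 1):
        Nn = np.zeros(len(knots) - d - 1)
        for i in range(len(Nn)):
            left = right = 0.0
            if knots[i + d] > knots[i]:
                left = (x - knots[i]) / (knots[i + d] - knots[i]) * N[i]
            if knots[i + d + 1] > knots[i + 1]:
                right = ((knots[i + d + 1] - x)
                         / (knots[i + d + 1] - knots[i + 1]) * N[i + 1])
            Nn[i] = left + right
        N = Nn
    return N

# Cubic basis on a uniform knot vector; inside [knots[3], knots[6]]
# the basis functions are non-negative and form a partition of unity.
B = bspline_basis(4.5, np.arange(10.0), 3)
```

A B-spline network output is then simply the dot product of these basis values with the network weights, which is what makes the Jacobian cheap to form.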

Relevance: 30.00%

Abstract:

Decadal hindcast simulations of Arctic Ocean sea ice thickness made by a modern dynamic-thermodynamic sea ice model, forced independently by both the ERA-40 and NCEP/NCAR reanalysis data sets, are compared for the first time. Using comprehensive data sets of observations made between 1979 and 2001 of sea ice thickness, draft, extent, and speed, we find that it is possible to tune model parameters to give satisfactory agreement with observed data, thereby highlighting the skill of modern sea ice models, though the parameter values chosen differ according to the model forcing used. We find a consistent decreasing trend in Arctic Ocean sea ice thickness since 1979, and a steady decline in the Eastern Arctic Ocean over the full 40-year period of comparison that accelerated after 1980, but the predictions of Western Arctic Ocean sea ice thickness between 1962 and 1980 differ substantially. The origins of the differing thickness trends and variability were traced not to parameter differences but to differences in the forcing fields applied, and in how they are applied. It is argued that uncertainties, differences, and errors in sea ice model forcing sets complicate the use of models to determine the exact causes of the recently reported decline in Arctic sea ice thickness, but help in the determination of robust features if the models are tuned appropriately against observations.

Relevance: 30.00%

Abstract:

The CWRF is developed as a climate extension of the Weather Research and Forecasting model (WRF) by incorporating numerous improvements in the representation of physical processes and in the integration of external (top, surface, lateral) forcings that are crucial at climate scales, including interactions between land, atmosphere, and ocean; convection and microphysics; cloud, aerosol, and radiation; and system consistency throughout all process modules. This extension inherits all WRF functionalities for numerical weather prediction while enhancing the capability for climate modeling. As such, CWRF can be applied seamlessly to weather forecasting and climate prediction. The CWRF is built with a comprehensive ensemble of alternative parameterization schemes for each of the key physical processes, including surface (land, ocean), planetary boundary layer, cumulus (deep, shallow), microphysics, cloud, aerosol, and radiation, and their interactions. This facilitates the use of an optimized physics ensemble approach to improve weather or climate prediction along with a reliable uncertainty estimate. The CWRF also emphasizes the societal service capability to provide impact-relevant information by coupling with detailed models of terrestrial hydrology, coastal ocean, crop growth, air quality, and a recently expanded interactive water quality and ecosystem model. This study provides a general CWRF description and a basic skill evaluation based on a continuous integration for the period 1979-2009 as compared with that of WRF, using a 30-km grid spacing over a domain that includes the contiguous United States plus southern Canada and northern Mexico. In addition to advantages of greater application capability, CWRF improves performance in radiation and terrestrial hydrology over WRF and other regional models. Precipitation simulation, however, remains a challenge for all of the tested models.

Relevance: 30.00%

Abstract:

Analysis of microbial gene expression during host colonization provides valuable information on the nature of the interaction, beneficial or pathogenic, and the adaptive processes involved. Isolation of bacterial mRNA for in planta analysis can be challenging where host nucleic acid may dominate the preparation, or inhibitory compounds affect downstream analysis, e.g., quantitative reverse transcriptase PCR (qPCR), microarray, or RNA-seq. The goal of this work was to optimize the isolation of bacterial mRNA of food-borne pathogens from living plants. Reported methods for recovery from phytopathogen-infected plant material, using hot phenol extraction and high bacterial inoculation concentrations or large amounts of infected tissue, were found to be inappropriate for plant roots inoculated with Escherichia coli O157:H7: the bacterial RNA yields were too low, and increasing the amount of plant material resulted in a dominance of plant RNA in the sample. To improve the yield of bacterial RNA and reduce the number of plants required, an optimized method was developed which combines bead beating with directed bacterial lysis using SDS and lysozyme. Inhibitory plant compounds, such as phenolics and polysaccharides, were counteracted with the addition of high-molecular-weight polyethylene glycol and hexadecyltrimethyl ammonium bromide. The new method increased the total yield of bacterial mRNA substantially and allowed assessment of gene expression by qPCR. This method can be applied to other bacterial species associated with plant roots, and also in the wider context of food safety.