818 results for Linear matrix inequalities (LMI) techniques

Relevance: 30.00%

Abstract:

The electroassisted encapsulation of single-walled carbon nanotubes into silica matrices (SWCNT@SiO2) was performed. This material was used as the host for the potentiostatic growth of polyaniline (PANI) to yield a hybrid nanocomposite electrode, which was then characterized by both electrochemical and imaging techniques. The electrochemical properties of the SWCNT@SiO2-PANI composite were tested against inorganic (Fe3+/Fe2+) and organic (dopamine) redox probes. The electron transfer constants for the electrochemical reactions increased significantly when either SWCNTs or PANI were dispersed within the SiO2 matrix; the best results, however, were obtained when polyaniline was grown through the pores of the SWCNT@SiO2 material. The enhanced reversibility of the redox reactions was ascribed to the synergy between the two electrocatalytic components (SWCNTs and PANI) of the composite.

Relevance: 30.00%

Abstract:

In this paper we examine multi-objective linear programming problems in the face of data uncertainty in both the objective function and the constraints. First, we derive a formula for the radius of robust feasibility, which guarantees constraint feasibility for all possible scenarios within a specified uncertainty set under affine data parametrization. We then present numerically tractable optimality conditions for minmax robust weakly efficient solutions, i.e., the weakly efficient solutions of the robust counterpart. We also consider highly robust weakly efficient solutions, i.e., robust feasible solutions that are weakly efficient for every possible instance of the objective matrix within a specified uncertainty set, and provide lower bounds for the radius of highly robust efficiency that guarantee the existence of such solutions under affine and rank-1 objective data uncertainty. Finally, we provide numerically tractable optimality conditions for highly robust weakly efficient solutions.
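The robust-counterpart idea behind this abstract can be made concrete in the simplest case. The sketch below is an illustrative assumption, not the paper's formulation: it uses componentwise interval (box) uncertainty in a single constraint row, for which the worst case of a^T x over the box has the closed form "nominal term plus radius-weighted |x|".

```python
# Worst-case feasibility check for one linear constraint a^T x <= b under
# componentwise box uncertainty a_i in [abar_i - delta_i, abar_i + delta_i].
# Illustrative sketch only; the paper treats general uncertainty sets.
def robust_feasible(x, abar, delta, b):
    """True iff a^T x <= b holds for every a in the uncertainty box."""
    worst = sum(ai * xi + di * abs(xi) for ai, di, xi in zip(abar, delta, x))
    return worst <= b

# Nominal constraint x1 + x2 <= 4 with uncertainty radius 0.5 per coefficient:
print(robust_feasible([1.0, 1.0], [1.0, 1.0], [0.5, 0.5], 4.0))  # True  (worst case 3.0)
print(robust_feasible([1.0, 2.0], [1.0, 1.0], [0.5, 0.5], 4.0))  # False (worst case 4.5)
```

A point that survives this worst-case test is robust feasible for every scenario in the box, which is the property the radius of robust feasibility quantifies.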

Relevance: 30.00%

Abstract:

This dissertation studied linear and organic perspective applied to landscape drawing, treating perspective as a fundamental tool for graphically materializing the sensory experiences offered by the landscape or place to be drawn. The methodology consisted initially of an investigation of perspective theories and perspective representation methods applied to landscape drawing, followed by practical application to a specific case. Within linear perspective, the following were analysed and explained: visual framing, representation methods based on descriptive geometry, and the construction of shadows and of reflections within shadows. In the context of organic perspective, techniques using depth of field, colour, fading, overlapping, and light-dark contrast were analysed and described as means of adding depth to a drawing. A set of materials, printing techniques, and resources was also explained which, through practical examples executed by different artists over time, illustrates perspective drawing and the application of the theory. Finally, a set of original drawings was prepared to represent the place of the specific case, using the theories and methods of linear and organic perspective with different materials and printing techniques. The drawings were framed within the "project design", starting from the horizontal and vertical projections of a landscape architecture design to provide different views of the proposed space. It can be concluded that the techniques and methods described and exemplified were suitable, with some adjustments, for their intended purpose, in particular in landscape design conception, bringing to reality the pictorial sense of the world perceived by the human eye.

Relevance: 30.00%

Abstract:

The paper describes a procedure for accurately and speedily calibrating tanks used for the chemical processing of nuclear materials. The procedure features the use of (1) precalibrated vessels certified to deliver known volumes of liquid, (2) calibrated linear measuring devices, and (3) a digital computer for manipulating data and producing printed calibration information. Calibration records of the standards are traceable to primary standards. Logic is incorporated in the computer program to accomplish curve fitting and perform the tests to accept or to reject the calibration, based on statistical, empirical, and report requirements. This logic is believed to be unique.
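The curve-fitting and accept-or-reject logic described above can be sketched in miniature. This is an illustrative reconstruction, not the report's program: the straight-line model, the residual tolerance, and the toy height/volume data are all assumptions.

```python
# Sketch of calibrate-and-test: fit delivered volume vs. liquid height by least
# squares, then accept the calibration only if all residuals stay within a
# tolerance. Model, tolerance, and data are illustrative assumptions.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def accept(xs, ys, tol):
    """Accept the fitted calibration iff every residual is within tol."""
    slope, intercept = fit_line(xs, ys)
    return all(abs(y - (slope * x + intercept)) <= tol for x, y in zip(xs, ys))

heights = [0.0, 1.0, 2.0, 3.0]   # from the calibrated linear measuring device
volumes = [0.1, 2.0, 4.1, 6.0]   # delivered by the precalibrated vessels
print(accept(heights, volumes, tol=0.2))  # True
```

The report's logic additionally applies statistical and empirical criteria; the single tolerance test here only stands in for that acceptance step.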

Relevance: 30.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 30.00%

Abstract:

Recently, methods for computing D-optimal designs for population pharmacokinetic studies have become available, but few publications have prospectively evaluated the benefits of D-optimality in population or single-subject settings. This study compared a population optimal design with an empirical design for estimating the base pharmacokinetic model for enoxaparin in a stratified, randomized setting. The population pharmacokinetic D-optimal design for enoxaparin was estimated using the PFIM function (MATLAB version 6.0.0.88). The optimal design was based on a one-compartment model with lognormal between-subject variability and proportional residual variability, and consisted of a single design with three sampling windows (0-30 min, 1.5-5 hr and 11-12 hr post-dose) for all patients. The empirical design consisted of three sample-time windows per patient, drawn from a total of nine windows that collectively represented the entire dose interval; each patient had one blood sample taken from each of three different windows. Windows for blood sampling times were also provided for the optimal design. Ninety-six patients currently receiving enoxaparin therapy were recruited into the study. Patients were randomly assigned to either the optimal or the empirical sampling design, stratified by body mass index, and the exact times of blood samples and doses were recorded. Analysis was undertaken using NONMEM (version 5). The empirical design supported a one-compartment linear model with additive residual error, while the optimal design supported a two-compartment linear model with additive residual error, as did the model derived from the full data set. A posterior predictive check was performed in which the models arising from the empirical and optimal designs were used to predict into the full data set. This revealed that the model derived from the optimal design was superior to the empirical-design model in terms of precision and was similar to the model developed from the full data set. This study suggests that optimal design techniques may be useful even when the optimized design was based on a model that was misspecified in terms of the structural and statistical models, and even when the implementation of the optimally designed study deviated from the nominal design.
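The D-optimality criterion that tools like PFIM optimize can be illustrated on a toy model. Everything below is a hedged sketch, not the study's computation: the one-compartment parameters, dose, and candidate sampling times are invented, and the design is scored only by the determinant of a Fisher information matrix built from fixed-effect sensitivities (a population design additionally accounts for between-subject and residual variability).

```python
import math

# Toy D-optimality comparison for a one-compartment model
# C(t) = (Dose/V) * exp(-k*t) with parameters (k, V). Illustrative only.
def fim_det(times, dose=100.0, k=0.2, v=10.0):
    a = b = c = 0.0  # accumulate FIM = [[a, c], [c, b]] over sampling times
    for t in times:
        conc = (dose / v) * math.exp(-k * t)
        dk = -t * conc    # sensitivity of C(t) w.r.t. k
        dv = -conc / v    # sensitivity of C(t) w.r.t. V
        a += dk * dk
        b += dv * dv
        c += dk * dv
    return a * b - c * c  # determinant: the D-optimality criterion

spread = [0.25, 3.0, 11.5]    # early / middle / late samples (cf. the windows above)
clustered = [0.25, 0.5, 1.0]  # all samples taken early
print(fim_det(spread) > fim_det(clustered))  # True
```

Spreading samples across the dose interval yields a larger determinant, i.e. more information about both parameters, which is why the optimal design's windows cover early, middle, and late times.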

Relevance: 30.00%

Abstract:

Stabilizing selection has been predicted to change genetic variances and covariances so that the orientation of the genetic variance-covariance matrix (G) becomes aligned with the orientation of the fitness surface, but it is less clear how directional selection may change G. Here we develop statistical approaches to the comparison of G with vectors of linear and nonlinear selection. We apply these approaches to a set of male sexually selected cuticular hydrocarbons (CHCs) of Drosophila serrata. Even though male CHCs displayed substantial additive genetic variance, more than 99% of the genetic variance was orientated 74.9° away from the vector of linear sexual selection, suggesting that open-ended female preferences may greatly reduce genetic variation in male display traits. Although the orientations of G and the fitness surface were found to differ significantly, the similarity present in eigenstructure was a consequence of traits under weak linear selection and strong nonlinear (convex) selection. Associating the eigenstructure of G with vectors of linear and nonlinear selection may provide a way of determining what long-term changes in G may be generated by the processes of natural and sexual selection.
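The geometry being compared here can be sketched with two traits. The numbers below are invented for illustration (a toy G and selection gradient, not the Drosophila estimates): the sketch computes the angle between the linear selection gradient beta and gmax (the leading eigenvector of G), and how much genetic variance lies along beta.

```python
import math

def leading_eigvec(g):
    """Unit leading eigenvector of a symmetric 2x2 matrix [[a, c], [c, b]]."""
    (a, c), (_, b) = g
    lam = (a + b + math.sqrt((a - b) ** 2 + 4 * c * c)) / 2.0  # larger eigenvalue
    if abs(c) > 1e-12:
        v = (c, lam - a)
    else:
        v = (1.0, 0.0) if a >= b else (0.0, 1.0)
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def angle_deg(u, v):
    """Angle between the axes spanned by u and v (sign-free, 0-90 degrees)."""
    dot = abs(u[0] * v[0] + u[1] * v[1])
    cos = min(1.0, dot / (math.hypot(u[0], u[1]) * math.hypot(v[0], v[1])))
    return math.degrees(math.acos(cos))

G = [[1.0, 0.9], [0.9, 1.0]]   # hypothetical 2-trait genetic covariance matrix
beta = (1.0, -1.0)             # hypothetical linear selection gradient

gmax = leading_eigvec(G)
var_along_beta = sum(beta[i] * G[i][j] * beta[j]
                     for i in range(2) for j in range(2)) / (beta[0] ** 2 + beta[1] ** 2)
print(round(angle_deg(beta, gmax)))   # 90: beta is orthogonal to gmax here
print(round(var_along_beta, 6))       # 0.1: little genetic variance along beta
```

When most genetic variance is concentrated along gmax and beta points well away from it, as in this toy case, the response to directional selection is limited, which is the situation the 74.9° result describes.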

Relevance: 30.00%

Abstract:

A rapid method has been developed for the quantification of the prototypic cyclotide kalata B1 in water and plasma using matrix-assisted laser desorption/ionisation time-of-flight (MALDI-TOF) mass spectrometry. The unusual structure of the cyclotides means that they do not ionise as readily as linear peptides, and as a result of their low ionisation efficiency, traditional LC/MS analyses could not reach the levels of detection required for the quantification of cyclotides in plasma for pharmacokinetic studies. MALDI-TOF-MS analysis showed linearity (R² > 0.99) in the concentration range 0.05-10 µg/mL, with a limit of detection of 0.05 µg/mL (9 fmol) in plasma. This paper highlights the applicability of MALDI-TOF mass spectrometry for the rapid and sensitive quantification of peptides in biological samples without the need for extensive extraction procedures. © 2005 Elsevier B.V. All rights reserved.
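The quoted linearity figure (R² > 0.99 over 0.05-10 µg/mL) comes from an ordinary calibration-curve fit, which can be sketched as follows. The concentration and response values are made up for illustration, not the paper's data.

```python
# Coefficient of determination for a least-squares calibration line of
# instrument response vs. concentration. Data below are illustrative only.
def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

conc = [0.05, 0.1, 0.5, 1.0, 5.0, 10.0]              # standards, ug/mL
signal = [11.0, 20.5, 101.0, 203.0, 1000.0, 2010.0]  # instrument response
print(r_squared(conc, signal) > 0.99)  # True
```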

Relevance: 30.00%

Abstract:

We compared growth rates of the lemon shark, Negaprion brevirostris, from Bimini, Bahamas, and the Marquesas Keys (MK), Florida, using data obtained in a multi-year annual census. We marked neonate and juvenile sharks with unique electronic identity tags at both sites. Sharks were tagged with tiny, subcutaneous transponders, a type of tagging thought to cause little, if any, disruption to normal growth patterns compared with conventional external tagging. Within the first 2 years of this project, no age data were recorded for sharks caught for the first time in Bimini. Therefore, we applied and tested two methods of age analysis: (1) a modified 'minimum convex polygon' method and (2) a new age-assigning method, the 'cut-off technique'. The cut-off technique proved to be the more suitable one, enabling us to identify the age of 134 of the 642 sharks of previously unknown age and maximising the usable growth data included in our analysis. Annual absolute growth rates of juvenile, nursery-bound lemon sharks were almost constant for the two Bimini nurseries and are best described by a simple linear model (growth data were only available for age-0 sharks in the MK). Annual absolute growth for age-0 sharks was much greater in the MK than in either the North Sound (NS) or Shark Land (SL) at Bimini. Growth of SL sharks was significantly faster during the first 2 years of life than that of sharks in the NS population. However, in the MK, only growth in the first year was considered reliably estimated, owing to low recapture rates. Analyses indicated no significant differences in growth rates between males and females in any area.

Relevance: 30.00%

Abstract:

Determining the dimensionality of G provides an important perspective on the genetic basis of a multivariate suite of traits. Since the introduction of Fisher's geometric model, the number of genetically independent traits underlying a set of functionally related phenotypic traits has been recognized as an important factor influencing the response to selection. Here, we show how the effective dimensionality of G can be established using a method for determining the dimensionality of the effect space from a multivariate general linear model, introduced by Amemiya (1985). We compare this approach with two other available methods, factor-analytic modeling and bootstrapping, using a half-sib experiment that estimated G for eight cuticular hydrocarbons of Drosophila serrata. In our example, eight pheromone traits were shown to be adequately represented by only two underlying genetic dimensions by Amemiya's approach and by factor-analytic modeling of the covariance structure at the sire level. In contrast, bootstrapping identified four dimensions with significant genetic variance. A simulation study indicated that while the performance of Amemiya's method was more sensitive to power constraints, it performed as well as or better than factor-analytic modeling in correctly identifying the original genetic dimensions at moderate to high levels of heritability. The bootstrap approach consistently overestimated the number of dimensions in all cases and performed less well than Amemiya's method at subspace recovery.
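The notion of "effective dimensionality" can be illustrated directly: a covariance matrix generated by fewer underlying factors than traits has reduced rank. The sketch below is a toy stand-in for the formal procedures compared above (Amemiya's method, factor-analytic modeling, and the bootstrap); the factor-loading matrix and the plain rank test are assumptions for illustration.

```python
# Effective dimensionality of a toy G: four traits driven by two independent
# genetic factors give G = B * B^T of numerical rank 2. Rank is computed by
# Gaussian elimination with a pivot tolerance.
def numerical_rank(m, tol=1e-9):
    m = [row[:] for row in m]
    rank, rows, cols = 0, len(m), len(m[0])
    for col in range(cols):
        pivot = max(range(rank, rows), key=lambda r: abs(m[r][col]), default=None)
        if pivot is None or abs(m[pivot][col]) < tol:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(rank + 1, rows):
            f = m[r][col] / m[rank][col]
            for c in range(col, cols):
                m[r][c] -= f * m[rank][c]
        rank += 1
    return rank

# Hypothetical loadings of 4 traits on 2 genetic factors:
B = [[1.0, 0.0], [0.5, 1.0], [0.2, 0.8], [1.0, 1.0]]
G = [[sum(B[i][k] * B[j][k] for k in range(2)) for j in range(4)] for i in range(4)]
print(numerical_rank(G))  # 2
```

In real data G is estimated with error, so its sample eigenvalues are rarely exactly zero; that is why the formal tests above are needed rather than a raw rank computation.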

Relevance: 30.00%

Abstract:

Conventional differential scanning calorimetry (DSC) techniques are commonly used to quantify the solubility of drugs within polymeric-controlled delivery systems. However, the nature of the DSC experiment, and in particular the relatively slow heating rates employed, limit its use to the measurement of drug solubility at the drug's melting temperature. Here, we describe the application of hyper-DSC (HDSC), a variant of DSC involving extremely rapid heating rates, to the calculation of the solubility of a model drug, metronidazole, in silicone elastomer, and demonstrate that the faster heating rates permit the solubility to be calculated under non-equilibrium conditions such that the solubility better approximates that at the temperature of use. At a heating rate of 400°C/min (HDSC), metronidazole solubility was calculated to be 2.16 mg/g compared with 6.16 mg/g at 20°C/min. © 2005 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

This thesis is concerned with approximate inference in dynamical systems from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a stochastic process to the deterministic dynamics; inference in such processes has therefore drawn much attention. Here, two new extended frameworks are derived and presented that are based on basis-function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-)parameters of these systems, i.e. the drift parameters and diffusion coefficients. The new methods are numerically validated on a range of systems that vary in dimensionality and non-linearity: the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically; the univariate, highly non-linear stochastic double-well system; and the multivariate, chaotic stochastic Lorenz '63 (three-dimensional) model. The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. These new approaches are compared with a variety of well-known methods, including the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is provided.
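The Ornstein-Uhlenbeck process is the one benchmark above with an analytic likelihood: for dx = -theta*x dt + sigma dW, transitions over a step dt are Gaussian with mean x*exp(-theta*dt) and variance sigma^2*(1 - exp(-2*theta*dt))/(2*theta). A minimal sketch of that exact likelihood (parameter names and values are generic, not the thesis's notation):

```python
import math, random

# Exact transition log-likelihood of an OU path, plus an exact simulator.
def ou_log_likelihood(path, dt, theta, sigma):
    var = sigma ** 2 * (1.0 - math.exp(-2.0 * theta * dt)) / (2.0 * theta)
    ll = 0.0
    for x_prev, x_next in zip(path, path[1:]):
        mean = x_prev * math.exp(-theta * dt)
        ll += -0.5 * math.log(2.0 * math.pi * var) - (x_next - mean) ** 2 / (2.0 * var)
    return ll

def simulate_ou(n, dt, theta, sigma, x0=0.0, seed=0):
    """Simulate n exact OU transitions (no discretization error)."""
    rng = random.Random(seed)
    std = math.sqrt(sigma ** 2 * (1.0 - math.exp(-2.0 * theta * dt)) / (2.0 * theta))
    path = [x0]
    for _ in range(n):
        path.append(path[-1] * math.exp(-theta * dt) + std * rng.gauss(0.0, 1.0))
    return path

path = simulate_ou(2000, 0.1, theta=1.0, sigma=0.5)
# The true drift parameter should score better than a badly wrong one:
print(ou_log_likelihood(path, 0.1, 1.0, 0.5) > ou_log_likelihood(path, 0.1, 5.0, 0.5))
```

Because this likelihood is available in closed form, the OU process gives a ground truth against which approximate schemes, such as the variational extensions above, can be checked.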

Relevance: 30.00%

Abstract:

Matrix application continues to be a critical step in sample preparation for matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging (MSI). Imaging of small molecules such as drugs and metabolites is particularly problematic: the washing steps commonly used to remove salts are usually omitted because they may also remove the analyte, and analyte spreading is more likely with conventional wet matrix application methods. We have developed a method that applies the matrix as a dry, finely divided powder, referred to here as dry matrix application, for the imaging of drug compounds. This appears to offer a method complementary to wet matrix application for the MALDI-MSI of small molecules: the two application techniques produce different ion profiles, and the dry method allows the visualization of compounds not observed using wet matrix application. We demonstrate its value in imaging clozapine from rat kidney and 4-bromophenyl-1,4-diazabicyclo(3.2.2)nonane-4-carboxylic acid from rat brain. In addition, exposure of the dry-matrix-coated sample to a saturated moist atmosphere appears to enhance the visualization of a different set of molecules.

Relevance: 30.00%

Abstract:

This thesis is a study of three techniques to improve the performance of some standard forecasting models, applied to energy demand and prices; we focus on forecasting demand and price one day ahead. First, the wavelet transform was used as a pre-processing procedure with two approaches: multicomponent forecasts and direct forecasts. We empirically compared these approaches and found that the former consistently outperformed the latter. Second, adaptive models were introduced to continuously update model parameters in the testing period by combining filters with standard forecasting methods; among these adaptive models, the adaptive LR-GARCH model is proposed for the first time in this thesis. Third, with regard to the noise distributions of the dependent variables in the forecasting models, we used either Gaussian or Student-t distributions. This thesis proposes a novel algorithm to infer the parameters of Student-t noise models. The method is an extension of earlier work for models that are linear in their parameters to the non-linear multilayer perceptron, and it therefore broadens the range of models that can use a Student-t noise distribution. Because these techniques cannot stand alone, they must be combined with prediction models to improve performance. We combined them with some standard forecasting models: the multilayer perceptron, radial basis functions, linear regression, and linear regression with GARCH. These techniques and forecasting models were applied to two datasets from the UK energy markets: daily electricity demand (which is stationary) and gas forward prices (non-stationary). The results showed that these techniques provided good improvements in prediction performance.
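The multicomponent idea can be sketched with the simplest wavelet. This is an illustrative assumption, not the thesis's pipeline: a one-level Haar transform splits the series into approximation and detail components, each component is forecast separately (here by naive persistence, standing in for the thesis's models), and the forecasts are recombined by the inverse transform.

```python
import math

# One-level Haar multicomponent forecast: decompose, forecast each component
# with persistence, reconstruct. The persistence forecaster is a placeholder.
SQRT2 = math.sqrt(2.0)

def haar_decompose(x):
    """Split an even-length series into approximation and detail coefficients."""
    approx = [(a + b) / SQRT2 for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / SQRT2 for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def haar_reconstruct_pair(a, d):
    """Invert one Haar coefficient pair back to two series values."""
    return (a + d) / SQRT2, (a - d) / SQRT2

def multicomponent_forecast(x):
    """Forecast the next two points by persisting each wavelet component."""
    approx, detail = haar_decompose(x)
    return haar_reconstruct_pair(approx[-1], detail[-1])

series = [10.0, 12.0, 11.0, 13.0, 12.0, 14.0]
print(multicomponent_forecast(series))  # persistence repeats the last pair, up to rounding
```

With real forecasters in place of persistence, each component's model can exploit that component's smoother or more regular behaviour, which is the advantage the thesis found for the multicomponent approach over direct forecasting.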