950 results for Linear combining


Relevance: 30.00%

Abstract:

Twelve samples with different grain sizes were prepared by normal grain growth and by primary recrystallization, and the hysteresis dissipated energy was measured by a quasi-static method. Results showed a linear relation between hysteresis energy loss and the inverse of grain size, here called Mager's law, for maximum inductions from 0.6 to 1.5 T, and a Steinmetz power-law relation between hysteresis loss and maximum induction for all samples. The combined effect is better described by a Mager's law whose coefficients follow the Steinmetz law.
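As a worked illustration of that combination (notation ours, not from the abstract): writing W_h for the hysteresis loss per cycle, g for the grain size and B_max for the maximum induction, the two laws read

\[
W_h = a + \frac{b}{g} \quad \text{(Mager's law, fixed } B_{\max}\text{)}, \qquad
W_h = k\,B_{\max}^{\alpha} \quad \text{(Steinmetz law, fixed } g\text{)},
\]

and the combined description is a Mager's law with Steinmetz-type coefficients,

\[
W_h(g, B_{\max}) = a(B_{\max}) + \frac{b(B_{\max})}{g}, \qquad
a(B_{\max}) = k_a B_{\max}^{\alpha_a}, \quad b(B_{\max}) = k_b B_{\max}^{\alpha_b}.
\]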

Relevance: 30.00%

Abstract:

New differential linear coherent scattering coefficient, mu(CS), data for four biological tissue types (pork fat, chicken tendon, and adipose and fibroglandular human breast tissue), covering a large momentum transfer interval (0.07 <= q <= 70.5 nm(-1)) and resulting from combining WAXS and SAXS data, are presented in order to emphasize the need to update the default database by including molecular interference and large-scale arrangement effects. The results show that the differential linear coherent scattering coefficient is influenced by large-scale arrangement in the low momentum transfer region, mainly due to collagen fibrils for the chicken tendon and fibroglandular breast samples and to triacylglycerides for the pork fat and adipose breast samples, while at high momentum transfer mu(CS) reflects molecular interference effects related to water for the chicken tendon and fibroglandular samples and to fatty acids for the pork fat and adipose samples. (C) 2009 Elsevier B.V. All rights reserved.
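For readers outside the scattering community, the momentum transfer q quoted above is the standard scattering variable (general background, not taken from the abstract):

\[
q = \frac{4\pi}{\lambda}\,\sin\!\left(\frac{\theta}{2}\right),
\]

where λ is the wavelength and θ the scattering angle; the low-q end of the interval probes large-scale arrangements (the SAXS regime), while the high-q end probes molecular and interatomic correlations (the WAXS regime).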

Relevance: 30.00%

Abstract:

A switch-mode assisted linear amplifier (SMALA) combining a linear (Class B) and a switch-mode (Class D) amplifier is presented. The usual single hysteretic-controlled half-bridge current dumping stage is replaced by two parallel buck converter stages, in a parallel voltage-controlled topology. These operate independently: one buck converter sources current to assist the upper Class B output device, and a complementary converter sinks current to assist the lower device. This topology lends itself to a novel control approach with a dead-band at low power levels where neither Class D amplifier assists, allowing the Class B amplifier to supply the load without interference and ensuring high fidelity. A 20 W implementation demonstrates 85% efficiency, with distortion below 0.08% measured across the full audio bandwidth at 15 W. The Class D amplifier begins assisting at 2 W; below this level, distortion is below 0.03%. Complete circuitry is given, showing the simplicity of the additional Class D amplifier and its corresponding control circuitry.
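A minimal behavioral sketch of the dead-band decision described above (the threshold I_DEAD and the signal names are hypothetical, not the authors' circuit):

```python
# Behavioral sketch of SMALA dead-band control (illustrative only).
I_DEAD = 0.5  # assumed dead-band current threshold [A]

def assist_mode(i_load: float) -> str:
    """Decide which buck converter assists for a given load current."""
    if abs(i_load) <= I_DEAD:
        # Inside the dead-band: Class B alone drives the load (high fidelity).
        return "class_b_only"
    if i_load > 0:
        # The sourcing converter assists the upper Class B output device.
        return "buck_source_assists"
    # The complementary converter sinks current to assist the lower device.
    return "buck_sink_assists"

print([assist_mode(i) for i in (-2.0, -0.2, 0.0, 0.3, 1.5)])
```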

Relevance: 30.00%

Abstract:

This paper describes the development and applications of a super-resolution method known as Super-Resolution Variable-Pixel Linear Reconstruction. The algorithm works by combining different lower-resolution images in order to obtain, as a result, a higher-resolution image. We show that it can make significant spatial resolution improvements to satellite images of the Earth's surface, allowing recognition of objects with sizes approaching the limiting spatial resolution of the lower-resolution images. The algorithm is based on the Variable-Pixel Linear Reconstruction algorithm developed by Fruchter and Hook, a well-known method in astronomy but never before used for Earth remote sensing purposes. The algorithm preserves photometry, can weight input images according to the statistical significance of each pixel, and removes the effect of geometric distortion on both image shape and photometry. In this paper, we describe its development for remote sensing purposes, show that the algorithm remains useful on images as different from astronomical images as remote sensing ones, and show applications to: 1) a set of simulated multispectral images obtained from a real Quickbird image; and 2) a set of real multispectral Landsat Enhanced Thematic Mapper Plus (ETM+) images. These examples show that the algorithm provides a substantial improvement in limiting spatial resolution for both simulated and real data sets without significantly altering the multispectral content of the input low-resolution images, without amplifying the noise, and with very few artifacts.
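A highly simplified sketch of the shift-and-add idea behind variable-pixel linear reconstruction ("drizzle"): each low-resolution pixel deposits its flux, with a weight, onto a finer output grid. Real drizzle shrinks pixel footprints, uses fractional pixel overlap, and handles geometric distortion; this toy version (all names and parameters are ours) only illustrates the accumulation of flux and weights.

```python
import numpy as np

def naive_drizzle(images, shifts, scale=2):
    """Combine low-res images onto a grid `scale` times finer.

    images: list of 2-D arrays; shifts: per-image (dy, dx) subpixel offsets.
    Toy nearest-cell deposit; real drizzle spreads each pixel's 'drop'
    over the output cells it overlaps.
    """
    h, w = images[0].shape
    flux = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(flux)
    for img, (dy, dx) in zip(images, shifts):
        for y in range(h):
            for x in range(w):
                oy = int(round((y + dy) * scale))
                ox = int(round((x + dx) * scale))
                if 0 <= oy < h * scale and 0 <= ox < w * scale:
                    flux[oy, ox] += img[y, x]   # accumulate flux
                    weight[oy, ox] += 1.0       # accumulate weight
    # Dividing accumulated flux by accumulated weight preserves photometry.
    out = np.where(weight > 0, flux / np.maximum(weight, 1.0), 0.0)
    return out, weight

# Four dithered 8x8 frames combined onto a 16x16 grid.
imgs = [np.random.rand(8, 8) for _ in range(4)]
hi_res, w = naive_drizzle(imgs, [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5)])
print(hi_res.shape)
```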

Relevance: 30.00%

Abstract:

We conduct a large-scale comparative study on linearly combining superparent-one-dependence estimators (SPODEs), a popular family of semi-naive Bayesian classifiers. Altogether, 16 model selection and weighting schemes, 58 benchmark data sets, and various statistical tests are employed. This paper's main contributions are threefold. First, it formally presents each scheme's definition, rationale, and time complexity, and hence can serve as a comprehensive reference for researchers interested in ensemble learning. Second, it offers a bias-variance analysis of each scheme's classification error performance. Third, it identifies effective schemes that meet various needs in practice, leading to accurate and fast classification algorithms with an immediate and significant impact on real-world applications. Another important feature of our study is the use of a variety of statistical tests to evaluate multiple learning methods across multiple data sets.
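To make the ensemble concrete, a linear combination of SPODE outputs has the generic form sketched below (our own notation; the 16 schemes in the paper differ in how the combining weights are selected):

```python
import numpy as np

def linear_spode_combine(spode_probs, weights):
    """Linearly combine class-posterior estimates from several SPODEs.

    spode_probs: array of shape (n_spodes, n_classes), one row per
    superparent-one-dependence estimator; weights: length n_spodes.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize the combining weights
    p = w @ np.asarray(spode_probs)      # weighted average of posteriors
    return p / p.sum()                   # renormalize to a distribution

# Example: three SPODEs, two classes, uniform weights (AODE-style averaging).
print(linear_spode_combine([[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]], [1, 1, 1]))
```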

Relevance: 30.00%

Abstract:

BACKGROUND: Obesity is strongly associated with major depressive disorder (MDD) and various other diseases. Genome-wide association studies have identified multiple risk loci robustly associated with body mass index (BMI). In this study, we aimed to investigate whether a genetic risk score (GRS) combining multiple BMI risk loci might have utility in prediction of obesity in patients with MDD. METHODS: Linear and logistic regression models were conducted to predict BMI and obesity, respectively, in three independent large case-control studies of major depression (Radiant, GSK-Munich, PsyCoLaus). The analyses were first performed in the whole sample and then separately in depressed cases and controls. An unweighted GRS was calculated by summation of the number of risk alleles. A weighted GRS was calculated as the sum of risk alleles at each locus multiplied by their effect sizes. Receiver operating characteristic (ROC) analysis was used to compare the discriminatory ability of predictors of obesity. RESULTS: In the discovery phase, a total of 2,521 participants (1,895 depressed patients and 626 controls) were included from the Radiant study. Both unweighted and weighted GRS were highly associated with BMI (P <0.001) but explained only a modest amount of variance. Adding 'traditional' risk factors to GRS significantly improved the predictive ability with the area under the curve (AUC) in the ROC analysis, increasing from 0.58 to 0.66 (95% CI, 0.62-0.68; χ(2) = 27.68; P <0.0001). Although there was no formal evidence of interaction between depression status and GRS, there was further improvement in AUC in the ROC analysis when depression status was added to the model (AUC = 0.71; 95% CI, 0.68-0.73; χ(2) = 28.64; P <0.0001). We further found that the GRS accounted for more variance of BMI in depressed patients than in healthy controls. Again, GRS discriminated obesity better in depressed patients compared to healthy controls. We later replicated these analyses in two independent samples (GSK-Munich and PsyCoLaus) and found similar results. CONCLUSIONS: A GRS proved to be a highly significant predictor of obesity in people with MDD but accounted for only modest amount of variance. Nevertheless, as more risk loci are identified, combining a GRS approach with information on non-genetic risk factors could become a useful strategy in identifying MDD patients at higher risk of developing obesity.
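The two scores defined in METHODS are simple sums; a sketch with hypothetical variable names (not the study's code):

```python
import numpy as np

def genetic_risk_scores(genotypes, effect_sizes):
    """genotypes: (n_subjects, n_loci) risk-allele counts in {0, 1, 2};
    effect_sizes: per-locus effect estimates (e.g., per-allele betas for BMI)."""
    g = np.asarray(genotypes, dtype=float)
    unweighted = g.sum(axis=1)               # total count of risk alleles
    weighted = g @ np.asarray(effect_sizes)  # allele counts times effect sizes
    return unweighted, weighted

# Example: two subjects, three loci.
u, w = genetic_risk_scores([[0, 1, 2], [2, 2, 0]], [0.10, 0.05, 0.20])
print(u, w)
```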

Relevance: 30.00%

Abstract:

In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR), with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH, and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist of combining univariate specification tests. Specifically, we combine tests across equations using the MC test procedure to avoid Bonferroni-type bounds. Since non-Gaussian based tests are not pivotal, we apply the "maximized MC" (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized with respect to these nuisance parameters to control the test's significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries, and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
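The core of a Monte Carlo (MC) test in Dufour's sense is the simulated p-value below; this is a sketch with our own toy statistic, not the paper's test battery. MMC then maximizes this p-value over the nuisance parameters of the null.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_pvalue(stat_obs, simulate_stat, n_rep=99):
    """Monte Carlo p-value: rank of the observed statistic among
    n_rep statistics simulated under the null hypothesis."""
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    # The +1 in numerator and denominator makes the test exact in finite samples.
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

# Toy example: squared-mean statistic under an i.i.d. standard normal null.
x = rng.normal(size=60)
stat = x.mean() ** 2 * len(x)
print(mc_pvalue(stat, lambda r: r.normal(size=60).mean() ** 2 * 60))
```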

Relevance: 30.00%

Abstract:

A bit-level processing (BLP) based linear CDMA detector is derived following the principle of minimum variance distortionless response (MVDR). The combining taps for the MVDR detector are determined from (1) the covariance matrix of the matched filter output, and (2) the corresponding row (or column) of the user correlation matrix. Due to the interference suppression capability of MVDR and the fact that no inversion of the user correlation matrix is involved, the influence of the synchronisation errors is greatly reduced. The detector performance is demonstrated via computer simulations (both synchronisation errors and intercell interference are considered).
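For reference, the classical MVDR solution (a standard result, not copied from the paper): with R the covariance matrix of the matched filter output and d the desired user's column of the user correlation matrix, the distortionless minimum-variance combining taps are

\[
\mathbf{w} \;=\; \arg\min_{\mathbf{w}} \ \mathbf{w}^{H}\mathbf{R}\,\mathbf{w}
\quad \text{s.t.} \quad \mathbf{w}^{H}\mathbf{d} = 1
\qquad\Longrightarrow\qquad
\mathbf{w} \;=\; \frac{\mathbf{R}^{-1}\mathbf{d}}{\mathbf{d}^{H}\mathbf{R}^{-1}\mathbf{d}} \,.
\]

Note that only R (estimated from data) and d are needed; the full inverse of the user correlation matrix never appears, which is consistent with the robustness to synchronisation errors claimed above.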

Relevance: 30.00%

Abstract:

New ways of combining observations with numerical models are discussed, in which the size of the state space can be very large and the model can be highly nonlinear. The observations of the system can also be related to the model variables in highly nonlinear ways, making this data-assimilation (or inverse) problem highly nonlinear. First we discuss the connection between data assimilation and inverse problems, including regularization. We explore the choice of proposal density in a Particle Filter and show how the 'curse of dimensionality' might be beaten. In the standard Particle Filter, ensembles of model runs are propagated forward in time until observations are encountered, rendering it a pure Monte Carlo method. In large-dimensional systems this is very inefficient, and very large numbers of model runs are needed to solve the data-assimilation problem realistically. In our approach we steer all model runs towards the observations, resulting in a much more efficient method. By further ensuring almost equal weights, we avoid performing model runs that turn out to be useless. Results are shown for the 40- and 1000-dimensional Lorenz 1995 model.
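For orientation, the standard (pure Monte Carlo) particle filter that the authors improve upon works as sketched below; the proposal-density steering and 'almost equal weight' steps of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, propagate, likelihood, y_obs):
    """One cycle of a standard bootstrap particle filter."""
    particles = propagate(particles)                  # run the model forward
    weights = weights * likelihood(y_obs, particles)  # weight by the observation
    weights = weights / weights.sum()
    n = len(weights)
    if 1.0 / np.sum(weights**2) < n / 2:              # effective sample size low?
        idx = rng.choice(n, size=n, p=weights)        # resample particles
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

# Toy scalar example: random-walk model, Gaussian observation of the state.
p = rng.normal(size=500)
w = np.full(500, 1 / 500)
p, w = particle_filter_step(
    p, w,
    propagate=lambda x: x + rng.normal(scale=0.1, size=x.shape),
    likelihood=lambda y, x: np.exp(-0.5 * (y - x) ** 2),
    y_obs=0.3,
)
print(np.sum(w * p))  # posterior mean estimate
```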

Relevance: 30.00%

Abstract:

Two fundamental processes usually arise in the production planning of many industries. The first consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are coupled in order to obtain a globally optimal solution. Setups that are typically present in lot sizing problems are relaxed together with integer frequencies of cutting patterns in the cutting problem. Therefore, a large-scale linear optimization problem arises, which is solved exactly by a column generation technique. It is worth noting that this new combined problem still captures the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present several sets of computational tests, analyzed over three different scenarios. These results show that, by combining the problems and using an exact method, it is possible to obtain significant gains when compared to the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
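To make the column generation idea concrete for the cutting side, here is a toy sketch (our own example data, using SciPy's HiGHS backend; the paper's combined model also carries lot sizing variables and storage costs): the restricted master's duals price a knapsack subproblem that proposes new cutting patterns.

```python
import numpy as np
from scipy.optimize import linprog

L = 10                                   # raw material length (toy data)
piece = np.array([3, 4, 5])              # piece lengths
demand = np.array([30, 20, 10])          # demand per piece type

cols = list(np.diag(L // piece))         # initial homogeneous patterns

for _ in range(20):
    A = np.array(cols).T                 # pieces x patterns
    # Restricted master LP: min total rolls s.t. A x >= demand, x >= 0.
    res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-demand,
                  bounds=(0, None), method="highs")
    y = -res.ineqlin.marginals           # duals of the demand constraints
    # Pricing: integer knapsack max y.a s.t. piece.a <= L (brute force).
    best, best_a = 0.0, None
    for a0 in range(L // piece[0] + 1):
        for a1 in range((L - a0 * piece[0]) // piece[1] + 1):
            a2 = (L - a0 * piece[0] - a1 * piece[1]) // piece[2]
            a = np.array([a0, a1, a2])
            if y @ a > best:
                best, best_a = y @ a, a
    if best <= 1 + 1e-9:                 # no column with negative reduced cost
        break
    cols.append(best_a)                  # add the new cutting pattern

print(f"LP rolls needed: {res.fun:.2f} using {len(cols)} patterns")
```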

Relevance: 30.00%

Abstract:

We consider multistage stochastic linear optimization problems combining joint dynamic probabilistic constraints with hard constraints. We develop a method for projecting decision rules onto hard constraints of wait-and-see type. We establish the relation between the original (infinite-dimensional) problem and approximating problems working with projections from different subclasses of decision policies. Considering the subclass of linear decision rules and a generalized linear model for the underlying stochastic process with noises that are Gaussian or truncated Gaussian, we show that the value and gradient of the objective and constraint functions of the approximating problems can be computed analytically.
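As background (generic form, not the paper's exact notation), a linear decision rule restricts the stage-t decision to an affine function of the noise history observed so far,

\[
x_t(\xi) \;=\; x_t^{0} \;+\; \sum_{s=1}^{t} X_{t,s}\,\xi_s ,
\]

so that optimization runs over the coefficients x_t^0 and X_{t,s} rather than over arbitrary measurable policies; the projection step described above then maps such rules back onto the hard wait-and-see constraints.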

Relevance: 30.00%

Abstract:

This paper reports on the development and optimization of a modified Quick, Easy, Cheap, Effective, Rugged and Safe (QuEChERS) based extraction technique coupled with a clean-up dispersive solid-phase extraction (dSPE) as a new, reliable and powerful strategy to enhance the extraction efficiency of free low-molecular-weight polyphenols in selected species of dietary vegetables. The process involves two simple steps. First, the homogenized samples are extracted and partitioned using an organic solvent and salt solution. Then, the supernatant is further extracted and cleaned using a dSPE technique. Final clear extracts of vegetables were concentrated under vacuum to near dryness and taken up in the initial mobile phase (0.1% formic acid and 20% methanol). The separation and quantification of free low-molecular-weight polyphenols from the vegetable extracts was achieved by ultra-high-pressure liquid chromatography (UHPLC) equipped with a photodiode array (PDA) detection system and a trifunctional high-strength silica capillary analytical column (HSS T3), specially designed for polar compounds. The performance of the method was assessed by studying the selectivity, linear dynamic range, limit of detection (LOD) and limit of quantification (LOQ), precision, trueness, and matrix effects. The validation parameters of the method showed satisfactory figures of merit. Good linearity (R² > 0.954; (+)-catechin in carrot samples) was achieved over the studied concentration range. Reproducibility was better than 3%. Consistent recoveries of polyphenols ranging from 78.4 to 99.9% were observed when all target vegetable samples were spiked at two concentration levels, with relative standard deviations (RSDs, n = 5) lower than 2.9%. The LODs and LOQs ranged from 0.005 μg mL−1 (trans-resveratrol, carrot) to 0.62 μg mL−1 (syringic acid, garlic) and from 0.016 μg mL−1 (trans-resveratrol, carrot) to 0.87 μg mL−1 ((+)-catechin, carrot), depending on the compound. The method was applied to study the occurrence of free low-molecular-weight polyphenols in eight selected dietary vegetables (broccoli, tomato, carrot, garlic, onion, red pepper, green pepper and beetroot), providing a valuable and promising tool for food quality evaluation.
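The abstract does not state how the LODs and LOQs were derived; a common calibration-based convention (an assumption on our part, not confirmed by the paper) is

\[
\mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad \mathrm{LOQ} = \frac{10\,\sigma}{S},
\]

where σ is the standard deviation of the blank response (or of the regression residuals) and S the slope of the calibration curve.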

Relevance: 30.00%

Abstract:

In this paper, short-term hydroelectric scheduling is formulated as a network flow optimization model and solved by interior point methods. The primal-dual and predictor-corrector versions of such interior point methods are developed, and the resulting matrix structure is explored. This structure leads to very fast iterations since it avoids computation and factorization of impedance matrices. For each time interval, the linear algebra reduces to the solution of two linear systems, whose sizes equal either the number of buses or the number of independent loops. Either matrix is invariant and can be factored off-line. As a consequence of such matrix manipulations, a linear system which changes at each iteration has to be solved, although its size is reduced to the number of generating units and does not depend on the number of time intervals. These methods were applied to IEEE and Brazilian power systems, and numerical results were obtained using a MATLAB implementation. Both interior point methods proved to be robust and achieved fast convergence for all instances tested. (C) 2004 Elsevier Ltd. All rights reserved.
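For context, each primal-dual iteration of this kind solves the Newton (KKT) system of the barrier problem; in generic LP form (standard textbook notation, not the paper's network-flow specialization), for min c^T x s.t. Ax = b, x >= 0:

\[
\begin{bmatrix} 0 & A^{T} & I \\ A & 0 & 0 \\ Z & 0 & X \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{bmatrix}
=
\begin{bmatrix} c - A^{T}y - z \\ b - Ax \\ \mu e - XZe \end{bmatrix},
\]

where X = diag(x) and Z = diag(z). Eliminating Δx and Δz yields the normal equations \(A D A^{T} \Delta y = r\) with \(D = Z^{-1}X\); it is the structure of this reduced system that the authors exploit, bringing it down, per time interval, to bus-sized or loop-sized factorizations.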
