966 results for "Maximum entropy methods"


Relevance: 30.00%

Abstract:

There are currently many devices and techniques to quantify trace elements (TEs) in various matrices, but their efficacy depends on the digestion methods (DMs) employed in the opening of such matrices which, although "organic", contain inorganic components that are difficult to solubilize. This study was carried out to evaluate the recovery of Fe, Zn, Cr, Ni, Cd and Pb contents in samples of composts and cattle, horse, chicken, quail, and swine manures, as well as in sewage sludges and peat. The DMs employed were microwave-assisted acid digestion with HNO3 (EPA 3051A); nitric-perchloric digestion with HNO3 + HClO4 in a digestion block (NP); dry ashing in a muffle furnace and solubilization of the residual ash in nitric acid (MDA); digestion with aqua regia solution (HCl:HNO3) in the digestion block (AR); and acid digestion with HCl and HNO3 + H2O2 (EPA 3050). The dry ashing method led to the greatest recovery of Cd in organic residues, but the EPA 3050 protocol can be an alternative method for the same purpose. Dry ashing should not be employed to determine the concentrations of Cr, Fe, Ni, Pb and Zn in the residues. Higher Cr and Fe contents are recovered when NP and EPA 3050 are employed in the opening of organic matrices. For most of the residues analyzed, AR is the most effective method for recovering Ni. The microwave-assisted digestion method (EPA 3051A) and EPA 3050 led to the highest recovery of Pb. The choice of the DM that provides maximum recovery of Zn depends on the organic residue and trace element analyzed.

Relevance: 30.00%

Abstract:

BACKGROUND: In May 2010, Switzerland introduced a heterogeneous smoking ban in the hospitality sector. While the law leaves room for exceptions in some cantons, it is comprehensive in others. This longitudinal study uses different measurement methods to examine airborne nicotine levels in hospitality venues and the level of personal exposure of non-smoking hospitality workers before and after implementation of the law. METHODS: Personal exposure to secondhand smoke (SHS) was measured by three different methods: we compared a passive sampler, the MoNIC (Monitor of NICotine) badge, with salivary cotinine and nicotine concentrations as well as questionnaire data. The badges allowed the number of passively smoked cigarettes to be estimated. They were placed at the venues as well as distributed to the participants for personal measurements. To assess personal exposure at work, a time-weighted average of the workplace badge measurements was calculated. RESULTS: Prior to the ban, smoke-exposed hospitality venues yielded a mean badge value of 4.48 (95% CI: 3.7 to 5.25; n = 214) cigarette equivalents/day. At follow-up, measurements in venues that had implemented a smoking ban declined significantly to an average of 0.31 (0.17 to 0.45; n = 37) (p = 0.001). Personal badge measurements also decreased significantly, from an average of 2.18 (1.31 to 3.05; n = 53) to 0.25 (0.13 to 0.36; n = 41) (p = 0.001). Spearman rank correlations between badge exposure measures and salivary measures were small to moderate (0.3 at maximum). CONCLUSIONS: Nicotine levels decreased significantly in all types of hospitality venues after implementation of the smoking ban. In-depth analyses demonstrated that a time-weighted average of the workplace badge measurements represented typical personal SHS exposure at work more reliably than personal exposure measures such as salivary cotinine and nicotine.
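For readers who want to reproduce the exposure metric, a minimal sketch of a time-weighted average of workplace badge measurements could look like the following Python snippet. The function name and the numerical values are hypothetical and are not taken from the study.

# Minimal sketch (not the study's code): time-weighted average (TWA) of workplace
# badge measurements, in cigarette equivalents/day, weighted by the hours each
# badge was exposed during a work shift.

def time_weighted_average(measurements):
    """measurements: list of (badge_value, hours_exposed) tuples."""
    total_hours = sum(hours for _, hours in measurements)
    if total_hours == 0:
        raise ValueError("no exposure time recorded")
    return sum(value * hours for value, hours in measurements) / total_hours

# Example with made-up values: two venues visited during one shift.
print(time_weighted_average([(4.5, 6.0), (0.3, 2.0)]))  # ~3.45 cigarette equivalents/day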

Relevance: 30.00%

Abstract:

Isothermal magnetization curves up to 23 T have been measured in Gd5Si1.8Ge2.2. We show that the values of the entropy change at the first-order magnetostructural transition, obtained from the Clausius-Clapeyron equation and the Maxwell relation, are coincident, provided the Maxwell relation is evaluated only within the transition region and the maximum applied field is high enough to complete the transition. These values are also in agreement with the entropy change obtained from differential scanning calorimetry. We also show that a simple phenomenological model based on the temperature and field dependence of the magnetization accounts for these results.
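For reference, the two thermodynamic routes compared above take the following standard forms (notation assumed here: M magnetization, H applied field, H_t(T) the transition field; unit prefactors such as \mu_0 depend on the convention used):

\Delta S_{\mathrm{CC}} = -\,\Delta M\,\frac{\mathrm{d}H_t}{\mathrm{d}T} \quad \text{(Clausius-Clapeyron)}, \qquad \Delta S_{\mathrm{MR}}(T) = \int_{0}^{H_{\max}} \left(\frac{\partial M}{\partial T}\right)_{H} \mathrm{d}H \quad \text{(Maxwell relation)}.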

Relevance: 30.00%

Abstract:

This paper is concerned with the derivation of new estimators and performance bounds for the problem of timing estimation of (linearly) digitally modulated signals. The conditional maximum likelihood (CML) method is adopted, in contrast to the classical low-SNR unconditional ML (UML) formulation that is systematically applied in the literature for the derivation of non-data-aided (NDA) timing-error detectors (TEDs). A new CML TED is derived and proved to be self-noise free, in contrast to the conventional low-SNR-UML TED. In addition, the paper provides a derivation of the conditional Cramér-Rao bound (CRB), which is higher (less optimistic) than the modified CRB (MCRB), the latter being reached only by decision-directed (DD) methods. It is shown that the CRB is a lower bound on the asymptotic statistical accuracy of the set of consistent estimators that are quadratic with respect to the received signal. Although the obtained bound is not general, it applies to most NDA synchronizers proposed in the literature. A closed-form expression of the conditional CRB is obtained, and numerical results confirm that the CML TED attains the new bound for moderate to high Eg/N0.
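For orientation, the conditional (deterministic) ML approach treats the unknown data symbols as nuisance parameters and concentrates the likelihood over them. For a linearly modulated signal in white Gaussian noise this leads to a projection-type timing criterion of the general textbook form below; this is an assumed generic statement, not necessarily the exact detector derived in the paper:

\hat{\tau} = \arg\max_{\tau}\ \mathbf{r}^{H}\mathbf{A}(\tau)\left(\mathbf{A}^{H}(\tau)\mathbf{A}(\tau)\right)^{-1}\mathbf{A}^{H}(\tau)\,\mathbf{r},

where \mathbf{r} is the vector of received samples and \mathbf{A}(\tau) collects the delayed shaping-pulse waveforms.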

Relevance: 30.00%

Abstract:

In this letter, we obtain the Maximum Likelihood Estimator of position in the framework of Global Navigation Satellite Systems. This theoretical result is the basis of a completely different approach to the positioning problem, in contrast to the conventional two-step position estimation, which consists of estimating the synchronization parameters of the in-view satellites and then performing a position estimation with that information. To the authors' knowledge, this is a novel approach which copes with signal fading and mitigates multipath and jamming interferences. Besides, the concept of Position-based Synchronization is introduced, which states that synchronization parameters can be recovered from a user position estimate. We provide computer simulation results showing the robustness of the proposed approach in fading multipath channels. The Root Mean Square Error performance of the proposed algorithm is compared to that achieved with state-of-the-art synchronization techniques. A Sequential Monte Carlo based method is used to deal with the multivariate optimization problem resulting from the ML solution in an iterative way.
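In outline (generic notation assumed here, not necessarily the letter's exact formulation): the conventional two-step approach estimates one delay per satellite and then solves for position, whereas the direct ML formulation expresses each delay as a function of the unknown receiver position \mathbf{p} and clock bias \delta t,

\tau_i(\mathbf{p}, \delta t) = \frac{\lVert \mathbf{p} - \mathbf{p}_i \rVert}{c} + \delta t,

and maximizes the likelihood of the received signal directly over (\mathbf{p}, \delta t), so that all satellite signals contribute jointly to a single position estimate.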

Relevance: 30.00%

Abstract:

Construction of multiple sequence alignments is a fundamental task in Bioinformatics. Multiple sequence alignments are used as a prerequisite in many Bioinformatics methods, and consequently the quality of such methods can be critically dependent on the quality of the alignment. However, automatic construction of a multiple sequence alignment for a set of remotely related sequences does not always provide biologically relevant alignments. Therefore, there is a need for an objective approach for evaluating the quality of automatically aligned sequences. The profile hidden Markov model is a powerful approach in comparative genomics. In the profile hidden Markov model, the symbol probabilities are estimated at each conserved alignment position. This can increase the dimension of the parameter space and cause an overfitting problem. These two research problems are both related to conservation. We have developed statistical measures for quantifying the conservation of multiple sequence alignments. Two types of methods are considered: those identifying conserved residues in an alignment position, and those calculating positional conservation scores. The positional conservation score was exploited in a statistical prediction model for assessing the quality of multiple sequence alignments. The residue conservation score was used as part of the emission probability estimation method proposed for profile hidden Markov models. The predicted alignment quality scores correlated highly with the correct alignment quality scores, indicating that our method is reliable for assessing the quality of any multiple sequence alignment. The comparison of the emission probability estimation method with the maximum likelihood method showed that the number of estimated parameters in the model was dramatically decreased, while the same level of accuracy was maintained. To conclude, we have shown that conservation can be successfully used in the statistical model for alignment quality assessment and in the estimation of emission probabilities in profile hidden Markov models.
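The abstract does not spell out its conservation measures, so the following is only an illustrative stand-in: a common positional conservation score based on the normalized Shannon entropy of the residue distribution in one alignment column (Python sketch, hypothetical function name).

# Illustrative only: 1 minus the normalized Shannon entropy of the residue
# distribution in an alignment column. The thesis' own scores may differ.
import math
from collections import Counter

def column_conservation(column, alphabet_size=20):
    """column: residues at one alignment position; gaps ('-') are ignored."""
    residues = [r for r in column if r != '-']
    if not residues:
        return 0.0
    counts = Counter(residues)
    n = len(residues)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return 1.0 - entropy / math.log2(alphabet_size)  # 1 = fully conserved

print(column_conservation("AAAAA"))  # 1.0 (fully conserved column)
print(column_conservation("ARNDC"))  # lower score, poorly conserved column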

Relevance: 30.00%

Abstract:

This work presents a comparison between three analytical methods developed for the simultaneous determination of eight quinolones regulated by the European Union (marbofloxacin, ciprofloxacin, danofloxacin, enrofloxacin, difloxacin, sarafloxacin, oxolinic acid and flumequine) in pig muscle, using liquid chromatography with fluorescence detection (LC-FD), liquid chromatography-mass spectrometry (LC-MS) and liquid chromatography-tandem mass spectrometry (LC-MS/MS). The procedures involve an extraction of the quinolones from the tissues, a clean-up and preconcentration step for the analytes by solid-phase extraction, and a subsequent liquid chromatographic analysis. The limits of detection of the methods ranged from 0.1 to 2.1 ng g−1 using LC-FD, from 0.3 to 1.8 ng g−1 using LC-MS and from 0.2 to 0.3 ng g−1 using LC-MS/MS, while inter- and intra-day variability was under 15% in all cases. Most of these values are notably lower than the maximum residue limits established by the European Union for quinolones in pig tissues. The methods have been applied to the determination of quinolones in six different commercial pig muscle samples purchased in different supermarkets located in the city of Granada (south-east Spain).

Relevance: 30.00%

Abstract:

Rural electrification is characterized by geographical dispersion of the population, low consumption, high investment per consumer and high cost. Solar radiation, moreover, constitutes an inexhaustible source of energy, and photovoltaic panels are used to convert it into electricity. In this study, the manufacturer's equations for the current and power of small photovoltaic systems were adjusted to field conditions. The mathematical analysis was performed on the rural photovoltaic system I-100 from ISOFOTON, with a power of 300 Wp, located at the Experimental Farm Lageado of FCA/UNESP. To develop these equations, the equivalent circuit of the photovoltaic cell was studied, and iterative numerical methods were applied to determine the electrical parameters and to check how well the equations reported in the literature match real operating conditions. A simulation of the photovoltaic panel was then proposed through mathematical equations adjusted to the local radiation data. The resulting equations provide realistic answers to the user and may assist in the design of these systems, since the calculated maximum power limit indicates the supply of energy that can be guaranteed. This realistic sizing helps establish the possible applications of solar energy for the rural producer and indicates the actual possibilities of generating electricity from the sun.
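A hedged sketch of the kind of iterative calculation described above: the implicit single-diode equation of a photovoltaic cell solved for the current with Newton's method (Python). All parameter values below are illustrative assumptions, not those of the ISOFOTON I-100 system.

# Single-diode model: I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh,
# solved for I at a given voltage V by Newton iteration. Parameters are made up.
import math

def cell_current(V, Iph=3.3, I0=1e-9, Rs=0.02, Rsh=300.0, n=1.3, Vt=0.0257, iters=50):
    I = Iph  # initial guess: the photogenerated current
    for _ in range(iters):
        e = math.exp((V + I * Rs) / (n * Vt))
        f = Iph - I0 * (e - 1.0) - (V + I * Rs) / Rsh - I
        df = -I0 * e * Rs / (n * Vt) - Rs / Rsh - 1.0
        I -= f / df
    return I

# Example: sweep a single cell's voltage and report the maximum power point.
points = [(v / 100.0, cell_current(v / 100.0)) for v in range(0, 61)]
V_mp, I_mp = max(points, key=lambda p: p[0] * p[1])
print(f"Pmax = {V_mp * I_mp:.2f} W at V = {V_mp:.2f} V")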

Relevance: 30.00%

Abstract:

This study compares the precision of three image classification methods, two from remote sensing and one from geostatistics, applied to areas cultivated with citrus. The 5,296.52 ha study area is located in the city of Araraquara, in the central region of the state of São Paulo (SP), Brazil. The multispectral image from the CCD/CBERS-2B satellite was acquired in 2009 and processed with the Geographic Information System (GIS) SPRING. Three classification methods were used, one unsupervised (Cluster) and two supervised (Indicator Kriging/IK and Maximum Likelihood/Maxver), in addition to an on-screen classification taken as the field check. The reliability of the classifications was evaluated by the Kappa index. According to the Kappa index, the Indicator Kriging method obtained the highest degree of reliability for bands 2 and 4, while the Cluster method applied to band 2 (green) produced the best-quality classification among all the methods. Indicator Kriging was the classifier whose total citrus area came closest to the field check, underestimating it by 3.01%, whereas Maxver overestimated the total citrus area by 42.94%.
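For reference, the Kappa index used above can be computed from a classification confusion matrix as in the Python sketch below; the counts are made up for illustration and are not the study's data.

# Cohen's Kappa from a confusion matrix (rows = reference/field check,
# columns = classifier output). The example matrix is hypothetical.

def kappa(confusion):
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    expected = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(len(confusion))
    )
    return (observed - expected) / (1.0 - expected)

example = [[120, 10], [15, 80]]  # hypothetical citrus / non-citrus counts
print(f"Kappa = {kappa(example):.3f}")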

Relevance: 30.00%

Abstract:

Due to the lack of maximum rainfall equations for most locations in Mato Grosso do Sul State, hydraulic engineering projects have had to rely on information from the meteorological stations closest to the project site. Alternative methods, such as the 24-hour rain disaggregation method applied to daily rainfall data, can work because rain gauge stations are more widely available and have longer observation records. Based on this approach, the objective of this study was to estimate maximum rainfall equations for Mato Grosso do Sul State by adjusting the 24-hour rain disaggregation method to data obtained from the recording rain gauge stations of Dourado and Campo Grande. For this purpose, data from 105 rainfall stations available in the ANA (National Water Agency) database were used. Based on the results, we concluded that the intense rainfall equations obtained by pluviogram analysis showed coefficients of determination above 99%, and that the performance of the 24-hour rain disaggregation method was classified as excellent, based on the relative average error and Willmott's (1982) concordance index.
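To make the disaggregation idea concrete, the Python sketch below converts a daily maximum rainfall into shorter durations with multiplicative ratios. The coefficients shown are the commonly cited CETESB (1986) values for Brazil and are included only as an assumption for illustration; the study may use locally adjusted ratios.

# Hedged illustration of 24-hour rain disaggregation: a 1-day maximum rainfall
# is converted to shorter durations by multiplicative coefficients.
COEFFICIENTS = {            # duration ratio relative to the reference duration
    "24 h / 1 day": 1.14,
    "12 h / 24 h": 0.85,
    "6 h / 24 h": 0.72,
    "1 h / 24 h": 0.42,
    "30 min / 1 h": 0.74,
}

def disaggregate(daily_max_mm):
    rain_24h = daily_max_mm * COEFFICIENTS["24 h / 1 day"]
    return {
        "24 h": rain_24h,
        "12 h": rain_24h * COEFFICIENTS["12 h / 24 h"],
        "6 h": rain_24h * COEFFICIENTS["6 h / 24 h"],
        "1 h": rain_24h * COEFFICIENTS["1 h / 24 h"],
        "30 min": rain_24h * COEFFICIENTS["1 h / 24 h"] * COEFFICIENTS["30 min / 1 h"],
    }

print(disaggregate(100.0))  # hypothetical 100 mm daily maximum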

Relevance: 30.00%

Abstract:

In this study, feature selection in classification problems is highlighted. The role of feature selection methods is to select important features by discarding redundant and irrelevant features in the data set; we investigated this by using fuzzy entropy measures. We developed a fuzzy entropy based feature selection method using Yu's similarity and tested it with a similarity classifier, also based on Yu's similarity, on a real-world dermatological data set. By performing feature selection based on fuzzy entropy measures before classification, the empirical results were very promising: the highest classification accuracy of 98.83% was achieved when testing our similarity measure on the data set. The results were then compared with results previously obtained using different similarity classifiers and show better accuracy than those achieved before. The methods used helped to reduce the dimensionality of the data set and to speed up the computation time of the learning algorithm, and therefore simplified the classification task.
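As an illustration only (the paper's measure is built on Yu's similarity, which is not reproduced here), the Python sketch below ranks features by De Luca-Termini fuzzy entropy and keeps the least ambiguous ones; the membership values are made up.

# Stand-in for entropy-based feature ranking: features whose fuzzy membership
# values carry high entropy are considered less informative and are discarded.
import math

def fuzzy_entropy(memberships):
    """memberships: degrees of membership in (0, 1)."""
    h = 0.0
    for m in memberships:
        if 0.0 < m < 1.0:
            h -= m * math.log(m) + (1.0 - m) * math.log(1.0 - m)
    return h / len(memberships)

def select_features(feature_memberships, keep):
    """Rank features by ascending fuzzy entropy and keep the `keep` best."""
    ranked = sorted(feature_memberships, key=lambda kv: fuzzy_entropy(kv[1]))
    return [name for name, _ in ranked[:keep]]

features = {
    "f1": [0.95, 0.9, 0.97, 0.88],  # crisp memberships -> low entropy
    "f2": [0.5, 0.45, 0.55, 0.5],   # ambiguous memberships -> high entropy
    "f3": [0.8, 0.85, 0.9, 0.1],
}
print(select_features(features.items(), keep=2))  # ['f1', 'f3']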

Relevance: 30.00%

Abstract:

A field experiment was conducted for two consecutive years to study the effect of fertilizer application methods and of inter- and intra-row weed-crop competition durations on the density and biomass of different weeds and on the growth, grain yield and yield components of maize. The experimental treatments comprised two fertilizer application methods (side placement and below-seed placement) and inter- and intra-row weed-crop competition durations of 15, 30, 45, and 60 days after emergence (DAE) each, as well as throughout the crop growing period. Fertilizer application method did not affect weed density, weed biomass, or maize grain yield. Below-seed fertilizer placement generally resulted in lower mean weed dry weight and greater crop leaf area index, growth rate, grain weight per cob and 1000-grain weight. The minimum weed number and dry weight were recorded with inter-row or intra-row weed-crop competition for 15 DAE. The number of cobs per plant, grain weight per cob, 1000-grain weight and grain yield decreased with an increase in both inter-row and intra-row weed-crop competition durations. Maximum mean grain yields of 6.35 and 6.33 t ha-1 were recorded with inter-row and intra-row weed competition for 15 DAE, respectively.

Relevance: 30.00%

Abstract:

Baroreflex sensitivity was studied in the same group of conscious rats using vasoactive drugs (phenylephrine and sodium nitroprusside) administered by three different approaches: 1) bolus injection, 2) steady state (blood pressure (BP) changes produced in steps), and 3) ramp infusion (brief infusion over 30 s). The heart rate (HR) responses were evaluated by the mean index (mean ratio of all HR changes to mean arterial pressure (MAP) changes), by linear regression, and by the logistic method (maximum gain of the sigmoid curve fitted by a logistic function). The experiments were performed on three consecutive days. Basal MAP and resting HR were similar on all days of the study. Bradycardic responses evaluated by the mean index (-1.5 ± 0.2, -2.1 ± 0.2 and -1.6 ± 0.2 bpm/mmHg) and by linear regression (-1.8 ± 0.3, -1.4 ± 0.3 and -1.7 ± 0.2 bpm/mmHg) were similar for all three approaches used to change blood pressure. The tachycardic responses to decreases of MAP were similar when evaluated by linear regression (-3.9 ± 0.8, -2.1 ± 0.7 and -3.8 ± 0.4 bpm/mmHg). However, the tachycardic mean index (-3.1 ± 0.4, -6.6 ± 1 and -3.6 ± 0.5 bpm/mmHg) was higher when assessed by the steady-state method. The average gain evaluated by the logistic function (-3.5 ± 0.6, -7.6 ± 1.3 and -3.8 ± 0.4 bpm/mmHg) was similar to the reflex tachycardic values, but different from the bradycardic values. Since different ways of changing BP may alter afferent baroreceptor function, the MAP changes obtained during short periods of time (up to 30 s: bolus and ramp infusion) are more appropriate to prevent acute resetting. Assessment of baroreflex sensitivity by the mean index and by linear regression permits a separate analysis of the gains for reflex bradycardia and reflex tachycardia. Although two values of baroreflex sensitivity cannot be obtained from a single symmetric logistic function, this method has the advantage of better comparing the baroreflex sensitivity of animals with different basal blood pressures.
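A minimal Python sketch of two of the gain indices described above, the mean index and the linear-regression slope, computed on made-up phenylephrine data; this is not the study's analysis code.

# Mean index: mean of all (HR change / MAP change) ratios.
# Regression slope: least-squares slope of HR change on MAP change.

def mean_index(delta_hr, delta_map):
    return sum(h / m for h, m in zip(delta_hr, delta_map)) / len(delta_hr)

def regression_slope(delta_hr, delta_map):
    n = len(delta_hr)
    mx = sum(delta_map) / n
    my = sum(delta_hr) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(delta_map, delta_hr))
    sxx = sum((x - mx) ** 2 for x in delta_map)
    return sxy / sxx

d_map = [10, 20, 30, 40]     # mmHg rise after phenylephrine (hypothetical)
d_hr = [-18, -35, -50, -70]  # bpm reflex bradycardia (hypothetical)
print(mean_index(d_hr, d_map))        # gain in bpm/mmHg
print(regression_slope(d_hr, d_map))  # gain in bpm/mmHg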

Relevance: 30.00%

Abstract:

The aim of this work was to compare the performance of isotope-selective non-dispersive infrared spectrometry (IRIS) for the 13C-urea breath test with the combination of the 14C-urea breath test (14C-UBT), urease test and histologic examination for the diagnosis of H. pylori (HP) infection. Fifty-three duodenal ulcer patients were studied. All patients underwent gastroscopy to detect HP by the urease test, histologic examination and 14C-UBT. To be included in the study, the results of the three tests had to be concordant. Within one month after admission to the study, the patients were submitted to IRIS, with breath samples collected before and 30 min after the ingestion of 75 mg of 13C-urea dissolved in 200 ml of orange juice. The samples were mailed and analyzed 11.5 (4-21) days after collection. Data were analyzed statistically by the chi-square and Mann-Whitney tests and by the Spearman correlation coefficient. Twenty-six patients were HP positive and 27 negative. There was 100% agreement between the IRIS results and the HP status determined by the other three methods. Using a cutoff value of delta-over-baseline (DOB) above 4.0, IRIS showed a mean value of 19.38 (minimum = 4.2, maximum = 41.3, SD = 10.9) for HP-positive patients and a mean value of 0.88 (minimum = 0.10, maximum = 2.5, SD = 0.71) for negative patients. Using a cutoff value corresponding to 0.800% CO2/weight (kg), the 14C-UBT showed a mean value of 2.78 (minimum = 0.89, maximum = 5.22, SD = 1.18) in HP-positive patients; HP-negative patients showed a mean value of 0.37 (minimum = 0.13, maximum = 0.77, SD = 0.17). IRIS is a low-cost, easy-to-manage, highly sensitive and specific test for H. pylori detection. Storing and mailing the samples did not interfere with the performance of the test.
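A simple Python illustration of the decision rule reported above: a 13C-urea breath test is called H. pylori positive when the delta-over-baseline (DOB) value, i.e. the 30-min sample minus the pre-ingestion baseline, exceeds 4.0.

def hp_positive(dob_30min, cutoff=4.0):
    # DOB cutoff taken from the abstract; samples above it are classified positive.
    return dob_30min > cutoff

print(hp_positive(19.4))  # True  (close to the study's mean positive value)
print(hp_positive(0.9))   # False (close to the study's mean negative value)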

Relevance: 30.00%

Abstract:

Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied for improving the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods for selecting the best process alternative as well as optimal operating conditions are needed.

In this thesis, a unified method is developed for the analysis and design of the following single-column fixed-bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR-SR). The method is based on the equilibrium theory of chromatography with an assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse. It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form.

The developed design method allows predicting the feasible range of operating parameters that lead to the desired product purities. It can be applied for the calculation of first estimates of optimal operating conditions, the analysis of process robustness, and the early-stage evaluation of different process alternatives. The design method is utilized to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed.

Applicability of the equilibrium design for real, non-ideal separation problems is evaluated by means of numerical simulations. Due to the assumption of infinite column efficiency, the developed design method is most applicable for high-performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects. The method is based on a simple procedure applied to a single conventional chromatogram. Applicability of the approach for the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach works the better, the higher the column efficiency and the lower the purity constraints are.
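For reference, the competitive (binary) Langmuir adsorption isotherm mentioned above has the standard form (notation assumed here: q_i adsorbed-phase concentration, c_i liquid-phase concentration, N saturation capacity, b_i equilibrium constants):

q_i = \frac{N\, b_i c_i}{1 + b_1 c_1 + b_2 c_2}, \qquad i = 1, 2.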