26 results for Simulation-based methods
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
The aim of the present study was to assess the prevalence of inadequate nutrient intake in a group of adolescents from São Bernardo do Campo, SP, Brazil. Energy and nutrient intake data were obtained through 24-hour dietary recalls applied to 89 adolescents. The prevalence of inadequacy was calculated using the EAR cut-point method, after adjustment for within-person variability with the procedure developed by Iowa State University. The Dietary Reference Intakes (DRIs) were used as the reference values for intake. For nutrients without an established EAR, the intake distribution was compared with the AI. The highest prevalences of inadequacy in both sexes were observed for magnesium (99.3% for males and 81.8% for females), zinc (44.0% for males and 23.5% for females), vitamin C (57.2% for males and 59.9% for females), and folate (34.8% for females). The proportion of individuals with intake above the AI was negligible (less than 2.0%) in both sexes.
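At its core, the EAR cut-point method reduces to counting the fraction of usual intakes that fall below the EAR. A minimal sketch in Python; the within-person variability adjustment (Iowa State University procedure) is omitted, and the intake values and EAR are hypothetical:

```python
def prevalence_of_inadequacy(usual_intakes, ear):
    """EAR cut-point method: percentage of usual intakes below the EAR."""
    below = sum(1 for intake in usual_intakes if intake < ear)
    return 100.0 * below / len(usual_intakes)

# Hypothetical magnesium intakes (mg/day) against a hypothetical EAR of 340 mg
intakes = [210, 250, 300, 360, 410, 280]
print(prevalence_of_inadequacy(intakes, 340))  # 4 of 6 below the EAR
```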
Abstract:
The purpose of this paper is to propose a multiobjective optimization approach for solving the manufacturing cell formation problem, explicitly considering the performance of the resulting manufacturing system. Cells are formed so as to simultaneously minimize three conflicting objectives, namely, the level of work-in-process, the intercell moves and the total machinery investment. A genetic algorithm performs a search in the design space in order to approximate the Pareto-optimal set. The values of the objectives for each candidate solution in a population are assigned by running a discrete-event simulation, in which the model is automatically generated according to the number of machines and their distribution among cells implied by a particular solution. The potential of this approach is evaluated via its application to an illustrative example and a case from the relevant literature. The results obtained are analyzed, and it is concluded that this approach is capable of generating a set of alternative manufacturing cell configurations considering the optimization of multiple performance measures, greatly improving the decision-making process involved in planning and designing cellular systems. (C) 2010 Elsevier Ltd. All rights reserved.
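The Pareto-optimal set that the genetic algorithm approximates is defined through dominance among the three minimized objectives (work-in-process, intercell moves, machinery investment). A minimal sketch of the dominance test and non-dominated filtering, with hypothetical objective vectors standing in for simulation outputs:

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Keep only the non-dominated solutions of a population."""
    return [s for s in population
            if not any(dominates(o, s) for o in population if o is not s)]

# Hypothetical (WIP, intercell moves, investment) vectors for four cell designs
designs = [(10, 4, 300), (8, 6, 320), (12, 5, 310), (9, 5, 290)]
print(pareto_front(designs))  # (12, 5, 310) is dominated by (9, 5, 290)
```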
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug. However, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although they act on different facets of the monitored system and are not exclusive. This work presents a constrained-random simulation-based functional verification methodology where, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. To this end, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we have developed a second tool to generate functional coverage models that fit exactly to the PD-based input space. Both the input stimuli and the coverage model enhancements resulted in a notable increase in testbench efficiency compared to testbenches with traditional stimulation and coverage scenarios: 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
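The idea of pairing constrained-random stimulus generation with a coverage model that fits the same input space exactly can be sketched as follows. The parameter domains here are hypothetical toy values, not those of the PD formalism tooling described above:

```python
import random

# Hypothetical parameter domains: only valid value combinations are generated,
# and the coverage model enumerates exactly the same space.
domains = {"opcode": ["ADD", "SUB", "MUL"], "mode": [0, 1]}

def gen_stimulus(rng):
    """Constrained-random stimulus: each parameter drawn from its own domain."""
    return {name: rng.choice(values) for name, values in domains.items()}

# Coverage model built from the same domains: one bin per valid combination
coverage = {(op, m): False
            for op in domains["opcode"] for m in domains["mode"]}

rng = random.Random(42)
trials = 0
while not all(coverage.values()):
    s = gen_stimulus(rng)
    coverage[(s["opcode"], s["mode"])] = True
    trials += 1

print(f"full coverage of {len(coverage)} bins after {trials} stimuli")
```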
Abstract:
Here, we examine morphological changes in cortical thickness of patients with Alzheimer's disease (AD) using image analysis algorithms for brain structure segmentation and study automatic classification of AD patients using cortical and volumetric data. Cortical thickness of AD patients (n = 14) was measured using MRI cortical surface-based analysis and compared with healthy subjects (n = 20). Data were analyzed using an automated algorithm for tissue segmentation and classification. A Support Vector Machine (SVM) was applied over the volumetric measurements of subcortical and cortical structures to separate AD patients from controls. The group analysis showed cortical thickness reduction in the superior temporal lobe, parahippocampal gyrus, and entorhinal cortex in both hemispheres. We also found cortical thinning in the isthmus of the cingulate gyrus and middle temporal gyrus in the right hemisphere, as well as a reduction of the cortical mantle in areas previously shown to be associated with AD. We also confirmed that automatic classification algorithms (SVM) could be helpful to distinguish AD patients from healthy controls. Moreover, the same areas implicated in the pathogenesis of AD were the main parameters driving the classification algorithm. While the patient sample used in this study was relatively small, we expect that using a database of regional volumes derived from MRI scans of a large number of subjects will increase the SVM power of AD patient identification.
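The classification step can be illustrated with a deliberately simpler stand-in for the SVM: a nearest-centroid classifier over regional volume vectors. This is not the method used in the paper, only a sketch of supervised classification from volumetric features, and all volume values below are hypothetical:

```python
def fit_centroids(X, y):
    """Mean feature vector (centroid) per class label."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict(centroids, x):
    """Assign x to the class with the nearest centroid (squared Euclidean)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Hypothetical (hippocampal, entorhinal) volumes: AD patients vs. controls
X = [[2.1, 1.0], [2.3, 1.1], [3.0, 1.6], [3.2, 1.7]]
y = ["AD", "AD", "control", "control"]
model = fit_centroids(X, y)
print(predict(model, [2.2, 1.05]))  # near the AD centroid
```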
Abstract:
OBJECTIVE: To estimate the spatial intensity of urban violence events using wavelet-based methods and emergency room data. METHODS: Information on victims attended at the emergency room of a public hospital in the city of São Paulo, Southeastern Brazil, from January 1, 2002 to January 11, 2003 was obtained from hospital records. The spatial distribution of 3,540 events was recorded and a uniform random procedure was used to allocate records with incomplete addresses. Point processes and the wavelet analysis technique were used to estimate the spatial intensity, defined as the expected number of events per unit area. RESULTS: Of all georeferenced points, 59% were accidents and 40% were assaults. There is a non-homogeneous spatial distribution of the events, with high concentration in two districts and three large avenues in the southern area of the city of São Paulo. CONCLUSIONS: Hospital records combined with methodological tools to estimate intensity of events are useful to study urban violence. The wavelet analysis is useful in the computation of the expected number of events and their respective confidence bands for any sub-region and, consequently, in the specification of risk estimates that could be used in decision-making processes for public policies.
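Spatial intensity, the expected number of events per unit area, can be sketched with a simple grid (histogram) estimator; the paper's wavelet-based estimator, which also yields confidence bands, is substantially more involved. Coordinates below are hypothetical:

```python
def intensity_grid(points, xmin, xmax, ymin, ymax, nx, ny):
    """Events per unit area on an nx-by-ny grid (histogram intensity estimate)."""
    dx = (xmax - xmin) / nx
    dy = (ymax - ymin) / ny
    counts = [[0] * nx for _ in range(ny)]
    for x, y in points:
        i = min(int((x - xmin) / dx), nx - 1)  # clamp points on the upper edge
        j = min(int((y - ymin) / dy), ny - 1)
        counts[j][i] += 1
    cell_area = dx * dy
    return [[c / cell_area for c in row] for row in counts]

# Hypothetical georeferenced events on a unit square, 2x2 grid
events = [(0.1, 0.2), (0.3, 0.1), (0.8, 0.9)]
print(intensity_grid(events, 0.0, 1.0, 0.0, 1.0, 2, 2))
```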
Abstract:
The approach presented in this paper consists of an energy-based field-circuit coupling in combination with multi-physics simulation of the acoustic radiation of electrical machines. The proposed method is applied to a special switched reluctance motor with asymmetric pole geometry to improve the start-up torque. The pole shape has been optimized, subject to low torque ripple, in a previous study. The proposed approach here is used to analyze the impact of the optimization on the overall acoustic behavior. The field-circuit coupling is based on a temporary lumped-parameter model of the magnetic part incorporated into a circuit simulation based on the modified nodal analysis. The harmonic force excitation is calculated by means of stress tensor computation, and it is transformed to a mechanical mesh by mapping techniques. The structural dynamic problem is solved in the frequency domain using a finite-element modal analysis and superposition. The radiation characteristic is obtained from boundary element acoustic simulation. Simulation results of both rotor types are compared, and measurements of the drive are presented.
Abstract:
The correlation between the microdilution (MD), Etest® (ET), and disk diffusion (DD) methods was determined for amphotericin B, itraconazole and fluconazole. The minimal inhibitory concentration (MIC) of those antifungal agents was established for a total of 70 Candida spp. isolates from colonization and infection. The species distribution was: Candida albicans (n = 27), C. tropicalis (n = 17), C. glabrata (n = 16), C. parapsilosis (n = 8), and C. lusitaniae (n = 2). Non-Candida albicans Candida species showed higher MICs for the three antifungal agents when compared with C. albicans isolates. The overall concordance (based on the MIC value obtained within two dilutions) between the ET and the MD method was 83% for amphotericin B, 63% for itraconazole, and 64% for fluconazole. Considering the breakpoint, the agreement between the DD and MD methods was 71% for itraconazole and 67% for fluconazole. The DD zone diameters are highly reproducible and correlate well with the MD method, making agar-based methods a viable alternative to MD for susceptibility testing. However, data on agar-based tests for itraconazole and amphotericin B are still scarce. Thus, further research must still be carried out to ensure standardization for other antifungal agents. J. Clin. Lab. Anal. 23:324-330, 2009. (C) 2009 Wiley-Liss, Inc.
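Concordance "within two dilutions" compares MICs on the two-fold dilution scale: two methods agree when their MICs differ by at most two doubling steps. A minimal sketch with hypothetical MIC pairs:

```python
import math

def within_two_dilutions(mic_a, mic_b):
    """Agreement if the MICs differ by at most two two-fold dilution steps."""
    return abs(math.log2(mic_a) - math.log2(mic_b)) <= 2

def concordance_percent(mics_a, mics_b):
    """Percentage of isolate pairs whose MICs agree within two dilutions."""
    agree = sum(within_two_dilutions(a, b) for a, b in zip(mics_a, mics_b))
    return 100.0 * agree / len(mics_a)

# Hypothetical fluconazole MICs (ug/mL): Etest vs. microdilution, 4 isolates
etest = [0.25, 1.0, 8.0, 64.0]
micro = [0.5, 4.0, 2.0, 4.0]
print(concordance_percent(etest, micro))  # 3 of 4 pairs agree -> 75.0
```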
Abstract:
The final contents of total and individual trans-fatty acids of sunflower oil, produced during the deacidification step of physical refining were obtained using a computational simulation program that considered cis-trans isomerization reaction features for oleic, linoleic, and linolenic acids attached to the glycerol part of triacylglycerols. The impact of process variables, such as temperature and liquid flow rate, and of equipment configuration parameters, such as liquid height, diameter, and number of stages, that influence the retention time of the oil in the equipment was analyzed using the response-surface methodology (RSM). The computational simulation and the RSM results were used in two different optimization methods, aiming to minimize final levels of total and individual trans-fatty acids (trans-FA), while keeping neutral oil loss and final oil acidity at low values. The main goal of this work was to indicate that computational simulation, based on a careful modeling of the reaction system, combined with optimization could be an important tool for indicating better processing conditions in industrial physical refining plants of vegetable oils, concerning trans-FA formation.
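The optimization step, searching the process variables for a minimum of a fitted response surface, can be sketched with a hypothetical quadratic response for total trans-FA content as a function of temperature and liquid flow rate. All coefficients and ranges are invented for illustration, not taken from the study:

```python
def trans_fa_response(temp_c, flow):
    """Hypothetical fitted RSM polynomial: trans-FA content (illustrative %)."""
    return 0.5 + 0.0004 * (temp_c - 230) ** 2 + 0.02 * (flow - 10) ** 2

# Exhaustive search over a coarse grid of candidate operating conditions
candidates = [(trans_fa_response(t, f), t, f)
              for t in range(200, 261, 5)   # deacidification temperature, C
              for f in range(5, 16)]        # liquid flow rate, arbitrary units
best_value, best_temp, best_flow = min(candidates)
print(best_temp, best_flow, round(best_value, 3))
```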
Abstract:
The concentrations of the water-soluble inorganic aerosol species, ammonium (NH₄⁺), nitrate (NO₃⁻), chloride (Cl⁻), and sulfate (SO₄²⁻), were measured from September to November 2002 at a pasture site in the Amazon Basin (Rondônia, Brazil) (LBA-SMOCC). Measurements were conducted using a semi-continuous technique (Wet-annular denuder/Steam-Jet Aerosol Collector: WAD/SJAC) and three integrating filter-based methods, namely (1) a denuder-filter pack (DFP: Teflon and impregnated Whatman filters), (2) a stacked-filter unit (SFU: polycarbonate filters), and (3) a High Volume dichotomous sampler (HiVol: quartz fiber filters). Measurements covered the late dry season (biomass burning), a transition period, and the onset of the wet season (clean conditions). Analyses of the particles collected on filters were performed using ion chromatography (IC) and Particle-Induced X-ray Emission spectrometry (PIXE). Season-dependent discrepancies were observed between the WAD/SJAC system and the filter-based samplers. During the dry season, when PM2.5 (Dp ≤ 2.5 μm) concentrations were ~100 μg m⁻³, aerosol NH₄⁺ and SO₄²⁻ measured by the filter-based samplers were on average two times higher than those determined by the WAD/SJAC. Concentrations of aerosol NO₃⁻ and Cl⁻ measured with the HiVol during daytime, and with the DFP during day- and nighttime, also exceeded those of the WAD/SJAC by a factor of two. In contrast, aerosol NO₃⁻ and Cl⁻ measured with the SFU during the dry season were nearly two times lower than those measured by the WAD/SJAC. These differences declined markedly during the transition period and towards the cleaner conditions at the onset of the wet season (PM2.5 ~5 μg m⁻³), when filter-based samplers measured on average 40-90% less than the WAD/SJAC.
The differences were not due to consistent systematic biases of the analytical techniques, but were apparently a result of prevailing environmental conditions and different sampling procedures. For the transition period and the wet season, the significance of our results is reduced by the low number of data points. We argue that the observed differences are mainly attributable to (a) positive and negative filter sampling artifacts, (b) the presence of organic compounds and organosulfates on the filter substrates, and (c) a SJAC sampling efficiency of less than 100%.
Abstract:
The logic of proofs (lp) was proposed as Gödel's missed link between Intuitionistic and S4 proofs, but so far the tableau-based methods proposed for lp have not explored this closeness with S4 and contain rules whose analyticity is not immediately evident. We study possible formulations of analytic tableau proof methods for lp that preserve the subformula property. Two sound and complete tableau decision methods of increasing degree of analyticity are proposed, KELP and preKELP. The latter is particularly inspired by S4 proofs. The crucial role of proof constants in the structure of lp proof methods is analysed. In particular, a method for the abduction of proof constant specifications in strongly analytic preKELP proofs is presented; abduction heuristics and the complexity of the method are discussed.
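The subformula property that the KELP and preKELP methods preserve restricts tableau rule applications to subformulas of the input. A minimal sketch of subformula collection, with formulas encoded as nested tuples; the encoding is ours, for illustration only, not that of the paper:

```python
def subformulas(formula):
    """Collect a formula and all of its subformulas.

    Formulas are nested tuples, e.g. ('->', ('atom', 'p'), ('atom', 'q')),
    or ('proof', 't', ('atom', 'p')) for an lp-style proof assertion t:p.
    """
    found = {formula}
    for part in formula[1:]:
        if isinstance(part, tuple):       # skip atom names and proof constants
            found |= subformulas(part)
    return found

# t:p -> p, an lp-style reflection instance
f = ('->', ('proof', 't', ('atom', 'p')), ('atom', 'p'))
print(len(subformulas(f)))  # the formula, t:p, and p (p counted once)
```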
Abstract:
The application of laser-induced breakdown spectrometry (LIBS) aimed at the direct analysis of plant materials is a great challenge that still needs efforts for its development and validation. To this end, a series of experimental approaches has been carried out in order to show that LIBS can be used as an alternative to wet-acid-digestion-based methods for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data, when compared to univariate regression developed with line emission intensities. In the present work, the performance of univariate and multivariate calibration, the latter based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. The development of a specific PLSR model for each analyte and the selection of spectral regions containing only lines of the analyte of interest were the best conditions for the analysis. In this particular application, the models showed a similar performance, but PLSR seemed to be more robust due to a lower occurrence of outliers in comparison to the univariate method. The data suggest that efforts concerning sample presentation and the fitness of standards for LIBS analysis must be made in order to fulfill the boundary conditions for matrix-independent development and validation. (C) 2009 Elsevier B.V. All rights reserved.
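The univariate side of the comparison, calibrating one emission-line intensity against reference concentrations by least squares, can be sketched as follows. The PLSR models would require a multivariate library and are not reproduced; all intensity and concentration values are hypothetical:

```python
def fit_line(intensities, concentrations):
    """Ordinary least-squares slope and intercept for univariate calibration."""
    n = len(intensities)
    mx = sum(intensities) / n
    my = sum(concentrations) / n
    sxx = sum((x - mx) ** 2 for x in intensities)
    sxy = sum((x - mx) * (y - my) for x, y in zip(intensities, concentrations))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical Ca emission-line intensities vs. reference concentrations (g/kg)
intensity = [100.0, 220.0, 310.0, 405.0]
reference = [1.0, 2.2, 3.1, 4.05]
slope, intercept = fit_line(intensity, reference)
print(round(slope, 4), round(intercept, 4))
```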
Abstract:
Steady-state and time-resolved fluorescence measurements are reported for several crude oils and their saturates, aromatics, resins, and asphaltenes (SARA) fractions (saturates, aromatics and resins), isolated from maltene after pentane precipitation of the asphaltenes. There is a clear relationship between the American Petroleum Institute (API) grade of the crude oils and their fluorescence emission intensity and maxima. Dilution of the crude oil samples with cyclohexane results in a significant increase of emission intensity and a blue shift, which is a clear indication of the presence of energy-transfer processes between the emissive chromophores present in the crude oil. Both the fluorescence spectra and the mean fluorescence lifetimes of the three SARA fractions and their mixtures indicate that the aromatics and resins are the major contributors to the emission of crude oils. Total synchronous fluorescence scan (TSFS) spectral maps are preferable to steady-state fluorescence spectra for discriminating between the fractions, making TSFS maps a particularly interesting choice for the development of fluorescence-based methods for the characterization and classification of crude oils. More detailed studies, using a much wider range of excitation and emission wavelengths, are necessary to determine the utility of time-resolved fluorescence (TRF) data for this purpose. Preliminary models constructed using TSFS spectra from 21 crude oil samples show a very good correlation (R² > 0.88) between the calculated and measured values of API and the SARA fraction concentrations. The use of models based on a fast fluorescence measurement may thus be an alternative to tedious and time-consuming chemical analysis in refineries.
Abstract:
The leaf area index (LAI) of fast-growing Eucalyptus plantations is highly dynamic both seasonally and interannually, and is spatially variable depending on pedo-climatic conditions. LAI is very important in determining the carbon and water balance of a stand, but is difficult to measure during a complete stand rotation and at large scales. Remote-sensing methods allowing the retrieval of LAI time series with accuracy and precision are therefore necessary. Here, we tested two methods for LAI estimation from MODIS 250 m resolution red and near-infrared (NIR) reflectance time series. The first method involved the inversion of a coupled model of leaf reflectance and transmittance (PROSPECT4), soil reflectance (SOILSPECT) and canopy radiative transfer (4SAIL2). Model parameters other than the LAI were either fixed to measured constant values, or allowed to vary seasonally and/or with stand age according to trends observed in field measurements. The LAI was assumed to vary throughout the rotation following a series of alternately increasing and decreasing sigmoid curves. The parameters of each sigmoid curve that allowed the best fit of simulated canopy reflectance to MODIS red and NIR reflectance data were obtained by minimization techniques. The second method was based on a linear relationship between the LAI and values of the GEneralized Soil Adjusted Vegetation Index (GESAVI), which was calibrated using destructive LAI measurements made at two seasons, on Eucalyptus stands of different ages and productivity levels. The ability of each approach to reproduce field-measured LAI values was assessed, and uncertainty on results and parameter sensitivities were examined. Both methods offered a good fit between measured and estimated LAI (R² = 0.80 and R² = 0.62 for model inversion and GESAVI-based methods, respectively), but the GESAVI-based method overestimated the LAI at young ages. (C) 2010 Elsevier Inc. All rights reserved.
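The second method can be sketched as follows, assuming the GESAVI form GESAVI = (NIR − a·R − b)/(R + Z), with a and b the soil-line slope and intercept and Z a soil adjustment factor; the parameter values and the linear LAI calibration coefficients below are illustrative assumptions, not the calibrated values from the study:

```python
def gesavi(nir, red, a=1.2, b=0.04, z=0.35):
    """GEneralized Soil Adjusted Vegetation Index (illustrative parameters)."""
    return (nir - a * red - b) / (red + z)

def lai_from_gesavi(g, slope=4.0, intercept=0.2):
    """Linear LAI calibration against GESAVI (hypothetical coefficients)."""
    return slope * g + intercept

# Hypothetical MODIS-like surface reflectances for one stand and date
g = gesavi(nir=0.40, red=0.10)
print(round(g, 4), round(lai_from_gesavi(g), 4))
```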
Abstract:
The increase in biodiversity from high to low latitudes is a widely recognized biogeographical pattern. According to the latitudinal gradient hypothesis (LGH), this pattern was shaped by differential effects of Late Quaternary climatic changes across a latitudinal gradient. Here, we evaluate the effects of climatic changes across a tropical latitudinal gradient and their implications for the diversification of an Atlantic Forest (AF) endemic passerine. We studied the intraspecific diversification and historical demography of Sclerurus scansor, based on mitochondrial (ND2, ND3 and cytb) and nuclear (FIB7) gene sequences. Phylogenetic analyses recovered three well-supported clades associated with distinct latitudinal zones. Coalescent-based methods were applied to estimate divergence times and changes in effective population sizes. Estimates of divergence times indicate that intraspecific diversification took place during the Middle-Late Pleistocene. Distinct demographic scenarios were identified, with the southern lineage exhibiting a clear signature of demographic expansion, while the central one remained more stable. The northern lineage, contrasting with LGH predictions, exhibited a clear sign of a recent bottleneck. Our results suggest that different AF regions reacted distinctly, even in opposite ways, under the same climatic period, simultaneously producing favourable scenarios for isolation and contact among populations.
Abstract:
Conventional procedures employed in the modeling of viscoelastic properties of polymers rely on the determination of the polymer's discrete relaxation spectrum from experimentally obtained data. In the past decades, several analytical regression techniques have been proposed to determine an explicit equation which describes the measured spectra. Taking a different approach, the procedure introduced herein constitutes a simulation-based computational optimization technique built on a non-deterministic search method arising from the field of evolutionary computation. Instead of comparing numerical results, the purpose of this paper is to highlight some subtle differences between both strategies and to focus on which properties of the exploited technique emerge as new possibilities for the field. To illustrate this, the test cases show how the employed technique can outperform conventional approaches in terms of fitting quality. Moreover, in some instances, it produces equivalent results with far fewer fitting parameters, which is convenient for computational simulation applications. The problem formulation and the rationale of the highlighted method are discussed herein and constitute the main intended contribution. (C) 2009 Wiley Periodicals, Inc. J Appl Polym Sci 113: 122-135, 2009
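The simulation-based evolutionary fitting described above can be caricatured with a (1+1) evolution strategy adjusting a discrete relaxation spectrum, i.e. a Prony series G(t) = Σ gᵢ·exp(−t/τᵢ), against synthetic data. This is a toy stand-in under our own assumptions, not the paper's algorithm:

```python
import math
import random

def g_model(t, terms):
    """Prony series: sum of g_i * exp(-t / tau_i) over (g_i, tau_i) pairs."""
    return sum(g * math.exp(-t / tau) for g, tau in terms)

def sse(terms, data):
    """Sum of squared errors of the model against (t, G) data points."""
    return sum((g_model(t, terms) - y) ** 2 for t, y in data)

def evolve(data, init_terms, rng, generations=300, step=0.05):
    """(1+1) evolution strategy: mutate, keep the candidate only if it fits better."""
    best, best_err = init_terms, sse(init_terms, data)
    for _ in range(generations):
        cand = [(max(1e-6, g + rng.gauss(0, step)),
                 max(1e-3, tau + rng.gauss(0, step))) for g, tau in best]
        err = sse(cand, data)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

rng = random.Random(7)
true_terms = [(1.0, 1.0), (0.5, 5.0)]                     # hypothetical spectrum
data = [(t / 2, g_model(t / 2, true_terms)) for t in range(20)]
init = [(0.8, 0.8), (0.8, 4.0)]                           # deliberately off
best, best_err = evolve(data, init, rng)
print(best_err <= sse(init, data))  # selection is monotone, so always True
```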