901 results for Filter-rectify-filter-model


Relevance:

30.00%

Publisher:

Abstract:

Health effects resulting from dust inhalation in occupational environments may be more strongly associated with specific microbial components, such as fungi, than with the particles themselves. The aim of the present study is to characterize occupational exposure to fungal burden in four different occupational settings (two feed industries, one poultry facility and one waste sorting industry), presenting results from two air sampling methods: the impinger collector and the use of filters. In addition, the equipment used for the filter sampling method allowed a more accurate characterization of the dimension of the collected fungal particles (less than 2.5 μm in size). Air samples of 300 L were collected using the Coriolis μ impinger air sampler. Simultaneously, an aerosol monitor (DustTrak II model 8532, TSI®) allowed assessment of viable microbiological material below 2.5 μm in size. After sampling, filters were immersed in 300 mL of sterilized distilled water and agitated for 30 min at 100 rpm. 150 μL of this suspension were subsequently spread onto malt extract agar (2%) with chloramphenicol (0.05 g/L). All plates were incubated at 27.5 ºC for 5–7 days. With the impinger method, the fungal load ranged from 0 to 413 CFU.m-3; with the filter method, it ranged from 0 to 64 CFU.m-3. In one feed industry, Penicillium was the most frequently found genus (66.7%) using the impinger method, and three more fungal species/genera/complexes were found; the filter assay allowed the detection of only two species/genera/complexes in the same industry. In the other feed industry, Cladosporium sp. was the most frequently found (33.3%) with the impinger method, and three more species/genera/complexes were also found, while the filter assay detected four. In the assessed poultry facility, Rhizopus sp. was the most frequently detected (61.2%) and three more species/genera/complexes were isolated; the filter assay detected only two. In the waste sorting industry, Penicillium sp. was the most prevalent (73.6%) with the impinger method, and two more fungal species/genera/complexes were isolated; with the filter assay only Penicillium sp. was found. A more precise determination of occupational fungal exposure was ensured, since it was possible to obtain information regarding not only the characterization of fungal contamination (impinger method) but also the size of dust particles and viable fungal particles that can reach the worker's respiratory tract (filter method). Both methods should be used in parallel to enrich the discussion regarding potential health effects of occupational exposure to fungi.
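For reference, the conversion from a plate count to an airborne fungal load (CFU per cubic metre of air) follows from the fraction of the wash suspension that is plated and from the sampled air volume. The sketch below is a minimal illustration of that arithmetic; the helper name and the example numbers are hypothetical, and the air volume drawn through the filters is not stated in the abstract.

```python
def cfu_per_m3(colonies, plated_volume_ml, suspension_volume_ml, air_volume_l):
    """Convert a plate count to an airborne concentration in CFU per cubic metre.

    colonies             -- colony-forming units counted on the plate
    plated_volume_ml     -- volume of suspension spread on the plate (e.g. 0.150 mL)
    suspension_volume_ml -- total collection/wash volume (e.g. 300 mL filter wash)
    air_volume_l         -- volume of air sampled, in litres (e.g. 300 L impinger sample)
    """
    cfu_in_suspension = colonies * (suspension_volume_ml / plated_volume_ml)
    return cfu_in_suspension / (air_volume_l / 1000.0)   # litres -> cubic metres

# Illustrative numbers only: 3 colonies from 0.150 mL of a 300 mL wash, 300 L of air
print(cfu_per_m3(3, 0.150, 300.0, 300.0))
```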

Relevance:

30.00%

Publisher:

Abstract:

One challenge in data assimilation (DA) methods is how the error covariance of the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied, which solve the problems of memory storage and huge matrix inversion needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids filter inbreeding problems, which emerge when the ensemble spread underestimates the true error covariance. In VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to constrain the 30 171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We have found that the results produced by the VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach for coupling the model and a DA scheme: an external program is used to send and receive information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines which facilitate input and output. Apart from being simple to implement, the approach can be employed even if the two are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009. The effect of the organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to TSM data sparsity in both time and space, the assimilation results could not be matched well with these measurements. The use of multiple automatic stations with real-time data is important to alleviate the time-sparsity problem; combined with DA, this would help, for instance, in better understanding environmental hazard variables. We have found that using a very large ensemble does not necessarily improve the results, because beyond a certain size additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF, together with this ensemble-size limit, points to the emerging area of Reduced Order Modeling (ROM). In ROM, running the full-blown model is avoided to save computational resources. When ROM is applied with the non-intrusive DA approach, it might result in a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
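The file-based coupling described above can be pictured with a short driver script: the controller writes one state file per ensemble member, launches the model executable for each member, waits for all processes to finish, and then calls the DA routine on the gathered forecasts. The sketch below is only an illustration of that idea; the executable name, file layout, ensemble size and the no-op analysis routine are hypothetical placeholders, not the thesis code.

```python
"""Minimal sketch of a file-based, non-intrusive model/DA coupling."""
import subprocess
import numpy as np

N_ENS = 50       # ensemble size (assumed)
N_CYCLES = 10    # number of assimilation cycles (assumed)

def analysis(forecasts, observations):
    # placeholder for the DA update (e.g. a VEnKF analysis step)
    return forecasts

for cycle in range(N_CYCLES):
    # 1. advance every ensemble member; the model itself only needs a few extra
    #    I/O lines to read 'state_<m>.txt' and write 'forecast_<m>.txt'
    procs = [subprocess.Popen(["./model_executable",
                               f"state_{m}.txt", f"forecast_{m}.txt"])
             for m in range(N_ENS)]
    # 2. the controller simply waits for all members, which also accommodates
    #    parallel execution of the ensemble
    for p in procs:
        p.wait()
    # 3. gather the forecasts from files and run the analysis with the
    #    observations available at this cycle
    forecasts = np.stack([np.loadtxt(f"forecast_{m}.txt") for m in range(N_ENS)])
    observations = np.loadtxt(f"obs_{cycle}.txt")
    updated = analysis(forecasts, observations)
    # 4. write the updated states back for the next cycle
    for m in range(N_ENS):
        np.savetxt(f"state_{m}.txt", updated[m])
```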

Relevance:

30.00%

Publisher:

Abstract:

In recent years, photovoltaic generation has seen greater insertion in the energy mix of the most developed countries, growing at annual rates of over 30%. The pressure to reduce pollutant emissions, the diversification of the energy mix and the drop in prices are the main factors driving this growth. Grid-tied systems play an important role in alleviating the energy crisis and diversifying energy sources. Among grid-tied systems, building-integrated photovoltaic systems suffer from partial shading of the photovoltaic modules, and consequently the energy yield is reduced. In such cases, classical forms of module connection do not produce good results, and new techniques have been developed to increase the amount of energy produced by a set of modules. In the parallel connection technique of photovoltaic modules, a high-voltage-gain DC-DC converter is required, which is relatively complex to build with high efficiency. The current-fed isolated converters explored in this work have some desirable characteristics for this type of application, such as low input current ripple and input voltage ripple, high voltage gain, galvanic isolation, high power capacity, and soft switching over a wide operating range. This study presents contributions to the study of a high-gain, high-efficiency DC-DC converter for use in a parallel photovoltaic generation system, which can be used with a microinverter or with a central inverter. The main contributions of this work are: analysis of the active clamping circuit operation, proposing that the clamp capacitor must be connected to the negative node of the power supply to reduce the input current ripple and thus reduce the filter requirements; use of a voltage doubler in the output rectifier to reduce the number of components and to extend the gain of the converter; a detailed study of the converter components in order to raise the efficiency; and derivation of the AC equivalent model and control system design. As a result, a DC-DC converter with high gain, high efficiency and no electrolytic capacitors in the power stage was developed. The final part of this work presents the operation of the DC-DC converter connected to an inverter. In addition, the DC bus controller is designed and two maximum power point tracking algorithms are implemented. Experimental results of the full system operating connected to an emulator and subsequently to a real photovoltaic module are also given.
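The abstract states that two maximum power point tracking algorithms were implemented but does not identify them here. Perturb-and-observe is one of the most common choices for photovoltaic converters, so a minimal sketch of one P&O iteration is given below under that assumption; the step size and the tracking-state layout are illustrative.

```python
def perturb_and_observe(v, i, state, step=0.005):
    """One iteration of a perturb-and-observe MPPT loop.

    v, i  -- latest photovoltaic module voltage and current measurements
    state -- dict holding the previous power, the perturbation direction and
             the current duty-cycle (or voltage) reference
    Returns the updated reference. Illustrative only, not the thesis code.
    """
    p = v * i
    if p > state["p_prev"]:
        # power increased: keep perturbing in the same direction
        direction = state["direction"]
    else:
        # power decreased: reverse the perturbation direction
        direction = -state["direction"]
    state["direction"] = direction
    state["reference"] += direction * step
    state["p_prev"] = p
    return state["reference"]

# initial tracking state (illustrative values)
mppt_state = {"p_prev": 0.0, "direction": 1, "reference": 0.5}
```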

Relevance:

30.00%

Publisher:

Abstract:

Due to the increasing integration density and operating frequency of today's high-performance processors, the temperature of a typical chip can easily exceed 100 degrees Celsius. However, the runtime thermal state of a chip is very hard to predict and manage due to the random nature of computing workloads, as well as the process, voltage and ambient temperature variability (together called PVT variability). The uneven nature (both in time and space) of the heat dissipation of the chip can lead to severe reliability issues and error-prone chip behavior (e.g. timing errors). Many dynamic power/thermal management techniques have been proposed to address this issue, such as dynamic voltage and frequency scaling (DVFS), clock gating, etc. However, most of these techniques require accurate knowledge of the runtime thermal state of the chip to make efficient and effective control decisions. In this work we address the problem of tracking and managing the temperature of microprocessors, which includes the following sub-problems: (1) how to design an efficient sensor-based thermal tracking system on a given design that can provide accurate real-time temperature feedback; (2) what statistical techniques can be used to estimate the full-chip thermal profile based on very limited (and possibly noise-corrupted) sensor observations; (3) how to adapt to changes in the underlying system's behavior, since such changes can impact the accuracy of our thermal estimation. The thermal tracking methodology proposed in this work is enabled by on-chip sensors, which are already implemented in many modern processors. We first investigate the underlying relationship between heat distribution and power consumption, then we introduce an accurate thermal model for the chip system. Based on this model, we characterize the temperature correlation that exists among different chip modules and explore statistical approaches (such as those based on the Kalman filter) that can utilize such correlation to estimate accurate chip-level thermal profiles in real time. Such estimation is performed based on limited sensor information, because sensors are usually resource-constrained and noise-corrupted. We also take a further step and extend the standard Kalman filter approach to account for (1) nonlinear effects such as the leakage-temperature interdependency and (2) varying statistical characteristics of the underlying system model. The proposed thermal tracking infrastructure and estimation algorithms can consistently generate accurate thermal estimates even when the system is switching among workloads that have very distinct characteristics. Through experiments, our approaches have demonstrated promising results with much higher accuracy compared to existing approaches. Such results can be used to ensure thermal reliability and improve the effectiveness of dynamic thermal management techniques.
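The statistical estimation step can be illustrated with a generic linear Kalman filter that propagates a thermal state-space model driven by module power and corrects it with the few available sensor readings. This is only a sketch of the standard filter, not the extended leakage-aware variant developed in the work; the system matrices and noise covariances are placeholders.

```python
import numpy as np

def kalman_step(x, P, u, y, A, B, C, Q, R):
    """One predict/update cycle of a linear Kalman filter for chip temperature.

    x : current estimate of module temperatures, shape (n,)
    P : estimate covariance, shape (n, n)
    u : power consumed by each module over the last interval (inputs)
    y : readings from the few on-chip sensors, shape (m,), with m << n
    A, B : thermal state-space model (placeholders; in practice identified
           from an RC-equivalent thermal model of the chip)
    C : selection matrix mapping the full thermal state to sensor locations
    Q, R : process and sensor-noise covariances (placeholders)
    """
    # predict: propagate temperatures with the thermal model driven by power
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # update: correct the full-chip profile with the sparse sensor readings
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```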

Relevance:

30.00%

Publisher:

Abstract:

The Theoretical and Experimental Tomography in the Sea Experiment (THETIS 1) took place in the Gulf of Lion to observe the evolution of the temperature field and the process of deep convection during the 1991-1992 winter. The temperature measurements consist of moored sensors, conductivity-temperature-depth and expendable bathythermograph surveys, and acoustic tomography. Because of this diverse data set, and since the field evolves rather fast, the analysis uses a unified framework based on estimation theory and implementing a Kalman filter. The resolution and the errors associated with the model are systematically estimated. Temperature is a good tracer of water masses. The time-evolving three-dimensional view of the field resulting from the analysis shows the details of the three classical convection phases: preconditioning, vigorous convection, and relaxation. In all phases, there is strong spatial nonuniformity, with mesoscale activity, short timescales, and sporadic evidence of advective events (surface capping, intrusions of Levantine Intermediate Water (LIW)). Deep convection, reaching 1500 m, was observed in late February; by late April the field had not yet returned to its initial conditions (strong deficit of LIW). Comparison with available atmospheric flux data shows that advection acts to delay the occurrence of convection and confirms the essential role of buoyancy fluxes. For this winter, the deep mixing results in an injection of anomalously warm water (ΔT ≈ 0.03 °C) to a depth of 1500 m, compatible with the deep warming previously reported.

Relevance:

30.00%

Publisher:

Abstract:

The goal of this study is to provide a framework for future researchers to understand and use the FARSITE wildfire-forecasting model with data assimilation. Current wildfire models lack the ability to provide accurate predictions of fire front position faster than real time. When FARSITE is coupled with a recursive ensemble filter, the forecast is improved through data assimilation. The scope includes an explanation of the standalone FARSITE application, technical details on FARSITE integration with a parallel program coupler called OpenPALM, and a model demonstration of the FARSITE-Ensemble Kalman Filter software using the FireFlux I experiment by Craig Clements. The results show that the fire front forecast is improved with the proposed data-driven methodology compared with the standalone FARSITE model.
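The recursive ensemble filter mentioned above can be illustrated with a generic stochastic (perturbed-observation) EnKF analysis step. This is a textbook formulation, not necessarily the exact configuration used with FARSITE, and all dimensions and inputs are placeholders.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng=np.random.default_rng()):
    """Stochastic (perturbed-observation) EnKF analysis step.

    X : forecast ensemble, shape (n_state, n_members), e.g. fire-front state variables
    y : observation vector, shape (n_obs,)
    H : linear observation operator, shape (n_obs, n_state)
    R : observation-error covariance, shape (n_obs, n_obs)
    Returns the analysis ensemble with the same shape as X.
    """
    n, N = X.shape
    x_mean = X.mean(axis=1, keepdims=True)
    A = X - x_mean                        # ensemble anomalies
    HA = H @ A                            # anomalies in observation space
    # sample covariances estimated from the ensemble
    PHt = A @ HA.T / (N - 1)
    HPHt = HA @ HA.T / (N - 1)
    K = PHt @ np.linalg.inv(HPHt + R)     # Kalman gain
    # perturb the observations so the analysis ensemble keeps the correct spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)
```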

Relevance:

30.00%

Publisher:

Abstract:

Dissertation (Master's)—Universidade de Brasília, Instituto de Ciências Biológicas, Departamento de Biologia Molecular, 2016.

Relevance:

30.00%

Publisher:

Abstract:

Waterflooding is a technique largely applied in the oil industry. The injected water displaces oil towards the producer wells and avoids reservoir pressure decline. However, suspended particles in the injected water may plug pore throats, causing formation damage (permeability reduction) and injectivity decline during waterflooding. When injectivity decline occurs, it is necessary to increase the injection pressure in order to maintain the water injection rate. Therefore, a reliable prediction of injectivity decline is essential in waterflooding projects. In this dissertation, a simulator based on the traditional porous medium filtration model (including deep bed filtration and external filter cake formation) was developed and applied to predict injectivity decline in perforated wells (this prediction was made from history data). Experimental modeling and injectivity decline in open-hole wells are also discussed. The injectivity modeling showed good agreement with field data and can be used to support the planning of injection well stimulation.

Relevance:

30.00%

Publisher:

Abstract:

Discrepancies between classical model predictions and experimental data for deep bed filtration have been reported by various authors. In order to understand these discrepancies, an analytic continuum model for deep bed filtration is proposed. In this model, a filter coefficient is attributed to each distinct retention mechanism (straining, diffusion, gravity interception, etc.). It was shown that these coefficients generally cannot be merged into an effective filter coefficient, as considered in the classical model. Furthermore, the derived analytic solutions for the proposed model were applied to fit experimental data, and very good agreement between the experimental data and the proposed model predictions was obtained. Comparison of the obtained results with empirical correlations allowed identifying the dominant retention mechanisms. In addition, it was shown that the larger the ratio of particle to pore sizes, the more intensive the straining mechanism and the larger the discrepancies between experimental data and classical model predictions. The classical model and the proposed model were compared via statistical analysis; the obtained p-values allow concluding that the proposed model should be preferred, especially when straining plays an important role. This work also studies deep bed filtration with finite retention capacity, i.e., filtration of particles through porous media with a finite filtration capacity. It was observed, in this case, that it is necessary to consider changes in the boundary conditions over time. A solution for such a model was obtained using different filtration coefficient functions, and it was shown how to build a solution for any filtration coefficient. It was seen that, even considering the same filtration coefficient, the classical model and the one proposed here give different predictions for the concentration of particles retained in the porous medium and for the suspended particles at the outlet of the medium.
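For reference, the classical deep bed filtration model against which the proposed model is compared is commonly written as a particle balance coupled with linear retention kinetics governed by a single effective filter coefficient. The block below gives this standard literature form (with assumed notation), not the equations of the proposed multi-mechanism model.

```latex
% Classical deep bed filtration model (standard literature form; notation assumed):
%   c(x,t)      suspended particle concentration
%   sigma(x,t)  retained (deposited) concentration
%   phi  porosity,   U  Darcy velocity,   lambda  effective filter coefficient
\begin{align}
  \frac{\partial}{\partial t}\left(\phi\, c + \sigma\right) + U\,\frac{\partial c}{\partial x} &= 0
  && \text{(suspended + retained particle balance)} \\
  \frac{\partial \sigma}{\partial t} &= \lambda\, U\, c
  && \text{(retention kinetics)}
\end{align}
```

At steady state this classical model gives the exponential suspended-concentration profile c(x) = c_0 exp(-lambda x); the proposed model instead attributes one coefficient per retention mechanism (straining, diffusion, gravity interception, ...) and shows that these cannot, in general, be merged into a single effective lambda.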

Relevance:

30.00%

Publisher:

Abstract:

The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with that of the IPCA algorithm in compression, recognition and detection; and to compare the performance of the digital model with that of the optical model in recognition and detection. The MATLAB® software was used for simulating the models. PCA is a technique used for identifying patterns in data and representing the data in a way that highlights their similarities and differences. The identification of patterns in data of high dimensions (more than three dimensions) is difficult because a graphical representation of the data is impossible; therefore, PCA is a powerful method for analyzing data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve PCA. The joint transform correlator (JTC) is an optical correlator used for synthesizing a frequency-plane filter for coherent optical systems. The IPCA algorithm, in general, behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction, and faster processing speed with acceptable errors; in addition, it is better than the PCA algorithm in real-time image detection because it achieves the smallest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers an acceptable error rate, easy calculation, and reasonable speed. Finally, in detection and recognition, the performance of the digital model is better than that of the optical model.
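A compact way to see what the PCA-based compression stage does is to project mean-centred images onto the leading principal components obtained from an SVD and reconstruct from those scores. The sketch below uses standard PCA only (the information-theoretic IPCA modification is not detailed in the abstract); the image sizes, the number of retained components and the random stand-in data are illustrative.

```python
import numpy as np

def pca_compress(images, k):
    """Compress a set of images with standard PCA (not the IPCA variant).

    images : array of shape (n_images, n_pixels), each row a flattened image
    k      : number of principal components to keep (illustrative choice)
    Returns the reconstructed images and the retained component basis.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # principal components via SVD of the centered data matrix
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:k]                        # (k, n_pixels) top-k components
    scores = centered @ basis.T           # compressed representation (n_images, k)
    reconstructed = scores @ basis + mean
    return reconstructed, basis

# toy usage with random data standing in for face images
rng = np.random.default_rng(0)
faces = rng.random((20, 64 * 64))
recon, components = pca_compress(faces, k=10)
print(np.mean((faces - recon) ** 2))      # reconstruction error
```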

Relevance:

30.00%

Publisher:

Abstract:

In this report, we develop an intelligent adaptive neuro-fuzzy controller using adaptive neuro-fuzzy inference system (ANFIS) techniques. We begin with a standard proportional-derivative (PD) controller and use the PD controller data to train the ANFIS system to develop a fuzzy controller. We then propose and validate a method to implement this control strategy on commercial off-the-shelf (COTS) hardware. An analysis is made of the choice of filters for attitude estimation. These choices are limited by the complexity of the filter and by the computing ability and memory constraints of the microcontroller. Simplified Kalman filters are found to be good at estimating attitude given the above constraints. Using model-based design techniques, the models are implemented on an embedded system. This enables the deployment of fuzzy controllers on enthusiast-grade controllers. We evaluate the feasibility of the proposed control strategy in a model-in-the-loop simulation. We then propose a rapid prototyping strategy, allowing us to deploy these control algorithms on a system consisting of a combination of an ARM-based microcontroller and two Arduino-based controllers. We then use the code generation capabilities within MATLAB/Simulink, in combination with multiple open-source projects, to deploy code to an ARM Cortex-M4 based controller board. We also evaluate this strategy on an ARM-A8 based board and a much less powerful Arduino-based flight controller. We conclude by demonstrating the feasibility of fuzzy controllers on COTS hardware, pointing out the limitations of the current hardware, and making suggestions for hardware that we think would be better suited for memory-heavy controllers.
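The simplified attitude filters mentioned above are not specified in detail here. A common minimal formulation on memory-constrained hardware is a single-axis, two-state Kalman filter (angle and gyro bias) that integrates the gyro rate and corrects it with an accelerometer-derived angle; the sketch below follows that assumption, with placeholder covariances.

```python
import numpy as np

class SimpleAttitudeKF:
    """Minimal single-axis attitude Kalman filter (angle + gyro bias).

    Fuses an integrated gyro rate with an accelerometer-derived angle, a common
    low-cost formulation for memory-constrained microcontrollers. The covariances
    below are placeholders, not values from the report.
    """
    def __init__(self, q_angle=0.001, q_bias=0.003, r_meas=0.03):
        self.x = np.zeros(2)                      # state: [angle, gyro bias]
        self.P = np.eye(2)
        self.Q = np.diag([q_angle, q_bias])
        self.R = r_meas

    def update(self, gyro_rate, accel_angle, dt):
        # predict: integrate the bias-corrected gyro rate
        self.x[0] += dt * (gyro_rate - self.x[1])
        F = np.array([[1.0, -dt], [0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q * dt
        # correct with the accelerometer-derived angle measurement
        H = np.array([1.0, 0.0])
        S = H @ self.P @ H + self.R
        K = self.P @ H / S
        self.x += K * (accel_angle - self.x[0])
        self.P -= np.outer(K, H) @ self.P
        return self.x[0]                          # current angle estimate
```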

Relevance:

30.00%

Publisher:

Abstract:

Measurement and modeling techniques were developed to improve over-water gaseous air-water exchange measurements for persistent bioaccumulative and toxic chemicals (PBTs). Analytical methods were applied to atmospheric measurements of hexachlorobenzene (HCB), polychlorinated biphenyls (PCBs), and polybrominated diphenyl ethers (PBDEs). Additionally, the sampling and analytical methods are well suited to study semivolatile organic compounds (SOCs) in air with applications related to secondary organic aerosol formation, urban, and indoor air quality. A novel gas-phase cleanup method is described for use with thermal desorption methods for analysis of atmospheric SOCs using multicapillary denuders. The cleanup selectively removed hydrogen-bonding chemicals from samples, including much of the background matrix of oxidized organic compounds in ambient air, and thereby improved precision and method detection limits for nonpolar analytes. A model is presented that predicts gas collection efficiency and particle collection artifact for SOCs in multicapillary denuders using polydimethylsiloxane (PDMS) sorbent. An approach is presented to estimate the equilibrium PDMS-gas partition coefficient (Kpdms) from an Abraham solvation parameter model for any SOC. A high flow rate (300 L min-1) multicapillary denuder was designed for measurement of trace atmospheric SOCs. Overall method precision and detection limits were determined using field duplicates and compared to the conventional high-volume sampler method. The high-flow denuder is an alternative to high-volume or passive samplers when separation of gas and particle-associated SOCs upstream of a filter and short sample collection time are advantageous. A Lagrangian internal boundary layer transport exchange (IBLTE) Model is described. The model predicts the near-surface variation in several quantities with fetch in coastal, offshore flow: 1) modification in potential temperature and gas mixing ratio, 2) surface fluxes of sensible heat, water vapor, and trace gases using the NOAA COARE Bulk Algorithm and Gas Transfer Model, 3) vertical gradients in potential temperature and mixing ratio. The model was applied to interpret micrometeorological measurements of air-water exchange flux of HCB and several PCB congeners in Lake Superior. The IBLTE Model can be applied to any scalar, including water vapor, carbon dioxide, dimethyl sulfide, and other scalar quantities of interest with respect to hydrology, climate, and ecosystem science.
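The estimation of the PDMS-gas partition coefficient (Kpdms) from an Abraham solvation parameter model can be sketched from the general gas-to-condensed-phase form of that model, log K = c + eE + sS + aA + bB + lL. The function below only illustrates this structure; the system coefficients are placeholders, not the fitted PDMS values used in this work.

```python
def log_k_pdms(E, S, A, B, L, coeffs=None):
    """Estimate log K (gas -> PDMS) from Abraham solute descriptors.

    E, S, A, B, L -- Abraham solute descriptors of the compound of interest
    coeffs        -- system coefficients (c, e, s, a, b, l); the defaults below
                     are placeholders for illustration, NOT fitted PDMS values.
    General gas-to-condensed-phase form: log K = c + e*E + s*S + a*A + b*B + l*L
    """
    c, e, s, a, b, l = coeffs if coeffs is not None else (0.0, 0.0, 0.0, 0.0, 0.0, 1.0)
    return c + e * E + s * S + a * A + b * B + l * L
```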

Relevance:

30.00%

Publisher:

Abstract:

Over the last decade, advances and innovations from silicon photonics technology have been observed in the telecommunications and computing industries. This technology, which employs silicon as an optical medium, relies on current CMOS micro-electronics fabrication processes to enable medium-scale integration of many nano-photonic devices to produce photonic integrated circuitry. However, other fields of research, such as optical sensor processing, can benefit from silicon photonics technology, especially in sensors where the physical measurement is wavelength encoded. In this research work, we present the design and application of a thermally tuned silicon photonic device as an optical sensor interrogator. The main device is a micro-ring resonator filter of 10 μm diameter. A photonic design toolkit was developed based on open-source software from the research community. With those tools it was possible to estimate the resonance and spectral characteristics of the filter. From the obtained design parameters, a 7.8 x 3.8 mm optical chip was fabricated using standard micro-photonics techniques. In order to tune a ring resonance, Nichrome micro-heaters were fabricated on top of the device. Some fabricated devices were systematically characterized and their tuning responses were determined. From measurements, a ring resonator with a free spectral range of 18.4 nm and a bandwidth of 0.14 nm was obtained. Using just 5 mA it was possible to tune the device resonance by up to 3 nm. In order to apply the device as a sensor interrogator, a wavelength estimation model based on measuring the time interval between peaks was developed, and simulations were carried out to assess its performance. To test the technique, an experiment using a fiber Bragg grating optical sensor was set up, and estimates of the wavelength shift of this sensor due to axial strains yielded an error within 22 pm compared to measurements from a spectrum analyzer. Results from this study imply that signals from FBG sensors can be processed with good accuracy using a micro-ring device, with the advantages of its compact size, scalability and versatility. The system also has additional applications, such as processing optical wavelength shifts from integrated photonic sensors and tracking resonances from laser sources.
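The reported free spectral range is consistent with the textbook relation FSR ≈ λ²/(n_g·L), where L is the ring circumference and n_g the group index. Neither the operating wavelength nor the group index is given in the abstract, so the values below (operation near 1550 nm, n_g ≈ 4.2, typical for silicon strip waveguides) are assumptions used only to show that a 10 μm diameter ring plausibly yields an FSR close to 18.4 nm.

```python
import math

# Assumed values -- not stated in the abstract
wavelength = 1.55e-6     # operating wavelength [m], assumed near 1550 nm
group_index = 4.2        # group index, typical of a silicon strip waveguide
diameter = 10e-6         # ring diameter [m], from the abstract

circumference = math.pi * diameter
fsr = wavelength**2 / (group_index * circumference)   # free spectral range [m]
print(f"FSR ~= {fsr * 1e9:.1f} nm")                    # ~18.2 nm vs. 18.4 nm reported
```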

Relevance:

30.00%

Publisher:

Abstract:

In the presented paper, the temporal and statistical properties of a Lyot-filter-based multiwavelength random DFB fiber laser with a wide flat spectrum consisting of individual lines were investigated. It was shown that the separate spectral lines forming the laser spectrum have mostly Gaussian statistics and so represent stochastic radiation, but at the same time the entire radiation is not fully stochastic. A simple model, taking into account phenomenological correlations of the lines' initial phases, was established. The radiation structure in the experiment and in the simulation proved to be different, requiring interactions between different lines to be described via an NLSE-based model.
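The phenomenological picture above, individual lines with Gaussian statistics whose initial phases may be correlated, can be illustrated numerically by summing complex-Gaussian spectral lines and examining intensity statistics. The sketch below is a conceptual illustration only, not the authors' model; the line count, spacing and sampling are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_lines, n_samples = 16, 4096
t = np.linspace(0.0, 1.0, n_samples, endpoint=False)
line_spacing = 50.0                     # arbitrary frequency spacing (a.u.)

# each line: a complex-Gaussian envelope (Gaussian statistics) on its own carrier
# with a random initial phase phi_k
phases = rng.uniform(0.0, 2.0 * np.pi, n_lines)
envelopes = (rng.normal(size=(n_lines, n_samples)) +
             1j * rng.normal(size=(n_lines, n_samples))) / np.sqrt(2.0)
carriers = np.exp(1j * (2.0 * np.pi * line_spacing * np.arange(n_lines)[:, None] * t
                        + phases[:, None]))
field = (envelopes * carriers).sum(axis=0)   # total multiwavelength field

# normalized second moment <I^2>/<I>^2: ~2 for fully stochastic (Gaussian) light.
# With independent lines both values come out near 2; introducing correlations
# between the initial phases phi_k is the phenomenological modification discussed.
single_line_I = np.abs(envelopes[0]) ** 2
total_I = np.abs(field) ** 2
for name, I in [("single line", single_line_I), ("total field", total_I)]:
    print(name, "<I^2>/<I>^2 =", (I**2).mean() / I.mean() ** 2)
```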

Relevance:

30.00%

Publisher:

Abstract:

A subfilter-scale (SFS) stress model is developed for large-eddy simulation (LES) and is tested on various benchmark problems in both wall-resolved and wall-modelled LES. The basic ingredients of the proposed model are the model length-scale and the model parameter. The model length-scale is defined as a fraction of the integral scale of the flow, decoupled from the grid. The portion of resolved scales (the LES resolution) appears as a user-defined model parameter, with the advantage that the user decides the LES resolution. The model parameter is determined based on a measure of LES resolution, the SFS activity. The user chooses a value for the SFS activity (based on the affordable computational budget and expected accuracy), and the model parameter is calculated dynamically. Depending on how the SFS activity is enforced, two SFS models are proposed. In one approach the user assigns the global (volume-averaged) contribution of the SFS to the transport (global model), while in the second model (local model) the SFS activity is decided locally (locally averaged). The models are tested on isotropic turbulence, channel flow, a backward-facing step and a separating boundary layer. In wall-resolved LES, both the global and local models perform quite accurately. Due to their near-wall behaviour, they result in accurate prediction of the flow on coarse grids. The backward-facing step also highlights the advantage of decoupling the model length-scale from the mesh. Despite the sharply refined grid near the step, the proposed SFS models yield a smooth yet physically consistent filter-width distribution, which minimizes errors when grid discontinuity is present. Finally, the model application is extended to wall-modelled LES and is tested on channel flow and a separating boundary layer. Given the coarse resolution used in wall-modelled LES, most of the eddies near the wall become SFS and the SFS activity needs to be increased locally. The results are in very good agreement with the data for the channel. Errors in the prediction of separation and reattachment are observed in the separated flow, which are somewhat improved with some modifications to the wall-layer model.
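The abstract does not define the SFS activity measure explicitly; a common choice is the fraction of turbulent kinetic energy carried by the subfilter scales. The sketch below assumes that definition and shows a simple way a model parameter could be nudged toward a user-specified target activity; the function names and the proportional adjustment rule are illustrative, not the dynamic procedure of the thesis.

```python
import numpy as np

def sfs_activity(k_sfs, k_res):
    """Fraction of turbulent kinetic energy carried by the subfilter scales.

    k_sfs, k_res -- per-cell subfilter-scale and resolved TKE (array-like).
    Values near 0 mean almost fully resolved; values near 1 mean heavily modelled.
    """
    k_sfs, k_res = np.asarray(k_sfs), np.asarray(k_res)
    return k_sfs.sum() / (k_sfs.sum() + k_res.sum())   # global (volume-summed) form

def adjust_parameter(c_model, k_sfs, k_res, target=0.2, relax=0.5):
    """Nudge the model parameter so the measured SFS activity approaches the target.

    A simple proportional adjustment for illustration only; the thesis computes the
    parameter dynamically from the enforced (global or local) SFS activity.
    """
    s = sfs_activity(k_sfs, k_res)
    return c_model * (1.0 + relax * (target - s))
```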