942 results for Explicit method, Mean square stability, Stochastic orthogonal Runge-Kutta, Chebyshev method
Abstract:
This dissertation aims to improve the performance of existing assignment-based dynamic origin-destination (O-D) matrix estimation models in order to successfully apply Intelligent Transportation Systems (ITS) strategies for traffic congestion relief and dynamic traffic assignment (DTA) in transportation network modeling. The methodology framework has two advantages over existing assignment-based dynamic O-D matrix estimation models. First, it incorporates an initial O-D estimation model into the estimation process to provide a high-confidence initial input for the dynamic O-D estimation model, which has the potential to improve the final estimation results and reduce the associated computation time. Second, the proposed framework can automatically convert traffic volume deviation to traffic density deviation in the objective function under congested traffic conditions. Traffic density is a better indicator of traffic demand than traffic volume under congested conditions, so this conversion contributes to improving estimation performance. The proposed method shows better performance than a typical assignment-based estimation model (Zhou et al., 2003) in several case studies. In the case study for I-95 in Miami-Dade County, Florida, the proposed method produces a good result in seven iterations, with a root mean square percentage error (RMSPE) of 0.010 for traffic volume and an RMSPE of 0.283 for speed. In contrast, Zhou's model requires 50 iterations to obtain an RMSPE of 0.023 for volume and an RMSPE of 0.285 for speed. In the case study for Jacksonville, Florida, the proposed method reaches a convergent solution in 16 iterations with an RMSPE of 0.045 for volume and an RMSPE of 0.110 for speed, while Zhou's model needs 10 iterations to obtain its best solution, with an RMSPE of 0.168 for volume and an RMSPE of 0.179 for speed.
The successful application of the proposed methodology framework to real road networks demonstrates its ability to provide results both with satisfactory accuracy and within a reasonable time, thus establishing its potential usefulness to support dynamic traffic assignment modeling, ITS systems, and other strategies.
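The RMSPE used above to compare the two estimation models can be sketched as follows; the link volumes below are hypothetical illustrations, not values from the case studies:

```python
import numpy as np

def rmspe(observed, predicted):
    """Root mean square percentage error:
    RMSPE = sqrt(mean(((obs - pred) / obs)^2))."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean(((observed - predicted) / observed) ** 2)))

# Hypothetical link volumes (veh/h): observed vs. model-assigned
obs = [1200.0, 950.0, 1800.0, 600.0]
pred = [1188.0, 969.0, 1782.0, 606.0]
print(round(rmspe(obs, pred), 3))
```

A perfect assignment gives an RMSPE of exactly zero, which makes the measure easy to sanity-check.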
Abstract:
This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks use the mean square error function as the standard error function. The system proposed in this dissertation instead utilizes the mean quartic error function, whose third and fourth derivatives are non-zero. Consequently, many improvements to the training methods were achieved. The training results are carefully assessed before and after each update. Three essential factors, listed from highest to lowest priority, are considered when evaluating the performance of a training system: (1) the error rate on the testing set, (2) the processing time needed to recognize a segmented character, and (3) the total training time and, subsequently, the total testing time. It is observed that bounded training methods accelerate the training process, while semi-third order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two combinations of training methods are needed for different-case character recognition. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from conventional adaptive segmentation methods. Dictionary-based correction is utilized to correct mistakes resulting from the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower-case characters and 97% for upper-case characters. In the testing phase, the database consists of 20,000 handwritten characters, 10,000 for each case. Recognizing the 10,000 handwritten characters in the testing phase required 8.5 seconds of processing time.
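The contrast between the standard mean square error and the mean quartic error can be sketched as below; the network outputs and targets are hypothetical, and the gradient expression is the generic derivative of the quartic loss, not the dissertation's specific training rule:

```python
import numpy as np

def mean_square_error(y, t):
    return float(np.mean((np.asarray(y, float) - np.asarray(t, float)) ** 2))

def mean_quartic_error(y, t):
    # The fourth power keeps the third and fourth derivatives non-zero,
    # penalizing large residuals far more heavily than MSE does.
    return float(np.mean((np.asarray(y, float) - np.asarray(t, float)) ** 4))

def quartic_gradient(y, t):
    # dE/dy for the mean quartic error: 4 * (y - t)^3 / n
    y, t = np.asarray(y, float), np.asarray(t, float)
    return 4.0 * (y - t) ** 3 / y.size

# Hypothetical network outputs vs. targets
y = [0.9, 0.1, 0.4]
t = [1.0, 0.0, 0.5]
```

For the same residuals, the quartic loss is two orders of magnitude smaller here (residuals below 1), but it grows much faster than MSE once residuals exceed 1.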
Abstract:
Interferometric synthetic aperture radar (InSAR) techniques can successfully detect phase variations related to water level changes in wetlands and produce spatially detailed high-resolution maps of water level changes. Despite this wealth of detail, the usefulness of wetland InSAR observations is rather limited, because hydrologists and water resources managers need information on absolute water level values, not on relative water level changes. We present an InSAR technique called Small Temporal Baseline Subset (STBAS) for monitoring absolute water level time series using radar interferograms acquired successively over wetlands. The method uses stage (water level) observations to calibrate the relative InSAR observations and tie them to the stage's vertical datum. We tested the STBAS technique with two years of Radarsat-1 data acquired during 2006–2008 over Water Conservation Area 1 (WCA1) in the Everglades wetlands, south Florida (USA). The InSAR-derived water level data were calibrated using 13 stage stations located in the study area to generate 28 successive high spatial resolution maps (50 m pixel resolution) of absolute water levels. We evaluated the quality of the STBAS technique using a root mean square error (RMSE) criterion on the difference between InSAR observations and stage measurements. The average RMSE is 6.6 cm, which provides an uncertainty estimate for the STBAS technique's ability to monitor absolute water levels. About half of the uncertainty is attributed to the accuracy of the InSAR technique in detecting relative water levels. The other half reflects uncertainties derived from tying the relative levels to the stage stations' datum.
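The datum-tying step can be sketched as a simple least-squares offset between the relative InSAR levels and the absolute stage readings at the gauge locations; the function name and all values below are hypothetical illustrations, not the STBAS implementation:

```python
import numpy as np

def tie_to_datum(relative_levels, stage_at_stations, insar_at_stations):
    """Estimate a single vertical offset (least squares, i.e. the mean
    difference) that ties relative InSAR water levels to the stage
    gauges' absolute datum, then apply it to the whole map."""
    offset = float(np.mean(np.asarray(stage_at_stations, float)
                           - np.asarray(insar_at_stations, float)))
    return np.asarray(relative_levels, float) + offset, offset

# Hypothetical values (m): relative InSAR changes across the map,
# and absolute stage readings at two gauge locations
rel = np.array([0.00, 0.05, -0.02, 0.10])
stage = np.array([3.20, 3.25])   # absolute levels at the gauges
insar = np.array([0.00, 0.05])   # relative InSAR values at the same gauges
absolute, offset = tie_to_datum(rel, stage, insar)
```

With more gauges than unknowns, the mean difference is the least-squares solution for a single offset; scatter in the per-gauge differences then feeds directly into the RMSE budget described above.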
Abstract:
In 2010, the American Association of State Highway and Transportation Officials (AASHTO) released a safety analysis software system known as SafetyAnalyst. SafetyAnalyst implements the empirical Bayes (EB) method, which requires the use of Safety Performance Functions (SPFs). The system is equipped with a set of national default SPFs, and the software calibrates the default SPFs to represent the agency's safety performance. However, it is recommended that agencies generate agency-specific SPFs whenever possible. Many investigators support the view that the agency-specific SPFs represent the agency data better than the national default SPFs calibrated to agency data. Furthermore, it is believed that the crash trends in Florida are different from the states whose data were used to develop the national default SPFs. In this dissertation, Florida-specific SPFs were developed using the 2008 Roadway Characteristics Inventory (RCI) data and crash and traffic data from 2007-2010 for both total and fatal and injury (FI) crashes. The data were randomly divided into two sets, one for calibration (70% of the data) and another for validation (30% of the data). The negative binomial (NB) model was used to develop the Florida-specific SPFs for each of the subtypes of roadway segments, intersections and ramps, using the calibration data. Statistical goodness-of-fit tests were performed on the calibrated models, which were then validated using the validation data set. The results were compared in order to assess the transferability of the Florida-specific SPF models. The default SafetyAnalyst SPFs were calibrated to Florida data by adjusting the national default SPFs with local calibration factors. The performance of the Florida-specific SPFs and SafetyAnalyst default SPFs calibrated to Florida data were then compared using a number of methods, including visual plots and statistical goodness-of-fit tests. 
The plots of SPFs against the observed crash data were used to compare the prediction performance of the two models. Three goodness-of-fit tests, represented by the mean absolute deviance (MAD), the mean square prediction error (MSPE), and Freeman-Tukey R2 (R2FT), were also used for comparison in order to identify the better-fitting model. The results showed that Florida-specific SPFs yielded better prediction performance than the national default SPFs calibrated to Florida data. The performance of Florida-specific SPFs was further compared with that of the full SPFs, which include both traffic and geometric variables, in two major applications of SPFs, i.e., crash prediction and identification of high crash locations. The results showed that both SPF models yielded very similar performance in both applications. These empirical results support the use of the flow-only SPF models adopted in SafetyAnalyst, which require much less effort to develop compared to full SPFs.
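Two of the goodness-of-fit measures used in the comparison, the mean absolute deviance (MAD) and the mean square prediction error (MSPE), can be sketched as follows; the crash counts below are hypothetical, not Florida data:

```python
import numpy as np

def mad(obs, pred):
    """Mean absolute deviance between observed and SPF-predicted crash counts."""
    return float(np.mean(np.abs(np.asarray(obs, float) - np.asarray(pred, float))))

def mspe(obs, pred):
    """Mean square prediction error, computed on the validation set."""
    return float(np.mean((np.asarray(obs, float) - np.asarray(pred, float)) ** 2))

# Hypothetical observed crash counts vs. SPF predictions for four segments
obs = [4, 0, 2, 6]
pred = [3.0, 1.0, 2.0, 4.0]
```

Lower values of both measures indicate a better-fitting SPF; MSPE weights large prediction errors more heavily than MAD, which is why the two are usually reported together.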
Abstract:
We present an improved database of planktonic foraminiferal census counts from the Southern Hemisphere Oceans (SHO), spanning 15°S to 64°S. The SHO database combines 3 existing databases. Using it, we investigated dissolution biases that might affect faunal census counts. We suggest depth/Δ[CO3²⁻] thresholds of ~3800 m (Δ[CO3²⁻] ≈ -10 to -5 µmol/kg) for the Pacific and Indian Oceans and ~4000 m (Δ[CO3²⁻] ≈ 0 to 10 µmol/kg) for the Atlantic Ocean, beyond which core-top assemblages can be affected by dissolution and are less reliable for paleo-sea surface temperature (SST) reconstructions. We removed all core-tops beyond these thresholds from the SHO database. The resulting database has 598 core-tops and can reconstruct past SST variations from 2° to 25.5°C, with a root mean square error of 1.00°C for annual temperatures. To inspect how dissolution affects the quality of SST reconstructions, we tested the database with two "leave-one-out" tests, with and without the deep core-tops. We used this database to reconstruct summer SST (SSST) over the last 20 ka on the Southeast Pacific core MD07-3100, using the Modern Analog Technique, and compared the result with the SSSTs reconstructed using the 3 databases that were combined into the SHO database. The reconstruction using the SHO database proved more reliable, as its dissimilarity values are the lowest, underlining the importance of a bias-free, geographically rich database. We leave this dataset open to future additions; new core-tops must be carefully selected, with their chronological frameworks and evidence of dissolution assessed.
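The core of the Modern Analog Technique can be sketched as below: compare a fossil assemblage to every core-top assemblage with a dissimilarity measure (here the squared chord distance, commonly used for faunal relative abundances) and average the SSTs of the closest analogs. The three-taxon assemblages and SSTs are hypothetical toy values:

```python
import numpy as np

def squared_chord_distance(a, b):
    # Dissimilarity measure for relative-abundance (fractional) data
    return float(np.sum((np.sqrt(a) - np.sqrt(b)) ** 2))

def mat_sst(sample, assemblages, ssts, n_analogs=2):
    """Modern Analog Technique sketch: estimate SST as the mean SST
    of the n most similar core-top assemblages."""
    dists = np.array([squared_chord_distance(sample, a) for a in assemblages])
    best = np.argsort(dists)[:n_analogs]
    return float(np.mean(ssts[best]))

# Hypothetical relative-abundance assemblages (3 taxa) and their SSTs (degC)
assemblages = np.array([[0.5, 0.3, 0.2],
                        [0.4, 0.4, 0.2],
                        [0.1, 0.1, 0.8]])
ssts = np.array([10.0, 12.0, 25.0])
sample = np.array([0.5, 0.3, 0.2])
```

A leave-one-out test of the kind described above simply applies `mat_sst` to each core-top in turn, with that core-top excluded from `assemblages`, and accumulates the prediction errors into an RMSE.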
Abstract:
Modern industrial activity has been contaminating water with phenolic compounds. These are toxic and carcinogenic substances, and it is essential to reduce their concentration in water to the tolerable level determined by CONAMA in order to protect living organisms. In this context, this work focuses on the treatment and characterization of catalysts derived from bio-char, a by-product of biomass pyrolysis (avelós and wood dust), as well as their evaluation in the photocatalytic degradation of phenol. Assays were carried out in a slurry bed reactor, which enables instantaneous measurements of temperature, pH and dissolved oxygen. The experiments were performed under the following operating conditions: temperature of 50 °C, oxygen flow of 410 mL min-1, reagent solution volume of 3.2 L, 400 W UV lamp, 1 atm pressure, and a 2-hour run. The parameters evaluated were pH (3.0, 6.9 and 10.7), initial concentration of commercial phenol (250, 500 and 1000 ppm), catalyst concentration (0, 1, 2 and 3 g L-1), and nature of the catalyst (CAADCM, activated avelós carbon washed with dichloromethane, and CMADCM, activated wood dust carbon washed with dichloromethane). The XRF, XRD and BET results confirmed the presence of iron and potassium in satisfactory amounts in the CAADCM catalyst and in reduced amounts in the CMADCM catalyst, as well as the increase in surface area of the materials after chemical and physical activation. The phenol degradation curves indicate that pH has a significant effect on phenol conversion, with better results at lower pH. The optimum catalyst concentration was found to be 1 g L-1, and increasing the initial phenol concentration exerts a negative influence on the reaction.
A positive effect of the presence of iron and potassium in the catalyst structure was also observed: better conversions were obtained in tests conducted with the CAADCM catalyst than with the CMADCM catalyst under the same conditions. The highest conversion was achieved in the test carried out at acid pH (3.0), with an initial phenol concentration of 250 ppm, in the presence of CAADCM at 1 g L-1. Liquid samples taken every 15 minutes were analyzed by liquid chromatography, identifying and quantifying hydroquinone, p-benzoquinone, catechol and maleic acid. Finally, a reaction mechanism is proposed, assuming that phenol is transformed in the homogeneous phase while the other species react on the catalyst surface. Applying the Langmuir-Hinshelwood model together with a mass balance, a system of differential equations was obtained and solved using the 4th-order Runge-Kutta method coupled with an optimization routine called SWARM (particle swarm) that minimizes a least-squares objective function to obtain the kinetic and adsorption parameters. The kinetic rate constants obtained were of magnitude 10-3 for phenol degradation, 10-4 to 10-2 for acid formation, 10-6 to 10-9 for the mineralization of quinones (hydroquinone, p-benzoquinone and catechol), and 10-3 to 10-2 for the mineralization of acids.
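The 4th-order Runge-Kutta scheme used to integrate the kinetic system can be sketched as below. The reaction network here is deliberately simplified to a hypothetical first-order chain (phenol to intermediates to mineralized products) with illustrative rate constants of the magnitudes reported above; it is not the authors' full Langmuir-Hinshelwood model:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Hypothetical first-order chain: phenol -> intermediates -> mineralized
k_deg, k_min = 1.0e-3, 5.0e-4   # s^-1, illustrative magnitudes only

def rhs(t, y):
    phenol, inter = y
    return np.array([-k_deg * phenol,
                     k_deg * phenol - k_min * inter])

y = np.array([250.0, 0.0])      # initial concentrations, ppm
h = 1.0                          # step size, s
for _ in range(3600):            # one hour of simulated reaction
    y = rk4_step(rhs, 0.0, y, h)
```

In a parameter-fitting loop of the kind described above, a particle swarm routine would repeatedly run this integration with trial rate constants and score each trial by the least-squares mismatch against the chromatography measurements.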
Abstract:
This thesis begins by studying the thickness of evaporative spin-coated colloidal crystals and demonstrates the variation of the thickness as a function of suspension concentration and spin rate. In particular, the films are thicker at higher suspension concentrations and lower spin rates. This study also provides evidence for the reproducibility of spin coating in terms of the thickness of the resulting colloidal films. These colloidal films, as well as those obtained from various other methods such as convective assembly and dip coating, usually possess a crystalline structure. Given the lack of a comprehensive method for characterizing order in colloidal structures, a procedure is developed for such a characterization in terms of local and longer-range translational and orientational order. Translational measures turn out to be adequate for characterizing small deviations from perfect order, while orientational measures are more informative for polycrystalline and highly disordered crystals. Finally, to understand the relationship between dynamics and structure, the dynamics of colloids in a quasi-2D suspension is studied as a function of packing fraction. The tools used are the mean square displacement (MSD) and the self part of the van Hove function. A slowing down of the dynamics is observed as the packing fraction increases, accompanied by the emergence of 6-fold symmetry within the system. The dynamics turns out to be non-Gaussian at early times and Gaussian at later times for packing fractions below 0.6; above this packing fraction, the dynamics is non-Gaussian at all times. The diffusion coefficient, calculated from the MSD and the van Hove function, decreases as the packing fraction is increased.
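The MSD analysis described above can be sketched as follows, averaging squared displacements over particles and time origins for each lag time; the trajectory below is a hypothetical single particle moving ballistically, chosen so the expected MSD is easy to verify. (In 2D, the long-time diffusion coefficient then follows from MSD ≈ 4Dτ.)

```python
import numpy as np

def mean_square_displacement(traj):
    """MSD(lag) averaged over particles and time origins.
    traj: array of shape (frames, particles, 2) for a quasi-2D suspension."""
    frames = traj.shape[0]
    msd = np.zeros(frames - 1)
    for lag in range(1, frames):
        disp = traj[lag:] - traj[:-lag]          # displacements at this lag
        msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=-1))
    return msd

# One particle moving ballistically along x: MSD grows as lag^2 here
traj = np.zeros((4, 1, 2))
traj[:, 0, 0] = [0.0, 1.0, 2.0, 3.0]
```

For diffusive motion the same function yields an MSD linear in the lag, which is how the slowdown with packing fraction shows up in practice.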
Abstract:
In this thesis, research on tsunami remote sensing using Global Navigation Satellite System-Reflectometry (GNSS-R) delay-Doppler maps (DDMs) is presented. Firstly, a process for simulating GNSS-R DDMs of a tsunami-dominated sea surface is described. In this method, the bistatic scattering Zavorotny-Voronovich (Z-V) model, the sea surface mean square slope model of Cox and Munk, and the tsunami-induced wind perturbation model are employed. The feasibility of the Cox and Munk model under a tsunami scenario is examined by comparing the Cox and Munk model-based scattering coefficient with the Jason-1 measurement. A good consistency between these two results is obtained, with a correlation coefficient of 0.93. After confirming the applicability of the Cox and Munk model for a tsunami-dominated sea, this work provides simulations of the scattering coefficient distribution and the corresponding DDMs of a fixed region of interest before and during the tsunami. Furthermore, by subtracting the simulation results that are free of tsunami from those with the tsunami present, the tsunami-induced variations in scattering coefficients and DDMs can be clearly observed. Secondly, a scheme to detect tsunamis and estimate tsunami parameters from such tsunami-dominant sea surface DDMs is developed. As a first step, a procedure to determine tsunami-induced sea surface height anomalies (SSHAs) from DDMs is demonstrated and a tsunami detection precept is proposed. Subsequently, the tsunami parameters (wave amplitude, direction and speed of propagation, wavelength, and the tsunami source location) are estimated based upon the detected tsunami-induced SSHAs. In application, the sea surface scattering coefficients are unambiguously retrieved by employing the spatial integration approach (SIA) and the dual-antenna technique. Next, the effective wind speed distribution can be restored from the scattering coefficients.
Assuming all DDMs are of a tsunami-dominated sea surface, the tsunami-induced SSHAs can be derived with knowledge of the background wind speed distribution. In addition, the SSHA distribution resulting from the tsunami-free DDM (which is supposed to be zero) is treated as an error map introduced during the overall retrieval stage and is utilized to prevent such errors from influencing subsequent SSHA results. In particular, a tsunami detection procedure is conducted to judge whether the SSHAs are truly tsunami-induced through a fitting process, which makes it possible to decrease the false alarm rate. After this step, tsunami parameter estimation proceeds based upon the fitted results of the former tsunami detection procedure. Moreover, an additional method is proposed for estimating tsunami propagation velocity and is believed to be more desirable in real-world scenarios. The above-mentioned tsunami-dominated sea surface DDM simulation, tsunami detection precept and parameter estimation have been tested with simulated data based on the 2004 Sumatra-Andaman tsunami event.
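The Cox and Munk mean square slope model referenced above relates sea surface roughness to the 10 m wind speed; a common clean-surface form of their empirical fit for the total mean square slope is sketched below. The 10% tsunami-induced wind perturbation is a hypothetical illustration, not a value from the thesis:

```python
def cox_munk_total_mss(wind_speed_10m):
    """Total mean square slope of a clean sea surface, after the
    Cox & Munk (1954) empirical fit: mss ~ 0.003 + 5.12e-3 * U10,
    with U10 the 10 m wind speed in m/s."""
    return 0.003 + 5.12e-3 * wind_speed_10m

# A tsunami-induced wind perturbation modulates the local roughness,
# which in turn modulates the scattering coefficient seen in the DDM
u10 = 8.0                       # background wind, m/s
perturbed = u10 * 1.1           # hypothetical 10% tsunami-induced increase
delta_mss = cox_munk_total_mss(perturbed) - cox_munk_total_mss(u10)
```

Because the scattering coefficient in the Z-V model depends on this slope statistic, a wind perturbation of a few percent maps directly into a detectable DDM change.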
Abstract:
Based on the quantitative analysis of diatom assemblages preserved in 274 surface sediment samples recovered in the Pacific, Atlantic and western Indian sectors of the Southern Ocean, we have defined a new reference database for the quantitative estimation of late-middle Pleistocene Antarctic sea ice fields using the transfer function technique. Detrended Canonical Analysis (DCA) of the diatom data set points to a unimodal distribution of the diatom assemblages. Canonical Correspondence Analysis (CCA) indicates that winter sea ice (WSI), but also summer sea surface temperature (SSST), represent the most prominent environmental variables controlling the spatial species distribution. To test the applicability of transfer functions for sea ice reconstruction in terms of concentration and occurrence probability, we applied four different methods, the Imbrie and Kipp Method (IKM), the Modern Analog Technique (MAT), Weighted Averaging (WA), and Weighted Averaging Partial Least Squares (WAPLS), using logarithm-transformed diatom data and satellite-derived (1981-2010) sea ice data as a reference. The best performance for IKM was obtained using a subset of 172 samples with 28 diatom taxa/taxa groups, quadratic regression and a three-factor model (IKM-D172/28/3q), resulting in root mean square errors of prediction (RMSEP) of 7.27% and 11.4% for WSI and summer sea ice (SSI) concentration, respectively. MAT estimates were calculated with different numbers of analogs (4, 6) using a 274-sample/28-taxa reference data set (MAT-D274/28/4an, -6an), resulting in RMSEPs ranging from 5.52% (4an) to 5.91% (6an) for WSI and from 8.93% (4an) to 9.05% (6an) for SSI. WA and WAPLS performed less well with the D274 data set compared to MAT, achieving WSI concentration RMSEPs of 9.91% with WA and 11.29% with WAPLS, which recommends the use of IKM and MAT.
The application of IKM and MAT to the surface sediment data revealed strong relations to the satellite-derived winter and summer sea ice fields. Sea ice reconstructions performed on an Atlantic and a Pacific Southern Ocean sediment core, both documenting sea ice variability over the past 150,000 years (MIS 1 - MIS 6), resulted in similar glacial/interglacial trends of the IKM- and MAT-based sea-ice estimates. On average, however, IKM estimates display smaller WSI and slightly higher SSI concentrations and probabilities, at lower variability, in comparison with MAT. This pattern results from the different estimation techniques: IKM integrates the WSI and SSI signals into a single factor assemblage, whereas MAT selects specific individual samples and thus stays closer to the original diatom database and the variability it contains. In contrast to the estimation of WSI, reconstructions of past SSI variability remain weaker. Combined with the diatom-based estimates, the abundance and flux patterns of biogenic opal represent an additional indication of WSI and SSI extent.
Abstract:
Biodiesel is a renewable fuel derived from vegetable oils or animal fats that can totally or partially substitute for diesel. In 2005, this fuel was introduced into the Brazilian energy matrix through Law 11.097, which determines the percentage of biodiesel added to diesel oil and monitors the insertion of this fuel into the market. The National Agency of Petroleum, Natural Gas and Biofuels (ANP) establishes the obligation to add 7% (v/v) of biodiesel to the diesel commercialized in the country, making analytical control of this content crucial. Therefore, in this study, methodologies based on Mid-Infrared Spectroscopy (MIR) and multivariate calibration by Partial Least Squares (PLS) were developed and validated to quantify the content of methyl and ethyl biodiesel from cotton and jatropha in binary blends with diesel over the concentration range from 1.00 to 30.00% (v/v), the range specified in standard ABNT NBR 15568. The biodiesels were produced via two routes, using ethanol or methanol, and evaluated according to the following parameters: oxidative stability, water content, kinematic viscosity and density, presenting results in accordance with ANP Resolution No. 45/2014. The PLS models built were validated on the basis of ASTM E1655-05 for Infrared Spectroscopy and Multivariate Calibration and of ABNT NBR 15568, with satisfactory results: RMSEP (Root Mean Square Error of Prediction) values below 0.08% (<0.1%), correlation coefficients (R) above 0.9997, and absence of systematic error (bias). The methodologies developed can therefore be a promising alternative for the quality control of this fuel.
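The two validation figures quoted above, RMSEP and bias, can be sketched as follows; the reference and predicted biodiesel contents are hypothetical examples spanning the 1.00 to 30.00% (v/v) range, not the study's data:

```python
import numpy as np

def rmsep(reference, predicted):
    """Root mean square error of prediction on the validation set."""
    r, p = np.asarray(reference, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((p - r) ** 2)))

def bias(reference, predicted):
    """Mean signed error; a value near zero indicates no systematic error."""
    r, p = np.asarray(reference, float), np.asarray(predicted, float)
    return float(np.mean(p - r))

# Hypothetical biodiesel contents, % (v/v): reference vs. PLS-predicted
ref = [1.00, 5.00, 10.00, 20.00, 30.00]
pred = [1.02, 4.96, 10.05, 19.98, 30.03]
```

A model passing the acceptance threshold reported above would show `rmsep(ref, pred) < 0.08` together with a bias statistically indistinguishable from zero.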
Abstract:
Magnetic field inhomogeneity results in image artifacts, including signal loss, image blurring and distortions, leading to decreased diagnostic accuracy. The conventional multi-coil (MC) shimming method employs both RF coils and shimming coils, whose mutual interference induces a tradeoff between the RF signal-to-noise ratio (SNR) and shimming performance. To address this issue, RF coils were integrated with direct-current (DC) shim coils to shim the field inhomogeneity while concurrently transmitting and receiving the RF signal without being blocked by the shim coils. The currents applied to the new coils, termed iPRES (integrated parallel reception, excitation and shimming), were optimized in numerical simulation to improve the shimming performance. The objective of this work is to offer a guideline for designing optimal iPRES coil arrays to shim the abdomen.
In this thesis work, the main field (B0) inhomogeneity was evaluated by the root mean square error (RMSE). To investigate the shimming abilities of iPRES coil arrays, a set of human abdomen MRI data was collected for the numerical simulations. Thereafter, different simplified iPRES(N) coil arrays were numerically modeled, including a 1-channel iPRES coil and 8-channel iPRES coil arrays. For the 8-channel iPRES coil arrays, each RF coil was split into smaller DC loops in the x, y and z directions to provide extra shimming freedom. Additionally, the number of DC loops per RF coil was increased from 1 to 5 to find the optimal number of divisions in the z direction. Furthermore, switches were numerically implemented in the iPRES coils to reduce the number of power supplies while still providing shimming performance similar to that of equivalent iPRES coil arrays.
The optimizations demonstrate that the shimming ability of an iPRES coil array increases with the number of DC loops per RF coil. Furthermore, divisions in the z direction tend to be more effective in reducing field inhomogeneity than divisions in x and y. Moreover, the shimming performance of an iPRES coil array gradually reaches a saturation level when the number of DC loops per RF coil is large enough. Finally, when switches were numerically implemented in the iPRES(4) coil array, the number of power supplies could be reduced from 32 to 8 while keeping the shimming performance similar to iPRES(3) and better than iPRES(1). This thesis work offers guidance for the design of iPRES coil arrays.
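The current optimization described above is, at its core, a least-squares problem: choose DC loop currents so that the superposed loop fields cancel the measured B0 deviation, then score the residual by RMSE. A minimal numpy sketch follows; the function name, the two-loop basis, and the three-voxel field map are hypothetical toys, not the thesis's simulation setup:

```python
import numpy as np

def optimize_shim_currents(field_map, basis_fields):
    """Least-squares DC currents that best cancel the measured B0 offset.
    field_map: (voxels,) B0 deviation; basis_fields: (voxels, loops),
    the field produced per unit current in each DC loop."""
    currents, *_ = np.linalg.lstsq(basis_fields, -field_map, rcond=None)
    residual = field_map + basis_fields @ currents
    rmse = float(np.sqrt(np.mean(residual ** 2)))
    return currents, rmse

# Toy example: two loops whose unit fields happen to span the
# inhomogeneity exactly, so the residual RMSE drops to zero
basis = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
b0 = np.array([2.0, -1.0, 1.0])
currents, rmse = optimize_shim_currents(b0, basis)
```

Splitting each RF coil into more DC loops enlarges the column space of `basis_fields`, which is exactly why the simulated shimming ability grows with the number of loops until it saturates.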
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Because of its vast extent, the Canadian North presents several logistical challenges for profitable exploitation of its mineral resources. Remote Predictive Mapping (TéléCartographie Prédictive, TCP) aims to facilitate the location of mineral deposits by producing maps of geological potential. Altimetric data are required to generate these maps, but those currently available north of the 60th parallel are not optimal, mainly because they are derived from contour lines of variable equidistance with values rounded to the metre. At the same time, knowing the vertical accuracy of altimetric data is essential to using them appropriately, within the constraints imposed by that accuracy. The project presented here addresses these two issues in order to improve the quality of altimetric data and help refine the predictive mapping carried out by TCP in the Canadian North, for a study area located in the Northwest Territories. The first objective was to produce control points allowing a precise assessment of the vertical accuracy of altimetric data. The second objective was to produce an improved elevation model for the study area. The thesis first presents a filtering method for Global Land and Surface Altimetry Data (GLA14) from the ICESat (Ice, Cloud and land Elevation Satellite) mission. The filtering is based on a series of indicators computed from information available in the GLA14 data and from terrain conditions. These indicators eliminate potentially contaminated elevation points. Points are thus filtered according to the quality of the computed attitude, signal saturation, equipment noise, atmospheric conditions, slope, and the number of echoes.
Next, the document describes a method for producing improved Digital Surface Models (DSMs) by stereo-radargrammetry (SRG) with Radarsat-2 (RS-2). The first part of the methodology consists of the stereo-restitution of DSMs from pairs of RS-2 images, without control points. The accuracy of the preliminary DSMs thus produced is computed from the control points resulting from the GLA14 filtering and analyzed as a function of the combinations of incidence angles used for stereo-restitution. Selections of preliminary DSMs are then assembled to produce 5 DSMs, each covering the entire study area. These DSMs are analyzed to identify the optimal selection for the area of interest. The indicators selected for the filtering method were validated as effective and complementary, except for the signal-to-noise ratio indicator, which was redundant with the gain-based indicator. Otherwise, each indicator filtered points exclusively. The filtering method reduced the root mean square error on elevation by 19% when compared with the Canadian Digital Elevation Data (DNEC). Despite a 69% rejection rate after filtering, the initial density of the GLA14 data preserved a homogeneous spatial distribution. From the 136 preliminary DSMs analyzed, no combination of incidence angles of the acquired RS-2 images could be identified as ideal for SRG, owing to the great variability of the vertical accuracies. However, the analysis indicated that images should ideally be acquired at temperatures below 0°C to minimize radiometric disparities between scenes. The results also confirmed that slope is the main factor influencing the accuracy of DSMs produced by SRG.
The best vertical accuracy, 4 m, was achieved by assembling configurations with the same look direction. However, opposite-look configurations, besides producing an accuracy of the same order (5 m), reduced the number of images used by 30% relative to the number initially acquired. Consequently, using opposite-look images could increase the efficiency of SRG projects by shortening the acquisition period. The altimetric data produced could in turn help improve TCP results, increase the performance of the Canadian mining industry and, ultimately, improve the quality of life of the citizens of Canada's North.
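The indicator-based filtering of GLA14 elevation points can be sketched as a set of boolean masks combined with logical AND; every field name and threshold below is a hypothetical illustration (the real GLA14 record layout and the thesis's thresholds differ):

```python
import numpy as np

# Hypothetical per-shot quality fields (names and thresholds illustrative)
points = np.array(
    [  # elevation, gain, saturation_flag, cloud_flag, slope_deg, n_echoes
        (312.4,  50, 0, 0,  3.0, 1),
        (298.1, 250, 1, 0,  2.0, 1),   # saturated signal -> rejected
        (305.7,  60, 0, 1,  4.0, 1),   # cloud-contaminated -> rejected
        (330.2,  55, 0, 0, 25.0, 1),   # steep slope -> rejected
        (301.9,  45, 0, 0,  1.5, 2),   # multiple echoes -> rejected
    ],
    dtype=[("elev", "f8"), ("gain", "i4"), ("sat", "i4"),
           ("cloud", "i4"), ("slope", "f8"), ("echoes", "i4")],
)

keep = (
    (points["gain"] < 200)      # equipment-noise / gain indicator
    & (points["sat"] == 0)      # signal saturation indicator
    & (points["cloud"] == 0)    # atmospheric-conditions indicator
    & (points["slope"] < 20.0)  # terrain-slope indicator
    & (points["echoes"] == 1)   # single-return indicator
)
control_points = points[keep]
```

Because each mask rejects points the others do not, the indicators act complementarily, which mirrors the exclusivity observed in the thesis's validation of the filter.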
Abstract:
The oceans represent one of the greatest natural resources, holding significant energy potential capable of supplying part of the world's energy demand. In recent decades, several devices for converting ocean wave energy into electricity have been studied. In the present work, the operating principle of the Oscillating Water Column (OWC) converter was analyzed numerically. Waves incident on the hydro-pneumatic chamber of the OWC drive an alternating motion of the water column inside the chamber, which produces an alternating air flow through the chimney. This air flow drives a turbine, which transmits energy to an electric generator. The aim of this study was to investigate the influence of different chamber geometries on the resulting air flow through the turbine, which affects device performance. To that end, different converter geometries were analyzed using 2D and 3D computational models. A computational model developed in the GAMBIT and FLUENT software was used, in which the OWC converter was coupled to a wave tank. The Volume of Fluid (VOF) method and second-order Stokes theory were used to generate regular waves, allowing a more realistic interaction between water, air and the OWC converter. The Finite Volume Method (FVM) was used to discretize the governing equations. In this work, Constructal Design (based on Constructal Theory) was applied for the first time in three-dimensional numerical studies of an OWC in order to find the geometry that most favors device performance. The objective function was the maximization of the air mass flow rate through the chimney of the OWC device, evaluated via its root mean square (RMS). The results indicated that the geometric shape of the chamber influences the conversion of wave energy into electricity.
The chamber geometries that presented the largest wave-incidence face area (at constant height) also yielded the best OWC converter performance. The best geometry among the cases of this study improved device performance by about 30%.
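Because the air flow through the chimney alternates in sign over a wave period, its time average is near zero; the RMS used as the objective function above captures its effective magnitude instead. A minimal sketch, with a hypothetical sinusoidal mass flow signal:

```python
import numpy as np

def rms(series):
    """Root mean square of an alternating signal, e.g. the air mass
    flow rate through the OWC chimney over a wave period."""
    x = np.asarray(series, dtype=float)
    return float(np.sqrt(np.mean(x ** 2)))

# Hypothetical alternating mass flow rate (kg/s) sampled over one period
t = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)
mass_flow = 1.5 * np.sin(t)
```

For a pure sinusoid of amplitude A the RMS is A/sqrt(2), so maximizing the RMS is equivalent to maximizing the amplitude of the alternating flow delivered to the turbine.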
Abstract:
Considering the social and economic importance of milk, the objective of this study was to evaluate the incidence of antimicrobial residues in this food and to quantify them. The samples were collected in dairy industries of southwestern Paraná state, covering all ten municipalities in the region of Pato Branco. The work focused on the development of appropriate models for the identification and quantification of the analytes tetracycline, sulfamethazine, sulfadimethoxine, chloramphenicol and ampicillin, all antimicrobials of health interest. For the calibration and validation of the models, Fourier-transform infrared spectroscopy was used, associated with a chemometric method based on Partial Least Squares (PLS) regression. To prepare the antimicrobial working solutions, the five analytes of interest were used in increasing doses: tetracycline from 0 to 0.60 ppm, sulfamethazine from 0 to 0.12 ppm, sulfadimethoxine from 0 to 2.40 ppm, chloramphenicol from 0 to 1.20 ppm, and ampicillin from 0 to 1.80 ppm, enabling multiresidue analysis. The performance of the models constructed was evaluated through the figures of merit: mean square errors of calibration and cross-validation, correlation coefficients, and the offset performance ratio. For the purposes of this work, the models generated for tetracycline, sulfadimethoxine and chloramphenicol were considered viable, with the greatest predictive power and efficiency, and were then employed to evaluate the quality of raw milk from the region of Pato Branco. Among the samples analyzed by NIR, 70% were in conformity with sanitary legislation, and 5% of these samples had concentrations below the permitted Maximum Residue Limit, which is also satisfactory.
However, 30% of the sample set showed unsatisfactory results when evaluated for contamination with antimicrobial residues, a nonconformity related to the presence of antimicrobials of unauthorized use or to concentrations above the permitted limits. From this work it can be concluded that laboratory testing in the food area using infrared spectroscopy with multivariate calibration performed well, with fast analyses, reduced costs and minimal generation of laboratory waste. Thus, the proposed alternative method meets the quality concerns and the efficiency desired by industrial sectors and by society in general.