853 results for heterogeneous regressions algorithms
Abstract:
The effect of different heterogeneous catalysts on the microwave-assisted transesterification of sunflower oil for the production of methyl biodiesel in a monomode microwave reactor is described. The experiments were carried out at 70 °C with a 16:1 methanol:sunflower oil molar ratio and different heterogeneous basic and acidic catalysts. The results showed that the microwave-heated reactions occur up to four times faster than those carried out with conventional heating. The reactions were performed with 24 catalysts; pure calcium oxide (CaO) and potassium carbonate, either pure or supported on alumina (K2CO3/Al2O3), were the most efficient catalysts.
Abstract:
Among the challenges of pig farming in today's competitive market is product traceability, which ensures, among many other things, animal welfare. Vocalization is a valuable tool to identify situations of stress in pigs, and it can be used in welfare records for traceability. The objective of this work was to identify stress in piglets using vocalization, classifying the stress into three levels: no stress, moderate stress, and acute stress. An experiment was conducted on a commercial farm in the municipality of Holambra, São Paulo State, where vocalizations of twenty piglets were recorded during the castration procedure, separated into two groups: without anesthesia and with local anesthesia with lidocaine base. For the recording of acoustic signals, a unidirectional microphone was connected to a digital recorder, and the signals were digitized at a frequency of 44,100 Hz. For evaluation of the sound signals, Praat® software was used, and different data mining algorithms were applied using Weka® software. The selection of attributes improved model accuracy, with the best attribute selection obtained by applying the Wrapper method, while the best classification algorithms were k-NN and Naive Bayes. According to the results, it was possible to classify the level of stress in pigs through their vocalization.
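The abstract does not list the acoustic attributes that were extracted; as a rough illustration of the k-NN classifier that performed best, here is a minimal sketch in which the (pitch, intensity) feature values and labels are entirely hypothetical:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (Euclidean distance).
    `train` is a list of (feature_vector, label) pairs."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical (pitch Hz, intensity dB) features for three stress levels.
train = [
    ((400.0, 60.0), "no_stress"),
    ((420.0, 62.0), "no_stress"),
    ((900.0, 75.0), "moderate"),
    ((950.0, 78.0), "moderate"),
    ((1800.0, 90.0), "acute"),
    ((1900.0, 92.0), "acute"),
]

print(knn_classify(train, (930.0, 76.0)))  # prints "moderate"
```

In the study itself, Weka's k-NN implementation would operate on the attributes selected by the Wrapper method rather than this two-feature toy space.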
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proven to be an efficient communication architecture which can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits including higher integration density, smaller footprint, heterogeneous integration, etc. Moreover, 3D technology can significantly improve network communication and effectively avoid long wirings, and therefore provide higher system performance and energy efficiency. With the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach where agents are on-chip components which monitor and control system parameters such as supply voltage, operating frequency, etc. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy saving purposes.
A Honeycomb NoC architecture is proposed in this thesis with turn-model based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance, and energy efficiency.
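The Honeycomb routing algorithms themselves are not given in this abstract; as a baseline illustration of what turn-model routing means on the Mesh counterpart, here is a sketch of dimension-ordered (XY) routing, whose deadlock freedom comes from forbidding all Y-to-X turns:

```python
def xy_route(src, dst):
    """Dimension-ordered (XY) routing on a 2D mesh NoC: route fully
    along X first, then along Y. Forbidding every Y-to-X turn removes
    all cycles from the channel-dependency graph, so the routing is
    deadlock-free."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                      # X dimension first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                      # then Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))
# [(0, 0), (1, 0), (2, 0), (2, 1)]
```

The thesis's Honeycomb algorithms apply the same turn-restriction principle to a hexagonal topology; this mesh version is only the well-known reference point.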
Abstract:
Global illumination algorithms are at the center of realistic image synthesis and account for non-trivial light transport and occlusion within scenes, such as indirect illumination, ambient occlusion, and environment lighting. Their computationally most difficult part is determining light source visibility at each visible scene point. Height fields, on the other hand, constitute an important special case of geometry and are mainly used to describe certain types of objects such as terrains and to map detailed geometry onto object surfaces. The geometry of an entire scene can also be approximated by treating the distance values of its camera projection as a screen-space height field. In order to shadow height fields from environment lights a horizon map is usually used to occlude incident light. We reduce the per-receiver time complexity of generating the horizon map on N×N height fields from O(N) of the previous work to O(1) by using an algorithm that incrementally traverses the height field and reuses the information already gathered along the path of traversal. We also propose an accurate method to integrate the incident light within the limits given by the horizon map. Indirect illumination in height fields requires information about which other points are visible to each height field point. We present an algorithm to determine this intervisibility in a time complexity that matches the space complexity of the produced visibility information, which is in contrast to previous methods which scale in the height field size. As a result the amount of computation is reduced by two orders of magnitude in common use cases. Screen-space ambient obscurance methods approximate ambient obscurance from the depth buffer geometry and have been widely adopted by contemporary real-time applications.
They work by sampling the screen-space geometry around each receiver point but have been previously limited to near-field effects because sampling a large radius quickly exceeds the render time budget. We present an algorithm that reduces the quadratic per-pixel complexity of previous methods to a linear complexity by line sweeping over the depth buffer and maintaining an internal representation of the processed geometry from which occluders can be efficiently queried. Another algorithm is presented to determine ambient obscurance from the entire depth buffer at each screen pixel. The algorithm scans the depth buffer in a quick pre-pass and locates important features in it, which are then used to evaluate the ambient obscurance integral accurately. We also propose an evaluation of the integral such that results within a few percent of the ray traced screen-space reference are obtained at real-time render times.
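The thesis's O(1) incremental traversal is beyond a short sketch, but a naive horizon computation for one row of a height field shows exactly what is being accelerated; the direct scan below costs O(N) per receiver:

```python
import math

def horizon_angles(h):
    """Naive horizon map for a 1D height field: for each sample, the
    elevation angle of the highest occluder to its left (unit sample
    spacing assumed). This direct scan is O(N) per receiver; the
    thesis reuses information along the sweep to reach O(1)."""
    out = []
    for i, hi in enumerate(h):
        best = -math.pi / 2  # completely open horizon
        for j in range(i):
            best = max(best, math.atan2(h[j] - hi, i - j))
        out.append(best)
    return out

# A small illustrative ridge: the peak at index 1 dominates the horizon
# of the flat samples behind it.
angles = horizon_angles([0.0, 3.0, 1.0, 1.0])
```

Incident environment light would then be integrated only above each receiver's horizon angle.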
Abstract:
In this research work, the aim was to experimentally investigate the volumetric mass transfer coefficient (kLa) of oxygen in a stirred tank in the presence of solid particles. kLa correlations as a function of propeller rotation speed and gas feed flow rate were studied. The absorption of O2 and CO2 in water and in solid-liquid suspensions and the heterogeneous precipitation of MgCO3 were thoroughly examined. The oxygen absorption experiments were conducted in various systems, such as pure water and aqueous suspensions of quartz and calcium carbonate particles. Secondly, the precipitation kinetics of magnesium carbonate was also investigated. The experiments were performed to study reactive crystallization with magnesium hydroxide slurry and carbon dioxide gas by varying the feed rate of carbon dioxide and the rotation speed of the mixer. The results of absorption and precipitation were evaluated by titration, total carbon (TC) analysis, and ion chromatography (IC). For calcium carbonate, the particle concentration was varied from 17.4 g to 2382 g with two size fractions: 5 µm and 45-63 µm. The kLa and P/V values for 17.4 g of CaCO3 with particle sizes of 5 µm and 45-63 µm were 0.016 s-1 and 2400 W/m3. At a CaCO3 concentration of 69.9 g, the achieved kLa was 0.014 s-1 with a particle size of 5 µm and 0.017 s-1 with a particle size of 45-63 µm. Further increases in the concentration of calcium carbonate, i.e. 870 g and 2382 g, did not affect the volumetric mass transfer coefficient of oxygen. It could be concluded from the absorption results that the maximum value of kLa was 0.016 s-1. Particle size and concentration also affect the transfer rate to some extent. For the precipitation experiments, the Mg(OH)2 concentration was kept constant at 100 g, the rotation speed varied from 560 to 750 rpm, and the CO2 feed rates used were 1 and 9 L/min.
At 560 rpm and a CO2 feed rate of 1 L/min, the maximum values of Mg ion and TC were 0.25 mol/L and 0.12 mol/L with a residence time of 40 min. When the CO2 flow rate was increased to 9 L/min at the same 560 rpm, the achieved Mg and TC values were 0.3 mol/L and 0.12 mol/L with a shorter residence time of 30 min. It is concluded that the CO2 feed rate is dominant in the precipitation experiments and has a key role in the dissociation and reaction of magnesium hydroxide in the precipitation of magnesium carbonate.
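The abstract does not state how kLa was extracted from the measurements; a common approach is the dynamic gassing-in method, sketched here with synthetic data (the first-order model and all numeric values are assumptions for illustration):

```python
import math

def estimate_kla(times, c, c_star):
    """Estimate kLa from dynamic oxygen absorption data: the gassing-in
    model dC/dt = kLa*(C* - C) gives ln(C* - C) = ln(C* - C0) - kLa*t,
    so -kLa is the least-squares slope of ln(C* - C) against t."""
    y = [math.log(c_star - ci) for ci in c]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(y) / n
    slope = sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y)) \
        / sum((t - tbar) ** 2 for t in times)
    return -slope

# Synthetic run generated with kLa = 0.016 s^-1 (the maximum reported above)
# and an assumed saturation concentration of 8 mg/L.
c_star, kla_true = 8.0, 0.016
times = [t * 10.0 for t in range(10)]
c = [c_star * (1 - math.exp(-kla_true * t)) for t in times]
print(round(estimate_kla(times, c, c_star), 3))  # 0.016
```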
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model where ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically by using Gaussian kernels. This allows application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first one is extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include identification of faults from seismic data and identification of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space.
The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
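The trust-region Newton projection itself is beyond a short sketch; as a simpler, related illustration, one-dimensional Gaussian mean shift finds maxima of a Gaussian kernel density estimate, of which the ridges discussed above are the generalization (the data values below are invented):

```python
import math

def mean_shift_1d(x, data, h=1.0, iters=50):
    """Gaussian mean shift: repeatedly move x to the kernel-weighted
    mean of the data. Fixed points are local maxima of the Gaussian
    kernel density estimate with bandwidth h."""
    for _ in range(iters):
        w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data]
        x = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return x

# Starting between a tight cluster at 0 and an outlier at 5, the
# iteration climbs to the density maximum near the cluster.
mode = mean_shift_1d(0.5, [0.0, 0.1, -0.1, 5.0])
```

In higher dimensions, a ridge point is found analogously by moving only within the subspace of low-curvature directions of the density, which is what the thesis's projection method makes efficient and provably convergent.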
Abstract:
The main objective of this research is to estimate and characterize heterogeneous mass transfer coefficients in bench- and pilot-scale fluidized bed processes by means of computational fluid dynamics (CFD). A further objective is to benchmark the heterogeneous mass transfer coefficients predicted by fine-grid Eulerian CFD simulations against empirical data presented in the scientific literature. First, a fine-grid two-dimensional Eulerian CFD model with a solid and a gas phase has been designed. The model is applied to transient two-dimensional simulations of char combustion in small-scale bubbling and turbulent fluidized beds. The same approach is used to simulate a novel fluidized bed energy conversion process developed for carbon capture: chemical looping combustion operated with a gaseous fuel. In order to analyze the results of the CFD simulations, two one-dimensional fluidized bed models have been formulated. The single-phase and bubble-emulsion models were applied to derive the average gas-bed and interphase mass transfer coefficients, respectively. In the analysis, the effects of various fluidized bed operation parameters, such as fluidization velocity, particle and bubble diameter, reactor size, and chemical kinetics, on the heterogeneous mass transfer coefficients in the lower fluidized bed are evaluated extensively. The analysis shows that the fine-grid Eulerian CFD model can predict the heterogeneous mass transfer coefficients quantitatively with acceptable accuracy. Qualitatively, the CFD-based research of fluidized bed processes revealed several new scientific results, such as parametrical relationships. The huge variance of seven orders of magnitude within the bed Sherwood numbers presented in the literature could be explained by the change of controlling mechanisms in the overall heterogeneous mass transfer process as process conditions vary.
The research opens new process-specific insights into reactive fluidized bed processes, such as a strong mass transfer control over the heterogeneous reaction rate, a dominance of interphase mass transfer in fine-particle fluidized beds, and a strong chemical kinetics dependence of the average gas-bed mass transfer. The obtained mass transfer coefficients can be applied in fluidized bed models used for various engineering design, reactor scale-up and process research tasks, and they consequently provide enhanced prediction accuracy of the performance of fluidized bed processes.
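For reference, the bed Sherwood number mentioned above is the dimensionless group Sh = k_c·d_p/D; a minimal sketch with entirely hypothetical values:

```python
def sherwood(k_c, d_p, diff):
    """Bed Sherwood number Sh = k_c * d_p / D: the ratio of convective
    to diffusive mass transfer for a particle of diameter d_p, mass
    transfer coefficient k_c, and gas diffusivity D. The seven-orders-
    of-magnitude spread in the literature reflects this group evaluated
    under very different controlling mechanisms."""
    return k_c * d_p / diff

# Hypothetical values: k_c = 0.05 m/s, a 500 µm particle, and a gas
# diffusivity of 2e-5 m^2/s.
print(round(sherwood(0.05, 500e-6, 2e-5), 2))  # 1.25
```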
Abstract:
Terpenes are a valuable natural resource for the production of fine chemicals. Turpentine, obtained from biomass and also as a side product of the softwood industry, is rich in monoterpenes such as α-pinene and β-pinene, which are widely used as raw materials in the synthesis of flavors, fragrances and pharmaceutical compounds. The rearrangement of their epoxides has been thoroughly studied in recent years as a method to obtain compounds which are further used in the fine chemical industry. The industrially most desired products of α-pinene oxide isomerization are campholenic aldehyde and trans-carveol. Campholenic aldehyde is an intermediate for the manufacture of sandalwood-like fragrances such as santalol. Trans-carveol is an expensive constituent of the Valencia orange essence oil used in perfume bases and food flavor compositions. Furthermore, it has been found to exhibit chemoprevention of mammary carcinogenesis. A wide range of iron- and ceria-supported catalysts were prepared, characterized and tested for α-pinene oxide isomerization aiming at the selective synthesis of the above-mentioned products. The highest catalytic activity in the preparation of campholenic aldehyde over iron-modified catalysts using toluene as a solvent at 70 °C (total conversion of α-pinene oxide with a selectivity of 66 % to the desired aldehyde) was achieved in the presence of Fe-MCM-41. Furthermore, the Fe-MCM-41 catalyst was successfully regenerated without deterioration of catalytic activity or selectivity. The most active catalysts in the synthesis of trans-carveol from α-pinene oxide over iron- and ceria-modified catalysts in N,N-dimethylacetamide as a solvent at 140 °C (total conversion of α-pinene oxide with a selectivity of 43 % to trans-carveol) were Fe-Beta-300 and Ce-Si-MCM-41. These catalysts were further tested for an analogous reaction, namely verbenol oxide isomerization.
Verbenone is another natural organic compound which can be found in a variety of plants or synthesized by allylic oxidation of α-pinene. An interesting product which is synthesized from verbenone is (1R,2R,6S)-3-methyl-6-(prop-1-en-2-yl)cyclohex-3-ene-1,2-diol. It has been discovered that this diol possesses potent anti-Parkinson activity. The most effective route to the desired diol starts from verbenone and includes three stages: epoxidation of verbenone to verbenone oxide, reduction of verbenone oxide, and subsequent isomerization of the obtained verbenol oxide, which is analogous to the isomerization of α-pinene oxide. In the research focused on the last step of this synthesis, a high selectivity (82 %) to the desired diol was achieved in the isomerization of verbenol oxide at a conversion level of 96 % in N,N-dimethylacetamide at 140 °C using the iron-modified zeolite Fe-Beta-300. This reaction displayed a surprisingly high selectivity that had not been achieved previously. The possibility of reusing the heterogeneous catalysts without activity loss was demonstrated.
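As a small worked check of the figures above, the overall yield of the diol is the product of conversion and selectivity:

```python
def product_yield(conversion, selectivity):
    """Overall yield of a desired product: the fraction of substrate
    converted times the fraction of converted substrate that ends up
    as the desired product."""
    return conversion * selectivity

# Figures reported above for verbenol oxide isomerization over Fe-Beta-300:
# 96 % conversion with 82 % selectivity to the diol.
y = product_yield(0.96, 0.82)
print(f"{y:.1%}")  # 78.7%
```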
Abstract:
Cancer cachexia causes disruption of lipid metabolism. Since it has been well established that the various adipose tissue depots demonstrate different responses to stimuli, we assessed the effect of cachexia on some biochemical and morphological parameters of adipocytes obtained from the mesenteric (MES), retroperitoneal (RPAT), and epididymal (EAT) adipose tissues of rats bearing Walker 256 carcinosarcoma, compared with controls. Relative weight and total fat content of tissues did not differ between tumor-bearing rats and controls, but fatty acid composition was modified by cachexia. Adipocyte dimensions were increased in MES and RPAT from tumor-bearing rats, but not in EAT, in relation to control. Ultrastructural alterations were observed in the adipocytes of tumor-bearing rat RPAT (membrane projections) and EAT (nuclear bodies).
Abstract:
Clinical decision support systems are useful tools for assisting physicians to diagnose complex illnesses. Schizophrenia is a complex, heterogeneous and incapacitating mental disorder that should be detected as early as possible to avoid more serious outcomes. These artificial intelligence systems might be useful in the early detection of schizophrenia. The objective of the present study was to describe the development of such a clinical decision support system for the diagnosis of schizophrenia spectrum disorders (SADDESQ). The development of this system is described in four stages: knowledge acquisition, knowledge organization, the development of a computer-assisted model, and the evaluation of the system's performance. The knowledge was extracted from an expert through open interviews. These interviews aimed to explore the expert's decision-making process for the diagnosis of schizophrenia. A graph methodology was employed to identify the elements involved in the reasoning process. Knowledge was first organized and modeled by means of algorithms and then transferred to a computational model created by the covering approach. The performance assessment involved the comparison of the diagnoses of 38 clinical vignettes between an expert and the SADDESQ. The results showed a relatively low rate of misclassification (18-34%) and a good performance by SADDESQ in the diagnosis of schizophrenia, with an accuracy of 66-82%. The accuracy was higher when schizophreniform disorder was considered as the presence of schizophrenia. Although these results are preliminary, the SADDESQ has exhibited a satisfactory performance, which needs to be further evaluated within a clinical setting.
Abstract:
We compared the cost-benefit of two algorithms, recently proposed by the Centers for Disease Control and Prevention, USA, with the conventional one to determine the most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of ELISA anti-HCV samples, using the s/co ratio that shows ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples, and IB for PCR-negative samples. For algorithm C, all ELISA-positive or -inconclusive samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, with 283 concordant among all three. Indeterminate results from algorithms A and C were elucidated by PCR (expanded algorithm), which detected two more positive samples. The estimated costs of algorithms A and B were US$21,299.39 and US$32,397.40, respectively, which were 43.5 and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one since it requires supplemental tests for only 54% of the samples. Algorithm B provides early information about the presence of viremia.
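The reported savings percentages can be verified directly from the stated costs; a minimal arithmetic sketch:

```python
def savings_vs(reference, cost):
    """Percentage saved relative to the reference algorithm's cost."""
    return 100.0 * (reference - cost) / reference

# Estimated costs reported above for the three diagnostic algorithms.
cost_a, cost_b, cost_c = 21299.39, 32397.40, 37673.79

print(round(savings_vs(cost_c, cost_a), 1))  # 43.5
print(round(savings_vs(cost_c, cost_b), 1))  # 14.0
```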