922 results for Traditional enrichment method
Abstract:
Fish sauce is a popular fermented product in South Asian countries, made from various small fish. In this research work it was attempted to produce fish sauce from kilka of the Caspian Sea. The fish sauce was made from three forms of kilka (whole kilka, cooked whole kilka, and dressed kilka), and each form was treated with four different fermentation methods: (1) traditional, (2) enzymatic, (3) microbial, and (4) a mixture of enzyme and microbe. The results of this investigation showed that the fermentation time was six months for the traditional method, one month for the enzymatic method, three months for the microbial method, and one month for the enzyme-microbe mixture. The rate of fermentation was lowest for dressed kilka. Microbial and biochemical changes of the kilka fish sauce were evaluated: the total bacterial count was 2.1-6.15 log cfu/ml, the total volatile nitrogen (TVN) recorded in the samples was 250 mg/100 g, and the protein content varied between 10 and 13 percent. The commercial enzymes added were Protamex and Flavourzyme, and the bacteria added were Lactobacillus and Pediococcus. The fish sauce contained fish and 20% salt, and fermentation was carried out at 37 °C for six months.
Abstract:
Obtaining genetic markers, whether for drug resistance or for auxotrophy, is a laborious but important step in genetic research. This work aimed at obtaining auxotrophic mutants of Trichoderma harzianum using the filtration-enrichment technique. The technique proved superior to the conventional total-isolation technique. Twelve auxotrophic mutants obtained were tested for stability, growth, and resistance to the fungicide benomyl. They showed growth and sporulation rates comparable to the parental strain, and two mutants were resistant to benomyl at a concentration of 500 µg/ml.
Abstract:
Abstract Background Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, requiring a change from traditional passive learning methodologies to an active, multisensory, experiential learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as a means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing pathology undergraduate students. Methods Students were randomized to one of the learning methods, and the data analyst was blinded to which method each student had received. Students' prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention, and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple-choice questionnaire. Students' performance was compared across the three moments of assessment, both for the mean total score and for separate mean scores for the Anatomy questions and the Physiology questions. Results Students who received the game-based method performed better in the post-test assessment only in the Anatomy questions section. Students who received the traditional lecture performed better in both the post-test and the long-term post-test in the Anatomy and Physiology questions. Conclusions The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective for improving students' short- and long-term knowledge retention.
Abstract:
The effect of a traditional Ethiopian lupin processing method on the chemical composition of lupin seed samples was studied. Two sampling districts, namely Mecha and Sekela, representing the mid- and high-altitude areas of north-western Ethiopia, respectively, were randomly selected. Different types of traditionally processed and marketed lupin seed samples (raw, roasted, and finished) were collected in six replications from each district. Raw samples are unprocessed, and roasted samples are roasted using firewood. Finished samples are those ready for human consumption as a snack. Thousand-seed weight for raw and roasted samples within a study district was similar (P > 0.05), but it was lower (P < 0.01) for finished samples compared to raw and roasted samples. The crude fibre content of the finished lupin seed sample from Mecha was lower (P < 0.01) than that of raw and roasted samples. However, the different lupin samples from Sekela had similar crude fibre content (P > 0.05). The crude protein and crude fat contents of finished samples within a study district were higher (P < 0.01) than those of raw and roasted samples, respectively. Roasting had no effect on the crude protein content of lupin seed samples. The crude ash content of raw and roasted lupin samples within a study district was higher (P < 0.01) than that of finished lupin samples of the respective study districts. The content of quinolizidine alkaloids of finished lupin samples was lower than that of raw and roasted samples. There was also an interaction effect between location and lupin sample type. The traditional processing method of lupin seeds in Ethiopia contributes positively to improving the crude protein and crude fat content and lowering the alkaloid content of the finished product.
The study showed the possibility of adopting the traditional processing method to process bitter white lupin for use as a protein supplement in livestock feed in Ethiopia, but further work has to be done on the processing method and on animal evaluation.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
For wind farm optimization with land belonging to different owners, the traditional penalty method is highly dependent on the type of wind farm land division, and its application can be cumbersome if the divisions are complex. To overcome this disadvantage, a new method is proposed in this paper for the first time. Unlike the penalty method, which requires the addition of a penalizing term when evaluating the fitness function, the new method repairs infeasible solutions before fitness evaluation. To assess the effectiveness of the proposed method on the optimization of wind farms, the optimization results of the different methods are compared for three different types of wind farm division. Different wind scenarios are also incorporated during optimization: (i) constant wind speed and wind direction; (ii) variable wind speed and wind direction; and (iii) the more realistic Weibull distribution. Results show that the performance of the new method varies for different land plots in the tested cases. Nevertheless, optimum, or at least close-to-optimum, results can be obtained with a sequential land-plot study using the new method in all cases. It is concluded that satisfactory results can be achieved using the proposed method. In addition, it has the advantage of flexibility in managing the wind farm design: it not only frees users from defining the penalty parameter but also imposes no limitations on the wind farm division.
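The repair idea can be sketched in a few lines; the plot geometry, the nearest-point snapping rule, and all names below are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical example: two rectangular land plots owned by different parties.
# A candidate turbine position outside every plot is "repaired" by snapping it
# to the closest feasible point before fitness evaluation, instead of adding a
# penalizing term to the fitness as in the traditional penalty method.
PLOTS = [(0.0, 0.0, 500.0, 500.0),     # (xmin, ymin, xmax, ymax) of plot 1
         (600.0, 0.0, 1000.0, 500.0)]  # plot 2

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def repair(x, y):
    """Return the feasible point closest to (x, y) across all plots."""
    best = None
    for xmin, ymin, xmax, ymax in PLOTS:
        cx, cy = clamp(x, xmin, xmax), clamp(y, ymin, ymax)
        d2 = (cx - x) ** 2 + (cy - y) ** 2
        if best is None or d2 < best[0]:
            best = (d2, cx, cy)
    return best[1], best[2]

# An infeasible position between the two plots is moved onto a plot boundary;
# a position already inside a plot is left unchanged.
print(repair(550.0, 250.0))  # -> (500.0, 250.0), tie broken toward plot 1
print(repair(100.0, 100.0))  # -> (100.0, 100.0)
```

The fitness function then only ever sees feasible layouts, which is why no penalty parameter needs to be tuned.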
Abstract:
A main method of predicting turbulent flows is to solve the LES equations, which is called the traditional LES method here. The traditional LES method resolves the motions of large eddies of size larger than the filtering scale Delta_n while modeling the unresolved scales smaller than Delta_n. Hughes et al. argued that many shortcomings of the traditional LES approaches were associated with their inability to successfully differentiate between large and small scales. One may expect that a priori scale separation would be better, because it can predict scale interaction well compared with a posteriori scale separation. To this end, a multi-scale method was suggested to perform scale-separated computation. The primary contents of the multiscale method are: 1) a space average is used to differentiate scales; 2) the basic equations include the large-scale equations and the fluctuation equations; 3) the large-scale equations and fluctuation equations are coupled through turbulent stress terms. We use the multiscale equations with n = 2, i.e., the large and small scale (LSS) equations, to simulate the 3-D evolution of a channel flow and a planar mixing layer flow. Some interesting results are given.
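For the n = 2 case, the decomposition described above can be written schematically in standard incompressible notation (a sketch of the generic large-scale/fluctuation split, not the authors' exact equations):

```latex
% Space-average split of the velocity field:  u_i = \bar{u}_i + u'_i
% Large-scale equations, coupled to the fluctuations through \tau_{ij}:
\partial_t \bar{u}_i + \partial_j(\bar{u}_i \bar{u}_j)
  = -\partial_i \bar{p} + \nu\,\partial_j\partial_j \bar{u}_i
    - \partial_j \tau_{ij},
\qquad
\tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j .
% The fluctuation (small-scale) equations follow by subtracting the
% large-scale equations from the full Navier--Stokes equations, so the
% two sets of equations exchange energy through \tau_{ij}.
```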
Abstract:
In this article, we report on an approach of using an emulsion-polymerized polymer in preparing organic-inorganic nanocomposites through a sol-gel technique. By mixing a polymer emulsion with prehydrolyzed tetraethoxysilane, transparent poly(butyl methacrylate)/SiO2 nanocomposites were prepared, as shown by TEM. AFM, FTIR, and XPS results show that there is a strong interaction between the polymer latex particles and the SiO2 network. Comparison of the emulsion method with a traditional solution method shows that nanocomposites can be prepared by both methods, but there is some difference in their morphology and properties.
Abstract:
Arsenic (As) contamination of rice plants can result in high total As concentrations (t-As) in cooked rice, especially if As-contaminated water is used for cooking. This study examines two variables: (1) the cooking method (water volume and inclusion of a washing step); and (2) the rice type (atab and boiled). Cooking water and raw atab and boiled rice contained 40 µg As l⁻¹ and 185 and 315 µg As kg⁻¹, respectively. In general, all cooking methods increased t-As above the levels in raw rice; however, raw boiled rice decreased its t-As by 12.7% when cooked by the traditional method, but increased it by 15.9% or 23.5% when cooked by the intermediate or contemporary methods, respectively. Based on the best possible scenario (the traditional cooking method, leading to the lowest level of contamination, and the atab rice type, with the lowest As content), the t-As daily intake was estimated to be 328 µg, which is twice the tolerable daily intake of 150 µg.
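The intake arithmetic behind such an estimate can be illustrated with a short calculation. The concentrations are taken from the abstract, but the daily consumption figures are hypothetical assumptions, so the result is deliberately not the study's 328 µg estimate:

```python
# Illustrative intake arithmetic (assumed consumption figures, not the study's):
# daily As intake = rice eaten * t-As in rice + water drunk * t-As in water.
RICE_T_AS = 185.0      # µg As per kg (raw atab rice, from the abstract)
WATER_T_AS = 40.0      # µg As per litre (water, from the abstract)
RICE_PER_DAY_KG = 0.5  # assumed daily raw-rice consumption (hypothetical)
WATER_PER_DAY_L = 3.0  # assumed daily water intake (hypothetical)

daily_intake = RICE_PER_DAY_KG * RICE_T_AS + WATER_PER_DAY_L * WATER_T_AS
tolerable = 150.0  # µg per day, tolerable daily intake cited in the abstract
print(f"Estimated daily As intake: {daily_intake} µg")
print("Exceeds tolerable daily intake:", daily_intake > tolerable)
```

Even with these conservative assumptions the estimate exceeds the 150 µg tolerable daily intake, consistent with the abstract's conclusion.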
Abstract:
Doctoral thesis, Chemistry, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2012
Abstract:
[EN] Brine shrimp nauplii (Artemia sp.) are used in aquaculture as the major food source for many cultured marine larvae, and adult Artemia are also fed to many juvenile and adult fish. One species, Artemia franciscana, is most commonly preferred, owing to the availability of its cysts and its ease of hatching and biomass production. The problem with A. franciscana is that its nutritional quality is relatively poor in essential fatty acids, so it is common practice to enrich it with emulsions such as SELCO and ORIGO. This "bioencapsulation" enrichment method permits the incorporation of different kinds of products into the artemia, whose non-selective particle-feeding habits make it particularly suitable for the enrichment process. The bioencapsulation is done just prior to feeding the artemia to a predator organism. This allows the delivery of different substances, not only for nutrient enrichment but also for changing pigmentation and administering medicine, which is especially useful in culturing ornamental seahorses and tropical fish in marine aquaria. In this study the objectives were to determine the relative nutrient value of ORIGO and SELCO, as well as the optimal exposure to these supplements prior to their use as food organisms.
Abstract:
The traditional Newton method for solving nonlinear operator equations in Banach spaces is discussed within the context of the continuous Newton method. This setting makes it possible to interpret the Newton method as a discrete dynamical system and thereby to cast it in the framework of an adaptive step size control procedure. In so doing, our goal is to reduce the chaotic behavior of the original method without losing its quadratic convergence property close to the roots. The performance of the modified scheme is illustrated with various examples from algebraic and differential equations.
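The general idea can be sketched as follows; the residual-based damping rule below is a common, simplified stand-in for the adaptive step size control described in the abstract, not the authors' scheme:

```python
# Newton iteration viewed as an Euler discretization of the continuous Newton
# flow  x'(t) = -f(x)/f'(x),  with the step size h shrunk until the residual
# decreases.  A full step h = 1 recovers the classical method, preserving its
# quadratic convergence close to the roots.
def damped_newton(f, df, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = fx / df(x)
        h = 1.0
        # Damping: halve the step while it fails to reduce |f| (this is what
        # tames the chaotic behavior far from the roots).
        while abs(f(x - h * step)) >= abs(fx) and h > 1e-10:
            h *= 0.5
        x -= h * step
    return x

# Example: the real root of x^3 - 2x - 5 = 0.
root = damped_newton(lambda x: x**3 - 2 * x - 5, lambda x: 3 * x**2 - 2, 2.0)
print(root)  # ~ 2.0945514815
```

Scalar here for brevity; in a Banach-space setting `step` would be the solution of the linearized operator equation at `x`.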
Abstract:
The traditional searching method for model-order selection in linear regression is a nested, full-parameter-set searching procedure over the desired orders, which we call full-model order selection. On the other hand, a method for model selection searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracy than the traditional one, especially for low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). Also, we show that for some models the performance of the bootstrap-based criterion improves significantly by using the proposed partial-model selection searching method.
Index Terms— Model order estimation, model selection, information theoretic criteria, bootstrap
1. INTRODUCTION
Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike's information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model.
This makes the comparison between model-order selection algorithms difficult, as within the same model with a given order one can find an example for which one of the methods performs favourably or fails [6, 8]. Our aim is to improve the performance of the model-order selection criteria in cases where the SNR is low by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial-model order search within the given model order. Understandably, the improvement in the performance of the model-order estimation comes at the expense of additional computational complexity.
Abstract:
In the university education arena, it is becoming apparent that traditional methods of conducting classes are not the most effective way to achieve the desired learning outcomes. The traditional method involves the instructor verbalizing information for passive, note-taking students, who are assumed to be empty receptacles waiting to be filled with knowledge. This method is limited in its effectiveness, as the flow of information is usually in one direction only. Furthermore, "it has been demonstrated that students in many cases can recite and apply formulas in numerical problems, but the actual meaning and understanding of the concept behind the formula is not acquired" (Crouch & Mazur). It is apparent that memorization is the main technique present in this approach. A more effective method of teaching involves increasing the students' level of activity during, and hence their involvement in, the learning process. This technique stimulates self-learning and helps keep students' levels of concentration more uniform. In this work, I am therefore interested in studying the influence of a particular TLA (teaching and learning activity) on students' learning outcomes. I want to foster high-level understanding and critical thinking skills using active learning techniques (Silberman, 1996). The TLA in question aims to promote self-study by students and to expose them to a situation where their learning outcomes can be tested. The motivation behind this activity is based on studies which suggest that some sensory modalities are more effective than others. Using various instruments for data collection, and by means of a thorough analysis, I present evidence of the effectiveness of this action research project, which aims to improve my own teaching practices with the ultimate goal of enhancing students' learning.
Abstract:
Hot spot identification (HSID) aims to identify potential sites (roadway segments, intersections, crosswalks, interchanges, ramps, etc.) with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (a false positive) or a high-risk site as safe (a false negative), and consequently lead to the misuse of available public funds, poor investment decisions, and inefficient risk management practice. Current HSID methods suffer from issues such as underreporting of minor-injury and property-damage-only (PDO) crashes, the challenge of accounting for crash severity in the methodology, and the selection of a proper safety performance function to model crash data that is often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO-equivalency calculation and a quantile regression technique to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst the concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than on the population mean as in most methods in practice, which corresponds more closely with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional EB method with negative binomial regression.
Application of a quantile regression model to equivalent PDO crashes enables identification of a set of high-risk sites that reflect the true safety costs to society, simultaneously reduces the influence of under-reported PDO and minor-injury crashes, and overcomes the limitation of the traditional NB model in dealing with the preponderance-of-zeros problem and right-skewed datasets.
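The screening idea can be sketched as follows. The severity weights, the synthetic data, and the use of exposure-binned empirical quantiles (a crude stand-in for the conditional quantiles that a fitted quantile-regression model would provide) are all illustrative assumptions, not the paper's values or method:

```python
import numpy as np

# Step 1: convert crashes of each severity to equivalent-PDO counts.
# (Weights are hypothetical; real studies derive them from crash costs.)
EPDO_WEIGHTS = {"fatal": 542.0, "injury": 11.0, "pdo": 1.0}

rng = np.random.default_rng(1)
n_sites = 200
aadt = rng.uniform(1_000, 20_000, n_sites)  # exposure (vehicles/day)
crashes = {
    "fatal": rng.poisson(0.02, n_sites),
    "injury": rng.poisson(aadt / 10_000, n_sites),
    "pdo": rng.poisson(aadt / 4_000, n_sites),
}
epdo = sum(w * crashes[s] for s, w in EPDO_WEIGHTS.items())

# Step 2: flag sites above the 90th percentile of equivalent-PDO crashes
# among sites with similar exposure, i.e. a high conditional quantile rather
# than a conditional mean, mirroring how the quantile approach screens sites.
bins = np.digitize(aadt, np.quantile(aadt, [0.25, 0.5, 0.75]))
threshold = np.array([np.quantile(epdo[bins == b], 0.9) for b in range(4)])
hot_spots = np.flatnonzero(epdo > threshold[bins])
print(f"{hot_spots.size} of {n_sites} sites flagged as hot spots")
```

Targeting a high quantile instead of the mean makes the screen sensitive to the heavy right tail that severity weighting creates, which a mean-based NB model tends to smooth away.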