Abstract:
The aim of this study was to quantify the water consumption and the crop coefficients (Kc) of potato (Solanum tuberosum L.) under organic management in Seropédica, Rio de Janeiro (RJ), Brazil, and to simulate crop evapotranspiration (ETc) using both the Kc obtained in the field and the values recommended by the Food and Agriculture Organization (FAO). Water consumption was obtained through a soil water balance, using TDR probes installed at depths of 0.15 m and 0.30 m. At each development stage, Kc was determined as the ratio of ETc to reference evapotranspiration, the latter obtained by Penman-Monteith FAO 56. The crop coefficients obtained were 0.35, 0.45, 1.29 and 0.63. The accumulated ETc obtained in the field was 109.6 mm, while the ETc accumulated from FAO's Kc was 142.2 mm and 138 mm, considering, respectively, the classical values and the values adjusted to local climatic conditions. Simulating water consumption with meteorological data from the 1961-2007 historical series yielded a higher ETc than the one obtained in the field. From these data, it was observed that using the Kc recommended by FAO may overestimate the amount of irrigation water by 9% over the same growing season.
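Concretely, the Kc determination and the ETc simulation described above amount to a ratio and a Kc-weighted sum of daily reference evapotranspiration. A minimal sketch, not the study's code: only the four stage Kc values come from the abstract, and the daily ET0 series is hypothetical.

```python
# Hedged sketch of the Kc / ETc bookkeeping described in the abstract.
# Only the stage Kc values (0.35, 0.45, 1.29, 0.63) come from the text;
# the daily ET0 series below is hypothetical.

def crop_coefficient(etc_mm_day, et0_mm_day):
    """Kc for a development stage: ratio of crop ET to reference ET (FAO 56)."""
    return etc_mm_day / et0_mm_day

def accumulated_etc(kc_by_day, et0_by_day_mm):
    """Seasonal ETc (mm) simulated as the sum of daily Kc * ET0."""
    return sum(kc * et0 for kc, et0 in zip(kc_by_day, et0_by_day_mm))

# One hypothetical day per stage, ET0 assumed constant at 4.0 mm/day.
stage_kc = [0.35, 0.45, 1.29, 0.63]
et0 = [4.0] * 4
total_etc_mm = accumulated_etc(stage_kc, et0)  # simulated ETc over the 4 days
```

Running the same accumulation with the field Kc and with FAO's Kc over one growing season is what produces the 109.6 mm versus 142.2/138 mm comparison reported above.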
Abstract:
Based on experimental tests, equations were obtained for the drying, the equilibrium moisture content, the latent heat of vaporization of the water contained in the product, and the specific heat of cassava starch pellets: parameters essential to the modeling and mathematical simulation of the mechanical drying of cassava starch for a newly proposed technique, which consists of preforming the starch by pelleting and then artificially drying the pellets. Drying tests were conducted in an experimental chamber by varying the air temperature, relative humidity, air velocity and product load. The specific heat of the starch was determined by differential scanning calorimetry. The generated equations were validated through regression analysis, which found an appropriate correlation with the data, indicating that these equations can be used to accurately model and simulate the drying process of cassava starch pellets.
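The abstract does not state the form of the fitted drying equation, so as a hedged illustration the sketch below fits the widely used Page thin-layer model, MR = exp(-k·t^n), by linearising ln(-ln MR) = ln k + n·ln t and solving ordinary least squares; the moisture-ratio data are hypothetical.

```python
import math

def fit_page_model(times_h, moisture_ratio):
    """Least-squares fit of the Page model MR = exp(-k * t**n) in log-log form."""
    xs = [math.log(t) for t in times_h]
    ys = [math.log(-math.log(mr)) for mr in moisture_ratio]
    m = len(xs)
    mean_x, mean_y = sum(xs) / m, sum(ys) / m
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    n = sxy / sxx                      # Page exponent (slope)
    k = math.exp(mean_y - n * mean_x)  # drying constant (intercept)
    return k, n

# Hypothetical data generated from k = 0.3, n = 1.1; the fit recovers them.
t = [0.5, 1.0, 2.0, 4.0]
mr = [math.exp(-0.3 * ti ** 1.1) for ti in t]
k, n = fit_page_model(t, mr)
```

In the study itself the candidate equations would be chosen and then validated against the measured drying curves by regression, as the abstract describes.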
Abstract:
In animal farming, automatic and precise control of environmental conditions needs information from variables derived from the animals themselves, i.e. the animals act as biosensors. Rectal temperature (RT) and respiratory rate (RR) are good indicators of thermoregulation in pigs. Since there is growing concern for animal welfare, the search for alternatives to measuring RT has become even more necessary. This research aimed to identify the body surface areas of nursery-phase pigs whose temperature measurements best correlate with RT and RR. The main experiment was carried out in a climate chamber with five 30-day-old littermate female Landrace x Large White piglets. Temperature conditions inside the chamber were varied from 14 °C up to 35.5 °C. Measurements were taken every 30 minutes over six different skin regions, using a Thermochron iButton® DS1921G temperature data logger (Tb) and an infrared thermometer (Ti). The results show that the tympanic region is the best one for RT and RR monitoring with an infrared thermometer (TiF). In contrast, when using temperature sensors, the ear (TbE) is preferred for RT predictions and the loin region (TbC) for RR.
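Selecting the surface region whose readings best represent RT comes down to a correlation analysis between each candidate temperature series and RT. A minimal sketch with a plain Pearson coefficient; the data below are hypothetical, not the experiment's measurements.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equally long measurement series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical ear-skin (Tb) and rectal (RT) temperatures, in deg C.
tb_ear = [30.1, 31.8, 33.9, 36.2]
rt = [38.0, 38.4, 38.9, 39.5]
r = pearson_r(tb_ear, rt)  # a value near 1 favours this region for RT prediction
```

Repeating this for each of the six skin regions, against both RT and RR, is the kind of comparison that singles out the tympanic region (TiF), the ear (TbE) and the loin (TbC).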
Abstract:
Global warming increases the occurrence of events such as extreme heat waves. Research on the thermal and air conditions affecting the broiler-rearing environment is important for evaluating animal welfare under extreme heat and guiding mitigation measures. This study aimed at evaluating the effect of a heat wave, simulated in a climatic chamber, on the thermal and air environment of 42-day-old broilers. One hundred and sixty broilers were housed and reared for 42 days in a climatic chamber; the animals were divided into eight pens. The heat wave simulation was performed on the 42nd day, the period of greatest impact, when data were sampled. The analyzed variables were room and litter temperatures, relative humidity, and concentrations of oxygen, carbon monoxide and ammonia at each pen. These variables were assessed every two hours, starting at 8 am and simulating a day that heats up until 4 pm, when the maximum temperature is reached. From the results, we concluded that increasing room temperature promoted a proportional rise in litter temperature, contributing to ammonia volatilization. In addition, oxygen concentrations decreased with increasing temperatures, and carbon monoxide was only observed at temperatures above 27.0 °C, relative humidity higher than 88.4% and litter temperatures above 30.3 °C.
Abstract:
The aim of this dissertation is to investigate whether participation in business simulation gaming sessions can make different leadership styles visible and provide students with experiences beneficial to the development of leadership skills. In particular, the focus is to describe the development of leadership styles when leading virtual teams in computer-supported collaborative game settings and to identify the outcomes of using computer simulation games as leadership training tools. To meet the objectives of the study, three empirical experiments were conducted to explore whether participation in business simulation gaming sessions (Study I and II), which integrate face-to-face and virtual communication (Study III and IV), can make different leadership styles visible and provide such experiences. In the first experiment, a group of multicultural graduate business students (N=41) participated in gaming sessions with a computerized business simulation game (Study III). In the second experiment, a group of graduate students (N=9) participated in training with a 'real estate' computer game (Study I and II). In the third experiment, a business simulation gaming session was organized for a group of graduate students (N=26); the participants played the simulation game in virtual teams that were organizationally and geographically dispersed but connected via technology (Study IV). In all experiments each team had three to four students, aged between 22 and 25. The business computer games used in the empirical experiments presented an enormous number of complex operations, and a team leader needed to make the final decisions involved in leading the team to win the game. These gaming environments were interactive; participants interacted by solving the given tasks in the game. Thus, strategy and appropriate leadership were needed to be successful.
The training was competition-based and required the implementation of leadership skills. The data of these studies consist of observations, participants' reflective essays written after the gaming sessions, pre- and post-test questionnaires, and participants' answers to open-ended questions. Participants' interactions and collaboration were observed while they played the computer games. The transcribed notes from observations and students' dialogs were coded in terms of transactional, transformational, heroic and post-heroic leadership styles, and both content analysis and discourse analysis were applied to them. The Multifactor Leadership Questionnaire (MLQ) was also utilized in the study to measure transformational and transactional leadership styles; in addition, quantitative (one-way repeated-measures ANOVA) and qualitative data analyses were performed. The results of this study indicate that in the business simulation gaming environment, certain leadership characteristics emerged spontaneously. Experiences of leadership varied between the teams and depended on the role individual students had in their team. These four studies showed that a simulation gaming environment has the potential to be used in higher education to exercise the leadership styles relevant in real-world work contexts. Further, the study indicated that, given debriefing sessions, the simulation game context has much potential to benefit learning. The participants who showed interest in leadership roles were given the opportunity to develop leadership skills in practice. The study also provides evidence of unpredictable situations that participants can experience and learn from during the gaming sessions, and it illustrates the complex nature of experiences in gaming environments and the need for a team leader and role divisions during the sessions.
It can be concluded that the simulation game training illustrated the complexity of real-life situations and confronted participants with the challenges of virtual leadership and the difficulties of using leadership styles in practice. As a result, the study proposes playing computer simulation games in small teams as one way to exercise leadership styles in practice.
Abstract:
The focus of the present work was on 10- to 12-year-old elementary school students’ conceptual learning outcomes in science in two specific inquiry-learning environments, laboratory and simulation. The main aim was to examine if it would be more beneficial to combine than contrast simulation and laboratory activities in science teaching. It was argued that the status quo where laboratories and simulations are seen as alternative or competing methods in science teaching is hardly an optimal solution to promote students’ learning and understanding in various science domains. It was hypothesized that it would make more sense and be more productive to combine laboratories and simulations. Several explanations and examples were provided to back up the hypothesis. In order to test whether learning with the combination of laboratory and simulation activities can result in better conceptual understanding in science than learning with laboratory or simulation activities alone, two experiments were conducted in the domain of electricity. In these experiments students constructed and studied electrical circuits in three different learning environments: laboratory (real circuits), simulation (virtual circuits), and simulation-laboratory combination (real and virtual circuits were used simultaneously). In order to measure and compare how these environments affected students’ conceptual understanding of circuits, a subject knowledge assessment questionnaire was administered before and after the experimentation. The results of the experiments were presented in four empirical studies. Three of the studies focused on learning outcomes between the conditions and one on learning processes. Study I analyzed learning outcomes from experiment I. The aim of the study was to investigate if it would be more beneficial to combine simulation and laboratory activities than to use them separately in teaching the concepts of simple electricity. 
Matched trios were created based on the pre-test results of 66 elementary school students and divided randomly into laboratory (real circuits), simulation (virtual circuits) and simulation-laboratory combination (real and virtual circuits used simultaneously) conditions. In each condition students had 90 minutes to construct and study various circuits. The results showed that studying electrical circuits in the simulation-laboratory combination environment improved students' conceptual understanding more than studying circuits in the simulation or laboratory environment alone. Although there were no statistical differences between the simulation and laboratory environments, the learning effect was more pronounced in the simulation condition, where the students made clear progress during the intervention, whereas in the laboratory condition students' conceptual understanding remained at an elementary level after the intervention. Study II analyzed learning outcomes from experiment II. The aim of the study was to investigate if and how learning outcomes in simulation and simulation-laboratory combination environments are mediated by implicit (only procedural guidance) and explicit (more structure and guidance for the discovery process) instruction in the context of simple DC circuits. Matched quartets were created based on the pre-test results of 50 elementary school students and divided randomly into simulation implicit (SI), simulation explicit (SE), combination implicit (CI) and combination explicit (CE) conditions. The results showed that when the students were working with the simulation alone, they were able to gain a significantly greater amount of subject knowledge when they received metacognitive support (explicit instruction; SE) for the discovery process than when they received only procedural guidance (implicit instruction; SI). However, this additional scaffolding was not enough to reach the level of the students in the combination environment (CI and CE).
A surprising finding in Study II was that instructional support had a different effect in the combination environment than in the simulation environment. In the combination environment explicit instruction (CE) did not seem to elicit much additional gain for students’ understanding of electric circuits compared to implicit instruction (CI). Instead, explicit instruction slowed down the inquiry process substantially in the combination environment. Study III analyzed from video data learning processes of those 50 students that participated in experiment II (cf. Study II above). The focus was on three specific learning processes: cognitive conflicts, self-explanations, and analogical encodings. The aim of the study was to find out possible explanations for the success of the combination condition in Experiments I and II. The video data provided clear evidence about the benefits of studying with the real and virtual circuits simultaneously (the combination conditions). Mostly the representations complemented each other, that is, one representation helped students to interpret and understand the outcomes they received from the other representation. However, there were also instances in which analogical encoding took place, that is, situations in which the slightly discrepant results between the representations ‘forced’ students to focus on those features that could be generalised across the two representations. No statistical differences were found in the amount of experienced cognitive conflicts and self-explanations between simulation and combination conditions, though in self-explanations there was a nascent trend in favour of the combination. There was also a clear tendency suggesting that explicit guidance increased the amount of self-explanations. Overall, the amount of cognitive conflicts and self-explanations was very low. 
The aim of the Study IV was twofold: the main aim was to provide an aggregated overview of the learning outcomes of experiments I and II; the secondary aim was to explore the relationship between the learning environments and students’ prior domain knowledge (low and high) in the experiments. Aggregated results of experiments I & II showed that on average, 91% of the students in the combination environment scored above the average of the laboratory environment, and 76% of them scored also above the average of the simulation environment. Seventy percent of the students in the simulation environment scored above the average of the laboratory environment. The results further showed that overall students seemed to benefit from combining simulations and laboratories regardless of their level of prior knowledge, that is, students with either low or high prior knowledge who studied circuits in the combination environment outperformed their counterparts who studied in the laboratory or simulation environment alone. The effect seemed to be slightly bigger among the students with low prior knowledge. However, more detailed inspection of the results showed that there were considerable differences between the experiments regarding how students with low and high prior knowledge benefitted from the combination: in Experiment I, especially students with low prior knowledge benefitted from the combination as compared to those students that used only the simulation, whereas in Experiment II, only students with high prior knowledge seemed to benefit from the combination relative to the simulation group. Regarding the differences between simulation and laboratory groups, the benefits of using a simulation seemed to be slightly higher among students with high prior knowledge. The results of the four empirical studies support the hypothesis concerning the benefits of using simulation along with laboratory activities to promote students’ conceptual understanding of electricity. 
It can be concluded that when teaching students about electricity, the students can gain better understanding when they have an opportunity to use the simulation and the real circuits in parallel than if they have only the real circuits or only a computer simulation available, even when the use of the simulation is supported with the explicit instruction. The outcomes of the empirical studies can be considered as the first unambiguous evidence on the (additional) benefits of combining laboratory and simulation activities in science education as compared to learning with laboratories and simulations alone.
Abstract:
Computational model-based simulation methods were developed for the modelling of bioaffinity assays. Bioaffinity-based methods are widely used to quantify a biological substance in biological research and development and in routine clinical in vitro diagnostics. Bioaffinity assays are based on the high affinity and structural specificity between the binding biomolecules. The simulation methods developed are based on a mechanistic assay model, which relies on chemical reaction kinetics and describes the formation of the bound component as a function of time from the initial binding interaction. The simulation methods focused on studying the behaviour and reliability of bioaffinity assays and the possibilities that modelling binding reaction kinetics provides, such as predicting assay results even before the binding reaction has reached equilibrium. A rapid quantitative result from a clinical bioaffinity assay can be very significant: even the smallest elevation of a heart muscle marker, for example, reveals a cardiac injury. The simulation methods were used to identify critical error factors in rapid bioaffinity assays. A new kinetic calibration method was developed to calibrate a measurement system from kinetic measurement data using only one standard concentration. A node-based method was developed to model multi-component binding reactions, which have been a challenge for traditional numerical methods. The node-based method was also used to model protein adsorption as an example of nonspecific binding of biomolecules. These methods have been compared with experimental data from practice and can be utilized in in vitro diagnostics, drug discovery and medical imaging.
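The mechanistic model is not spelled out in the abstract, so here is a hedged kinetic sketch of the simplest case: one-step bimolecular binding A + B ⇌ AB with d[AB]/dt = k_on[A][B] − k_off[AB], integrated with an explicit Euler step. The rate constants and concentrations are hypothetical.

```python
def simulate_binding(a_total, b_total, k_on, k_off, t_end, dt=1e-3):
    """Bound-complex concentration [AB](t_end) for A + B <-> AB,
    from d[AB]/dt = k_on*[A]*[B] - k_off*[AB], explicit Euler integration."""
    ab = 0.0
    for _ in range(int(t_end / dt)):
        free_a = a_total - ab          # unbound A at this instant
        free_b = b_total - ab          # unbound B at this instant
        ab += dt * (k_on * free_a * free_b - k_off * ab)
    return ab

# Hypothetical assay: with k_off = 0 the reaction runs to completion,
# so [AB] approaches the limiting concentration min(a_total, b_total).
ab_final = simulate_binding(1.0, 2.0, k_on=1.0, k_off=0.0, t_end=50.0)
```

The node-based method of the thesis generalizes this idea to multi-component reaction networks, and predicting an assay result before equilibrium corresponds to reading [AB](t) at an early time point.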
Abstract:
The aim of the study is to produce the first reconstruction of the West African climate between 1750 and 1798. Knowledge of the West African climate before the nineteenth century is to date deficient, which makes it harder to understand future climate variations. The lack of instrumental meteorological data (temperature, rainfall and air pressure), which in principle covers only the last century, is the reason why earlier climates are poorly mapped. The climate and environment are also such that proxy data from 'nature's archives' (such as tree rings) are of limited use. Historical documents containing descriptive information about weather and climate, mainly from visiting cultures, nations and traders, are therefore the climate researcher's most important source. By using sources previously untapped for this purpose, this investigation showed that the climate of West Africa and the Gold Coast (Ghana) has changed since the eighteenth century. The monsoon rains were weaker and shorter-lived; in particular, the secondary rainy season in the autumn was considerably weaker than today. There were strong year-to-year variations in the monsoon rains, but over longer time perspectives distinct drier and wetter periods stood out. The study could also show a certain correlation between the global weather phenomenon El Niño and the intensity of the rainy season along the coast. Several dry periods coincided with previously recorded El Niño sequences. The end of the 1760s in particular was strongly affected by El Niño, and the global climate also seems to have undergone drastic changes in precisely those years. On the basis of the new climate reconstruction, a comparison was also carried out of the climate's influence on the transatlantic slave trade from 1750 to 1798, a question historians have alluded to for over 30 years. The export of slaves from West Africa was at its strongest during the second half of the eighteenth century. The analysis showed that the slave trade partly increased in connection with the climate anomalies.
Abstract:
The main objective of this work is to analyze the importance of the gas-solid interface transfer of turbulent kinetic energy for the accuracy of predictions of the fluid dynamics of Circulating Fluidized Bed (CFB) reactors. CFB reactors are used in a variety of industrial applications related to combustion, incineration and catalytic cracking. In this work a two-dimensional fluid dynamic model for gas-particle flow has been used to compute the porosity, pressure and velocity fields of both phases in 2-D axisymmetric cylindrical co-ordinates. The fluid dynamic model is based on the two-fluid model approach, in which both phases are considered continuous and fully interpenetrating. CFB processes are essentially turbulent. The effective stress on each phase is modelled as that of a Newtonian fluid, where the effective gas viscosity was calculated from the standard k-epsilon turbulence model and the transport coefficients of the particulate phase were calculated from the kinetic theory of granular flow (KTGF). This work shows that turbulence transfer between the phases is very important for a better representation of the fluid dynamics of CFB reactors, especially for systems with internal recirculation and high gradients of particle concentration. Two systems with different characteristics were analyzed, and the results were compared with experimental data available in the literature. The results were obtained using a computer code developed by the authors. The finite volume method with a collocated grid, the hybrid interpolation scheme, the false time step strategy and the SIMPLEC (Semi-Implicit Method for Pressure-Linked Equations - Consistent) algorithm were used to obtain the numerical solution.
Abstract:
The present work shows how thick boundary layers can be produced in a short wind tunnel with a view to simulating atmospheric flows. Several types of thickening devices are analysed. The experimental assessment of the devices considered integral properties of the flow and the spectra: skin friction, mean velocity profiles in inner and outer co-ordinates, and longitudinal turbulence. Designs based on screens, elliptic wedge generators and cylindrical rod generators are analysed. The paper describes the experimental arrangement in detail, including the features of the wind tunnel and of the instrumentation. The results are compared with experimental data published by other authors and with naturally developed flows.
Abstract:
This paper concerns the development of drives that use electromechanical rotative motor systems. An experimental drive test structure integrated with simulation software is proposed. The objective of this work is to show that an affordable model-validation procedure can be obtained by combining precision data acquisition with well-tuned, state-of-the-art simulation packages. This is required for fitting a drive to its load in the best way or, inversely, for adapting loads to given drive characteristics.
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the missingness mechanism type and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data.
Neither the Missing At Random (MAR) assumption about non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Both measurement errors in spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of baseline hazard most. The design-based estimates based on data from respondents to all waves of interest and weighted by the last wave weights displayed the largest bias. Using all the available data, including the spells by attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazard model estimators. The study discusses implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
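As a hedged sketch of the IPCW idea in the survival-analysis step: each subject is weighted by the inverse of its estimated probability of remaining uncensored, and the Kaplan-Meier estimator is computed with those weights. The code below implements only the weighted Kaplan-Meier part; in a real analysis the weights come from a model of the censoring process, whereas here they are supplied by hand.

```python
def weighted_km(times, events, weights):
    """Weighted Kaplan-Meier: survival estimates after each distinct event time.
    times:   observed durations
    events:  1 if the event occurred, 0 if censored
    weights: IPCW (or design) weights, one per subject
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    surv, curve = 1.0, []
    at_risk = sum(weights)
    i = 0
    while i < len(order):
        t = times[order[i]]
        risk_here, d = at_risk, 0.0
        while i < len(order) and times[order[i]] == t:  # handle ties at time t
            idx = order[i]
            if events[idx]:
                d += weights[idx]
            at_risk -= weights[idx]
            i += 1
        if d > 0:
            surv *= 1.0 - d / risk_here
            curve.append((t, surv))
    return curve

# With unit weights this reduces to the ordinary Kaplan-Meier estimator.
km = weighted_km([3, 1, 2], [1, 1, 1], [1.0, 1.0, 1.0])
```

Replacing the unit weights with IPCW weights is what corrects the dependent-censoring bias the simulation study above demonstrates for Kaplan-Meier and Cox estimators.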
Abstract:
A stochastic differential equation (SDE) is a differential equation in which some of the terms, and its solution, are stochastic processes. SDEs play a central role in modeling physical systems in finance, biology and engineering, to mention a few. In the modeling process, the computation of the trajectories (sample paths) of solutions to SDEs is very important. However, the exact solution to an SDE is generally difficult to obtain because realizations of Brownian motion are non-differentiable. Approximation methods for solutions of SDEs exist; the solutions are continuous stochastic processes that represent diffusive dynamics, a common modeling assumption for financial, biological, physical and environmental systems. This Master's thesis is an introduction to, and a survey of, numerical solution methods for stochastic differential equations. Standard numerical methods, local linearization methods and filtering methods are described in detail. We compute the root mean square error for each method, on the basis of which we propose a better numerical scheme. Stochastic differential equations can also be formulated from given ordinary differential equations. In this thesis, we describe two kinds of formulation, parametric and non-parametric techniques, based on an epidemiological SEIR model. These methods tend to increase the number of parameters in the constructed SDEs and hence require more data. We compare the two techniques numerically.
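For the root-mean-square comparison described above, geometric Brownian motion is a standard test problem because its exact solution is known along the same Brownian path as the discretization. A hedged sketch of Euler-Maruyama with a strong (pathwise) RMSE estimate; the parameters are illustrative, not the thesis's.

```python
import math
import random

def euler_maruyama_gbm(x0, mu, sigma, t_end, n_steps, rng):
    """One Euler-Maruyama path of dX = mu*X dt + sigma*X dW, returned
    together with the exact GBM solution on the same Brownian path."""
    dt = t_end / n_steps
    x, w = x0, 0.0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
        x += mu * x * dt + sigma * x * dw
        w += dw
    exact = x0 * math.exp((mu - 0.5 * sigma ** 2) * t_end + sigma * w)
    return x, exact

def strong_rmse(n_paths=1000, n_steps=256, seed=42):
    """Root mean square error at t_end over many simulated paths."""
    rng = random.Random(seed)
    se = 0.0
    for _ in range(n_paths):
        approx, exact = euler_maruyama_gbm(1.0, 0.05, 0.2, 1.0, n_steps, rng)
        se += (approx - exact) ** 2
    return math.sqrt(se / n_paths)
```

Halving the step size should shrink the RMSE by roughly a factor of √2, consistent with Euler-Maruyama's strong order 1/2; plotting RMSE against step size for each scheme is how such a method comparison proceeds.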
Abstract:
The main objective of this master's thesis is to examine whether Weibull analysis is a suitable method for warranty forecasting in the Case Company. The Case Company has used Reliasoft's Weibull++ software, which is based on the Weibull method, but the Company has noticed that the analysis has not given correct results. This study was conducted by making Weibull simulations in different profit centers of the Case Company and then comparing actual and forecast costs. Simulations were made using different time frames and two methods for determining future deliveries. The first sub-objective is to examine which simulation parameters give the best result for each profit center. The second sub-objective is to create a simple control model for following forecast costs and actual realized costs. The third sub-objective is to document all Qlikview parameters of the profit centers. This is a constructive research study, and solutions to the company's problems are worked out in this master's thesis. The theory part introduces quality issues, for example what quality is, quality costing and the cost of poor quality. Quality is one of the major concerns in the Case Company, so understanding the link between quality and warranty forecasting is important. Warranty management and other tools for warranty forecasting were also introduced, as were the Weibull method, its mathematical properties and reliability engineering. The main result of this master's thesis is that the Weibull analysis forecast costs that were too high when calculating the provision. Although some profit centers' forecast values were lower than the actual values, the method works better for planning purposes. One of the reasons is that quality improvement, or alternatively quality deterioration, does not show in the results of the analysis in the short run.
The other reason for the excessively high values is that the products of the Case Company are complex and the analyses were made at the profit-center level. The Weibull method was developed for standard products, but the products of the Case Company consist of many complex components. According to the theory, the method was developed for homogeneous data. The most important finding, therefore, is that the analysis should be made at the product level, not the profit-center level, where the data are more homogeneous.
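The core arithmetic of a Weibull warranty forecast is the conditional failure probability: for units that have survived to age a, the expected number of failures over the next horizon h is N·(F(a+h) − F(a))/(1 − F(a)), with F the Weibull CDF. A minimal sketch; the shape and scale values are hypothetical, and this is not Weibull++'s implementation.

```python
import math

def weibull_cdf(t, shape, scale):
    """F(t) = 1 - exp(-(t/scale)**shape), the Weibull failure probability."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def expected_failures(units, age, horizon, shape, scale):
    """Expected failures over the next `horizon` among `units` survivors at `age`."""
    f_now = weibull_cdf(age, shape, scale)
    f_later = weibull_cdf(age + horizon, shape, scale)
    return units * (f_later - f_now) / (1.0 - f_now)

# Hypothetical fleet: 100 surviving units, 1 year old, 1-year horizon,
# wear-out behaviour (shape > 1), characteristic life 5 years.
forecast = expected_failures(100, age=1.0, horizon=1.0, shape=1.8, scale=5.0)
```

Summing such forecasts over delivery cohorts gives the provision; the finding above suggests fitting shape and scale per product rather than per profit center, where the data are more homogeneous.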
Abstract:
The objective of this master's thesis was to design and simulate a wind-powered hydraulic heating system that can operate independently in remote places where the use of electricity is not possible. Components for the system were to be selected in such a way that the conditions for manufacture, use and economic viability are as good as possible. A Savonius rotor was chosen for the wind turbine due to its low cut-in speed and robust design. The Savonius rotor produces kinetic energy over a wide wind-speed range and can withstand high wind gusts. A radial piston pump was chosen as the flow source of the hydraulic heater, owing to its characteristics at low rotation speeds and its high efficiency. The volume flow from the pump is passed through a throttle orifice. The pressure drop over the orifice heats the hydraulic oil, thus creating thermal energy. The thermal energy in the oil is led to a radiator, where the heat is conducted to the environment. The hydraulic heating system was simulated; for this purpose, mathematical models of the chosen components were created. In the simulation, 167 hours of wind data gathered by the Finnish Meteorological Institute were used as input. The highest produced power would be achieved by changing the orifice diameter so that the rotor tip-speed ratio follows the power curve; this is not possible without using electricity, so a single optimal value was defined for the orifice diameter. The simulation results were compared with investment calculations. Different parameters affecting investment profitability were altered in sensitivity analyses in order to define the points of investment profitability. The investment was found to be profitable only with high average wind speeds.
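The heating principle described above is direct dissipation: all hydraulic power throttled over the orifice becomes heat, P = Δp·Q, and the per-pass oil temperature rise is ΔT = Δp/(ρ·c). A minimal sketch; the oil density and specific heat below are typical handbook values, not the thesis's figures.

```python
def hydraulic_heating_power(delta_p_pa, flow_m3_per_s):
    """Heat generated in the throttle orifice (W): P = dp * Q."""
    return delta_p_pa * flow_m3_per_s

def oil_temperature_rise(delta_p_pa, density_kg_m3=870.0, spec_heat_j_kgk=1900.0):
    """Per-pass temperature rise of the oil (K): dT = dp / (rho * c)."""
    return delta_p_pa / (density_kg_m3 * spec_heat_j_kgk)

# Hypothetical operating point: a 20 MPa drop at 0.1 l/s gives 2 kW of heat.
power_w = hydraulic_heating_power(20e6, 1.0e-4)
rise_k = oil_temperature_rise(20e6)
```

In the simulated system, Δp is set by the orifice diameter and Q by the pump speed, which is why the single fixed orifice diameter directly determines how much of the rotor's power is converted to heat at each wind speed.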