984 results for Discrete mass modeling
Abstract:
In the forced-air cooling of fruits, mass transfer by evaporation occurs in addition to convective heat transfer. The energy required for evaporation is taken from the fruit, which lowers its temperature. In this study, empirical correlations are proposed for calculating the convective heat transfer coefficient as a function of the surface temperature of the strawberry during the cooling process. The purpose of this variation of the convective coefficient is to compensate for the effect of evaporation on the heat transfer process. Linear and exponential correlations are tested, both with two adjustable parameters. The simulations are performed using experimental conditions reported in the literature for the cooling of strawberries. The results confirm the suitability of the proposed methodology.
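As a rough illustration of the approach, the sketch below couples a lumped-capacitance cooling balance to linear and exponential h(T) correlations. All parameter values (correlation coefficients, fruit properties, air temperature) are hypothetical placeholders, not the fitted values of the study.

```python
import math

# Hypothetical parameters for illustration; the correlation coefficients
# a, b would in practice be fitted to experimental cooling data.
T_air = 1.0               # cooling air temperature, degC
T0 = 20.0                 # initial fruit temperature, degC
rho, cp = 920.0, 4000.0   # density (kg/m3) and specific heat (J/kg.K)
r = 0.015                 # equivalent sphere radius of the strawberry, m
area_vol = 3.0 / r        # surface-to-volume ratio of a sphere, 1/m

def h_linear(T, a=15.0, b=0.8):
    """Linear correlation h = a + b*T, two adjustable parameters."""
    return a + b * T

def h_exponential(T, a=12.0, b=0.05):
    """Exponential correlation h = a*exp(b*T), two adjustable parameters."""
    return a * math.exp(b * T)

def cool(h_corr, dt=1.0, t_end=3600.0):
    """Lumped-capacitance cooling with a temperature-dependent h."""
    T, t = T0, 0.0
    while t < t_end:
        h = h_corr(T)                       # h varies with surface temperature
        T += -h * area_vol * (T - T_air) / (rho * cp) * dt
        t += dt
    return T

print(round(cool(h_linear), 2), round(cool(h_exponential), 2))
```

The temperature-dependent h makes the effective cooling rate change as the fruit approaches the air temperature, which is the mechanism the correlations use to mimic evaporative cooling.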
Abstract:
This study aimed to apply mathematical models to the growth of Nile tilapia (Oreochromis niloticus) reared in net cages in the lower São Francisco basin and to choose the model(s) that best represent the rearing conditions of the region. The nonlinear Brody, Bertalanffy, Logistic, Gompertz, and Richards models were tested. The models were fitted to the weight-for-age series using the Gauss, Newton, Gradient, and Marquardt methods, with the NLIN procedure of the SAS® System (2003) used to estimate the parameters from the available data. The best fits were obtained with the Bertalanffy, Gompertz, and Logistic models, which are equivalent in explaining the growth of the animals up to 270 days of rearing. From a commercial point of view, commercialization of tilapia is recommended from at least 600 g, a mass the Bertalanffy, Gompertz, and Logistic models estimate to be reached after 183, 181, and 184 days of rearing, respectively; to reach up to 1 kg of mass, ending the rearing at 244, 244, and 243 days, respectively, is suggested.
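The three recommended growth curves can be sketched as below; the parameter values are illustrative assumptions, not the estimates obtained with SAS NLIN in the study.

```python
import math

# Illustrative (hypothetical) parameters: A = asymptotic weight (g),
# b = shape constant, k = maturation rate (1/day).
def gompertz(t, A=1400.0, b=2.8, k=0.012):
    return A * math.exp(-b * math.exp(-k * t))

def bertalanffy(t, A=1500.0, b=0.6, k=0.009):
    return A * (1.0 - b * math.exp(-k * t)) ** 3

def logistic(t, A=1300.0, b=12.0, k=0.018):
    return A / (1.0 + b * math.exp(-k * t))

def days_to_reach(model, target_g):
    """First day at which the modeled weight reaches the target mass."""
    t = 0
    while model(t) < target_g:
        t += 1
    return t

for m in (gompertz, bertalanffy, logistic):
    print(m.__name__, days_to_reach(m, 600.0))
```

With the fitted parameters of the study, `days_to_reach(model, 600.0)` is the kind of calculation behind the 183/181/184-day recommendations.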
Abstract:
This work focuses on the modeling of catalytic gas-liquid reactions carried out in continuous packed beds. Catalyzed gas-liquid reactions are among the most typical reactions in the chemical industry; packed-bed reactors are therefore treated here as one of the most popular alternatives when continuous operation is desired. Thanks to a large amount of catalyst per unit volume they have a compact structure, no catalyst separation is needed, and through careful design the most advantageous flow pattern can be maintained in the reactor. Packed-bed reactors are attractive because of lower investment and operating costs. Although packed beds are used intensively in industry, they are very challenging to model. This is because three phases coexist and the geometry of the system is complicated. The existence of several reactions makes the mathematical modeling even more demanding, so many simplifications become necessary. The models typically involve several parameters that must be adjusted on the basis of experimental data. In this work, five different reaction systems were studied. The systems had been studied experimentally in our laboratory with the goal of reaching high productivity and selectivity through an optimal choice of catalysts and operating conditions. Hydrogenation of citral, decarboxylation of fatty acids, direct synthesis of hydrogen peroxide, and hydrogenation of the sugar monomers glucose and arabinose were used as example systems. Although these systems had much in common, they also had unique features and therefore required tailored mathematical treatment. Citral hydrogenation was a system with a dominant main reaction producing citronellal and citronellol as main products. The products are used as lemon-scented components in perfumes, soaps, and detergents, and as platform chemicals. Decarboxylation of stearic acid was a special case, for which a reaction route for the production of long-chain hydrocarbons from fatty acids was sought.
A particularly high product selectivity was characteristic of this system. Process scale-up was also modeled for the decarboxylation reaction. The direct synthesis of hydrogen peroxide aimed at developing a simplified process to produce hydrogen peroxide by letting dissolved hydrogen and oxygen react directly in a suitable solvent on an active solid catalyst. In this system, three side reactions occur, which give water as an undesired product. All of these reactions were modeled mathematically with the aid of dynamic mass balances. The goal of the hydrogenation of glucose and arabinose is to produce products with a high degree of refinement, namely sugar alcohols, through catalytic hydrogenation. For these two systems, the mass and energy balances were solved simultaneously to evaluate effects inside the porous catalyst particles. Momentum balances, which determine the flow conditions inside a chemical reactor, were replaced in all modeling studies with semi-empirical correlations for liquid holdup and pressure drop, and with an axial dispersion model for describing mixing effects. By adjusting the model parameters, the behavior of the reactor could be described well. All experiments were carried out at laboratory scale. A large number of coupled effects coexisted: reaction kinetics including adsorption, catalyst deactivation, mass and heat transfer, and flow-related effects. Some of these effects could be studied separately (e.g., dispersion effects and side reactions). The influence of certain phenomena could sometimes be minimized by careful planning of the experiments, so that simplifications in the models could be better justified. All the systems studied were industrially relevant. The development of new, simplified production technologies for existing or new chemical components is a gigantic undertaking; the studies presented here focused on one of the first stages of that techno-scientific journey.
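As a minimal illustration of the modeling strategy described above, the sketch below solves a dynamic mass balance for a packed bed using the axial dispersion model with a single first-order reaction, stepped explicitly to steady state. Geometry, dispersion coefficient and rate constant are invented for illustration.

```python
# Dynamic mass balance with axial dispersion, convection and a first-order
# reaction: dc/dt = D*d2c/dz2 - u*dc/dz - k*c. Parameter values are
# illustrative only, not taken from any of the studied systems.
n = 50                                   # axial grid points
L, u, D, k = 1.0, 0.01, 1e-4, 0.05       # m, m/s, m2/s, 1/s (assumed)
dz = L / (n - 1)
c = [0.0] * n
c[0] = c_in = 1.0                        # inlet concentration, mol/m3

dt = 0.2 * min(dz / u, dz * dz / (2 * D))  # conservative explicit time step
for _ in range(5000):
    new = c[:]
    for i in range(1, n - 1):
        diff = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dz ** 2
        conv = -u * (c[i] - c[i - 1]) / dz   # first-order upwind convection
        new[i] = c[i] + dt * (diff + conv - k * c[i])
    new[-1] = new[-2]                    # zero-gradient outlet
    c = new

print(round(c[-1], 4))                   # outlet concentration near steady state
```

In the actual studies the reaction term would be the tailored kinetics of each system and the parameters D, u and the holdup would come from the semi-empirical correlations mentioned above.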
Abstract:
Investigation of the high-pressure pretreatment process for gold leaching is the objective of the present master's thesis. Gold ores and concentrates which cannot be easily treated by leaching are called "refractory". These types of ores or concentrates often have a high content of sulfur and arsenic that renders the precious metal inaccessible to the leaching agents. Since refractory ores take a considerable share in the gold manufacturing industry, the pressure oxidation (autoclave) method is considered one of the possible ways to overcome the related problems. Mathematical modeling is the main approach of this thesis and was used to investigate the high-pressure oxidation process. For this task, available information from the literature concerning this phenomenon, including chemistry, mass transfer and kinetics, reaction conditions, applied apparatus and applications, was collected and studied. The modeling part includes investigation of pyrite oxidation kinetics in order to create a descriptive mathematical model. The following major steps were completed: creation of a process model using the available knowledge; estimation of unknown parameters and determination of goodness of fit; and study of the reliability of the model and its parameters.
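A toy version of the kinetic modeling step might look as follows: a shrinking-core expression for pyrite conversion is fitted to synthetic (made-up) conversion data by brute-force least squares, standing in for the proper parameter estimation performed in the thesis.

```python
# Shrinking-core model with surface-reaction control:
#   1 - (1 - X)^(1/3) = k*t  ->  X(t) = 1 - (1 - k*t)^3
# The conversion data below are synthetic, made up for illustration.
t_data = [0, 10, 20, 30, 40, 50]                 # minutes
X_data = [0.0, 0.26, 0.46, 0.61, 0.72, 0.80]     # pyrite conversion

def model_X(t, k):
    g = min(k * t, 1.0)          # core fully consumed once k*t >= 1
    return 1.0 - (1.0 - g) ** 3

def sse(k):
    """Sum of squared errors between model and data."""
    return sum((model_X(t, k) - X) ** 2 for t, X in zip(t_data, X_data))

# Brute-force grid search as a stand-in for a proper nonlinear solver.
k_best = min((i / 10000.0 for i in range(1, 2000)), key=sse)
print(round(k_best, 4))
```

The residual at `k_best` gives a first measure of goodness of fit; assessing parameter reliability would additionally require confidence intervals or sensitivity analysis, as done in the thesis.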
Abstract:
In the present work, liquid-solid flow at industrial scale is modeled using the commercial Computational Fluid Dynamics (CFD) software ANSYS Fluent 14.5. In the literature, there are few studies on liquid-solid flow at industrial scale, and no information can be found about the particular case with modified geometry. The aim of this thesis is to describe the strengths and weaknesses of the multiphase models when a large-scale liquid-solid flow application is studied, including the boundary-layer characteristics. The results indicate that the selection of the most appropriate multiphase model depends on the flow regime; careful estimation of the flow regime is therefore recommended before modeling, and a computational tool was developed for this purpose during this thesis. The homogeneous multiphase model is valid only for homogeneous suspension; the discrete phase model (DPM) is recommended for homogeneous and heterogeneous suspension where the pipe Froude number is greater than 1.0; while the mixture and Eulerian models are also able to predict flow regimes where the pipe Froude number is smaller than 1.0 and particles tend to settle. With increasing material density ratio and decreasing pipe Froude number, the Eulerian model gives the most accurate results, because it does not include simplifications of the Navier-Stokes equations like the other models. In addition, the results indicate that the potential location of erosion in the pipe depends on the material density ratio. Possible sedimentation of particles can cause erosion and increase the pressure drop as well. In the pipe bend, secondary flows perpendicular to the main flow especially affect the location of erosion.
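The Froude-number criterion quoted above can be condensed into a small helper. The densimetric definition used here is one common form and an assumption, as is the example operating point.

```python
import math

def pipe_froude(v, d_pipe, rho_solid, rho_liquid, g=9.81):
    """Densimetric pipe Froude number, Fr = v / sqrt(g*D*(rho_s/rho_l - 1)).
    Definitions vary in the literature; this form is an assumption."""
    return v / math.sqrt(g * d_pipe * (rho_solid / rho_liquid - 1.0))

def suggest_model(fr):
    """Rule of thumb distilled from the conclusions above."""
    if fr > 1.0:
        return "homogeneous model or DPM (suspension maintained)"
    return "mixture or Eulerian model (particles tend to settle)"

# Example operating point: sand slurry in water (illustrative values).
fr = pipe_froude(v=3.0, d_pipe=0.2, rho_solid=2650.0, rho_liquid=1000.0)
print(round(fr, 2), suggest_model(fr))
```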
Abstract:
Recently, due to increasing total construction and transportation costs and the difficulties associated with handling massive structural components or assemblies, there has been increasing financial pressure to reduce structural weight. Furthermore, advances in material technology, coupled with continuing advances in design tools and techniques, have encouraged engineers to vary and combine materials, offering new opportunities to reduce the weight of mechanical structures. These new lower-mass systems, however, are more susceptible to inherent imbalances, a weakness that can result in higher shock and harmonic resonances, which lead to poor structural dynamic performance. The objective of this thesis is the modeling of layered sheet steel elements to accurately predict dynamic performance. During the development of the layered sheet steel model, a numerical modeling approach combining Finite Element Analysis and Experimental Modal Analysis is applied to build a modal model of the layered sheet steel elements. Furthermore, to gain a better understanding of the dynamic behavior of layered sheet steel, several binding methods have been studied to understand and demonstrate how a binding method affects the dynamic behavior of layered sheet steel elements compared to a single homogeneous steel plate. Based on the developed layered sheet steel model, the dynamic behavior of a lightweight wheel structure is studied, intended as the stator structure of an outer-rotor Direct-Drive Permanent Magnet Synchronous Generator designed for high-power wind turbines.
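As a minimal analogue of extracting natural frequencies for a modal model, the sketch below solves an undamped two-degree-of-freedom mass-spring eigenproblem in closed form; masses and stiffnesses are arbitrary illustrative values, unrelated to the thesis hardware.

```python
import math

# Undamped 2-DOF chain  wall--k1--m1--k2--m2  with diagonal mass matrix:
# solve det(K - w^2 M) = 0 via the characteristic polynomial of M^-1 K.
m1, m2 = 2.0, 1.0            # masses, kg (illustrative)
k1, k2 = 4000.0, 1500.0      # stiffnesses, N/m (illustrative)

a = (k1 + k2) / m1           # entries of M^-1 K
b = -k2 / m1
c = -k2 / m2
d = k2 / m2

tr, det = a + d, a * d - b * c
disc = math.sqrt(tr * tr - 4.0 * det)
lam = sorted([(tr - disc) / 2.0, (tr + disc) / 2.0])   # eigenvalues w^2
freqs_hz = [math.sqrt(l) / (2.0 * math.pi) for l in lam]
print([round(f, 2) for f in freqs_hz])   # the two natural frequencies
```

FEA does the same thing for assembled stiffness and mass matrices with thousands of degrees of freedom; experimental modal analysis identifies the same frequencies and mode shapes from measured responses.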
Abstract:
The partial replacement of NaCl by KCl is a promising alternative for producing cheese with lower sodium content, since KCl does not change the final quality of the cheese. In order to assure proper salt proportions, mathematical models are employed to control the production process and simulate the multicomponent diffusion during the ripening period of reduced-salt cheese. The generalized Fick's Second Law is widely accepted as the primary mass transfer model within solid foods, and the Finite Element Method (FEM) was used to solve the resulting system of differential equations. NaCl and KCl multicomponent diffusion was simulated using a 20% (w/w) static brine with 70% NaCl and 30% KCl during the salting and ripening of Prato cheese (a Brazilian semi-hard cheese). The theoretical results were compared with experimental data and showed deviations of 4.43% for NaCl and 4.72% for KCl, validating the proposed model for the production of good-quality, reduced-sodium cheeses.
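A stripped-down, one-dimensional finite-difference version of the salting model (the study itself used FEM) could look like this; diffusivities, geometry and brine concentrations are illustrative assumptions, not the study's values.

```python
# 1D explicit finite-difference solution of Fick's second law for each salt,
# from the brine-contact surface to the symmetry plane of the cheese slab.
# Diffusivities, geometry and brine concentrations are assumed values.
n, Lx = 40, 0.02                          # grid points, half-thickness (m)
dx = Lx / (n - 1)
D = {"NaCl": 2.0e-10, "KCl": 2.2e-10}     # effective diffusivities (m2/s)
c_brine = {"NaCl": 0.14, "KCl": 0.06}     # 20% w/w brine, 70/30 NaCl/KCl

def diffuse(salt, t_end=24 * 3600.0):
    Ds = D[salt]
    dt = 0.4 * dx * dx / Ds               # stable explicit time step
    c = [0.0] * n
    c[0] = c_brine[salt]                  # surface held at brine concentration
    t = 0.0
    while t < t_end:
        new = c[:]
        for i in range(1, n - 1):
            new[i] = c[i] + Ds * dt / dx ** 2 * (c[i + 1] - 2 * c[i] + c[i - 1])
        new[-1] = new[-2]                 # zero flux at the slab center
        c = new
        t += dt
    return c

profile = diffuse("NaCl")
print(round(profile[5], 4))               # NaCl a few mm under the surface
```

Each salt is diffused independently here; a true multicomponent treatment, as in the study, would couple the fluxes of the two ions.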
Abstract:
In this study, water uptake by poultry carcasses during cooling by water immersion was modeled using artificial neural networks. Data on twenty-five independent variables and the final mass of the carcass were collected in an industrial plant to train and validate the model. Different network structures with one hidden layer were tested, and the Downhill Simplex method was used to optimize the synaptic weights. In order to accelerate the optimization calculations, Principal Component Analysis (PCA) was used to preprocess the input data. The results obtained were: i) PCA reduced the number of input variables from twenty-five to ten; ii) the 4-6-1 neural network structure gave the best result; iii) PCA gave the following order of importance: mass transfer parameters, heat transfer parameters, and initial characteristics of the carcass. The main contributions of this work are an accurate model for predicting the final water content of the carcasses and a better understanding of the variables involved.
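The PCA preprocessing step can be sketched in miniature: the leading principal component of a set of correlated variables is extracted by power iteration on the covariance matrix. The data are synthetic, and the reduction is from 4 variables to 1 rather than from 25 to 10.

```python
import random

# Synthetic data: four variables all driven by one latent factor plus noise,
# so a single principal component captures almost all of the variance.
random.seed(0)
n_samples, n_vars = 200, 4
base = [random.gauss(0.0, 1.0) for _ in range(n_samples)]
X = [[base[i] + 0.1 * random.gauss(0.0, 1.0) for _ in range(n_vars)]
     for i in range(n_samples)]

mean = [sum(row[j] for row in X) / n_samples for j in range(n_vars)]
Xc = [[row[j] - mean[j] for j in range(n_vars)] for row in X]
cov = [[sum(Xc[i][p] * Xc[i][q] for i in range(n_samples)) / (n_samples - 1)
        for q in range(n_vars)] for p in range(n_vars)]

v = [1.0] * n_vars
for _ in range(100):                      # power iteration, top eigenvector
    w = [sum(cov[p][q] * v[q] for q in range(n_vars)) for p in range(n_vars)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

eig1 = sum(v[p] * sum(cov[p][q] * v[q] for q in range(n_vars))
           for p in range(n_vars))
total = sum(cov[p][p] for p in range(n_vars))
print(round(eig1 / total, 3))             # variance fraction of the first PC
```

Keeping only the components that explain most of the variance shrinks the network's input layer, which is exactly what accelerated the Downhill Simplex optimization in the study.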
Abstract:
The objective of this work was to determine and model the infrared dehydration curves of apple slices of the Fuji and Gala varieties. The slices were dehydrated until constant mass in a prototype dryer with an infrared heating source, at temperatures ranging from 50 to 100 °C. Due to the physical characteristics of the product, the dehydration curve was divided into two periods, constant and falling, separated by the critical moisture content. A linear model was used to describe the constant dehydration period, while empirical models traditionally used to describe the drying behavior of agricultural products were fitted to the experimental data of the falling dehydration period. Critical moisture contents of 2.811 and 3.103 kgw kgs-1 were observed for the Fuji and Gala varieties, respectively. Based on the results, it was concluded that the constant dehydration rates presented a direct relationship with temperature; thus, it was possible to fit a model that describes the moisture content variation as a function of time and temperature. Among the tested models describing the falling dehydration period, the model proposed by Midilli presented the best fit for all studied conditions.
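The Midilli model mentioned above has the form MR = a·exp(-k·tⁿ) + b·t; the sketch below evaluates it with hypothetical parameter values, not the fitted ones from the study.

```python
import math

# Midilli et al. model for the falling-rate period; parameter values are
# hypothetical, chosen only to produce a plausible drying curve.
def midilli(t, a=1.0, k=0.005, n=1.1, b=-1e-5):
    """Moisture ratio MR at time t (minutes)."""
    return a * math.exp(-k * t ** n) + b * t

def moisture_ratio(M, Me, M0):
    """MR from current, equilibrium and initial moisture contents (d.b.)."""
    return (M - Me) / (M0 - Me)

for t in (0, 60, 120, 240):
    print(t, round(midilli(t), 3))
```

Fitting a, k, n and b at each drying temperature, then regressing them against temperature, yields the combined time-temperature model described in the abstract.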
Abstract:
The objective of the work is to study the flow behavior and to support the design of an air cleaner by dynamic simulation. In the paper printing industry, it is necessary to monitor the quality of paper while it is being produced. During production, the quality of the paper can be monitored by a camera; it is therefore necessary to keep the camera lens clean, as wood particles may fall from the paper and land on the lens. In this work, the behavior of the air flow and its effect on the particles at different inlet angles are simulated. Geometries with different inlet angles for the single-channel and double-channel cases were constructed using ANSYS CFD software, and all simulations were performed in ANSYS Fluent. The simulation results for the single-channel and double-channel cases revealed significant differences in the behavior of the flow and the particle velocity. The main conclusions from this work are the following. 1) For the single-channel case, the best angle was 0 degrees, because at that angle the air flow can keep 60% of the particles away from the lens that would otherwise stay on it. 2) For the double-channel case, the best solution was found when the angle of the first inlet was 0 degrees and the angle of the second inlet was 45 degrees. In that case, the air flow can keep 91% of the particles away from the lens that would otherwise stay on it.
Abstract:
In this paper, we study several tests for the equality of two unknown distributions. Two are based on empirical distribution functions, three others on nonparametric probability density estimates, and the last ones on differences between sample moments. We suggest controlling the size of such tests (under nonparametric assumptions) by using permutational versions of the tests jointly with the method of Monte Carlo tests properly adjusted to deal with discrete distributions. We also propose a combined test procedure, whose level is again perfectly controlled through the Monte Carlo test technique and has better power properties than the individual tests that are combined. Finally, in a simulation experiment, we show that the technique suggested provides perfect control of test size and that the new tests proposed can yield sizeable power improvements.
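A minimal permutational version of a two-sample test with a Monte Carlo p-value might look as follows, using the absolute difference in sample means as the statistic and synthetic data for illustration.

```python
import random

# Permutation two-sample test: relabel the pooled sample at random and
# compare the observed statistic with its permutation distribution.
random.seed(42)
x = [random.gauss(0.0, 1.0) for _ in range(30)]   # sample 1 (synthetic)
y = [random.gauss(0.8, 1.0) for _ in range(30)]   # sample 2, shifted mean

def stat(a, b):
    return abs(sum(a) / len(a) - sum(b) / len(b))  # |difference in means|

observed = stat(x, y)
pooled = x + y
B, count = 999, 0                                 # number of permutations
for _ in range(B):
    random.shuffle(pooled)
    if stat(pooled[:30], pooled[30:]) >= observed:
        count += 1
p_value = (count + 1) / (B + 1)                   # Monte Carlo p-value
print(p_value)
```

The (count + 1)/(B + 1) form is what makes the randomized test size-exact under the null of equal distributions; the paper's combined procedure applies the same Monte Carlo logic to several statistics jointly.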
Abstract:
In this paper, we introduce a new approach for volatility modeling in discrete and continuous time. We follow the stochastic volatility literature by assuming that the variance is a function of a state variable. However, instead of assuming that the loading function is ad hoc (e.g., exponential or affine), we assume that it is a linear combination of the eigenfunctions of the conditional expectation (resp. infinitesimal generator) operator associated with the state variable in discrete (resp. continuous) time. Special examples are the popular log-normal and square-root models where the eigenfunctions are the Hermite and Laguerre polynomials respectively. The eigenfunction approach has at least six advantages: i) it is general since any square integrable function may be written as a linear combination of the eigenfunctions; ii) the orthogonality of the eigenfunctions leads to the traditional interpretations of the linear principal components analysis; iii) the implied dynamics of the variance and squared return processes are ARMA and, hence, simple for forecasting and inference purposes; iv) more importantly, this generates fat tails for the variance and returns processes; v) in contrast to popular models, the variance of the variance is a flexible function of the variance; vi) these models are closed under temporal aggregation.
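For the log-normal case, the eigenfunctions are the (probabilists') Hermite polynomials; the sketch below generates them by the standard three-term recurrence and checks their orthogonality under the standard normal law numerically (a Monte Carlo check, not a proof).

```python
import random

# Probabilists' Hermite polynomials via the three-term recurrence
#   He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x),  He_0 = 1, He_1 = x.
def hermite(n, x):
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

# Monte Carlo check of orthogonality under the standard normal law:
# E[He_m(Z) He_n(Z)] = 0 for m != n, and n! for m == n.
random.seed(1)
draws = [random.gauss(0.0, 1.0) for _ in range(100000)]

def inner(m, n):
    return sum(hermite(m, z) * hermite(n, z) for z in draws) / len(draws)

print(round(inner(1, 2), 2), round(inner(2, 2), 2))
```

This orthogonality is what yields the principal-components interpretation mentioned in advantage ii).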
Abstract:
This thesis, entitled "Reliability Modelling and Analysis in Discrete Time: Some Concepts and Models Useful in the Analysis of Discrete Lifetime Data", consists of five chapters. In Chapter II we take up the derivation of some general results useful in reliability modelling that involve two-component mixtures. Expressions for the failure rate, mean residual life and second moment of residual life of the mixture distributions, in terms of the corresponding quantities in the component distributions, are investigated. Some applications of these results are also pointed out. The role of the geometric, Waring and negative hypergeometric distributions as models of life lengths in the discrete time domain has already been discussed. While describing various reliability characteristics, it was found that they can often be considered as a class. The applicability of these models in single populations naturally extends to populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics and characterizations of these models are discussed in Chapter III. Inference of parameters in a mixture distribution is usually a difficult problem, because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, least-squares function, etc., and the resulting computations very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without computational hazards. As mentioned in the review of results in the previous sections, partial moments have not been studied extensively in the literature, especially for discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments.
Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well-known families. It is further demonstrated that partial moments are equally efficient and convenient compared to many of the conventional tools for resolving practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the course of the present investigation and could be the subject of future work in this area.
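The mixture quantities discussed in Chapters II and III can be illustrated for a two-component geometric mixture, whose survival function is available in closed form; the weights and parameters below are arbitrary examples.

```python
# Two-component geometric mixture on x = 0, 1, 2, ...:
#   p(x) = w*(1-p1)^x*p1 + (1-w)*(1-p2)^x*p2
# Discrete failure rate h(x) = p(x) / P(X >= x); a single geometric has
# constant h, while the mixture has a decreasing failure rate.
def mix_pmf(x, w=0.5, p1=0.3, p2=0.05):
    return w * (1 - p1) ** x * p1 + (1 - w) * (1 - p2) ** x * p2

def survival(x, w=0.5, p1=0.3, p2=0.05):
    # P(X >= x), available in closed form for the geometric mixture.
    return w * (1 - p1) ** x + (1 - w) * (1 - p2) ** x

def failure_rate(x, **kw):
    return mix_pmf(x, **kw) / survival(x, **kw)

rates = [round(failure_rate(x), 4) for x in (0, 5, 20)]
print(rates)   # decreasing toward min(p1, p2)
```

Setting w = 1 recovers the constant failure rate p1 of the pure geometric, illustrating how mixing changes the reliability characteristics of the components.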
Abstract:
In this paper, a novel fast method for modeling mammograms by a deterministic fractal coding approach to detect the presence of microcalcifications, which are early signs of breast cancer, is presented. The modeled mammogram obtained using the fractal encoding method is visually similar to the original image containing microcalcifications; therefore, when it is subtracted from the original mammogram, the presence of microcalcifications can be enhanced. The limitation of fractal image modeling is the tremendous time required for encoding. In the present work, instead of searching for a matching domain in the entire domain pool of the image, three methods based on mean and variance, dynamic range of the image blocks, and mass-center features are used. These reduced the encoding time by factors of 3, 89, and 13, respectively, with respect to conventional fractal image coding with quadtree partitioning. The mammograms obtained from the Mammographic Image Analysis Society database (ground truth available) gave total detection scores of 87.6%, 87.6%, 90.5%, and 87.6% for the conventional method and the three proposed methods, respectively.
Abstract:
The study of variable stars is an important topic of modern astrophysics. After the invention of powerful telescopes and CCDs with high resolving power, variable star data are accumulating on the order of petabytes. The huge amount of data needs many automated methods as well as human experts. This thesis is devoted to the data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary topic of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various reasons. In some cases, the variation is due to internal thermonuclear processes; these stars are generally known as intrinsic variables. In other cases, it is due to external processes, like eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
One way to identify the type of a variable star and to classify it is visual inspection of the phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages such as observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since wrong periods can lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic-ray particles. Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. Many period search algorithms exist for astronomical time series analysis, and they can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as the Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong period detection can have several causes, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton (AAVSO) states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of the huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
It would be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase were obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
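In the spirit of the phase-dispersion style period searches discussed above (e.g., Stellingwerf's PDM), the sketch below folds a synthetic, unevenly sampled light curve on trial periods and picks the period minimizing the within-bin scatter; all numbers are invented for illustration.

```python
import math
import random

# Synthetic, unevenly sampled light curve of a sinusoidal variable.
random.seed(3)
true_period = 0.7341                     # days (assumed)
times = sorted(random.uniform(0.0, 30.0) for _ in range(300))
mags = [10.0 + 0.5 * math.sin(2.0 * math.pi * t / true_period)
        + random.gauss(0.0, 0.02) for t in times]

def phase_dispersion(period, n_bins=10):
    """Sum of within-bin variances of the light curve folded on `period`;
    small when the fold is phase-coherent."""
    bins = [[] for _ in range(n_bins)]
    for t, m in zip(times, mags):
        phase = (t / period) % 1.0
        bins[int(phase * n_bins)].append(m)
    s = 0.0
    for b in bins:
        if len(b) > 1:
            mu = sum(b) / len(b)
            s += sum((m - mu) ** 2 for m in b)
    return s

trial_periods = [0.5 + 0.0002 * i for i in range(2500)]   # 0.5 to 1.0 d
best = min(trial_periods, key=phase_dispersion)
print(round(best, 4))
```

Because the statistic needs no assumed waveform, this family of methods is non-parametric and copes naturally with the uneven sampling typical of ground-based surveys.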