884 results for "Stochastic SIS logistic model"


Relevance:

100.00%

Publisher:

Abstract:

Dose-finding designs estimate the dose level of a drug based on observed adverse events. Relatedness of the adverse event to the drug has been generally ignored in all proposed design methodologies. These designs assume that the adverse events observed during a trial are definitely related to the drug, which can lead to flawed dose-level estimation. We incorporate adverse event relatedness into the so-called continual reassessment method. Adverse events that have ‘doubtful’ or ‘possible’ relationships to the drug are modelled using a two-parameter logistic model with an additive probability mass. Adverse events ‘probably’ or ‘definitely’ related to the drug are modelled using a cumulative logistic model. To search for the maximum tolerated dose, we use the maximum estimated toxicity probability of these two adverse event relatedness categories. We conduct a simulation study that illustrates the characteristics of the design under various scenarios. This article demonstrates that adverse event relatedness is important for improved dose estimation. It opens up further research pathways into continual reassessment design methodologies.
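As a rough sketch of the kind of model this abstract describes (not the authors' exact formulation), a two-parameter logistic dose-toxicity curve with an additive probability mass for 'doubtful'/'possible' adverse events, and a maximum tolerated dose (MTD) search over the maximum of the two toxicity estimates, might look like:

```python
import math

def logistic_tox(dose, a, b):
    """Two-parameter logistic dose-toxicity curve: P(toxicity | dose)."""
    return 1.0 / (1.0 + math.exp(-(a + b * dose)))

def possibly_related_tox(dose, a, b, eps):
    """Toxicity curve with an additive probability mass eps for adverse
    events whose relation to the drug is only 'doubtful' or 'possible'."""
    return min(1.0, eps + (1.0 - eps) * logistic_tox(dose, a, b))

# Hypothetical standardized dose levels and parameters (illustrative only)
doses = [-1.0, -0.5, 0.0, 0.5, 1.0]
target = 0.3          # target toxicity probability
a, b, eps = 0.0, 2.0, 0.05

# MTD search: highest dose whose maximum toxicity estimate over the two
# relatedness categories stays at or below the target
admissible = [d for d in doses
              if max(logistic_tox(d, a, b),
                     possibly_related_tox(d, a, b, eps)) <= target]
mtd = max(admissible) if admissible else None
```

The dose levels, the parameters `a`, `b`, the mass `eps`, and the 0.3 target are all hypothetical; the continual reassessment method would re-estimate the curve parameters from accumulating trial data rather than fix them.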

Relevance:

100.00%

Publisher:

Abstract:

Introduced predators can have pronounced effects on naïve prey species; thus, predator control is often essential for conservation of threatened native species. Complete eradication of the predator, although desirable, may be elusive in budget-limited situations, whereas predator suppression is more feasible and may still achieve conservation goals. We used a stochastic predator-prey model based on a Lotka-Volterra system to investigate the cost-effectiveness of predator control to achieve prey conservation. We compared five control strategies: immediate eradication, removal of a constant number of predators (fixed-number control), removal of a constant proportion of predators (fixed-rate control), removal of predators that exceed a predetermined threshold (upper-trigger harvest), and removal of predators whenever their population falls below a lower predetermined threshold (lower-trigger harvest). We looked at the performance of these strategies when managers could always remove the full number of predators targeted by each strategy, subject to budget availability. Under this assumption immediate eradication reduced the threat to the prey population the most. We then examined the effect of reduced management success in meeting removal targets, assuming removal is more difficult at low predator densities. In this case there was a pronounced reduction in performance of the immediate eradication, fixed-number, and lower-trigger strategies. Although immediate eradication still yielded the highest expected minimum prey population size, upper-trigger harvest yielded the lowest probability of prey extinction and the greatest return on investment (as measured by improvement in expected minimum population size per amount spent). Upper-trigger harvest was relatively successful because it operated when predator density was highest, which is when predator removal targets can be more easily met and the effect of predators on the prey is most damaging. 
This suggests that controlling predators only when they are most abundant is the "best" strategy when financial resources are limited and eradication is unlikely. © 2008 Society for Conservation Biology.
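The strategy comparison can be sketched with a toy discrete-time stochastic Lotka-Volterra model; every parameter below is illustrative rather than taken from the paper's calibrated system:

```python
import random

def simulate(control, steps=200, seed=1):
    """Discrete-time stochastic Lotka-Volterra system with a predator-removal
    rule; returns the minimum prey population seen over the run."""
    rng = random.Random(seed)
    prey, pred = 100.0, 20.0
    K, a, c, m = 200.0, 0.01, 0.005, 0.1   # illustrative parameters
    min_prey = prey
    for _ in range(steps):
        r = 0.3 + rng.gauss(0.0, 0.05)     # noisy prey growth rate
        prey = max(0.0, prey + r * prey * (1.0 - prey / K) - a * prey * pred)
        pred = max(0.0, pred + c * a * prey * pred - m * pred)
        pred = max(0.0, pred - control(pred))   # management removal
        min_prey = min(min_prey, prey)
    return min_prey

# Upper-trigger harvest: remove predators only above a threshold
upper_trigger = lambda pred: max(0.0, pred - 15.0)
# Fixed-number control: always attempt to remove two predators
fixed_number = lambda pred: 2.0

min_upper = simulate(upper_trigger)
min_fixed = simulate(fixed_number)
```

In the paper the expected minimum population size is estimated over many stochastic replicates and weighed against removal costs; this sketch shows only the mechanics of plugging alternative control rules into one trajectory.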

Relevance:

100.00%

Publisher:

Abstract:

Clinical trials have shown that weight reduction through lifestyle changes can delay or prevent diabetes and reduce blood pressure. An appropriate definition of obesity using anthropometric measures is useful in predicting diabetes and hypertension at the population level. However, there is debate on which measure of obesity is best or most strongly associated with diabetes and hypertension, and on what the optimal cut-off values for body mass index (BMI) and waist circumference (WC) are in this regard. The aims of the study were 1) to compare the strength of the association of undiagnosed or newly diagnosed diabetes (or hypertension) with anthropometric measures of obesity in people of Asian origin, 2) to detect ethnic differences in the association of undiagnosed diabetes with obesity, 3) to identify ethnic- and sex-specific change point values of BMI and WC for changes in the prevalence of diabetes, and 4) to evaluate the ethnic-specific WC cut-off values proposed by the International Diabetes Federation (IDF) in 2005 for central obesity. The study population comprised 28 435 men and 35 198 women, ≥ 25 years of age, from 39 cohorts participating in the DECODA and DECODE studies, including 5 Asian Indian (n = 13 537), 3 Mauritian Indian (n = 4505) and Mauritian Creole (n = 1075), 8 Chinese (n = 10 801), 1 Filipino (n = 3841), 7 Japanese (n = 7934), 1 Mongolian (n = 1991), and 14 European (n = 20 979) studies. The prevalence of diabetes, hypertension and central obesity was estimated using descriptive statistics, and the differences were determined with the χ2 test. The odds ratios (ORs) or β coefficients (from the logistic model) and hazard ratios (HRs, from the Cox model applied to interval-censored data) for BMI, WC, waist-to-hip ratio (WHR), and waist-to-stature ratio (WSR) were estimated for diabetes and hypertension. The differences between BMI and WC, WHR or WSR were compared by applying paired homogeneity tests (Wald statistics with 1 df).
Hierarchical three-level Bayesian change point analysis, adjusting for age, was applied to identify the most likely cut-off/change point values for BMI and WC in association with previously undiagnosed diabetes. The ORs for diabetes in men (women) with BMI, WC, WHR and WSR were 1.52 (1.59), 1.54 (1.70), 1.53 (1.50) and 1.62 (1.70), respectively, and the corresponding ORs for hypertension were 1.68 (1.55), 1.66 (1.51), 1.45 (1.28) and 1.63 (1.50). For diabetes the OR for BMI did not differ from that for WC or WHR, but was lower than that for WSR (p = 0.001) in men, while in women the ORs were higher for WC and WSR than for BMI (both p < 0.05). Hypertension was more strongly associated with BMI than with WHR in men (p < 0.001), and in women more strongly with BMI than with WHR (p < 0.001), WSR (p < 0.01) and WC (p < 0.05). The HRs for incidence of diabetes and hypertension did not differ between BMI and the other three central obesity measures in Mauritian Indians and Mauritian Creoles during follow-ups of 5, 6 and 11 years. The prevalence of diabetes was highest in Asian Indians, lowest in Europeans and intermediate in the others, given the same BMI or WC category. The β coefficients for diabetes on BMI (kg/m2) were (men/women): 0.34/0.28, 0.41/0.43, 0.42/0.61, 0.36/0.59 and 0.33/0.49 for Asian Indian, Chinese, Japanese, Mauritian Indian and European (overall homogeneity test: p > 0.05 in men and p < 0.001 in women). Similar results were obtained for WC (cm). Asian Indian women had lower β coefficients than women of other ethnicities. The change points for BMI were 29.5, 25.6, 24.0, 24.0 and 21.5 kg/m2 in men and 29.4, 25.2, 24.9, 25.3 and 22.5 kg/m2 in women of European, Chinese, Mauritian Indian, Japanese, and Asian Indian descent. The change points for WC were 100, 85, 79 and 82 cm in men and 91, 82, 82 and 76 cm in women of European, Chinese, Mauritian Indian, and Asian Indian descent.
The prevalence of central obesity under the 2005 IDF definition was higher in Japanese men but lower in Japanese women than in their Asian counterparts. Compared with the National Cholesterol Education Programme definition, the prevalence of central obesity was 52 times higher in Japanese men but 0.8 times as high in Japanese women. The findings suggest that both BMI and WC predicted diabetes and hypertension equally well in all ethnic groups. At the same BMI or WC level, the prevalence of diabetes was highest in Asian Indians, lowest in Europeans and intermediate in others. Ethnic- and sex-specific change points of BMI and WC should be considered in setting diagnostic criteria for obesity to detect undiagnosed or newly diagnosed diabetes.

Relevance:

100.00%

Publisher:

Abstract:

Relaxation labeling processes are a class of mechanisms that solve the problem of assigning labels to objects in a manner that is consistent with respect to some domain-specific constraints. We reformulate this using the model of a team of learning automata interacting with an environment or a high-level critic that gives noisy responses as to the consistency of a tentative labeling selected by the automata. This results in an iterative linear algorithm that is itself probabilistic. Using an explicit definition of consistency we give a complete analysis of this probabilistic relaxation process using weak convergence results for stochastic algorithms. Our model can accommodate a range of uncertainties in the compatibility functions. We prove a local convergence result and show that the point of convergence depends both on the initial labeling and the constraints. The algorithm is implementable in a highly parallel fashion.
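The team-of-automata formulation can be sketched with a linear reward-inaction update and a noisy consistency critic. The constraint below (neighbors must agree on a label) is a toy stand-in for the paper's general compatibility functions:

```python
import random

def relaxation_labeling(n_objects=3, n_labels=2, steps=2000, lr=0.05, seed=0):
    """Team of learning automata: each object keeps a probability vector
    over labels, samples a label, and reinforces that choice whenever the
    environment (a noisy consistency critic) approves the joint labeling."""
    rng = random.Random(seed)
    # Uniform initial label probabilities
    p = [[1.0 / n_labels] * n_labels for _ in range(n_objects)]

    def consistent(labeling):
        # Toy constraint: neighboring objects should take the same label.
        # The critic responds stochastically, as in the noisy-environment setting.
        ok = all(labeling[i] == labeling[i + 1] for i in range(n_objects - 1))
        return ok and rng.random() < 0.9

    for _ in range(steps):
        labeling = [rng.choices(range(n_labels), weights=pi)[0] for pi in p]
        if consistent(labeling):
            # Linear reward-inaction: move probability mass toward the
            # rewarded action; do nothing on a penalty
            for i, act in enumerate(labeling):
                for j in range(n_labels):
                    p[i][j] += lr * ((1.0 if j == act else 0.0) - p[i][j])
    return p

probs = relaxation_labeling()
```

As the abstract notes, the point of convergence depends on the initial labeling and the constraints; with a symmetric start, which consistent labeling wins here is decided by early random draws.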

Relevance:

100.00%

Publisher:

Abstract:

We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.
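The worst-case design idea behind the norm-bounded error (NBE) model can be illustrated in a scalar toy setting, far simpler than the paper's MIMO SDP formulations: given a channel estimate and an uncertainty radius, choose a receive gain minimizing the worst-case MSE over all channels in the uncertainty set (all values illustrative):

```python
def mse(g, h, x_power=1.0, noise_power=0.1):
    """MSE of estimating x from y = h*x + n using x_hat = g*y (scalar toy)."""
    return (g * h - 1.0) ** 2 * x_power + g * g * noise_power

def worst_case_mse(g, h_est, delta, grid=101):
    """Worst-case MSE over the uncertainty set |h - h_est| <= delta."""
    hs = [h_est - delta + 2.0 * delta * i / (grid - 1) for i in range(grid)]
    return max(mse(g, h) for h in hs)

def robust_gain(h_est, delta, grid=201):
    """Brute-force search for the gain minimizing the worst-case MSE."""
    gs = [2.0 * i / (grid - 1) for i in range(grid)]
    return min(gs, key=lambda g: worst_case_mse(g, h_est, delta))

g_robust = robust_gain(h_est=1.0, delta=0.3)
g_nonrobust = robust_gain(h_est=1.0, delta=0.0)   # design that ignores CSIT error
```

In the paper the analogous matrix-valued problems are solved as pairs of semidefinite programs per iteration rather than by grid search; this sketch only shows why a robust design cannot do worse than a nonrobust one on the worst-case objective.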

Relevance:

100.00%

Publisher:

Abstract:

Molecular motors are proteins that convert chemical energy into mechanical work. The viral packaging ATPase P4 is a hexameric molecular motor that translocates RNA into preformed viral capsids. P4 belongs to the ubiquitous class of hexameric helicases. Although its structure is known, the mechanism of RNA translocation remains elusive. Here we present a detailed kinetic study of nucleotide binding, hydrolysis, and product release by P4. We propose a stochastic-sequential cooperative model to describe the coordination of ATP hydrolysis within the hexamer. In this model the apparent cooperativity is a result of hydrolysis stimulation by ATP and RNA binding to neighboring subunits rather than cooperative nucleotide binding. Simultaneous interaction of neighboring subunits with RNA makes the otherwise random hydrolysis sequential and processive. Further, we use hydrogen/deuterium exchange detected by high resolution mass spectrometry to visualize P4 conformational dynamics during the catalytic cycle. Concerted changes of exchange kinetics reveal a cooperative unit that dynamically links ATP binding sites and the central RNA binding channel. The cooperative unit is compatible with the structure-based model in which translocation is effected by conformational changes of a limited protein region. Deuterium labeling also discloses the transition state associated with RNA loading which proceeds via opening of the hexameric ring. Hydrogen/deuterium exchange is further used to delineate the interactions of the P4 hexamer with the viral procapsid. P4 associates with the procapsid via its C-terminal face. The interactions stabilize subunit interfaces within the hexamer. The conformation of the virus-bound hexamer is more stable than the hexamer in solution, which is prone to spontaneous ring openings. We propose that the stabilization within the viral capsid increases the packaging processivity and confers selectivity during RNA loading. 
Finally, we use single molecule techniques to characterize P4 translocation along RNA. While the P4 hexamer encloses RNA topologically within the central channel, it diffuses randomly along the RNA. In the presence of ATP, unidirectional net movement is discernible in addition to the stochastic motion. The diffusion is hindered by activation energy barriers that depend on the nucleotide binding state. The results suggest that P4 employs an electrostatic clutch instead of cycling through stable, discrete, RNA binding states during translocation. Conformational changes coupled to ATP hydrolysis modify the electrostatic potential inside the central channel, which in turn biases RNA motion in one direction. Implications of the P4 model for other hexameric molecular motors are discussed.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we consider robust joint linear precoder/receive filter designs for multiuser multi-input multi-output (MIMO) downlink that minimize the sum mean square error (SMSE) in the presence of imperfect channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. We consider a stochastic error (SE) model and a norm-bounded error (NBE) model for the CSIT error. In the case of CSIT error following SE model, we compute the desired downlink precoder/receive filter matrices by solving the simpler uplink problem by exploiting the uplink-downlink duality for the MSE region. In the case of the CSIT error following the NBE model, we consider the worst-case SMSE as the objective function, and propose an iterative algorithm for the robust transceiver design. The robustness of the proposed algorithms to imperfections in CSIT is illustrated through simulations.

Relevance:

100.00%

Publisher:

Abstract:

The amount and location of dead wood are of interest not only for habitat biodiversity but also for atmospheric carbon storage. The aim of this study was to develop an area-based model that uses airborne laser scanning data to locate dead-wood sites and to estimate dead-wood volume. At the same time, the change in the model's explanatory power was examined as the size of the modeled grid cell was increased. The study area was located in Sonkajärvi, eastern Finland, and consisted mainly of young managed commercial forests. The study used low-pulse-density laser scanning data together with strip-wise field measurements of dead wood. The data were split so that one quarter was used for modeling and the rest was reserved for testing the finished models. Both parametric and non-parametric modeling methods were used. Logistic regression was used to predict the probability of dead-wood occurrence for grid cells of different sizes (0.04, 0.20, 0.32, 0.52 and 1.00 ha). The explanatory variables of the models were selected from 80 laser-derived features and their transformations, in three stages. First, candidate variables were examined visually by plotting them against dead-wood volume. The explanatory power of the variables judged most suitable in this first stage was then tested with single-variable models. In the final multivariable model, the criterion for explanatory variables was statistical significance at the 5% risk level. The model created for the 0.20 ha cell size was re-parameterized for the other cell sizes. In addition to the parametric logistic regression modeling, the 0.04 and 1.0 ha data sets were classified with non-parametric CART (Classification and Regression Trees) modeling. The CART method was used to search the data for hard-to-detect nonlinear dependencies between the laser features and dead-wood volume.
CART classification was carried out both for dead-wood presence and for dead-wood volume. CART classification outperformed logistic regression in classifying cells by dead-wood presence. Classification with the logistic model improved as cell size increased from 0.04 ha (kappa 0.19) up to 0.32 ha (kappa 0.38); at the 0.52 ha cell size the kappa value turned downward (kappa 0.32) and declined further up to the one-hectare cell size (kappa 0.26). CART classification improved with increasing cell size, and its results were better than the logistic model's at both the 0.04 ha (kappa 0.24) and 1.0 ha (kappa 0.52) cell sizes. The relative RMSE of cell-level dead-wood volumes estimated with the CART models decreased with increasing cell size: at 0.04 ha the relative RMSE over the whole data set was 197.1%, whereas at one hectare the corresponding figure was 120.3%. Based on these results, the relationship between field-measured dead-wood volume and the laser features used in this study is very weak at small cell sizes but strengthens somewhat as cell size increases. However, as the modeling cell size grows, detecting small dead-wood concentrations becomes more difficult. In this study, dead-wood presence could be mapped reasonably well at large cell sizes, but mapping small sites was not successful with the methods used. Locating small sites by laser scanning requires further research, particularly on the use of high-pulse-density laser data in dead-wood inventories.
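The kappa statistic used to score these dead-wood presence classifications measures agreement beyond chance. A minimal sketch with toy cell-level labels (illustrative data, not the study's):

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa for a binary presence/absence classification."""
    n = len(y_true)
    agree = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Chance agreement from the marginal label frequencies
    p_yes = (sum(y_true) / n) * (sum(y_pred) / n)
    p_no = (1 - sum(y_true) / n) * (1 - sum(y_pred) / n)
    chance = p_yes + p_no
    return (agree - chance) / (1.0 - chance)

# Toy cell labels: 1 = dead wood present, 0 = absent (illustrative only)
truth = [1, 1, 1, 0, 0, 0, 0, 0]
pred = [1, 1, 0, 0, 0, 0, 1, 0]
kappa = cohens_kappa(truth, pred)
```

Kappa is 0 for chance-level agreement and 1 for perfect agreement, which is why values such as 0.19 versus 0.52 summarize the cell-size effect more fairly than raw accuracy when most cells contain no dead wood.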

Relevance:

100.00%

Publisher:

Abstract:

This dissertation develops a strategic management accounting perspective on inventory routing. The thesis studies the drivers of cost efficiency gains by identifying the role of the underlying cost structure, demand, information sharing, forecasting accuracy, service levels, vehicle fleet, planning horizon and other strategic factors, as well as the interaction effects among these factors with respect to performance outcomes. The task is to enhance knowledge of the strategic situations that favor the implementation of inventory routing systems, understanding cause-and-effect relationships and linkages, and gaining a holistic view of the value proposition of inventory routing. The thesis applies an exploratory case study design, which is based on normative quantitative empirical research using optimization, simulation and factor analysis. Data and results are drawn from a real-world application to cash supply chains. The first research paper shows that performance gains require a common cost component and cannot be explained by simple linear or affine cost structures. Inventory management and distribution decisions become separable in the absence of a set-dependent cost structure, and neither economies of scope nor coordination problems are present in this case. The second research paper analyzes whether information sharing improves overall forecasting accuracy. The analysis suggests that the potential for information sharing is limited to coordination of replenishments and that central information does not yield more accurate forecasts based on joint forecasting. The third research paper develops a novel formulation of the stochastic inventory routing model that accounts for minimal service levels and forecasting accuracy. The developed model allows studying the interaction of minimal service levels and forecasting accuracy with the underlying cost structure in inventory routing.
Interestingly, results show that the factors minimal service level and forecasting accuracy are not statistically significant, and subsequently not relevant for the strategic decision problem to introduce inventory routing, or in other words, to effectively internalize inventory management and distribution decisions at the supplier. Consequently the main contribution of this thesis is the result that cost benefits of inventory routing are derived from the joint decision model that accounts for the underlying set-dependent cost structure rather than the level of information sharing. This result suggests that the value of information sharing of demand and inventory data is likely to be overstated in prior literature. In other words, cost benefits of inventory routing are primarily determined by the cost structure (i.e. level of fixed costs and transportation costs) rather than the level of information sharing, joint forecasting, forecasting accuracy or service levels.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we consider robust joint designs of relay precoder and destination receive filters in a nonregenerative multiple-input multiple-output (MIMO) relay network. The network consists of multiple source-destination node pairs assisted by a MIMO-relay node. The channel state information (CSI) available at the relay node is assumed to be imperfect. We consider robust designs for two models of CSI error. The first model is a stochastic error (SE) model, where the probability distribution of the CSI error is Gaussian. This model is applicable when the imperfect CSI is mainly due to errors in channel estimation. For this model, we propose robust minimum sum mean square error (SMSE), MSE-balancing, and relay transmit power minimizing precoder designs. The next model for the CSI error is a norm-bounded error (NBE) model, where the CSI error can be specified by an uncertainty set. This model is applicable when the CSI error is dominated by quantization errors. In this case, we adopt a worst-case design approach. For this model, we propose a robust precoder design that minimizes total relay transmit power under constraints on MSEs at the destination nodes. We show that the proposed robust design problems can be reformulated as convex optimization problems that can be solved efficiently using interior-point methods. We demonstrate the robust performance of the proposed design through simulations.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the development and application of a stochastic dynamic programming model with fuzzy state variables for irrigation of multiple crops. A fuzzy stochastic dynamic programming (FSDP) model is developed in which the reservoir storage and the soil moisture of the crops are treated as fuzzy numbers, and the reservoir inflow is treated as a stochastic variable. The model is formulated with the objective of minimizing crop yield deficits, resulting in optimal water allocations to the crops while maintaining storage continuity and soil moisture balance. The standard fuzzy arithmetic method is used to solve all arithmetic equations with fuzzy numbers, and the fuzzy ranking method is used to compare two or more fuzzy numbers. The reservoir operation model is integrated with a daily water allocation model, which yields daily temporal variations of allocated water, soil moisture, and crop deficits. A case study of the existing Bhadra reservoir in Karnataka, India, is chosen for the model application. The FSDP model is more realistic because it considers the uncertainty in discretization of the state variables. The results obtained using the FSDP model are found to be more acceptable for the case study than those of the classical stochastic dynamic model and the standard operating model, in terms of 10-day releases from the reservoir and evapotranspiration deficit. (C) 2015 American Society of Civil Engineers.
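The standard fuzzy arithmetic mentioned above can be illustrated with triangular fuzzy numbers. This is a simplified sketch of one water-balance step with made-up values, not the paper's FSDP formulation:

```python
class TriFuzzy:
    """Triangular fuzzy number, represented by (left, peak, right)."""
    def __init__(self, l, m, r):
        self.l, self.m, self.r = l, m, r

    def __add__(self, other):
        # Standard fuzzy addition: add the supports endpoint-wise
        return TriFuzzy(self.l + other.l, self.m + other.m, self.r + other.r)

    def __sub__(self, other):
        # Fuzzy subtraction: our left endpoint minus the other's right,
        # and vice versa, so the result's support widens
        return TriFuzzy(self.l - other.r, self.m - other.m, self.r - other.l)

    def centroid(self):
        # A simple ranking index: centroid of the triangle
        return (self.l + self.m + self.r) / 3.0

# Fuzzy storage balance: storage + inflow - release (illustrative values)
storage = TriFuzzy(90, 100, 110)
inflow = TriFuzzy(18, 20, 22)
release = TriFuzzy(28, 30, 32)
next_storage = storage + inflow - release   # TriFuzzy(76, 90, 104)
```

Note how uncertainty accumulates: the support widens from ±10 to ±14 after one step, which is why ranking methods are needed to compare candidate allocations.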

Relevance:

100.00%

Publisher:

Abstract:

Observations on maturation stages of nineteen species of economically important finfish off the Northeast coast of the USA were analyzed to examine relationships between fish size or age, and maturity. Maturation schedules and median lengths (L50) and ages (A50) at maturation were derived by fitting the logistic model to the observed proportions. Analyses were generally restricted to observations from 1985 to 1990 obtained during stratified random bottom trawl surveys conducted in spring and autumn by the Northeast Fisheries Science Center and the Commonwealth of Massachusetts Division of Marine Fisheries in waters of the continental shelf from Nova Scotia to Cape Hatteras, North Carolina. Butterfish, Peprilus triacanthus, attained sexual maturity at the smallest median length (11.4 cm, males) and pollock, Pollachius virens, at the highest (41.8 cm, males). Median length at maturity for gadiforms ranged from 22.2 to 41.8 cm. Within the pleuronectiforms, median length at maturity ranged from 19.1 to 30.4 cm. Median lengths for the pelagic and miscellaneous demersal species were in the same ranges as the pleuronectiforms. Butterfish also attained sexual maturity at the youngest median age (0.9 yr, both sexes) whereas redfish, Sebastes fasciatus, were the latest to mature (5.5 yr, both sexes). For gadids, the median age at maturity ranged from 1.3 to 2.3 yr. Within the pleuronectiforms, median age at maturity ranged from 1.3 to 4.4 yr and, for pelagic species, from 0.9 to 3.0 yr. Median lengths and ages for many species are lower than those reported in earlier studies of the same general region of the Northwest Atlantic. (PDF file contains 72 pages.)
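The logistic maturation schedule and its L50 can be sketched directly. The coefficients below are hypothetical, chosen only so the ogive reproduces the butterfish median of 11.4 cm:

```python
import math

def maturity_ogive(length, a, b):
    """Logistic model for the proportion mature at a given length."""
    return 1.0 / (1.0 + math.exp(-(a + b * length)))

def l50(a, b):
    """Median length at maturation: where the ogive crosses 0.5."""
    return -a / b

# Hypothetical fitted coefficients (illustrative, not from the survey data)
a, b = -6.84, 0.60
median_length = l50(a, b)   # 11.4 cm
```

In practice `a` and `b` would be estimated by maximum likelihood from the observed proportions mature at length, and A50 follows the same construction with age in place of length.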

Relevance:

100.00%

Publisher:

Abstract:

Large-eddy simulation (LES) has emerged as a promising tool for simulating turbulent flows in general and, in recent years, has also been applied to particle-laden turbulence with some success (Kassinos et al., 2007). The motion of inertial particles is much more complicated than that of fluid elements, and therefore LES of turbulent flow laden with inertial particles encounters new challenges. In conventional LES, only large-scale eddies are explicitly resolved and the effects of unresolved, small or subgrid scale (SGS) eddies on the large-scale eddies are modeled; the SGS turbulent flow field itself is not available. The effects of the SGS turbulent velocity field on particle motion have been studied by Wang and Squires (1996), Armenio et al. (1999), Yamamoto et al. (2001), Shotorban and Mashayek (2006a,b), Fede and Simonin (2006), Berrouk et al. (2007), Bini and Jones (2008), and Pozorski and Apte (2009), amongst others. One contemporary method to include the effects of SGS eddies on inertial particle motion is to introduce a stochastic differential equation (SDE), that is, a Langevin stochastic equation, to model the SGS fluid velocity seen by inertial particles (Fede et al., 2006; Shotorban and Mashayek, 2006a,b; Berrouk et al., 2007; Bini and Jones, 2008; Pozorski and Apte, 2009). However, the accuracy of such a Langevin equation model depends primarily on the prescription of the SGS fluid velocity autocorrelation time seen by an inertial particle, that is, the inertial particle–SGS eddy interaction timescale (denoted by $\delta T_{Lp}$), and on a second model constant in the diffusion term which controls the intensity of the random force received by an inertial particle (denoted by $C_0$; see Eq. (7)). From the theoretical point of view, $\delta T_{Lp}$ differs significantly from the Lagrangian fluid velocity correlation time (Reeks, 1977; Wang and Stock, 1993), and this difference carries the essential nonlinearity in the statistical modeling of particle motion.
$\delta T_{Lp}$ and $C_0$ may depend on the filter width and the particle Stokes number even for a given turbulent flow. In previous studies, $\delta T_{Lp}$ is modeled either by the fluid SGS Lagrangian timescale (Fede et al., 2006; Shotorban and Mashayek, 2006b; Pozorski and Apte, 2009; Bini and Jones, 2008) or by a simple extension of the timescale obtained from the full flow field (Berrouk et al., 2007). In this work, we study the subtle, non-monotonic dependence of $\delta T_{Lp}$ on the filter width and particle Stokes number using a flow field obtained from direct numerical simulation (DNS). We then propose an empirical closure model for $\delta T_{Lp}$. Finally, the model is validated against LES of particle-laden turbulence in predicting single-particle statistics such as particle kinetic energy. As a first step, we consider particle motion under the one-way coupling assumption in isotropic turbulent flow and neglect the gravitational settling effect; the one-way coupling assumption is valid only for low particle mass loading.
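The Langevin SGS model can be sketched as an Ornstein-Uhlenbeck process for the SGS fluid velocity seen by a particle, discretized with Euler-Maruyama. The values of the timescale, $C_0$, and the dissipation-like intensity below are illustrative, not taken from the paper:

```python
import math
import random

def sgs_velocity_series(dT_Lp=0.5, C0=2.0, eps=1.0, dt=0.01,
                        steps=10000, seed=42):
    """Langevin (Ornstein-Uhlenbeck) model for the SGS fluid velocity u
    seen by an inertial particle:
        du = -(u / dT_Lp) dt + sqrt(C0 * eps) dW
    dT_Lp is the particle-SGS eddy interaction timescale and C0 controls
    the intensity of the random forcing (all values illustrative)."""
    rng = random.Random(seed)
    u, series = 0.0, []
    for _ in range(steps):
        u += -u / dT_Lp * dt + math.sqrt(C0 * eps * dt) * rng.gauss(0.0, 1.0)
        series.append(u)
    return series

us = sgs_velocity_series()
variance = sum(u * u for u in us) / len(us)
# Stationary variance of this process is C0 * eps * dT_Lp / 2 = 0.5,
# so the sample variance should land near that value
```

This makes the paper's point concrete: the statistics of the modeled SGS velocity are controlled directly by the prescribed $\delta T_{Lp}$ and $C_0$, so mis-specifying either as a function of filter width or Stokes number distorts the predicted particle kinetic energy.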

Relevance:

100.00%

Publisher:

Abstract:

Growth is one of the most important characteristics of cultured species. The objective of this study was to determine the fit of linear, log-linear, polynomial, exponential and logistic functions to the growth curves of Macrobrachium rosenbergii, obtained using weekly records of live weight, total length, head length, claw length, and last segment length from 20 to 192 days of age. The models were evaluated according to the coefficient of determination (R2) and the error sum of squares (ESS); such models help in selecting breeders in selective breeding programs. Twenty full-sib families consisting of 400 post-larvae (PLs) each were stocked in 20 different hapas and reared for 8 weeks, after which a total of 1200 animals were transferred to earthen ponds and reared up to 192 days. The R2 values of the models ranged from 56 to 96 for overall body weight, with the logistic model being the highest. The R2 value for total length ranged from 62 to 90, with the logistic model being the highest. For head length, the R2 value ranged between 55 and 95, with the logistic model being the highest. The R2 value for claw length ranged from 44 to 94, with the logistic model being the highest. For last segment length, the R2 value ranged from 55 to 80, with the polynomial model being the highest. However, the log-linear model registered a low ESS value, followed by the linear model, for overall body weight, while the exponential model showed a low ESS value, followed by the log-linear model, for head length. For total length the lowest ESS value was given by the log-linear model followed by the logistic model, and for claw length the exponential model showed a low ESS value followed by the log-linear model. For last segment length, the linear model showed the lowest ESS value, followed by the log-linear model. The model that shows the highest R2 value together with a low ESS value is generally considered the best-fit model.
Among the five models tested, the logistic, log-linear and linear models were found to be the best for overall body weight, total length and head length, respectively. For claw length and last segment length, the log-linear model was found to be the best. These models can be used to predict growth rates in M. rosenbergii. However, further studies need to be conducted with more growth traits taken into consideration.
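The two evaluation criteria can be sketched directly. The synthetic weights and hand-picked parameters below are illustrative; no real fitting is performed:

```python
import math

def logistic_growth(t, w_max, k, t0):
    """Logistic growth curve: weight (or length) as a function of age."""
    return w_max / (1.0 + math.exp(-k * (t - t0)))

def ess(obs, pred):
    """Error sum of squares."""
    return sum((o - p) ** 2 for o, p in zip(obs, pred))

def r_squared(obs, pred):
    """Coefficient of determination."""
    mean = sum(obs) / len(obs)
    return 1.0 - ess(obs, pred) / sum((o - mean) ** 2 for o in obs)

# Synthetic weekly weights following a logistic trend (illustrative values)
ages = list(range(20, 193, 7))
obs = [logistic_growth(t, 60.0, 0.05, 100.0) for t in ages]

# Candidate models with hand-picked parameters (no real fitting here)
logistic_pred = [logistic_growth(t, 60.0, 0.05, 100.0) for t in ages]
linear_pred = [0.35 * t - 10.0 for t in ages]
r2_logistic = r_squared(obs, logistic_pred)   # 1.0 by construction
```

A high R2 with a low ESS, the combination the study uses to pick the best-fit model, would here favor the logistic candidate over the linear one, since the data were generated from a logistic trend.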

Relevance:

100.00%

Publisher:

Abstract:

The age and growth dynamics of the spinner shark (Carcharhinus brevipinna) in the northwest Atlantic Ocean off the southeast United States and in the Gulf of Mexico were examined, and four growth models were used to examine variation in the ability to fit size-at-age data. The von Bertalanffy growth model, an alternative equation of the von Bertalanffy growth model with a size-at-birth intercept, the Gompertz growth model, and a logistic model were fitted to sex-specific observed size-at-age data. Considering the statistical criteria (e.g., lowest mean square error [MSE], high coefficient of determination, and greatest level of significance) we desired for this study, the logistic model provided the best overall fit to the size-at-age data, whereas the von Bertalanffy growth model gave the worst. For "biological validity," the von Bertalanffy model for female sharks provided estimates similar to those reported in other studies. However, the von Bertalanffy model was deemed inappropriate for describing the growth of male spinner sharks because estimates of theoretical maximum size (L∞) indicated a size much larger than that observed in the field. In contrast, the growth coefficient (k = 0.14/yr) from the Gompertz model provided an estimate most similar to those reported for other large coastal species. The analysis of growth for the spinner shark in the present study demonstrates the importance of fitting alternative models when standard models fit the data poorly or when growth estimates do not appear to be realistic.
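Three of the four candidate curves can be written down compactly; the parameters below are hypothetical, not the paper's estimates:

```python
import math

def von_bertalanffy(age, L_inf, k, t0):
    """von Bertalanffy growth: L(t) = L_inf * (1 - exp(-k * (t - t0)))."""
    return L_inf * (1.0 - math.exp(-k * (age - t0)))

def gompertz(age, L_inf, k, t0):
    """Gompertz growth: L(t) = L_inf * exp(-exp(-k * (t - t0)))."""
    return L_inf * math.exp(-math.exp(-k * (age - t0)))

def logistic(age, L_inf, k, t0):
    """Logistic growth: L(t) = L_inf / (1 + exp(-k * (t - t0)))."""
    return L_inf / (1.0 + math.exp(-k * (age - t0)))

# Hypothetical parameters (illustrative, not the paper's estimates)
params = dict(L_inf=200.0, k=0.14, t0=0.0)
sizes = [gompertz(a, **params) for a in range(1, 16)]
```

All three curves share the asymptote L∞ but approach it with different shapes, which is why a fitted L∞ can be biologically implausible under one model (as reported here for male spinner sharks under von Bertalanffy) while another model with a similar k behaves sensibly.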