932 results for Random field model
Abstract:
The objective of this work was to evaluate the water flow computer model, WATABLE, using experimental field observations on water table management plots from a site located near Hastings, FL, USA. The experimental field had scale drainage systems with provisions for subirrigation with buried microirrigation and conventional seepage irrigation systems. Potato (Solanum tuberosum L.) growing seasons from years 1996 and 1997 were used to simulate the hydrology of the area. Water table levels, precipitation, irrigation and runoff volumes were continuously monitored. The model simulated the water movement from a buried microirrigation line source and the response of the water table to irrigation, precipitation, evapotranspiration, and deep percolation. The model was calibrated and verified by comparing simulated results with experimental field observations. The model performed very well in simulating seasonal runoff, irrigation volumes, and water table levels during crop growth. The two-dimensional model can be used to investigate different irrigation strategies involving water table management control. Applications of the model include optimization of the water table depth for each growth stage, and duration, frequency, and rate of irrigation.
Abstract:
U-Pb dating of zircons by laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) is a widely used analytical technique in the Earth Sciences. For U-Pb ages below 1 billion years (1 Ga), Pb-206/U-238 dates are usually used, as they show the least bias from external parameters such as the presence of initial lead and its isotopic composition in the analysed mineral. Precision and accuracy of the Pb/U ratio are thus of the highest importance in LA-ICPMS geochronology. We consider the evaluation of the statistical distribution of the sweep intensities based on goodness-of-fit tests, in order to find a model probability distribution fitting the data and thus apply an appropriate formulation for the standard deviation. We then discuss three main methods to calculate the Pb/U intensity ratio and its uncertainty in LA-ICPMS: (1) the ratio-of-the-mean intensities method, (2) the mean-of-the-intensity-ratios method and (3) the intercept method. These methods apply different functions to the same raw intensity vs. time data to calculate the mean Pb/U intensity ratio. Thus, the calculated intensity ratio and its uncertainty depend on the method applied. We demonstrate that the accuracy and, conditionally, the precision of the ratio-of-the-mean intensities method are invariant to the intensity fluctuations and averaging related to the dwell time selection and off-line data transformation (averaging of several sweeps); we present a statistical approach for calculating the uncertainty of this method for transient signals. We also show that the accuracy of methods (2) and (3) is influenced by the intensity fluctuations and averaging, and that the extent of this influence can amount to tens of percentage points; we show that the uncertainty of these methods also depends on how the signal is averaged. Each of the above methods imposes requirements on the instrumentation.
The ratio-of-the-mean intensities method is sufficiently accurate provided the laser-induced fractionation between the beginning and the end of the signal is kept low and linear. We show, based on a comprehensive series of analyses with different ablation pit sizes, energy densities and repetition rates for a 193 nm ns-ablation system, that such a fractionation behaviour requires using a low ablation speed (low energy density and low repetition rate). Overall, we conclude that the ratio-of-the-mean intensities method combined with low sampling rates is the most mathematically accurate among the existing data treatment methods for U-Pb zircon dating by sensitive sector field ICPMS.
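As an illustration of how methods (1) and (2) can diverge under off-line averaging, here is a minimal numerical sketch on synthetic sweep data (illustrative counts and a hypothetical true ratio; not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic transient signal: n sweeps of Pb and U intensities (counts),
# hypothetical true Pb/U ratio of 0.1, Poisson-distributed counting noise.
n = 500
true_ratio = 0.1
u = rng.poisson(10000, n).astype(float)   # U-238 sweep intensities
pb = rng.poisson(1000, n).astype(float)   # Pb-206 sweep intensities

# Method (1): ratio of the mean intensities
ratio_of_means = pb.mean() / u.mean()

# Method (2): mean of the per-sweep intensity ratios
mean_of_ratios = (pb / u).mean()

# Off-line data transformation: average sweeps in blocks of 10 before
# taking ratios. Method (1) is unchanged by this (the overall means are
# unchanged), while method (2) gives a different value after blocking.
pb_blocks = pb.reshape(-1, 10).mean(axis=1)
u_blocks = u.reshape(-1, 10).mean(axis=1)
blocked_mean_of_ratios = (pb_blocks / u_blocks).mean()

print(ratio_of_means, mean_of_ratios, blocked_mean_of_ratios)
```

The invariance of method (1) to block-averaging follows directly from the means being unchanged; this is the property the abstract highlights.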
Abstract:
In this thesis, we study the behavioural aspects of agents interacting in queueing systems, using simulation models and experimental methodologies. Each period, customers must choose a service provider. The objective is to analyse the impact of the customers' and providers' decisions on the formation of queues. In a first case, we consider customers with a certain degree of risk aversion. Based on their perception of the average waiting time and of its variability, they form an estimate of the upper bound of the waiting time at each provider. Each period, they choose the provider for which this estimate is lowest. Our results indicate that there is no monotonic relationship between the degree of risk aversion and overall performance. Indeed, a population of customers with an intermediate degree of risk aversion generally incurs a higher average waiting time than a population of risk-neutral or strongly risk-averse agents. Next, we incorporate the providers' decisions by allowing them to adjust their service capacity based on their perception of the average arrival rate. The results show that the customers' behaviour and the providers' decisions exhibit strong path dependence. Furthermore, we show that the providers' decisions cause the weighted average waiting time to converge towards the market's benchmark waiting time. Finally, a laboratory experiment in which subjects played the role of a service provider allowed us to conclude that capacity installation and dismantling delays significantly affect the subjects' performance and decisions.
In particular, the provider's decisions are influenced by its order backlog, its currently available service capacity, and the capacity adjustment decisions it has already taken but not yet implemented. - Queuing is a fact of life that we witness daily. We all have had the experience of waiting in line for some reason and we also know that it is an annoying situation. As the adage says "time is money"; this is perhaps the best way of stating what queuing problems mean for customers. Human beings are not very tolerant, but they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queuing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. Although this research has been useful for improving the efficiency of many queueing systems, and for designing new processes in social and physical systems, it has provided only a limited ability to explain the behaviour observed in many real queues. The individual behaviour of the agents involved in queueing systems and their decision-making processes have received little attention. In this dissertation we depart from this traditional research by analysing how the agents involved in the system make decisions, instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010). We focus on studying behavioural aspects in queueing systems and incorporate this still underdeveloped framework into the operations management field. In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results.
In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best-performing neighbour) to form expectations of sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information. We label "conservative" those customers who give more weight to their memory than to the new information. In contrast, when they give more weight to new information, we call them "reactive". In Chapter 2, we consider customers with different degrees of risk-aversion who take into account uncertainty. They choose which facility to join based on an estimated upper bound of the sojourn time which they compute using their perceptions of the average sojourn time and the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that they do not wish to switch facility. In this case, the system has achieved the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk-aversion and system performance. Customers with an intermediate degree of risk-aversion typically achieve higher sojourn times; in particular they rarely achieve the Nash equilibrium.
Risk-neutral customers have the highest probability of achieving the Nash equilibrium. Chapter 3 considers a service system similar to the previous one but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates and use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile which is determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherent they are when accounting for their previous decisions still to be implemented when taking their next decision. We find that the managers' decisions exhibit a strong path-dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked-in" into a monopoly or duopoly situation. The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we found that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position. In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out an experiment aimed at analysing the way human subjects, taking on the role of the manager, take decisions in a laboratory regarding the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010).
This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who currently do not patronise it, but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns. These groups are labelled: gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we illustrate that these decisions are positively correlated to the decisions taken one period earlier. Subsequently we formulate a heuristic to model the decision rule considered by subjects in the laboratory. We found that this decision rule fits very well for those subjects who gradually adjust capacity, but it does not capture the behaviour of the subjects of the other two groups. In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical literature related to queueing systems which focuses on optimising performance measures and the analysis of equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages and accordingly there is a large potential for further work that spans several research topics. Interesting extensions to this work include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information to take their decisions (e.g.
service price, quality, customers' profile); analysing different decision rules and studying other characteristics which determine the profile of customers and managers.
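The adaptive-expectations updating described in the summary can be sketched in a few lines (parameter names and values are illustrative, not taken from the dissertation):

```python
# Sketch of the adaptive-expectations rule: each period an agent blends
# its memory with new information using a weight theta in [0, 1]:
#   expectation <- theta * new_info + (1 - theta) * expectation
# theta < 0.5 corresponds to a "conservative" agent (memory-weighted);
# theta > 0.5 corresponds to a "reactive" agent (news-weighted).

def update_expectation(expectation, new_info, theta):
    """One adaptive-expectations step: weighted blend of memory and news."""
    return theta * new_info + (1 - theta) * expectation

# A conservative customer (theta = 0.2) adjusts slowly toward a sojourn
# time of 10 repeatedly observed at a facility, starting from 2.
e = 2.0
for _ in range(5):
    e = update_expectation(e, 10.0, 0.2)
print(round(e, 3))  # closes part of the gap each period, never all of it
```

A reactive customer (larger theta) would converge toward the observed value much faster, which is exactly the conservative/reactive distinction the summary draws.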
Abstract:
Because data on rare species usually are sparse, it is important to have efficient ways to sample additional data. Traditional sampling approaches are of limited value for rare species because a very large proportion of randomly chosen sampling sites are unlikely to shelter the species. For these species, spatial predictions from niche-based distribution models can be used to stratify the sampling and increase sampling efficiency. New data sampled are then used to improve the initial model. Applying this approach repeatedly is an adaptive process that may allow increasing the number of new occurrences found. We illustrate the approach with a case study of a rare and endangered plant species in Switzerland and a simulation experiment. Our field survey confirmed that the method helps in the discovery of new populations of the target species in remote areas where the predicted habitat suitability is high. In our simulations the model-based approach provided a significant improvement (by a factor of 1.8 to 4 times, depending on the measure) over simple random sampling. In terms of cost this approach may save up to 70% of the time spent in the field.
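A minimal sketch of the idea, comparing simple random sampling with model-based (suitability-ranked) site selection on a synthetic landscape (all numbers are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic landscape of 10,000 candidate sites. The rare species
# occupies a site with probability increasing with habitat suitability
# (illustrative occupancy model, not the study's niche model).
n_sites = 10_000
suitability = rng.beta(1, 5, n_sites)   # most sites are poorly suited
occupied = rng.random(n_sites) < 0.05 * suitability / suitability.mean()

budget = 200  # number of sites we can afford to survey

# Simple random sampling of survey sites
srs = rng.choice(n_sites, budget, replace=False)

# Model-based stratified sampling: survey the highest-suitability sites
ranked = np.argsort(suitability)[::-1][:budget]

found_srs = occupied[srs].sum()
found_model = occupied[ranked].sum()
print(found_srs, found_model)  # model-based search finds far more
```

Targeting high-suitability sites concentrates effort where occupancy probability is highest, which is the mechanism behind the reported 1.8- to 4-fold improvement over random sampling.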
Abstract:
The objective of this work was to compare the relative efficiency of initial selection and genetic parameter estimation, using the augmented blocks design (ABD), the augmented blocks twice replicated design (DABD) and groups of randomised block design experiments with common treatments (ERBCT), by simulation, considering a fixed-effects model and a mixed model with regular treatment effects as random. Eight different conditions (scenarios) were considered for the simulations. From the 600 simulations in each scenario, the mean percentage of selection coincidence, the Pearson's correlation estimates between adjusted means for the fixed-effects model, and the heritability estimates for the mixed model were evaluated. DABD and ERBCT were very similar in their comparisons and slightly superior to ABD. Considering the initial stages of selection in a plant breeding program, ABD is a good alternative for selecting superior genotypes, although none of the designs was effective in estimating heritability across all the scenarios evaluated.
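For illustration, selection coincidence can be computed as the overlap between the genotypes selected on adjusted means and those that would be selected on their true genetic values (toy data, not the paper's simulation setup):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy example: 100 genotypes, select the top 20. Adjusted means are the
# true genetic values plus design/estimation error (illustrative sizes).
n_genotypes, n_selected = 100, 20
true_values = rng.normal(0.0, 1.0, n_genotypes)
adjusted_means = true_values + rng.normal(0.0, 0.5, n_genotypes)

sel_true = set(np.argsort(true_values)[::-1][:n_selected])
sel_est = set(np.argsort(adjusted_means)[::-1][:n_selected])

# Percentage selection coincidence: share of truly superior genotypes
# that the design-adjusted means also select.
coincidence = 100.0 * len(sel_true & sel_est) / n_selected
print(coincidence)
```

A design that adjusts means more precisely (smaller error term) yields a higher coincidence; averaging this over many simulated trials gives the mean percentage selection coincidence used to compare ABD, DABD and ERBCT.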
Abstract:
The relation between the low-energy constants appearing in the effective field theory description of the Lambda N -> NN transition potential and the parameters of the one-meson-exchange model previously developed is obtained. We extract the relative importance of the different exchange mechanisms included in the meson picture by means of a comparison to the corresponding operational structures appearing in the effective approach. The ability of this procedure to obtain the weak baryon-baryon-meson couplings for a possible scalar exchange is also discussed.
Abstract:
Transverse joints are placed in portland cement concrete pavements to control the development of random cracking due to stresses induced by moisture and thermal gradients and restrained slab movement. These joints are strengthened through the use of load transfer devices, typically dowel bars, designed to transfer load across the joint from one pavement slab to the next. Epoxy-coated steel bars are the material of choice at the present time, but have experienced some difficulties with resistance to corrosion from deicing salts. The research project investigated the use of alternative materials, dowel sizes and spacings to determine the benefits and limitations of each material. In this project two types of fiber composite materials, stainless steel solid dowels and epoxy-coated dowels were tested for five years in a side-by-side installation in a portion of U.S. 65 near Des Moines, Iowa, between 1997 and 2002. The work was directed at analyzing the load transfer characteristics of 8-in. vs. 12-in. spacing of the dowels and the alternative dowel materials, fiber composite (1.5- and 1.88-in. diameter) and stainless steel (1.5-in. diameter), compared to typical 1.5-in. diameter epoxy-coated steel dowels placed on 12-in. spacing. Data were collected biannually within each series of joints on the following variables: load transfer in each lane (outer wheel path), visual distress, joint openings, and faulting in each wheel path. After five years of performance the following observations were made from the data collected. Each of the dowel materials is performing equally in terms of load transfer, joint movement and faulting. Stainless steel dowels are providing load transfer performance equal to or greater than epoxy-coated steel dowels at the end of five years. Fiber reinforced polymer (FRP) dowels of the sizes and materials tested should be spaced no greater than 8 in. apart to achieve performance comparable to epoxy-coated dowels.
No evidence of deterioration due to road salts was identified on any of the products tested. The relatively high cost of solid stainless steel and FRP dowels was a limitation at the conclusion of this study. Work is continuing with the subject materials in laboratory studies to determine the proper shape, spacing, chemical composition and testing specification to make FRP and stainless (clad or solid) dowels a viable alternative joint load transfer material for long-lasting portland cement concrete pavements.
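The load transfer characteristic measured in studies like this one is commonly expressed as deflection-based load transfer efficiency; the definition below is the standard one, not a formula quoted by this report, and the deflection values are illustrative:

```python
# Deflection-based joint load transfer efficiency (standard definition):
#   LTE (%) = 100 * (deflection of unloaded slab) / (deflection of loaded slab)
# 100% means both slab edges deflect equally under load (ideal transfer);
# low values indicate poor dowel performance at the joint.

def load_transfer_efficiency(d_unloaded, d_loaded):
    """LTE in percent from paired edge deflections (same units, e.g. mils)."""
    return 100.0 * d_unloaded / d_loaded

# Illustrative FWD-style deflections measured across a joint
print(load_transfer_efficiency(8.1, 9.0))  # ~90%, a well-performing joint
```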
Abstract:
Recent data compiled by the National Bridge Inventory revealed 29% of Iowa's approximate 24,600 bridges were either structurally deficient or functionally obsolete. This large number of deficient bridges and the high cost of needed repairs create unique problems for Iowa and many other states. The research objective of this project was to determine the load capacity of a particular type of deteriorating bridge – the precast concrete deck bridge – which is commonly found on Iowa's secondary roads. The number of these precast concrete structures requiring load postings and/or replacement can be significantly reduced if the deteriorated structures are found to have adequate load capacity or can be reliably evaluated. Approximately 600 precast concrete deck bridges (PCDBs) exist in Iowa. A typical PCDB span is 19 to 36 ft long and consists of eight to ten simply supported precast panels. Bolts and either a pipe shear key or a grouted shear key are used to join adjacent panels. The panels resemble a steel channel in cross-section; the web is orientated horizontally and forms the roadway deck and the legs act as shallow beams. The primary longitudinal reinforcing steel bundled in each of the legs frequently corrodes and causes longitudinal cracks in the concrete and spalling. The research team performed service load tests on four deteriorated PCDBs; two with shear keys in place and two without. Conventional strain gages were used to measure strains in both the steel and concrete, and transducers were used to measure vertical deflections. Based on the field results, it was determined that these bridges have sufficient lateral load distribution and adequate strength when shear keys are properly installed between adjacent panels. The measured lateral load distribution factors are larger than AASHTO values when shear keys were not installed. 
Since some of the reinforcement had hooks, deterioration of the reinforcement has a minimal effect on the service-level performance of the bridges when there is minimal loss of cross-sectional area. Laboratory tests were performed on PCDB panels obtained from three bridge replacement projects. Twelve deteriorated panels were loaded to failure in a four-point bending arrangement. Although the panels had significant deflections prior to failure, the experimental capacity of eleven panels exceeded the theoretical capacity. The experimental capacity of the twelfth panel, an extremely distressed panel, was only slightly below the theoretical capacity. Service tests and an ultimate strength test were performed on a laboratory bridge model consisting of four joined panels to determine the effect of various shear connection configurations. These data were used to validate a PCDB finite element model that can provide more accurate live load distribution factors for use in rating calculations. Finally, a strengthening system was developed and tested for use in situations where one or more panels of an existing PCDB need strengthening.
Abstract:
In response to the mandate on Load and Resistance Factor Design (LRFD) implementation by the Federal Highway Administration (FHWA) on all new bridge projects initiated after October 1, 2007, the Iowa Highway Research Board (IHRB) sponsored these research projects to develop regional LRFD recommendations. The LRFD development was performed using the Iowa Department of Transportation (DOT) Pile Load Test database (PILOT). To increase the data points for LRFD development, develop LRFD recommendations for dynamic methods, and validate the results of LRFD calibration, 10 full-scale field tests on the most commonly used steel H-piles (e.g., HP 10 x 42) were conducted throughout Iowa. Detailed in situ soil investigations were carried out, push-in pressure cells were installed, and laboratory soil tests were performed. Pile responses during driving, at the end of driving (EOD), and at re-strikes were monitored using the Pile Driving Analyzer (PDA), followed by CAse Pile Wave Analysis Program (CAPWAP) analyses. The hammer blow counts were recorded for the Wave Equation Analysis Program (WEAP) and dynamic formulas. Static load tests (SLTs) were performed and the pile capacities were determined based on Davisson's criterion. The extensive experimental research studies generated important data for analytical and computational investigations. The SLT-measured load-displacements were compared with the simulated results obtained using a model of the TZPILE program and using the modified borehole shear test method. Two analytical pile setup quantification methods, in terms of soil properties, were developed and validated. A new calibration procedure was developed to incorporate pile setup into LRFD.
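Davisson's criterion mentioned above can be sketched numerically; the implementation and the pile data below are illustrative assumptions, not the project's measurements (US customary units: kips and inches):

```python
# Davisson's offset criterion: the pile capacity is the load at which the
# measured settlement equals the elastic pile compression P*L/(A*E) plus
# an offset of 0.15 in + D/120 (D = pile width/diameter in inches).

def davisson_capacity(loads, settlements, L, A, E, D):
    """Return the load where the load-settlement curve crosses the
    Davisson offset line (linear interpolation), or None if not reached."""
    c = L / (A * E)                  # elastic compression per kip, in/kip
    offset = 0.15 + D / 120.0        # offset in inches
    for i in range(1, len(loads)):
        f0 = settlements[i - 1] - (c * loads[i - 1] + offset)
        f1 = settlements[i] - (c * loads[i] + offset)
        if f0 < 0 <= f1:             # measured curve crosses the offset line
            t = f0 / (f0 - f1)
            return loads[i - 1] + t * (loads[i] - loads[i - 1])
    return None                      # capacity not reached during the test

# Illustrative HP 10 x 42-like pile: L = 600 in, A = 12.4 in^2,
# E = 29000 ksi, D = 10 in; synthetic settlements grow faster than
# elastic compression at higher loads.
L, A, E, D = 600.0, 12.4, 29000.0, 10.0
loads = [0, 50, 100, 150, 200, 250]
offset = 0.15 + D / 120.0
settlements = [L / (A * E) * p + offset * (p / 180.0) ** 2 for p in loads]
print(davisson_capacity(loads, settlements, L, A, E, D))
```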
Abstract:
In the previous study, moisture loss indices were developed based on field measurements from one CIR-foam and one CIR-emulsion construction site. To calibrate these moisture loss indices, additional CIR construction sites were monitored using embedded moisture and temperature sensors. In addition, to determine the optimum timing of an HMA overlay on the CIR layer, the potential of using the stiffness of the CIR layer measured by geo-gauge instead of the moisture measurement by a nuclear gauge was explored. Based on monitoring of the moisture and stiffness at seven CIR project sites, the following conclusions were derived: 1. In some cases, the in-situ stiffness remained constant and, in other cases, despite some rainfalls, the stiffness of the CIR layers steadily increased during the curing time. 2. The stiffness measured by geo-gauge was affected by a significant amount of rainfall. 3. The moisture indices developed for CIR sites can be used for predicting the moisture level in a typical CIR project. The initial moisture content and temperature were the most significant factors in predicting the future moisture content in the CIR layer. 4. The stiffness of a CIR layer is an extremely useful tool for contractors to use for timing their HMA overlay. To determine the optimal timing of an HMA overlay, it is recommended that the moisture loss index be used in conjunction with the stiffness of the CIR layer.
Abstract:
The objective of this study was to adapt a nonlinear model (Wang and Engel - WE) for simulating the phenology of maize (Zea mays L.), and to evaluate this model and a linear one (thermal time) for predicting the developmental stages of a field-grown maize variety. A field experiment was conducted in Santa Maria, RS, Brazil, during the 2005/2006 and 2006/2007 growing seasons, with seven sowing dates in each season. Dates of emergence, silking, and physiological maturity of the maize variety BRS Missões were recorded in six replications at each sowing date. Data collected in the 2005/2006 growing season were used to estimate the coefficients of the two models, and data collected in the 2006/2007 growing season were used as an independent data set for model evaluation. The nonlinear WE model accurately predicted the dates of silking and physiological maturity, and had a lower root mean square error (RMSE) than the linear (thermal time) model. The overall RMSE for silking and physiological maturity was 2.7 and 4.8 days with the WE model, and 5.6 and 8.3 days with the thermal time model, respectively.
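The linear thermal-time benchmark and the RMSE used to score the models can be sketched as follows (base temperature and data are illustrative, not the paper's coefficients):

```python
import math

# Linear thermal-time approach: each day contributes max(0, Tmean - Tbase)
# degree-days, accumulated until a stage-specific sum is reached.
# Tbase = 10 °C here is an illustrative assumption.

def thermal_time_accumulated(daily_tmean, tbase=10.0):
    """Accumulated thermal time (degree-days) over a list of daily means."""
    return sum(max(0.0, t - tbase) for t in daily_tmean)

# RMSE between predicted and observed stage dates (in days), as used to
# compare the WE and thermal-time models:
def rmse(predicted, observed):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(predicted))

temps = [22, 25, 19, 9, 28]                    # °C, illustrative daily means
print(thermal_time_accumulated(temps))         # 12 + 15 + 9 + 0 + 18 = 54
print(rmse([100, 102, 98], [101, 100, 99]))    # days-of-year, illustrative
```

The nonlinear WE model replaces the linear daily increment with a temperature-response function, which is why it tracks observed silking and maturity dates more closely across sowing dates.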
Abstract:
Introduction: Therapeutic drug monitoring (TDM) of imatinib has been increasingly proposed for chronic myeloid leukaemia (CML) patients, as several studies have found a correlation between trough concentrations (Cmin) >= 1000 ng/ml and improved response. The pharmacological monitoring project of EUTOS (European Treatment and Outcome Study) was launched to increase the availability of imatinib TDM, standardize labs, and validate proposed Cmin thresholds. Using the collected data, the objective of this analysis was to characterize imatinib population pharmacokinetics (Pop-PK) in a large cohort of European patients, to quantify its variability and the influence of demographic factors and comedications, and to derive individual exposure variables suitable for further concentration-effect analyses.
Methods: 4095 PK samples from 2478 adult patients were analyzed between 2006 and 2010 by LC-MS-MS and considered for Pop-PK analysis by NONMEM®. Model building used data from 973 patients with >= 2 samples available (2590 samples). A sensitivity analysis was performed using all data. Available comedications (27%) were classified into inducers or inhibitors of P-glycoprotein, CYP3A4/5 and organic-cation-transporter-1 (hOCT-1).
Results: A one-compartment model with linear elimination and zero-order absorption fitted the data best. Estimated Pop-PK parameters (interindividual variability, IIV %CV) for a 40-year-old male patient were: clearance CL = 17.3 L/h (37.7%), volume V = 429 L (51.1%), duration of absorption D1 = 3.2 h. Outliers, reflecting potential compliance and time-recording errors, were taken into account by estimating an IIV on the residual error (35.4%). Intra-individual residuals were 29.1% (proportional) plus ± 84.6 ng/mL (additive). Female patients had a 15.2% lower CL (14.6 L/h). A piece-wise linear effect of age estimated a CL of 18.7 L/h at 20 years, 17.3 L/h at 40 years and 13.8 L/h at 60 years. These covariates explained 2% (CL) and 4.5% (V) of the IIV. No effect of comedication was found. The sensitivity analysis expectedly estimated increased IIV, but similar fixed-effect parameters.
Conclusion: Imatinib PK was well described in a large cohort of CML patients under field conditions, and the results were concordant with previous studies. Patient characteristics explain only little of the IIV, confirming the limited utility of a priori dosage adjustment. As intra-patient variability is smaller than inter-patient variability, dose adjustment guided by TDM could nonetheless be beneficial to bring Cmin into a given therapeutic target.
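The fitted structural model (one compartment, zero-order absorption of duration D1, linear elimination) has a standard closed-form solution. The sketch below uses the population estimates quoted above but is otherwise illustrative: it describes a single dose and ignores bioavailability and accumulation to steady state.

```python
import math

# Population estimates from the abstract: CL in L/h, V in L, D1 in h.
CL, V, D1 = 17.3, 429.0, 3.2
k = CL / V   # first-order elimination rate constant, 1/h

def concentration(dose_ng, t_h):
    """Plasma concentration (ng/L) after a single dose at t = 0, for a
    one-compartment model with zero-order absorption over D1 hours."""
    r0 = dose_ng / D1                                # zero-order input rate
    if t_h <= D1:                                    # during absorption
        return r0 / CL * (1 - math.exp(-k * t_h))
    c_end = r0 / CL * (1 - math.exp(-k * D1))        # level at end of input
    return c_end * math.exp(-k * (t_h - D1))         # post-absorption decay

# 400 mg imatinib = 4e8 ng; single-dose trough at 24 h (divide by 1000
# for ng/ml). Steady-state troughs are higher due to accumulation.
print(concentration(4e8, 24.0) / 1000)
```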
Abstract:
When conducting research in different cultural settings, assessing measurement equivalence is of prime importance to determine if constructs and scores can be compared across groups. Structural equivalence implies that constructs have the same meaning across groups, metric equivalence implies that the metric of the scales remains stable across groups, and full scale or scalar equivalence implies that the origin of the scales is the same across groups. Several studies have observed that the structure underlying both normal personality and personality disorders (PDs) is stable across cultures. Most of this cross-cultural research was conducted in Western and Asian cultures. In Africa, the few existing studies were conducted with well-educated participants using French or English instruments. No research was conducted in Africa with less privileged or preliterate samples. The aim of this research was to study the structure and expression of normal and abnormal personality in an urban and a rural sample in Burkina Faso. The sample included 1,750 participants, with a sub-sample from the urban area of Ouagadougou (n = 1,249) and another sub-sample from a rural village, Soumiaga (n = 501). Most participants answered an interview consisting of a Mooré language adaptation of the Revised NEO Personality Inventory and of the International Personality Disorders Examination. Mooré is the language of the Mossi ethnic group, and the most frequently spoken local language in Burkina Faso. A sub-sample completed the same self-report instruments in French. Demographic variables had only a small impact on the mean levels of normal and abnormal personality traits. The structure underlying normal personality was unstable across regions and languages, illustrating that translating a complex psychological inventory into a native African language is a very difficult task. The structure underlying abnormal personality and the metric of the PD scales were stable across regions.
As scalar equivalence was not reached, mean differences cannot be interpreted. Nevertheless, these differences could be due to an exaggerated expression of abnormal traits valued in the two cultural settings. Our results suggest that studies using a different methodology should be conducted to understand what is considered, in different cultures, as deviating from the expectations of the individual's culture, and as a significant impairment in self and interpersonal functioning, as defined by the DSM-5.
Abstract:
We present a model in which particles (or individuals of a biological population) disperse with a rest time between consecutive motions (or migrations) which may take several possible values from a discrete set. Particles (or individuals) may also react (or reproduce). We derive a new equation for the effective rest time T̃ of the random walk. Application to the neolithic transition in Europe makes it possible to derive more realistic theoretical values for its wavefront speed than those following from the single-delayed framework presented previously [J. Fort and V. Méndez, Phys. Rev. Lett. 82, 867 (1999)]. The new results are consistent with the archaeological observations of this important historical process.