910 results for Event-based Model


Relevance:

90.00%

Publisher:

Abstract:

PURPOSE: In the radiopharmaceutical therapy approach to the fight against cancer, in particular when it comes to translating laboratory results to the clinical setting, modeling has served as an invaluable tool for guidance and for understanding the processes operating at the cellular level and how these relate to macroscopic observables. Tumor control probability (TCP) is the dosimetric end point quantity of choice which relates to experimental and clinical data: it requires knowledge of individual cellular absorbed doses since it depends on the assessment of the treatment's ability to kill each and every cell. Macroscopic tumors, seen in both clinical and experimental studies, contain too many cells to be modeled individually in Monte Carlo simulation; yet, in particular for low ratios of decays to cells, a cell-based model that does not smooth away statistical considerations associated with low activity is a necessity. The authors present here an adaptation of the simple sphere-based model from which cellular-level dosimetry for macroscopic tumors and their end point quantities, such as TCP, may be extrapolated more reliably. METHODS: Ten homogeneous spheres representing tumors of different sizes were constructed in GEANT4. The radionuclide ¹³¹I was randomly allowed to decay for each model size and for seven different ratios of number of decays to number of cells, N(r): 1000, 500, 200, 100, 50, 20, and 10 decays per cell. The deposited energy was collected in radial bins and divided by the bin mass to obtain the average bin absorbed dose. To simulate a cellular model, the number of cells present in each bin was calculated and an absorbed dose attributed to each cell equal to the bin average absorbed dose with a randomly determined adjustment based on a Gaussian probability distribution with a width equal to the statistical uncertainty consistent with the ratio of decays to cells, i.e., equal to N(r)^(-1/2). From dose-volume histograms the surviving fraction of cells, equivalent uniform dose (EUD), and TCP for the different scenarios were calculated. Comparably sized spherical models containing individual spherical cells (15 μm diameter) in hexagonal lattices were constructed, and Monte Carlo simulations were executed for all the same previous scenarios. The dosimetric quantities were calculated and compared to the adjusted simple sphere model results. The model was then applied to the Bortezomib-induced enzyme-targeted radiotherapy (BETR) strategy of targeting Epstein-Barr virus (EBV)-expressing cancers. RESULTS: The TCP values were comparable to within 2% between the adjusted simple sphere and full cellular models. Additionally, models were generated for a nonuniform distribution of activity, and results were compared between the adjusted spherical and cellular models with similar comparability. The TCP values from the experimental macroscopic tumor results were consistent with the experimental observations for BETR-treated 1 g EBV-expressing lymphoma tumors in mice. CONCLUSIONS: The adjusted spherical model presented here provides more accurate TCP values than simple spheres, on par with full cellular Monte Carlo simulations while maintaining the simplicity of the simple sphere model. This model provides a basis for complementing and understanding laboratory and clinical results pertaining to radiopharmaceutical therapy.
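
As an illustration of the per-cell adjustment described in METHODS, the following Python sketch assigns each cell its bin's mean absorbed dose perturbed by Gaussian noise of relative width N(r)^(-1/2), then computes TCP as the probability that no cell survives. The single-hit exponential survival model and all parameter values are illustrative assumptions, not values from the study.

import numpy as np

rng = np.random.default_rng(0)

def adjusted_cell_doses(bin_mean_dose, cells_per_bin, decays_per_cell):
    """Assign each cell its bin's mean dose plus Gaussian noise whose
    relative width equals N(r)**-0.5 (the decay-counting uncertainty)."""
    rel_sigma = decays_per_cell ** -0.5
    doses = []
    for mean_dose, n_cells in zip(bin_mean_dose, cells_per_bin):
        doses.append(rng.normal(mean_dose, rel_sigma * mean_dose, n_cells))
    return np.concatenate(doses)

def tcp_from_doses(doses, alpha=0.3):
    """Single-hit survival S = exp(-alpha*D); TCP = prod(1 - S).
    alpha is an illustrative radiosensitivity, not a value from the study."""
    surviving_fraction = np.exp(-alpha * doses)
    return float(np.prod(1.0 - surviving_fraction))

# toy example: three radial bins of a small sphere, 10 decays per cell
doses = adjusted_cell_doses(bin_mean_dose=[40.0, 35.0, 25.0],
                            cells_per_bin=[500, 800, 1200],
                            decays_per_cell=10)
print("mean dose %.1f Gy, TCP %.3f" % (doses.mean(), tcp_from_doses(doses)))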

Relevance:

90.00%

Publisher:

Abstract:

We construct a utility-based model of fluctuations, with nominal rigidities and unemployment, and draw its implications for the unemployment-inflation trade-off and for the conduct of monetary policy. We proceed in two steps. We first leave nominal rigidities aside. We show that, under a standard utility specification, productivity shocks have no effect on unemployment in the constrained efficient allocation. We then focus on the implications of alternative real wage setting mechanisms for fluctuations in unemployment. We show the role of labor market frictions and real wage rigidities in determining the effects of productivity shocks on unemployment. We then introduce nominal rigidities in the form of staggered price setting by firms. We derive the relation between inflation and unemployment and discuss how it is influenced by the presence of labor market frictions and real wage rigidities. We show the nature of the trade-off between inflation and unemployment stabilization, and its dependence on labor market characteristics. We draw the implications for optimal monetary policy.

Relevance:

90.00%

Publisher:

Abstract:

This paper presents and estimates a dynamic choice model in the attribute space considering rational consumers. In light of the evidence of several state-dependence patterns, the standard attribute-based model is extended by considering a general utility function where pure inertia and pure variety-seeking behaviors can be explained in the model as particular linear cases. The dynamics of the model are fully characterized by standard dynamic programming techniques. The model presents a stationary consumption pattern that can be inertial, where the consumer only buys one product, or a variety-seeking one, where the consumer shifts among varied products. We run some simulations to analyze the consumption paths out of the steady state. Under the hybrid utility assumption, the consumer behaves inertially among the unfamiliar brands for several periods, eventually switching to a variety-seeking behavior when the stationary levels are approached. An empirical analysis is run using scanner databases for three different product categories: fabric softener, saltine cracker, and catsup. Non-linear specifications provide the best fit of the data, as hybrid functional forms are found in all the product categories for most attributes and segments. These results reveal the statistical superiority of the non-linear structure and confirm the gradual trend to seek variety as the level of familiarity with the purchased items increases.
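
A minimal sketch of the kind of state-dependent choice dynamics described above: utility depends nonlinearly on a familiarity stock, so that pure inertia and pure variety seeking appear as linear special cases and the hybrid form sits in between. The functional form, parameters, and logit-style taste shocks are illustrative assumptions, not the estimated specification.

import numpy as np

rng = np.random.default_rng(1)

def hybrid_utility(familiarity, a=1.0, b=0.8):
    """Illustrative hybrid form: utility rises with familiarity at first
    (inertia) and falls once familiarity is high (variety seeking). Pure
    inertia or pure variety seeking correspond to linear increasing or
    decreasing special cases."""
    return a * familiarity - b * familiarity ** 2

def simulate_choices(n_products=3, periods=50, decay=0.7):
    familiarity = np.zeros(n_products)        # state: familiarity stock per product
    history = []
    for _ in range(periods):
        taste_shock = rng.gumbel(size=n_products)   # logit-style idiosyncratic shock
        utility = taste_shock + hybrid_utility(familiarity)
        choice = int(np.argmax(utility))
        history.append(choice)
        familiarity *= decay                  # familiarity depreciates
        familiarity[choice] += 1.0            # and accumulates for the chosen product
    return history

print(simulate_choices())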

Relevance:

90.00%

Publisher:

Abstract:

The Soil Nitrogen Availability Predictor (SNAP) model predicts daily and annual rates of net N mineralization (NNM) based on daily weather measurements, daily predictions of soil water and soil temperature, and on temperature and moisture modifiers obtained during aerobic incubation (basal rate). The model was based on in situ measurements of NNM in Australian soils under a temperate climate. The purpose of this study was to assess this model for use in tropical soils under eucalyptus plantations in São Paulo State, Brazil. Based on field incubations carried out for one month in every three, NNM rates were measured at 11 sites (0-20 cm layer) for 21 months. The basal rate was determined in in situ incubations during moist and warm periods (January to March). Annual NNM rates of 150-350 kg ha⁻¹ yr⁻¹ predicted by the SNAP model were reasonably accurate (R² = 0.84). In other periods, at lower moisture and temperature, NNM rates were overestimated. Therefore, if used carefully, the model can provide adequate predictions of annual NNM and may be useful in practical applications. For NNM predictions over periods shorter than a year or under suboptimal incubation conditions, the temperature and moisture modifiers need to be recalibrated for tropical conditions.
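
A minimal sketch of the SNAP-style prediction logic, in Python: daily NNM is the basal rate scaled by temperature and moisture modifiers and summed to an annual total. The Q10-type temperature modifier, the linear moisture modifier, and all numbers are illustrative assumptions; the calibrated SNAP modifier functions are not reproduced here.

def snap_daily_nnm(basal_rate, soil_temp_c, rel_water_content,
                   q10=2.0, ref_temp_c=25.0):
    """Daily net N mineralization (kg N/ha/day) as the basal rate scaled by
    temperature and moisture modifiers. The Q10-style temperature modifier
    and linear moisture modifier are illustrative assumptions, not the
    calibrated SNAP functions."""
    temp_modifier = q10 ** ((soil_temp_c - ref_temp_c) / 10.0)
    moisture_modifier = max(0.0, min(1.0, rel_water_content))
    return basal_rate * temp_modifier * moisture_modifier

# annual NNM as the sum of daily predictions over a year of weather-driven inputs
daily = [snap_daily_nnm(0.9, t, w)
         for t, w in zip([22.0] * 180 + [17.0] * 185, [0.8] * 180 + [0.4] * 185)]
print("predicted annual NNM: %.0f kg/ha/yr" % sum(daily))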

Relevance:

90.00%

Publisher:

Abstract:

In this work, a previously developed, statistics-based damage-detection approach was validated for its ability to autonomously detect damage in bridges. The damage-detection approach uses statistical differences between the actual and predicted behavior of the bridge under a subset of ambient truck loads. The predicted behavior is derived from a statistics-based model trained with field data from the undamaged bridge (not a finite element model). The differences between actual and predicted responses, called residuals, are then used to construct control charts, which compare undamaged and damaged structure data. Validation of the damage-detection approach was achieved by using sacrificial specimens that were mounted to the bridge, exposed to ambient traffic loads, and designed to simulate actual damage-sensitive locations. Different damage types and levels were introduced to the sacrificial specimens to study the sensitivity and applicability of the approach. The damage-detection algorithm was able to identify damage, but it also had a high false-positive rate. An evaluation of the sub-components of the damage-detection methodology was completed for the purpose of improving the approach. Several of the underlying assumptions within the algorithm were being violated, which was the source of the false positives. Furthermore, the lack of an automatic evaluation process was thought to be a potential impediment to widespread use. Recommendations for the improvement of the methodology were developed and preliminarily evaluated. These recommendations are believed to improve the efficacy of the damage-detection approach.
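
A minimal sketch of the residual control-chart idea described above: residuals between measured and predicted responses under ambient truck events are compared against Shewhart-style limits derived from the undamaged (training) state. The statistical model, the 3-sigma limits, and the synthetic data are illustrative assumptions, not the study's implementation.

import numpy as np

def train_control_limits(residuals_undamaged, k=3.0):
    """Shewhart-style limits (mean +/- k*sigma) from residuals of the
    undamaged, trained state. k = 3 is an illustrative choice."""
    mu = residuals_undamaged.mean()
    sigma = residuals_undamaged.std(ddof=1)
    return mu - k * sigma, mu + k * sigma

def flag_damage(measured, predicted, limits):
    """Residual = measured - predicted response for ambient truck events;
    a residual outside the control limits is flagged as possible damage."""
    residuals = measured - predicted
    lo, hi = limits
    return (residuals < lo) | (residuals > hi)

rng = np.random.default_rng(2)
baseline_residuals = rng.normal(0.0, 1.0, 500)   # training (undamaged) residuals
limits = train_control_limits(baseline_residuals)
new_measured = rng.normal(0.5, 1.0, 50)          # shifted responses, e.g. simulated damage
new_predicted = np.zeros(50)
print("flagged events:", int(flag_damage(new_measured, new_predicted, limits).sum()))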

Relevance:

90.00%

Publisher:

Abstract:

We present an agent-based model with the aim of studying how macro-level dynamics of spatial distances among interacting individuals in a closed space emerge from micro-level dyadic and local interactions. Our agents moved on a lattice (referred to as a room) using a model implemented in a computer program called P-Space in order to minimize their dissatisfaction, defined as a function of the discrepancy between the real distance and the ideal, or desired, distance between agents. Ideal distances evolved in accordance with the agent's personal and social space, which changed throughout the dynamics of the interactions among the agents. In the first set of simulations we studied the effects of the parameters of the function that generated ideal distances, and in a second set we explored how group macro-level behavior depended on model parameters and other variables. We learned that certain parameter values yielded consistent patterns in the agents' personal and social spaces, which in turn led to avoidance and approaching behaviors in the agents. We also found that the spatial behavior of the group of agents as a whole was influenced by the values of the model parameters, as well as by other variables such as the number of agents. Our work demonstrates that the bottom-up approach is a useful way of explaining macro-level spatial behavior. The proposed model is also shown to be a powerful tool for simulating the spatial behavior of groups of interacting individuals.
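
A minimal sketch of the dissatisfaction-minimization rule described above, assuming a quadratic dissatisfaction function and a simple 4-neighbour lattice move set; the actual P-Space program and its evolving personal/social-space dynamics are not reproduced here.

import numpy as np

rng = np.random.default_rng(3)

def dissatisfaction(pos, others, ideal):
    """Sum of squared discrepancies between actual and ideal distances to the
    other agents (the quadratic form is an illustrative choice)."""
    dists = np.linalg.norm(others - pos, axis=1)
    return float(np.sum((dists - ideal) ** 2))

def step(positions, ideal, size=20):
    """Each agent tries staying put or moving to a 4-neighbour lattice cell
    and keeps the move that minimizes its dissatisfaction."""
    moves = np.array([(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)])
    new_positions = positions.copy()
    for i, pos in enumerate(positions):
        others = np.delete(positions, i, axis=0)
        candidates = np.clip(pos + moves, 0, size - 1)
        scores = [dissatisfaction(c, others, ideal[i]) for c in candidates]
        new_positions[i] = candidates[int(np.argmin(scores))]
    return new_positions

positions = rng.integers(0, 20, size=(6, 2))   # six agents in a 20x20 room
ideal = np.full((6, 5), 4.0)                   # ideal distance to each of the 5 others
for _ in range(30):
    positions = step(positions, ideal)
print(positions)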

Relevance:

90.00%

Publisher:

Abstract:

With over 68 thousand miles of gravel roads in Iowa and the importance of these roads within the farm-to-market transportation system, proper water management becomes critical for maintaining the integrity of the roadway materials. However, the build-up of water within the aggregate subbase can lead to frost boils and ultimately potholes forming at the road surface. The aggregate subbase and subgrade soils under these gravel roads are produced with material opportunistically chosen from local sources near the site and, many times, the compositions of these sublayers are far from ideal in terms of proper water drainage, with the full effects of this shortcut not fully understood. The primary objective of this project was to provide a physically-based model for evaluating the drainability of potential subbase and subgrade materials for gravel roads in Iowa. The Richards equation provided the appropriate framework to study the transient unsaturated flow that usually occurs through the subbase and subgrade of a gravel road. From this framework, we identified that the saturated hydraulic conductivity, Ks, was a key parameter driving the time to drain of subgrade soils found in Iowa, thus being a good proxy variable for assessing roadway drainability. Using Ks, derived from soil texture, we were able to identify potential problem areas in terms of roadway drainage. It was found that there is a threshold for Ks of 15 cm/day that determines if the roadway will drain efficiently, based on the requirement that the time to drain, Td, of the surface roadway layer does not exceed a 2-hr limit. Two of the three most abundant textures (loam and silty clay loam), which cover nearly 60% of the state of Iowa, were found to have average Td values greater than the 2-hr limit. With such a large percentage of the state at risk for the formation of boils due to soils with relatively low saturated hydraulic conductivity values, it seems pertinent that we propose alternative design and/or maintenance practices to limit the expensive repair work in Iowa. The addition of drain tiles or French mattresses may help address drainage problems. However, before pursuing this recommendation, a comprehensive cost-benefit analysis is needed.
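
A minimal sketch of the screening rule implied above: estimate Ks by texture class, convert it to a time-to-drain proxy for the surface layer, and flag textures that exceed the 2-hr limit. The Ks lookup values and the assumed drainable-water depth are illustrative, chosen only so that Ks = 15 cm/day corresponds to Td = 2 h; they are not the values derived in the study.

# Illustrative Ks values (cm/day) by texture class; typical literature ranges,
# not the values derived in the study.
KS_BY_TEXTURE = {
    "sand": 500.0,
    "loam": 13.0,
    "silty clay loam": 4.0,
    "silt loam": 16.0,
}

DRAINABLE_DEPTH_CM = 1.25   # assumed drainable water depth in the surface layer,
                            # chosen so that Ks = 15 cm/day gives Td = 2 h

def time_to_drain_hours(ks_cm_per_day, drainable_depth_cm=DRAINABLE_DEPTH_CM):
    """Rough proxy: time for the surface layer to shed its drainable water
    at a rate limited by the saturated hydraulic conductivity."""
    return 24.0 * drainable_depth_cm / ks_cm_per_day

for texture, ks in KS_BY_TEXTURE.items():
    td = time_to_drain_hours(ks)
    verdict = "drains within 2 h" if td <= 2.0 else "at risk (Td > 2 h)"
    print(f"{texture:16s} Ks={ks:6.1f} cm/day  Td={td:5.2f} h  {verdict}")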

Relevance:

90.00%

Publisher:

Abstract:

Steep mountain catchments typically experience large sediment pulses from hillslopes, which are stored in headwater channels and remobilized by debris-flows or bedload transport. Event-based sediment budget monitoring in the active Manival debris-flow torrent in the French Alps during a two-year period gave insights into the catchment-scale sediment routing under moderate rainfall intensities that occur several times each year. The monitoring was based on intensive topographic resurveys of low- and high-order channels using different techniques (cross-section surveys with a total station and high-resolution channel surveys with terrestrial and airborne laser scanning). Data on sediment output volumes from the main channel were obtained by a sediment trap. Two debris-flows were observed, as well as several bedload transport flow events. Sediment budget analysis of the two debris-flows revealed that most of the debris-flow volumes were supplied by channel scouring (more than 92%). Bedload transport during autumn contributed to the sediment recharge of high-order channels by the deposition of large gravel wedges. This process is recognized as being fundamental for debris-flow occurrence during the subsequent spring and summer. A time shift of scour-and-fill sequences was observed between low- and high-order channels, revealing the discontinuous sediment transfer in the catchment during common flow events. A conceptual model of sediment routing for different event magnitudes is proposed.
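
A minimal sketch of the morphological budgeting behind such repeat topographic surveys, assuming simple DEM differencing on a regular grid; survey uncertainty thresholding and the actual instruments used in the study are not reproduced, and the grid below is synthetic.

import numpy as np

def sediment_budget(dem_before, dem_after, cell_area_m2):
    """Event sediment budget from repeat topographic surveys: positive
    elevation change is deposition (fill), negative is erosion (scour)."""
    dz = dem_after - dem_before
    fill = float(dz[dz > 0].sum() * cell_area_m2)
    scour = float(-dz[dz < 0].sum() * cell_area_m2)
    return {"fill_m3": fill, "scour_m3": scour, "net_m3": fill - scour}

rng = np.random.default_rng(4)
before = rng.normal(0.0, 0.05, (100, 100))   # synthetic 1 m grid, elevations in metres
after = before.copy()
after[40:60, :] -= 0.3                       # simulated channel scour band
print(sediment_budget(before, after, cell_area_m2=1.0))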

Relevance:

90.00%

Publisher:

Abstract:

PURPOSE: Statistical shape and appearance models play an important role in reducing the segmentation processing time of a vertebra and in improving results for 3D model development. Here, we describe the different steps in generating a statistical shape model (SSM) of the second cervical vertebra (C2) and provide the shape model for general use by the scientific community. The main difficulties in its construction are the morphological complexity of the C2 and its variability in the population. METHODS: The input dataset is composed of manually segmented anonymized patient computerized tomography (CT) scans. The alignment of the different datasets is done with Procrustes alignment on surface models, and the registration is then cast as a model-fitting problem using a Gaussian process. A principal component analysis (PCA)-based model is generated which includes the variability of the C2. RESULTS: The SSM was generated using 92 CT scans. The resulting SSM was evaluated for specificity, compactness and generalization ability. The SSM of the C2 is freely available to the scientific community in Slicer (open-source software for image analysis and scientific visualization) with a module created to visualize the SSM using Statismo, a framework for statistical shape modeling. CONCLUSION: The SSM of the vertebra allows the shape variability of the C2 to be represented. Moreover, the SSM will enable semi-automatic segmentation and 3D model generation of the vertebra, which would greatly benefit surgery planning.
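
A minimal sketch of the PCA step of SSM construction, assuming point correspondence has already been established by alignment and registration (the Procrustes and Gaussian-process stages are not reproduced, and the data below are synthetic stand-ins rather than C2 surfaces).

import numpy as np

def build_ssm(aligned_shapes):
    """aligned_shapes: (n_samples, n_points*3) array of corresponding,
    Procrustes-aligned vertex coordinates. Returns mean shape, PCA modes,
    and per-mode variances."""
    mean_shape = aligned_shapes.mean(axis=0)
    centered = aligned_shapes - mean_shape
    # SVD of the centered data gives the principal modes of shape variation
    _, singular_values, components = np.linalg.svd(centered, full_matrices=False)
    variances = singular_values ** 2 / (len(aligned_shapes) - 1)
    return mean_shape, components, variances

def synthesize(mean_shape, components, variances, weights):
    """New shape instance = mean + sum_i w_i * sqrt(var_i) * mode_i."""
    k = len(weights)
    return mean_shape + (weights * np.sqrt(variances[:k])) @ components[:k]

rng = np.random.default_rng(5)
training = rng.normal(size=(92, 300))    # stand-in for 92 aligned surfaces (100 points x 3)
mean_shape, components, variances = build_ssm(training)
new_shape = synthesize(mean_shape, components, variances, weights=np.array([1.5, -0.5, 0.2]))
print(new_shape.shape)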

Relevance:

90.00%

Publisher:

Abstract:

We have studied how leaders emerge in a group as a consequence of interactions among its members. We propose that leaders can emerge as a consequence of a self-organized process based on local rules of dyadic interactions among individuals. Flocks are an example of self-organized behaviour in a group and properties similar to those observed in flocks might also explain some of the dynamics and organization of human groups. We developed an agent-based model that generated flocks in a virtual world and implemented it in a multi-agent simulation computer program that computed indices at each time step of the simulation to quantify the degree to which a group moved in a coordinated way (index of flocking behaviour) and the degree to which specific individuals led the group (index of hierarchical leadership). We ran several series of simulations in order to test our model and determine how these indices behaved under specific agent and world conditions. We identified the agent, world property, and model parameters that made stable, compact flocks emerge, and explored possible environmental properties that predicted the probability of becoming a leader.
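
A minimal boids-style sketch of the kind of model described above, with a polarization-based flocking index and a simple positional leadership proxy. The local rules, weights, and both indices are illustrative assumptions, since the abstract does not specify their exact form.

import numpy as np

rng = np.random.default_rng(6)

def step(pos, vel, r=3.0, w_align=0.5, w_cohere=0.1, w_separate=0.3):
    """One update from local dyadic rules: align with neighbours, move toward
    their centroid, and avoid crowding; speed is normalized to one."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < r) & (d > 0)
        if nbrs.any():
            align = vel[nbrs].mean(axis=0) - vel[i]
            cohere = pos[nbrs].mean(axis=0) - pos[i]
            separate = (pos[i] - pos[nbrs]).sum(axis=0)
            new_vel[i] += w_align * align + w_cohere * cohere + w_separate * separate
        new_vel[i] /= np.linalg.norm(new_vel[i]) + 1e-9
    return pos + new_vel, new_vel

def flocking_index(vel):
    """Polarization: 1 when all headings coincide, near 0 when random."""
    return float(np.linalg.norm(vel.mean(axis=0)))

def leadership_index(pos, vel):
    """Illustrative proxy: how far ahead of the group centroid each agent is
    along the mean direction of motion."""
    heading = vel.mean(axis=0)
    heading /= np.linalg.norm(heading) + 1e-9
    return (pos - pos.mean(axis=0)) @ heading

pos, vel = rng.normal(0, 5, (20, 2)), rng.normal(0, 1, (20, 2))
for _ in range(100):
    pos, vel = step(pos, vel)
print("flocking index:", round(flocking_index(vel), 2),
      "top leader:", int(np.argmax(leadership_index(pos, vel))))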

Relevance:

90.00%

Publisher:

Abstract:

This study examines the structure of the Russian Reflexive Marker (-ся/-сь) and offers a usage-based model building on Construction Grammar and a probabilistic view of linguistic structure. Traditionally, reflexive verbs are accounted for relative to non-reflexive verbs. These accounts assume that linguistic structures emerge as pairs. Furthermore, these accounts assume directionality where the semantics and structure of a reflexive verb can be derived from the non-reflexive verb. However, this directionality does not necessarily hold diachronically. Additionally, the semantics and the patterns associated with a particular reflexive verb are not always shared with the non-reflexive verb. Thus, a model is proposed that can accommodate the traditional pairs as well as the possible deviations without postulating different systems. A random sample of 2000 instances marked with the Reflexive Marker was extracted from the Russian National Corpus, and the sample used in this study contains 819 unique reflexive verbs. This study moves away from the traditional pair account and introduces the concept of Neighbor Verb. A neighbor verb exists for a reflexive verb if they share the same phonological form excluding the Reflexive Marker. It is claimed here that the Reflexive Marker constitutes a system in Russian and the relation between the reflexive and neighbor verbs constitutes a cross-paradigmatic relation. Furthermore, the relation between the reflexive and the neighbor verb is argued to be of symbolic connectivity rather than directionality. Effectively, the relation holding between particular instantiations can vary. The theoretical basis of the present study builds on this assumption. Several new variables are examined in order to systematically model variability of this symbolic connectivity, specifically the degree and strength of connectivity between items. In usage-based models, the lexicon does not constitute an unstructured list of items. Instead, items are assumed to be interconnected in a network. This interconnectedness is defined as Neighborhood in this study. Additionally, each verb carves its own niche within the Neighborhood, and this interconnectedness is modeled through rhyme verbs constituting the degree of connectivity of a particular verb in the lexicon. The second component of the degree of connectivity concerns the status of a particular verb relative to its rhyme verbs. The connectivity within the neighborhood of a particular verb varies, and this variability is quantified by using the Levenshtein distance. The second property of the lexical network is the strength of connectivity between items. Frequency of use has been one of the primary variables in functional linguistics used to probe this. In addition, a new variable called Constructional Entropy is introduced in this study, building on information theory. It is a quantification of the amount of information carried by a particular reflexive verb in one or more argument constructions. The results of the lexical connectivity indicate that the reflexive verbs have statistically greater neighborhood distances than the neighbor verbs. This distributional property can be used to motivate the traditional observation that the reflexive verbs tend to have idiosyncratic properties. A set of argument constructions, generalizations over usage patterns, is proposed for the reflexive verbs in this study.
In addition to the variables associated with the lexical connectivity, a number of variables proposed in the literature are explored and used as predictors in the model. The second part of this study introduces the use of a machine learning algorithm called Random Forests. The performance of the model indicates that it is capable, to a degree, of disambiguating the proposed argument construction types of the Russian Reflexive Marker. Additionally, a global ranking of the predictors used in the model is offered. Finally, most construction grammars assume that argument constructions form a network structure. A new method is proposed that establishes generalizations over the argument constructions, referred to as the Linking Construction. In sum, this study explores the structural properties of the Russian Reflexive Marker, and a new model is set forth that can accommodate both the traditional pairs and potential deviations from them in a principled manner.
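
A minimal sketch of two of the quantities introduced above: a Levenshtein-based neighborhood distance and Constructional Entropy as the Shannon entropy of a verb's distribution over argument constructions. The transliterated verb forms and construction counts below are made-up illustrations, not corpus data.

import math
from collections import Counter

def levenshtein(a, b):
    """Standard edit distance between two word forms."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def neighborhood_distance(verb, rhyme_verbs):
    """Mean edit distance from a verb to its rhyme-verb neighbourhood;
    larger values mean looser connectivity in the lexical network."""
    return sum(levenshtein(verb, r) for r in rhyme_verbs) / len(rhyme_verbs)

def constructional_entropy(construction_counts):
    """Shannon entropy (bits) of a verb's distribution over argument
    constructions; higher entropy means usage spread over more constructions."""
    total = sum(construction_counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in construction_counts.values() if c > 0)

# made-up illustration
print(neighborhood_distance("bojatsja", ["kusatsja", "kazatsja", "dratsja"]))
print(constructional_entropy(Counter({"intransitive": 40, "oblique": 15, "passive": 5})))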

Relevance:

90.00%

Publisher:

Abstract:

The three alpha2-adrenoceptor (alpha2-AR) subtypes belong to the G protein-coupled receptor superfamily and represent potential drug targets. These receptors have many vital physiological functions, but their actions are complex and often oppose each other. Current research is therefore driven towards discovering drugs that selectively interact with a specific subtype. Cell model systems can be used to evaluate a chemical compound's activity in complex biological systems. The aim of this thesis was to optimize and validate cell-based model systems and assays to investigate alpha2-ARs as drug targets. The use of immortalized cell lines as model systems is firmly established but poses several problems, since the protein of interest is expressed in a foreign environment, and thus essential components of receptor regulation or signaling cascades might be missing. Careful cell model validation is thus required; this was exemplified by three different approaches. In cells heterologously expressing alpha2A-ARs, it was noted that the transfection technique affected the test outcome; false-negative adenylyl cyclase test results were produced unless a cell population expressing receptors in a homogeneous fashion was used. Recombinant alpha2C-ARs in non-neuronal cells were retained inside the cells, and not expressed in the cell membrane, complicating investigation of this receptor subtype. Receptor expression-enhancing proteins (REEPs) were found to be neuronal-specific adapter proteins that regulate the processing of the alpha2C-AR, resulting in an increased level of total receptor expression. Current trends call for the use of primary cells endogenously expressing the receptor of interest; therefore, primary human vascular smooth muscle cells (SMC) expressing alpha2-ARs were tested in a functional assay monitoring contractility with a myosin light chain phosphorylation assay. However, these cells were not compatible with this assay due to the loss of differentiation. A rat aortic SMC cell line transfected to express the human alpha2B-AR was adapted for the assay, and it was found that the alpha2-AR agonist, dexmedetomidine, evoked myosin light chain phosphorylation in this model.

Relevance:

90.00%

Publisher:

Abstract:

Successful management of rivers requires an understanding of the fluvial processes that govern them. This, in turn, cannot be achieved without a means of quantifying their geomorphology and hydrology and the spatio-temporal interactions between them, that is, their hydromorphology. For a long time, it has been laborious and time-consuming to measure river topography, especially in the submerged part of the channel. The measurement of the flow field has been challenging as well, and hence, such measurements have long been sparse in natural environments. Technological advancements in the field of remote sensing in recent years have opened up new possibilities for capturing synoptic information on river environments. This thesis presents new developments in fluvial remote sensing of both topography and water flow. A set of close-range remote sensing methods is employed to eventually construct a high-resolution unified empirical hydromorphological model, that is, river channel and floodplain topography and a three-dimensional areal flow field. Empirical as well as hydraulic theory-based optical remote sensing methods are tested and evaluated using normal colour aerial photographs and sonar calibration and reference measurements on a rocky-bed sub-Arctic river. The empirical optical bathymetry model is developed further by the introduction of a deep-water radiance parameter estimation algorithm that extends the field of application of the model to shallow streams. The effect of this parameter on the model is also assessed in a study of a sandy-bed sub-Arctic river using close-range high-resolution aerial photography, presenting one of the first examples of fluvial bathymetry modelling from unmanned aerial vehicles (UAV). Further close-range remote sensing methods are added to complete the topography, integrating the river bed with the floodplain to create a seamless high-resolution topography. Boat-, cart- and backpack-based mobile laser scanning (MLS) are used to measure the topography of the dry part of the channel at a high resolution and accuracy. Multitemporal MLS is evaluated along with UAV-based photogrammetry against terrestrial laser scanning reference data and merged with UAV-based bathymetry to create a two-year series of seamless digital terrain models. These allow the evaluation of the methodology for conducting high-resolution change analysis of the entire channel. The remote-sensing-based model of hydromorphology is completed by a new methodology for mapping the flow field in 3D. An acoustic Doppler current profiler (ADCP) is deployed on a remote-controlled boat with a survey-grade global navigation satellite system (GNSS) receiver, allowing the areally sampled 3D flow vectors to be positioned in 3D space as a point cloud; interpolation of this point cloud into a 3D matrix allows a quantitative volumetric flow analysis. Multitemporal areal 3D flow field data show the evolution of the flow field during a snow-melt flood event. The combination of the underwater and dry topography with the flow field yields a complete model of river hydromorphology at the reach scale.
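
A minimal sketch of an empirical optical bathymetry model of the general kind described above, here a Lyzenga-style log-linear depth estimator with a deep-water radiance term fitted against calibration depths. The formulation and the synthetic data are illustrative assumptions, not the thesis model or its calibration.

import numpy as np

def fit_optical_bathymetry(radiance, deep_water_radiance, calib_depths):
    """Lyzenga-style empirical model (an illustrative choice): depth is fitted
    as b0 + b1 * ln(L - L_deep) against sonar calibration depths."""
    x = np.log(np.clip(radiance - deep_water_radiance, 1e-6, None))
    b1, b0 = np.polyfit(x, calib_depths, 1)
    return b0, b1

def predict_depth(radiance, deep_water_radiance, b0, b1):
    x = np.log(np.clip(radiance - deep_water_radiance, 1e-6, None))
    return b0 + b1 * x

# synthetic calibration data: brighter pixels correspond to shallower water
rng = np.random.default_rng(7)
true_depth = rng.uniform(0.2, 2.0, 200)
L_deep = 20.0
radiance = L_deep + 80.0 * np.exp(-1.5 * true_depth) + rng.normal(0, 0.5, 200)
b0, b1 = fit_optical_bathymetry(radiance, L_deep, true_depth)
est = predict_depth(radiance, L_deep, b0, b1)
print("depth RMSE: %.2f m" % np.sqrt(np.mean((est - true_depth) ** 2)))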

Relevance:

90.00%

Publisher:

Abstract:

The main objective of this Master’s thesis is to develop a cost allocation model for a leading food industry company in Finland. The goal is to develop an allocation method for fixed overhead expenses produced in a specific production unit and to create a plausible tracking system for product costs. The second objective is to construct an allocation model and modify the created model to suit other units as well. Costs, activities, drivers and appropriate allocation methods are studied. The thesis starts with a literature review of existing ABC theory, followed by an inspection of cost information and interviews with company officials to get a general view of the requirements for the model to be constructed. Familiarization with the company began with the existing cost accounting methods. The main proposals for a new allocation model emerged from the interviews and were used to set targets for developing the new allocation method. As a result of this thesis, an Excel-based model is created based on the theoretical and empirical data. The new system is able to handle overhead costs in more detail, improving cost awareness, transparency in cost allocations, and the representation of the products’ cost structure. The improved cost awareness is achieved by selecting the most suitable cost drivers for this situation. Capacity changes are also taken into consideration: the use of practical or normal capacity instead of theoretical capacity is suggested. Finally, some recommendations for further development are made concerning capacity handling and cost collection.
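
A minimal sketch of an activity-based allocation of fixed overheads, with driver rates computed against practical rather than theoretical capacity as recommended above. The activity pools, drivers, and figures are made-up illustrations, not the company's data or the actual Excel model.

# Illustrative ABC allocation: pools, drivers, and volumes are made up.
activity_pools = {
    "machine maintenance": {"cost": 120_000, "driver": "machine hours"},
    "quality control":     {"cost": 45_000,  "driver": "inspections"},
    "warehousing":         {"cost": 60_000,  "driver": "pallets moved"},
}

# Practical capacity of each driver (not theoretical capacity), so the cost of
# idle capacity stays visible instead of being loaded onto products.
practical_capacity = {"machine hours": 8_000, "inspections": 1_500, "pallets moved": 10_000}

products = {
    "product A": {"machine hours": 3_000, "inspections": 400, "pallets moved": 2_500},
    "product B": {"machine hours": 2_500, "inspections": 700, "pallets moved": 4_000},
}

# Driver rate = pool cost / practical capacity of its driver
driver_rates = {pool["driver"]: pool["cost"] / practical_capacity[pool["driver"]]
                for pool in activity_pools.values()}

for name, usage in products.items():
    allocated = sum(driver_rates[d] * q for d, q in usage.items())
    print(f"{name}: allocated overhead {allocated:,.0f}")

idle = {d: practical_capacity[d] - sum(p[d] for p in products.values())
        for d in practical_capacity}
print("unused practical capacity:", idle)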