21 results for Transaction level modeling
in CentAUR: Central Archive University of Reading - UK
Abstract:
Using a transaction-costs framework, we examine the impact of information and communication technology use (mobile phones and radios) on market participation in developing-country agricultural markets, drawing on a novel transaction-level data set of Ghanaian farmers. Our analysis of farmers' choice of markets suggests that information from a broader range of markets may not always induce farmers to sell in more distant ones; instead, farmers may use that broader information to enhance their bargaining power in closer markets. Finally, we find only weak evidence that mobile phone use attracts buyers to the farm gate.
Abstract:
Theories on the link between achievement goals and achievement emotions focus on their within-person functional relationship (i.e., intraindividual relations). However, empirical studies have failed to analyze these intraindividual relations and have instead examined between-person covariation of the two constructs (i.e., interindividual relations). Aiming to better connect theory and empirical research, the present study (N = 120 10th grade students) analyzed intraindividual relations by assessing students’ state goals and emotions using experience sampling (N = 1,409 assessments within persons). In order to replicate previous findings on interindividual relations, students’ trait goals and emotions were assessed using self-report questionnaires. Despite being statistically independent, both types of relations were consistent with theoretical expectations, as shown by multi-level modeling: Mastery goals were positive predictors of enjoyment and negative predictors of boredom and anger; performance-approach goals were positive predictors of pride; and performance-avoidance goals were positive predictors of anxiety and shame. Reasons for the convergence of intra- and interindividual findings, directions for future research, and implications for educational practice are discussed.
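As a concrete illustration of the distinction drawn above, here is a minimal sketch of how intraindividual (within-person) and interindividual (between-person) goal-emotion relations can be separated by person-mean centering in a multilevel model. The data file and column names are hypothetical, not taken from the study.

```python
# Sketch: separating within-person (intraindividual) from between-person
# (interindividual) goal-emotion relations via person-mean centering.
# The file and columns (student, mastery_goal, enjoyment) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("esm_states.csv")  # one row per experience-sampling beep

# The within-person deviation carries the intraindividual relation;
# the person mean carries the interindividual one.
df["mastery_mean"] = df.groupby("student")["mastery_goal"].transform("mean")
df["mastery_within"] = df["mastery_goal"] - df["mastery_mean"]

# Random-intercept model: assessments (level 1) nested in students (level 2).
model = smf.mixedlm("enjoyment ~ mastery_within + mastery_mean",
                    df, groups=df["student"])
print(model.fit().summary())
```

Separate coefficients for mastery_within and mastery_mean make explicit why the two types of relations can be statistically independent yet both theory-consistent.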
Abstract:
Novel imaging techniques are playing an increasingly important role in drug development, providing insight into the mechanism of action of new chemical entities. The data sets obtained by these methods can be large, with complex inter-relationships, but the most appropriate statistical analysis for handling these data is often uncertain, precisely because of the exploratory way in which they are collected. We present an example from a clinical trial using magnetic resonance imaging to assess changes in atherosclerotic plaques following treatment with a tool compound with established clinical benefit. We compared two specific approaches to handling the correlations due to physical location and repeated measurements: two-level and four-level multilevel models. The two methods identified similar structural variables, but the higher-level multilevel models had the advantage of explaining a greater proportion of the variation, and their modeling assumptions appeared to be better satisfied.
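To make the comparison concrete, here is a minimal sketch, under assumed data, of a multilevel model with random effects for patients and for plaque locations nested within patients; the column names are hypothetical and the paper's four-level structure is only approximated.

```python
# Sketch: nested random effects absorbing correlation from physical
# location and repeated visits. Hypothetical columns: patient, location
# (plaque position), visit, treated (arm), wall_area (response).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("plaque_mri.csv")  # hypothetical trial data

# Random intercepts for patients, with locations nested within patients;
# repeated measurements per location enter as residual-level replicates.
model = smf.mixedlm(
    "wall_area ~ treated * visit",
    df,
    groups=df["patient"],
    re_formula="1",
    vc_formula={"location": "0 + C(location)"},
)
print(model.fit().summary())
```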
The impact of deformation strain on the formation of banded clouds in idealized modeling experiments
Abstract:
Experiments are performed using an idealized version of an operational forecast model to determine the impact on banded frontal clouds of the strength of deformational forcing, low-level baroclinicity, and model representation of convection. Line convection is initiated along the front, and slantwise bands extend from the top of the line-convection elements into the cold air. This banding is attributed primarily to M adjustment. The cross-frontal spreading of the cold pool generated by the line convection leads to further triggering of upright convection in the cold air that feeds into these slantwise bands. Secondary low-level bands form later in the simulations; these are attributed to the release of conditional symmetric instability. Enhanced deformation strain leads to earlier onset of convection and more coherent line convection. A stronger cold pool is generated, but its speed is reduced relative to that seen in experiments with weaker deformational strain, because of inhibition by the strain field. Enhanced low-level baroclinicity leads to the generation of more inertial instability by line convection (for a given capping height of convection), and consequently greater strength of the slantwise circulations formed by M adjustment. These conclusions are based on experiments without a convective-parametrization scheme. Experiments using the standard or a modified scheme for this model demonstrate known problems with the use of this scheme at the awkward 4 km grid length used in these simulations. Copyright © 2008 Royal Meteorological Society
Abstract:
Ecological risk assessments must increasingly consider the effects of chemical mixtures on the environment as anthropogenic pollution continues to grow in complexity. Yet testing every possible mixture combination is impractical and unfeasible; thus, there is an urgent need for models that can accurately predict mixture toxicity from single-compound data. Currently, two models are frequently used for this purpose: concentration addition (CA) and independent action (IA). The accuracy of the predictions generated by these models is currently debated and needs to be resolved before their use in risk assessments can be fully justified. The present study addresses this issue by determining whether the IA model adequately describes the toxicity of binary mixtures of five pesticides and environmental contaminants (cadmium, chlorpyrifos, diuron, nickel, and prochloraz), each with dissimilar modes of action, on the reproduction of the nematode Caenorhabditis elegans. In three out of 10 cases, the IA model failed to describe mixture toxicity adequately, with significant synergism or antagonism being observed. In a further three cases, there was an indication of synergy, antagonism, and effect-level-dependent deviations, respectively, but these were not statistically significant. The extent of the significant deviations varied, but all were such that the predicted percentage effect on reproductive output would have been wrong by 18 to 35% (e.g., the concentration expected to cause a 50% effect led to an 85% effect). The presence of such a high number and variety of deviations has important implications for the use of existing mixture toxicity models in risk assessments, especially where all or part of the deviation is synergistic.
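For reference, the IA prediction itself is simple to compute from single-compound concentration-response fits; the sketch below assumes a two-parameter log-logistic curve and hypothetical parameter values.

```python
# Sketch of the independent action (IA) prediction: for dissimilarly acting
# compounds, E(mix) = 1 - prod_i (1 - E_i(c_i)), where E_i is the fractional
# effect of compound i alone. The log-logistic curve is an assumed form.
import numpy as np

def log_logistic(conc, ec50, slope):
    """Fractional effect (0..1) of one compound at concentration conc."""
    return 1.0 / (1.0 + (ec50 / conc) ** slope)

def independent_action(concs, ec50s, slopes):
    """IA-predicted fractional mixture effect from single-compound fits."""
    effects = [log_logistic(c, e, s) for c, e, s in zip(concs, ec50s, slopes)]
    return 1.0 - np.prod([1.0 - e for e in effects])

# Hypothetical binary mixture; e.g. two compounds each causing a 30%
# effect alone would give an IA prediction of 1 - 0.7**2 = 51%.
print(independent_action([1.2, 0.8], ec50s=[2.0, 1.5], slopes=[2.0, 3.0]))
```

Observed mixture effects significantly above this prediction indicate synergism; effects significantly below it indicate antagonism.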
Abstract:
This paper describes the user modeling component of EPIAIM, a consultation system for data analysis in epidemiology. The component represents knowledge of concepts in the domain, so that explanations of those concepts can be adapted to user needs. The first part of the paper describes two studies of user requirements. The first is a questionnaire study examining respondents' familiarity with concepts. The second is an analysis of concept descriptions in textbooks and from expert epidemiologists, examining how discourse strategies are tailored to the level of experience of the expected audience. The second part of the paper describes how the results of these studies were used to design the user modeling component of EPIAIM. The module works in two steps. In the first step, a few trigger questions activate a stereotype that includes a "body" and an "inference component". The body represents the knowledge that a class of users is expected to have, along with the probability that each concept is known. In the inference component, the process of learning concepts is represented as a belief network. In the second step, this belief network is used to refine the initial default information in the stereotype's body, by asking a few questions about concepts for which it is uncertain whether the user knows them, and propagating this new evidence to revise the overall assessment. The system has been implemented on a UNIX workstation. A worked example is presented, and advantages and limitations of the approach are discussed.
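A toy sketch of the two-step refinement, reduced to a two-concept fragment; the concepts, priors, and conditional probabilities are invented for illustration.

```python
# Step 1: trigger questions activate a stereotype "body": default
# probabilities that a class of users knows each concept (invented numbers).
stereotype = {"relative_risk": 0.6, "odds_ratio": 0.4}

# One link of the inference component:
# P(relative_risk known | odds_ratio known / not known).
p_rr_given_or = {True: 0.95, False: 0.45}

# Step 2: ask about the uncertain concept and propagate the answer.
# With the parent concept observed, the belief over its neighbour
# collapses to the corresponding conditional probability.
user_knows_odds_ratio = True
stereotype["odds_ratio"] = 1.0 if user_knows_odds_ratio else 0.0
stereotype["relative_risk"] = p_rr_given_or[user_knows_odds_ratio]
print(stereotype)  # the default 0.6 is revised upward to 0.95
```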
Abstract:
It is known that germin, which is a marker of the onset of growth in germinating wheat, is an oxalate oxidase, and also that germins possess sequence similarity with legumin and vicilin seed storage proteins. These two pieces of information have been combined in order to generate a 3D model of germin based on the structure of vicilin and to examine the model with regard to a potential oxalate oxidase active site. A cluster of three histidine residues has been located within the conserved beta-barrel structure. While there is a relatively low level of overall sequence similarity between the model and the vicilin structures, the conservation of amino acids important in maintaining the scaffold of the beta-barrel lends confidence to the juxtaposition of the histidine residues. The cluster is similar structurally to those found in copper amine oxidase and other proteins, leading to the suggestion that it defines a metal-binding location within the oxalate oxidase active site. It is also proposed that the structural elements involved in intermolecular interactions in vicilins may play a role in oligomer formation in germin/oxalate oxidase.
Abstract:
A new primary model based on a thermodynamically consistent first-order kinetic approach was constructed to describe non-log-linear inactivation kinetics of pressure-treated bacteria. The model assumes a first-order process in which the specific inactivation rate changes inversely with the square root of time. The model gave reasonable fits to experimental data over six to seven orders of magnitude. It was also tested on 138 published data sets and provided good fits in about 70% of cases, in which the shape of the curve followed the typical convex-upward form. In the remaining published examples, curves contained additional shoulder regions or extended tail regions. Curves with shoulders could be accommodated by including an additional time-delay parameter, and curves with tails by omitting points beyond the time at which survival levels remained more or less constant. The model parameters varied regularly with pressure, which may reflect a genuine mechanistic basis for the model. This property also allowed the calculation of (a) parameters analogous to the decimal reduction time D and the z value (the temperature increase needed to change the D value by a factor of 10) of thermal processing, and hence the processing conditions needed to attain a desired level of inactivation; and (b) the apparent thermodynamic volumes of activation associated with the lethal events. The hypothesis that inactivation rates change as a function of the square root of time is consistent with a diffusion-limited process.
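A worked sketch of the kinetics the abstract describes (notation ours): with the specific inactivation rate falling as $k/\sqrt{t}$,

$$\frac{dN}{dt} \;=\; -\frac{k}{\sqrt{t}}\,N \quad\Longrightarrow\quad \ln\frac{N(t)}{N_0} \;=\; -\int_0^{t}\frac{k}{\sqrt{s}}\,ds \;=\; -2k\sqrt{t},$$

so survivor curves are linear in $\sqrt{t}$, which produces the convex-upward shape on a log-survivor plot; a shoulder corresponds to replacing $t$ by $t - t_{\mathrm{lag}}$ for $t > t_{\mathrm{lag}}$.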
Abstract:
Strong vertical gradients at the top of the atmospheric boundary layer affect the propagation of electromagnetic waves and can produce radar ducts. A three-dimensional, time-dependent, nonhydrostatic numerical model was used to simulate the propagation environment in the atmosphere over the Persian Gulf when aircraft observations of ducting had been made. A division of the observations into high- and low-wind cases was used as a framework for the simulations. Three sets of simulations were conducted with initial conditions of varying degrees of idealization and were compared with the observations taken in the Ship Antisubmarine Warfare Readiness/Effectiveness Measuring (SHAREM-115) program. The best results occurred with the initialization based on a sounding taken over the coast modified by the inclusion of data on low-level atmospheric conditions over the Gulf waters. The development of moist, cool, stable marine internal boundary layers (MIBL) in air flowing from land over the waters of the Gulf was simulated. The MIBLs were capped by temperature inversions and associated lapses of humidity and refractivity. The low-wind MIBL was shallower and the gradients at its top were sharper than in the high-wind case, in agreement with the observations. Because it is also forced by land–sea contrasts, a sea-breeze circulation frequently occurs in association with the MIBL. The size, location, and internal structure of the sea-breeze circulation were realistically simulated. The gradients of temperature and humidity that bound the MIBL cause perturbations in the refractivity distribution that, in turn, lead to trapping layers and ducts. The existence, location, and surface character of the ducts were well captured. Horizontal variations in duct characteristics due to the sea-breeze circulation were also evident. The simulations successfully distinguished between high- and low-wind occasions, a notable feature of the SHAREM-115 observations. The modeled magnitudes of duct depth and strength, although leaving scope for improvement, were most encouraging.
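The link from temperature and humidity gradients to ducting can be made explicit with the standard refractivity formulas (N = 77.6 p/T + 3.73e5 e/T^2 and M = N + 0.157 z); the sketch below applies them to an invented sounding shaped like the MIBL described above.

```python
# Sketch: modified refractivity from a sounding and trapping-layer detection.
# Standard formulas: N = 77.6*p/T + 3.73e5*e/T**2, M = N + 0.157*z
# (p, e in hPa; T in K; z in m). Trapping requires dM/dz < 0 in a layer.
import numpy as np

def modified_refractivity(p_hpa, e_hpa, t_k, z_m):
    n = 77.6 * p_hpa / t_k + 3.73e5 * e_hpa / t_k**2
    return n + 0.157 * z_m

def trapping_layers(m, z_m):
    """Indices of layers in which M decreases with height (trapping)."""
    dm_dz = np.diff(m) / np.diff(z_m)
    return np.where(dm_dz < 0)[0]

# Invented sounding: a cool, moist MIBL capped by a temperature inversion
# and a sharp humidity lapse, as in the low-wind case described above.
z = np.array([0.0, 100.0, 200.0, 300.0, 400.0])      # height (m)
p = np.array([1010.0, 998.0, 986.0, 975.0, 963.0])   # pressure (hPa)
e = np.array([28.0, 27.0, 14.0, 12.0, 11.0])         # vapour pressure (hPa)
t = np.array([303.0, 302.0, 305.0, 306.0, 306.5])    # temperature (K)
m = modified_refractivity(p, e, t, z)
print(trapping_layers(m, z))  # -> [1]: the trapping layer at the MIBL top
```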
Abstract:
In this contribution we aim to anchor Agent-Based Modeling (ABM) simulations in actual models of human psychology. More specifically, we apply unidirectional ABM to social psychological models using low-level (i.e., intra-individual) agents to examine whether they generate better predictions of behavioral intentions and of behavior itself, in comparison to standard statistical approaches. Moreover, this contribution tests to what extent the predictive validity of models of attitude such as the Theory of Planned Behavior (TPB) or the Model of Goal-directed Behavior (MGB) depends on the assumption that people's decisions and actions are purely rational. Simulations were therefore run with agents deviating from rationality to varying degrees, using a trembling-hand method. Two data sets, concerning soft-drink consumption and physical activity respectively, were used. Three key findings emerged from the simulations. First, compared to the standard statistical approach, agent-based simulation generally improves the prediction of behavior from intention. Second, the improvement in prediction is inversely proportional to the complexity of the underlying theoretical model. Finally, introducing varying degrees of deviation from rationality in agents' behavior can improve the goodness of fit of the simulations. By demonstrating the potential of ABM as a complementary perspective for evaluating social psychological models, this contribution underlines the necessity of better defining agents in terms of psychological processes before examining higher levels such as interactions between individuals.
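A minimal sketch of the trembling-hand idea applied to a TPB-style agent; the weights, threshold, and tremble probability below are hypothetical, not the fitted values from the study.

```python
# Sketch of a "trembling hand" agent: intention follows a TPB-style linear
# rule, but with probability epsilon the agent deviates from the rational
# (threshold) choice. All coefficients are invented for illustration.
import random

def tpb_intention(attitude, norm, pbc, w=(0.4, 0.3, 0.3)):
    """Intention as a weighted sum of the TPB predictors, scaled 0..1."""
    return w[0] * attitude + w[1] * norm + w[2] * pbc

def act(intention, epsilon=0.1, threshold=0.5):
    """Rational rule: act iff intention > threshold; tremble with prob. epsilon."""
    rational = intention > threshold
    if random.random() < epsilon:
        return not rational  # the hand trembles: the agent deviates
    return rational

random.seed(1)
# Hypothetical agent: favourable attitude, weak norm, moderate control.
i = tpb_intention(attitude=0.8, norm=0.3, pbc=0.6)
behaviours = [act(i, epsilon=0.2) for _ in range(1000)]
print(i, sum(behaviours) / 1000)  # intention 0.59; acts ~80% of the time
```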
Abstract:
Purpose: Increasing costs of health care, fuelled by demand for high-quality, cost-effective care, have driven hospitals to streamline their patient care delivery systems. One such systematic approach is the adoption of Clinical Pathways (CPs) as a tool to increase the quality of healthcare delivery. However, most organizations still rely on paper-based pathway guidelines or specifications, which have limitations in process management and can consequently affect patient safety outcomes. In this paper, we present a method for generating clinical pathways based on organizational semiotics, capturing knowledge from the syntactic, semantic, and pragmatic levels through to the social level. Design/methodology/approach: The proposed modeling approach to the generation of CPs adopts organizational semiotics and enables the generation of semantically rich representations of CP knowledge. The Semantic Analysis Method (SAM) is applied to explicitly represent the semantics of the concepts, their relationships, and patterns of behavior in terms of an ontology chart. The Norm Analysis Method (NAM) is adopted to identify and formally specify patterns of behavior and the rules that govern the actions identified in the ontology chart. Information collected during semantic and norm analysis is integrated to guide the generation of CPs using best practice represented in BPMN, thus enabling the automation of CPs. Findings: This research confirms the necessity of taking social aspects into consideration when designing information systems and automating CPs. The complexity of healthcare processes can best be tackled by analyzing stakeholders, whom we treat as social agents, together with their goals and patterns of action within the agent network. Originality/value: Current modeling methods describe CPs from a structural perspective comprising activities, properties, and interrelationships. However, these methods lack a mechanism for describing possible patterns of human behavior and the conditions under which the behavior will occur. To overcome this weakness, a semiotic approach to the generation of clinical pathways is introduced. The CPs generated from SAM together with norms enrich the knowledge representation of the domain through ontology modeling, which allows the recognition of human responsibilities and obligations and, more importantly, the ultimate power of decision making in exceptional circumstances.
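For illustration, behavioural norms of the kind norm analysis produces are often written in a "whenever / if / then / is / to" pattern; here is a minimal sketch with invented clinical content.

```python
# Sketch: a behavioural norm encoded in the "whenever / if / then / is / to"
# pattern. The pathway, threshold, and action are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    whenever: str                      # context in which the norm applies
    if_state: Callable[[dict], bool]   # condition on the case state
    agent: str                         # responsible social agent
    deontic: str                       # "obliged" | "permitted" | "prohibited"
    action: str

chest_pain_norm = Norm(
    whenever="patient admitted to the chest-pain pathway",
    if_state=lambda case: case["troponin"] > 0.04,  # hypothetical threshold
    agent="attending cardiologist",
    deontic="obliged",
    action="order an angiogram within 24 hours",
)

case = {"troponin": 0.09}
if chest_pain_norm.if_state(case):
    print(f"{chest_pain_norm.agent} is {chest_pain_norm.deontic} "
          f"to {chest_pain_norm.action}")
```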
Abstract:
Geophysical time series sometimes exhibit serial correlations that are stronger than can be captured by the commonly used first‐order autoregressive model. In this study we demonstrate that a power law statistical model serves as a useful upper bound for the persistence of total ozone anomalies on monthly to interannual timescales. Such a model is usually characterized by the Hurst exponent. We show that the estimation of the Hurst exponent in time series of total ozone is sensitive to various choices made in the statistical analysis, especially whether and how the deterministic (including periodic) signals are filtered from the time series, and the frequency range over which the estimation is made. In particular, care must be taken to ensure that the estimate of the Hurst exponent accurately represents the low‐frequency limit of the spectrum, which is the part that is relevant to long‐term correlations and the uncertainty of estimated trends. Otherwise, spurious results can be obtained. Based on this analysis, and using an updated equivalent effective stratospheric chlorine (EESC) function, we predict that an increase in total ozone attributable to EESC should be detectable at the 95% confidence level by 2015 at the latest in southern midlatitudes, and by 2020–2025 at the latest over 30°–45°N, with the time to detection increasing rapidly with latitude north of this range.
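A minimal sketch of one common estimation route, the low-frequency periodogram slope, assuming a stationary long-memory process with S(f) ~ f^(-beta) and H = (beta + 1)/2; as the abstract stresses, the answer depends on how deterministic signals are removed and on the fitted frequency range (the 10% cutoff here is arbitrary).

```python
# Sketch: Hurst exponent of a deseasonalized anomaly series from the
# low-frequency slope of its periodogram, under the assumption
# S(f) ~ f^{-beta} with H = (beta + 1) / 2 for a stationary process.
import numpy as np

def hurst_spectral(x, low_freq_fraction=0.1):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                     # deterministic signals assumed removed
    f = np.fft.rfftfreq(len(x))[1:]      # drop the zero frequency
    s = np.abs(np.fft.rfft(x))[1:] ** 2  # periodogram
    keep = f <= f[0] + low_freq_fraction * (f[-1] - f[0])
    slope, _ = np.polyfit(np.log(f[keep]), np.log(s[keep]), 1)
    return (1.0 - slope) / 2.0  # slope = -beta, so H = (beta + 1) / 2

rng = np.random.default_rng(0)
print(hurst_spectral(rng.standard_normal(4096)))  # white noise: H near 0.5
```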
Abstract:
It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where blood-oxygen-level-dependent signals are recorded, understanding and accurately modeling the hemodynamic relationship between CBF and CBV becomes increasingly important. This study presents an empirical, data-based modeling framework for model identification from experimental CBF and CBV data. It is shown that the relationship between changes in CBF and CBV can be described using a parsimonious autoregressive-with-exogenous-input model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve this errors-in-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, combining RTLS with a filtering method leads to a parsimonious but very effective model that characterizes the relationship between changes in CBF and CBV.
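To fix ideas, classical TLS, the step the paper regularizes, can be written in a few lines via the SVD of the augmented data matrix; the simulated coupling coefficient below is invented for illustration, not taken from the paper.

```python
# Sketch of the errors-in-variables setting: classical total least squares
# via the SVD of [X | y]. The paper's RTLS additionally penalizes the
# solution norm; only the unregularized step it builds on is shown here.
import numpy as np

def tls(X, y):
    """Classical TLS: minimal joint perturbation of X and y."""
    Z = np.column_stack([X, y])
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    v = vt[-1]               # right singular vector of the smallest sigma
    return -v[:-1] / v[-1]   # coefficients of y = X @ beta

rng = np.random.default_rng(2)
cbf_true = rng.standard_normal(500)               # hypothetical CBF changes
cbv_true = 0.38 * cbf_true                        # assumed linear coupling
cbf = cbf_true + 0.1 * rng.standard_normal(500)   # noise in the regressor
cbv = cbv_true + 0.1 * rng.standard_normal(500)   # noise in the response
print(tls(cbf[:, None], cbv))  # close to 0.38
```

With noise in the regressor, ordinary LS is biased toward zero (attenuation), which is exactly the errors-in-variables problem that motivates TLS and its regularized variants.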
Abstract:
In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial-description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models, determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, the resulting models are parsimonious, comprising considerably fewer parameters than those generated by the conventional PNN algorithm. Three benchmark examples are elaborated, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparisons with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
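The selection step can be pictured with a simplified greedy forward selection over candidate polynomial terms; this is a stand-in for OLS proper (which orthogonalizes the regressors for efficiency), shown here with invented data.

```python
# Sketch: greedy forward selection of polynomial (PD-style) terms by
# residual-error reduction, a simplified stand-in for orthogonal least
# squares term selection. Data and the true model are invented.
import itertools
import numpy as np

def candidate_terms(X, order=2):
    """Monomials of the inputs up to the given order (PD-style terms)."""
    n = X.shape[1]
    idx = [(i,) for i in range(n)]
    idx += list(itertools.combinations_with_replacement(range(n), order))
    return [np.prod(X[:, list(t)], axis=1) for t in idx], idx

def forward_select(X, y, n_terms=2):
    cols, names = candidate_terms(X)
    chosen, basis = [], [np.ones(len(y))]
    for _ in range(n_terms):
        def sse(c):  # residual sum of squares if candidate c joins the basis
            A = np.column_stack(basis + [c])
            r = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
            return r @ r
        best = min(range(len(cols)), key=lambda i: sse(cols[i]))
        basis.append(cols.pop(best))
        chosen.append(names.pop(best))
    return chosen

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 3))
y = 2.0 * X[:, 0] * X[:, 1] + X[:, 2] + 0.1 * rng.standard_normal(200)
print(forward_select(X, y))  # expected: [(0, 1), (2,)]
```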
Abstract:
In 2007, the world reached the unprecedented milestone of half of its people living in cities, and that proportion is projected to reach 60% by 2030. The combined effect of global climate change and rapid urban growth, accompanied by economic and industrial development, will likely make city residents more vulnerable to a number of urban environmental problems, including extreme weather and climate conditions, sea-level rise, poor public health and air quality, atmospheric transport of accidental or intentional releases of toxic material, and limited water resources. One fundamental aspect of predicting future risks and defining mitigation strategies is to understand the weather and regional climate as affected by cities. For this reason, dozens of researchers from many disciplines and nations attended the Urban Weather and Climate Workshop. Twenty-five students from Chinese universities and institutes also took part. The presentations by the workshop's participants spanned a wide range of topics, from the interaction between urban climate and energy consumption in a changing climate to the impact of urban areas on storms and local circulations, and from the impact of urbanization on the hydrological cycle to air quality and weather prediction.