847 results for Discrete Regression and Qualitative Choice Models


Relevance: 100.00%

Abstract:

Most statistical methods use hypothesis testing. Analysis of variance, regression, discrete choice models, contingency tables, and other analysis methods commonly used in transportation research share hypothesis testing as the means of making inferences about the population of interest. Although hypothesis testing has been a cornerstone of empirical research for many years, various aspects of hypothesis tests are commonly misapplied, misinterpreted, and ignored by novices and expert researchers alike. At first glance, hypothesis testing appears straightforward: develop the null and alternative hypotheses, compute the test statistic, compare it to a standard distribution, estimate the probability of rejecting the null hypothesis, and then make claims about the importance of the finding. This, however, is an oversimplification of the process. Hypothesis testing as applied in empirical research is examined here; the reader is assumed to have a basic knowledge of the role of hypothesis testing in various statistical methods. Through the use of an example, the mechanics of hypothesis testing are first reviewed. Five precautions surrounding the use and interpretation of hypothesis tests are then developed; examples of each demonstrate how errors are made, and remedies are identified so that similar errors can be avoided. Conclusions are drawn on how these results can improve the conduct of empirical research in transportation.
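As a concrete illustration of the mechanics described in this abstract (not drawn from the paper itself), the sketch below runs a two-sample t-test on simulated travel-time data; the scenario, variable names and numbers are assumptions for illustration only.

```python
# Minimal sketch of the hypothesis-testing mechanics described above.
# The data are simulated for illustration; they are not from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical travel times (minutes) observed before and after a signal retiming.
before = rng.normal(loc=22.0, scale=4.0, size=40)
after = rng.normal(loc=20.5, scale=4.0, size=40)

# H0: mean travel time is unchanged; H1: the means differ (two-sided test).
t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)

alpha = 0.05
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0 at the 5% level (evidence of a difference in means).")
else:
    print("Fail to reject H0 at the 5% level.")
# Note: failing to reject H0 is not evidence that the means are equal,
# and a small p-value says nothing about practical importance.
```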

Relevance: 100.00%

Abstract:

Background: Depression is a major public health problem worldwide and is currently ranked second to heart disease for years lost due to disability. For many decades, international research has found that depressive symptoms occur more frequently among low-socioeconomic-status (SES) individuals than among their more-advantaged peers. However, the reasons why those in low socioeconomic groups suffer more depressive symptoms are not well understood. Studies investigating the prevalence of depression and its association with SES emanate largely from developed countries, with little research in developing countries. In particular, there is a serious dearth of research on depression, and no investigation of its association with SES, in Vietnam. The aims of the research presented in this Thesis are to estimate the prevalence of depressive symptoms among Vietnamese adults, examine the nature and extent of the association between SES and depression, and elucidate causal pathways linking SES to depressive symptoms.

Methods: The research was conducted between September 2008 and November 2009 in Hue city in central Vietnam and used a combination of qualitative (in-depth interviews) and quantitative (survey) data collection methods. The qualitative study contributed to the development of the theoretical model and to the refinement of culturally appropriate data collection instruments for the quantitative study. The main survey was a cross-sectional population-based survey with randomised cluster sampling. A sample of 1976 respondents aged 25-55 years from ten randomly selected residential zones (quarters) of Hue city completed the questionnaire (response rate 95.5%).

Measures: SES was classified using three indicators: education, occupation and income. The Center for Epidemiologic Studies-Depression (CES-D) scale was used to measure depressive symptoms (range 0-51, mean = 11.0, SD = 8.5). Three cut-off points for the CES-D scores were applied: 'at risk for clinical depression' (16 or above), 'depressive symptoms' (above 21) and 'depression' (above 25). Six psychosocial indicators (lifetime trauma, chronic stress, recent life events, social support, self-esteem, and mastery) were hypothesised to mediate the association between SES and depressive symptoms.

Analyses: The prevalence of depressive symptoms was analysed using bivariate analyses. The multivariable analytic phase comprised ordinary least squares regression, in accordance with Baron and Kenny's three-step framework for mediation modelling. All analyses were adjusted for a range of confounders, including age, marital status, smoking, drinking and chronic diseases, and the mediation models were stratified by gender.

Results: Among these Vietnamese adults, 24.3% were at or above the cut-off for being 'at risk for clinical depression', 11.9% were classified as having depressive symptoms and 6.8% were categorised as having depression. SES was inversely related to depressive symptoms: the least educated, those with low occupational status, and those with the lowest incomes reported more depressive symptoms. Socioeconomically disadvantaged individuals were more likely to report experiencing stress (lifetime trauma, chronic stress or recent life events), perceived less social support, and reported fewer personal resources (self-esteem and mastery) than their more-advantaged counterparts. These psychosocial resources were all significantly associated with depressive symptoms independent of SES. Each psychosocial factor showed a significant mediating effect on the association between SES and depressive symptoms. This was found for all measures of SES, and for both males and females. In particular, personal resources (mastery, self-esteem) and chronic stress accounted for a substantial proportion of the variation in depressive symptoms between socioeconomic groups. Social support and recent life events contributed modestly to socioeconomic differences in depressive symptoms, whereas lifetime trauma contributed the least to these inequalities.

Conclusion: This is the first known study in Vietnam, or any developing country, to systematically examine the extent to which psychosocial factors mediate the relationship between SES and depression. The study contributes new evidence regarding the burden of depression in Vietnam. The findings have practical relevance for advocacy, mental health promotion and health-care services, and point to the need for programs that focus on building a sense of personal mastery and self-esteem. More broadly, the work presented in this Thesis contributes to the international scientific literature on the social determinants of depression.
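As a rough sketch of the Baron and Kenny three-step framework named under Analyses above, the snippet below fits the three OLS regressions for one hypothetical mediator; the file and column names (ses, mastery, cesd) are assumptions, and the real analysis also adjusted for confounders and was stratified by gender.

```python
# Sketch of Baron & Kenny's three-step mediation test with OLS.
# Variable names are illustrative placeholders, not the thesis data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical file with columns ses, mastery, cesd

# Step 1: the predictor (SES) must be associated with the outcome (CES-D score).
step1 = smf.ols("cesd ~ ses", data=df).fit()

# Step 2: the predictor must be associated with the proposed mediator (mastery).
step2 = smf.ols("mastery ~ ses", data=df).fit()

# Step 3: with the mediator in the model, its coefficient should be significant
# and the SES coefficient should shrink (partial) or vanish (full mediation).
step3 = smf.ols("cesd ~ ses + mastery", data=df).fit()

attenuation = step1.params["ses"] - step3.params["ses"]
print(step1.params["ses"], step3.params["ses"], attenuation)
```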

Relevance: 100.00%

Abstract:

Background: Gender differences in cycling are well-documented. However, most analyses of gender differences make broad comparisons, with few studies modelling male and female cycling patterns separately for recreational and transport cycling. This modelling is important in order to improve efforts to promote cycling to women and men in countries like Australia with low rates of transport cycling. The main aim of this study was to examine gender differences in cycling patterns and in motivators and constraints to cycling, separately for recreational and transport cycling.

Methods: Adult members of a Queensland, Australia, community bicycling organization completed an online survey about their cycling patterns; cycling purposes; and personal, social and perceived environmental motivators and constraints (47% response rate). Closed and open-ended questions were completed. Using the quantitative data, multivariable linear, logistic and ordinal regression models were used to examine associations between gender and cycling patterns, motivators and constraints. The qualitative data were thematically analysed to expand upon the quantitative findings.

Results: In this sample of 1862 bicyclists, men were more likely than women to cycle for recreation and for transport, and they cycled for longer. Most transport cycling was for commuting, with men more likely than women to commute by bicycle. Men were more likely to cycle on-road, and women off-road. However, most men and women preferred not to cycle on-road without designated bicycle lanes, and the qualitative data indicated a strong preference by men and women for bicycle-only off-road paths. Both genders reported personal factors (health- and enjoyment-related) as motivators for cycling, although women were more likely to agree that other personal, social and environmental factors were also motivating. The main constraints for both genders and both cycling purposes were perceived environmental factors related to traffic conditions, motorist aggression and safety. Women, however, reported more constraints, and were more likely to report other environmental factors and personal factors as constraints.

Conclusion: Differences found in men's and women's cycling patterns, motivators and constraints should be considered in efforts to promote cycling, particularly in efforts to increase cycling for transport.
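The sketch below illustrates one of the model types named in the Methods above (a multivariable logistic regression of bicycle commuting on gender, adjusting for age); the file and column names are assumptions, and the study's actual covariates are not reproduced here.

```python
# Illustrative multivariable logistic regression, one of the model families
# mentioned in the abstract. Column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cycling_survey.csv")  # hypothetical: commutes_by_bike (0/1), gender, age

model = smf.logit("commutes_by_bike ~ C(gender) + age", data=df).fit()
print(model.summary())

# Exponentiated coefficients are odds ratios, e.g. the odds of commuting by
# bicycle for men relative to women, holding age constant.
print(np.exp(model.params))
```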

Relevance: 100.00%

Abstract:

Despite its potential multiple contributions to sustainable policy objectives, urban transit is generally not widely used by the public: its market share is low compared to that of automobiles, particularly in affluent societies with low-density urban forms such as Australia. Transit service providers need to attract more people to transit by improving transit quality of service. The key to cost-effective transit service improvements lies in accurate evaluation of policy proposals that takes into account their impacts on transit users. If transit providers knew what is more or less important to their customers, they could focus their efforts on optimising customer-oriented service. Policy interventions could also be specified to influence transit users' travel decisions, with targets of customer satisfaction and broader community welfare. This significance motivates the research reported here into the relationship between urban transit quality of service and users' perceptions and behaviour. The research focused on two dimensions of transit users' travel behaviour: route choice and access arrival time choice. The study area chosen was a busy urban transit corridor linking the Brisbane central business district (CBD) and the St. Lucia campus of The University of Queensland (UQ). This multi-system corridor provided a 'natural experiment' for transit users between the CBD and UQ, who can choose between busway route 109 (with grade-separated exclusive right-of-way), ordinary on-street bus route 412, and the CityCat linear fast ferry on the Brisbane River. The population of interest was defined as attendees at UQ who travelled from the CBD or from a suburb via the CBD. Two waves of internet-based self-completion questionnaire surveys were conducted to collect data on sampled passengers' perceptions of transit service quality and their behaviour in using public transit in the study area. The first-wave survey collected behaviour and attitude data on respondents' daily transit usage and their direct importance ratings of factors of route-level transit quality of service. A series of statistical analyses was conducted to examine the relationships between transit users' travel and personal characteristics and their transit usage characteristics. A factor-cluster segmentation procedure was applied to respondents' importance ratings of service quality variables regarding transit route preference, to explore users' various perspectives on transit quality of service. Based on the perceptions of service quality collected from the second-wave survey, a series of quality criteria for the transit routes under study was quantitatively measured, in particular travel time reliability in terms of schedule adherence. It was shown that mixed traffic conditions and peak-period effects can affect transit service reliability. Multinomial logit models of transit users' route choice were estimated using route-level service quality perceptions collected in the second-wave survey. The relative importance of service quality factors was derived from the choice models' significant parameter estimates, such as access and egress times, seat availability, and the busway system. The parameter estimates were interpreted, in particular the equivalent in-vehicle time of access and egress times, and of busway in-vehicle time. Market segmentation by trip origin was applied to investigate the difference in magnitude between the parameter estimates of access and egress times. The significant costs of transfers in transit trips were highlighted.

These importance ratios were applied back to the quality perceptions collected as revealed-preference (RP) data to compare satisfaction levels across service attributes and to generate an action relevance matrix that prioritises attributes for quality improvement. An empirical study of the relationship between average passenger waiting time and transit service characteristics was performed using the perceived service quality. Passenger arrivals for services with long headways (over 15 minutes) were found to be clearly coordinated with the scheduled departure times of transit vehicles in order to reduce waiting time. This motivated further investigation and modelling innovations in passengers' access arrival time choice and its relationships with transit service characteristics and average passenger waiting time. Specifically, original contributions were made in the formulation of expected waiting time; in the analysis of risk-averse attitudes to missing the desired service run in passengers' access arrival time choice; and in extensions of the utility function specification for modelling the passenger access arrival distribution, using expected utility forms and non-linear probability weighting to explicitly accommodate the risk of missing an intended service and passengers' risk aversion. Discussion of this research's contributions to knowledge, its limitations, and recommendations for future research is provided in the concluding section of this thesis.
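A minimal sketch of the route-choice reasoning described above follows: multinomial logit choice probabilities over the three corridor options, and the "equivalent in-vehicle time" of access time obtained as a ratio of coefficients. The coefficient values and route attributes are placeholders, not the thesis estimates.

```python
# Multinomial logit (MNL) route choice and coefficient-ratio interpretation.
# All numbers below are illustrative placeholders.
import numpy as np

beta = {"in_vehicle": -0.05, "access": -0.12, "egress": -0.10, "busway": 0.6}  # hypothetical

def utility(route):
    """Deterministic utility of a route described by its attribute dict (minutes)."""
    return (beta["in_vehicle"] * route["in_vehicle"]
            + beta["access"] * route["access"]
            + beta["egress"] * route["egress"]
            + beta["busway"] * route["is_busway"])

routes = {
    "busway_109": {"in_vehicle": 18, "access": 5, "egress": 4, "is_busway": 1},
    "bus_412":    {"in_vehicle": 25, "access": 3, "egress": 4, "is_busway": 0},
    "citycat":    {"in_vehicle": 30, "access": 8, "egress": 6, "is_busway": 0},
}
v = np.array([utility(r) for r in routes.values()])
probs = np.exp(v) / np.exp(v).sum()          # MNL choice probabilities
print(dict(zip(routes, probs.round(3))))

# Equivalent in-vehicle time: one minute of access time is "worth"
# beta_access / beta_in_vehicle minutes of in-vehicle time.
print(beta["access"] / beta["in_vehicle"])   # 2.4 with these placeholder values
```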

Relevance: 100.00%

Abstract:

Objective: The present paper reports on a quality improvement activity examining implementation of A Better Choice Healthy Food and Drink Supply Strategy for Queensland Health Facilities (A Better Choice). A Better Choice is a policy to increase the supply and promotion of healthy foods and drinks and decrease the supply and promotion of energy-dense, nutrient-poor choices in all food supply areas, including food outlets, staff dining rooms, vending machines, tea trolleys, coffee carts, leased premises, catering, fundraising, promotion and advertising.

Design: An online survey targeted 278 facility managers to collect self-reported quantitative and qualitative data. Telephone interviews were sought concurrently with the twenty-five A Better Choice district contact officers to gather qualitative information.

Setting: Public sector-owned and -operated health facilities in Queensland, Australia.

Subjects: One hundred and thirty-four facility managers and twenty-four district contact officers participated, with response rates of 48.2% and 96.0%, respectively.

Results: Of the facility managers, 78.4% reported implementation of more than half of the A Better Choice requirements, including 24.6% who reported full strategy implementation. Reported implementation was highest in food outlets, staff dining rooms, tea trolleys, coffee carts, internal catering and drink vending machines. Reported implementation was more problematic in snack vending machines, external catering, leased premises and fundraising.

Conclusions: Despite methodological challenges, the study suggests that policy approaches to improving the food and drink supply can be implemented successfully in public-sector health facilities, although results can be limited in some areas. A Better Choice may provide a model for improving the food supply in other health and workplace settings.

Relevance: 100.00%

Abstract:

Background: Multilevel and spatial models are increasingly being used to obtain substantive information on area-level inequalities in cancer survival. Multilevel models assume independent geographical areas, whereas spatial models explicitly incorporate geographical correlation, often via a conditional autoregressive prior. However, the relative merits of these methods for large population-based studies have not been explored. Using a case-study approach, we report on the implications of using multilevel and spatial survival models to study geographical inequalities in all-cause survival.

Methods: Multilevel discrete-time and Bayesian spatial survival models were used to study geographical inequalities in all-cause survival for a population-based colorectal cancer cohort of 22,727 cases aged 20-84 years diagnosed during 1997-2007 in Queensland, Australia.

Results: Both approaches were viable on this large dataset and produced similar estimates of the fixed effects. After adding area-level covariates, the between-area variability in survival from the multilevel discrete-time models was no longer significant. Spatial inequalities in survival were also markedly reduced after adjusting for aggregated area-level covariates. Only the multilevel approach, however, provided an estimate of the contribution of geographical variation to the total variation in survival between individual patients.

Conclusions: With little difference observed between the two approaches in the estimation of fixed effects, multilevel models should be favoured if there is a clear hierarchical data structure and measuring the independent impact of individual- and area-level effects on survival differences is of primary interest. Bayesian spatial analyses may be preferred if spatial correlation between areas is important and if the priority is to assess small-area variations in survival and map spatial patterns. Both approaches can be readily fitted to geographically enabled survival data from international settings.
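To make the "discrete-time survival" setup concrete, the sketch below expands each case into person-period rows and fits a logit on the event indicator; the file and column names are assumptions, and the published multilevel models additionally include an area-level random intercept and individual- and area-level covariates, which are omitted here.

```python
# Sketch of a discrete-time survival model: one row per case per follow-up year,
# with death in that year modelled by logistic regression.
# Column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

cases = pd.read_csv("cohort.csv")  # hypothetical: id, area, followup_years (int), died (0/1)

rows = []
for _, c in cases.iterrows():
    for year in range(1, int(c["followup_years"]) + 1):
        died_now = int(c["died"] == 1 and year == c["followup_years"])
        rows.append({"id": c["id"], "area": c["area"], "year": year, "event": died_now})
person_period = pd.DataFrame(rows)

# Baseline hazard varies by follow-up year; the multilevel version would add an
# area-level random intercept on top of this pooled specification.
model = smf.logit("event ~ C(year)", data=person_period).fit()
print(model.summary())
```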

Relevance: 100.00%

Abstract:

This thesis presents an interdisciplinary analysis of how models and simulations function in the production of scientific knowledge. The work is informed by three scholarly traditions: studies on models and simulations in the philosophy of science, so-called micro-sociological laboratory studies within science and technology studies, and cultural-historical activity theory. Methodologically, I adopt a naturalist epistemology and combine philosophical analysis with a qualitative, empirical case study of infectious-disease modelling. The study maintains a dual perspective throughout the analysis: it specifies the modelling practices and examines the models as objects of research. The research questions addressed are: 1) How are models constructed, and what functions do they have in the production of scientific knowledge? 2) What is interdisciplinarity in model construction? 3) How do models become a general research tool, and why is this process problematic? The core argument is that mediating models, as investigative instruments (cf. Morgan and Morrison 1999), take questions as a starting point, and hence their construction is intentionally guided. This argument applies the interrogative model of inquiry (e.g., Sintonen 2005; Hintikka 1981), which conceives of all knowledge acquisition as a process of seeking answers to questions. The first question addresses simulation models as Artificial Nature, which is manipulated in order to answer the questions that initiated the model building. This account further develops the "epistemology of simulation" (cf. Winsberg 2003) by showing the interrelatedness of researchers and their objects in the process of modelling. The second question clarifies why interdisciplinary research collaboration is demanding and difficult to maintain. The nature of the impediments to disciplinary interaction is examined by introducing the idea of object-oriented interdisciplinarity, which provides an analytical framework for studying changes in the degree of interdisciplinarity, the tools and research practices developed to support the collaboration, and the mode of collaboration in relation to the historically mutable object of research. As my interest is in models as interdisciplinary objects, the third research question asks how we might characterise these objects, what is typical of them, and what kinds of changes happen in the process of modelling. Here I examine the tension between specified, question-oriented models and more general models, and suggest that the specified models form a group of their own. I call these Tailor-made models, in contrast to the process of building a simulation platform that aims at generalisability and utility for health policy. This tension also underlines the challenge of applying research results (or methods and tools) to discuss and solve problems in decision-making processes.

Relevance: 100.00%

Abstract:

Purpose: To develop three-surface paraxial schematic eyes of different ages and sexes, based on data for 7- and 14-year-old Chinese children from the Anyang Childhood Eye Study.

Methods: Six sets of paraxial schematic eyes were developed: 7-year-old eyes, 7-year-old male eyes, 7-year-old female eyes, 14-year-old eyes, 14-year-old male eyes, and 14-year-old female eyes. Both refraction-dependent and emmetropic eye models were developed, the former using linear dependence of ocular parameters on refraction.

Results: A total of 2059 grade 1 children (58% boys) and 1536 grade 8 children (49% boys) were included, with mean ages of 7.1 ± 0.4 and 13.7 ± 0.5 years, respectively. Changes in these schematic eyes with aging are increased anterior chamber depth, decreased lens thickness, increased vitreous chamber depth, increased axial length, and decreased lens equivalent power. Male schematic eyes have deeper anterior chambers, longer vitreous chamber depths, longer axial lengths, and lower lens equivalent powers than female schematic eyes. Changes in the schematic eyes as refraction becomes more positive are decreased anterior chamber depth, increased lens thickness, decreased vitreous chamber depth, decreased axial length, increased corneal radius of curvature, and increased lens power. In general, the emmetropic schematic eyes have biometric parameters similar to those arising from the regression fits for the refraction-dependent schematic eyes.

Conclusions: These paraxial schematic eyes of Chinese children may be useful for myopia research and for facilitating comparison with other children of the same or different racial backgrounds living in different places.

Relevance: 100.00%

Abstract:

Objectives: Decision support tools (DSTs) for invasive species management have had limited success in producing convincing results and meeting users' expectations. The problems could be linked to the functional form of the model representing the dynamic relationship between the invasive species and crop yield loss in the DSTs. The objectives of this study were: a) to compile and review the models tested in field experiments and applied in DSTs; and b) to conduct an empirical evaluation of some popular models and alternatives.

Design and methods: This study surveyed the literature and documented strengths and weaknesses of the functional forms of yield loss models. Some widely used models (linear, relative-yield and hyperbolic models) and two potentially useful models (the double-scaled and density-scaled models) were evaluated across a wide range of weed densities, maximum potential yield losses and maximum yield losses per weed.

Results: Popular functional forms include hyperbolic, sigmoid, linear, quadratic and inverse models. Many basic models have been modified to account for the effects of important factors influencing weed-crop interaction (weather, tillage and crop growth stage at weed emergence) and to improve prediction accuracy. This limits their applicability in DSTs, as they become less generalised and often apply to a much narrower range of conditions than would be encountered in the use of DSTs. These factors' effects could be better accounted for using other techniques. Among the models empirically assessed, the linear model is a very simple model that appears to work well at sparse weed densities but produces unrealistic behaviour at high densities. The relative-yield model exhibits expected behaviour at high densities and high levels of maximum yield loss per weed, but probably underestimates yield loss at low to intermediate densities. The hyperbolic model demonstrated reasonable behaviour at lower weed densities, but produced biologically unreasonable behaviour at low rates of loss per weed and high yield loss at the maximum weed density. The density-scaled model is not sensitive to the yield loss at maximum weed density in terms of the number of weeds that produce a given proportion of that maximum yield loss. The double-scaled model appeared to produce more robust estimates of the impact of weeds under a wide range of conditions.

Conclusions: Previously tested functional forms exhibit problems for crop yield loss modelling in DSTs. Of the models evaluated, the double-scaled model exhibits desirable qualitative behaviour under most circumstances.
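For readers unfamiliar with the functional forms compared above, the sketch below writes out two of them: the widely used rectangular-hyperbola (Cousens-type) yield loss model and the simple linear model whose high-density behaviour is criticised in the text. Parameter values are placeholders, and the double-scaled and density-scaled forms are not reproduced here.

```python
# Two of the yield-loss functional forms discussed above (illustrative only).
import numpy as np

def yield_loss_linear(density, loss_per_weed):
    """Percent yield loss growing without bound as weed density increases."""
    return loss_per_weed * density

def yield_loss_hyperbolic(density, loss_per_weed, max_loss):
    """Rectangular hyperbola: initial slope loss_per_weed, asymptote max_loss (%)."""
    return (loss_per_weed * density) / (1.0 + loss_per_weed * density / max_loss)

densities = np.array([1, 5, 20, 100, 400])      # weeds per square metre (illustrative)
print(yield_loss_linear(densities, loss_per_weed=0.5))
print(yield_loss_hyperbolic(densities, loss_per_weed=0.5, max_loss=60.0))
# The hyperbolic curve saturates at max_loss, matching the "reasonable behaviour
# at lower weed densities" noted above, while the linear form eventually
# predicts yield losses exceeding 100%.
```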

Relevance: 100.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
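The sketch below illustrates the adaptive loop described above in its simplest form: keep a posterior over candidate theories, update it after each observed choice, and pick the next test by a myopic criterion. For brevity it uses expected information gain as the selection rule; the thesis' BROAD procedure uses the EC2 objective, which is not reproduced here, and all numbers are placeholders.

```python
# Adaptive test selection with Bayesian updating over theories (illustrative).
import numpy as np

def update_posterior(prior, predicted_choice_probs, observed_choice):
    """Bayes update over theories given one binary choice (0 or 1)."""
    likelihood = predicted_choice_probs if observed_choice == 1 else 1 - predicted_choice_probs
    post = prior * likelihood
    return post / post.sum()

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def expected_info_gain(prior, predicted_choice_probs):
    """Expected reduction in entropy over theories from running one test."""
    p_choose_1 = (prior * predicted_choice_probs).sum()
    post1 = update_posterior(prior, predicted_choice_probs, 1)
    post0 = update_posterior(prior, predicted_choice_probs, 0)
    return entropy(prior) - (p_choose_1 * entropy(post1) + (1 - p_choose_1) * entropy(post0))

# Three hypothetical theories and two candidate tests; entries are each theory's
# predicted probability (including response noise) of choosing option 1.
prior = np.array([1 / 3, 1 / 3, 1 / 3])
tests = {"test_A": np.array([0.9, 0.5, 0.2]), "test_B": np.array([0.6, 0.55, 0.5])}

best = max(tests, key=lambda t: expected_info_gain(prior, tests[t]))
prior = update_posterior(prior, tests[best], observed_choice=1)  # subject chose option 1
print(best, prior.round(3))
```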

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the chosen lotteries. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice, and because we do not find any signatures of it in our data.
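To show how the theory classes above evaluate the same gamble differently, the sketch below values one mixed lottery under expected value, CRRA expected utility, and a prospect-theory specification. The functional forms and parameter values are standard textbook choices (e.g. Tversky-Kahneman-style weighting), not the estimates from this experiment, and the payoffs are hypothetical.

```python
# Valuing one lottery under three theory classes (illustrative parameters).
import numpy as np

outcomes = np.array([40.0, -20.0])   # hypothetical payoffs in dollars
probs = np.array([0.5, 0.5])

def expected_value(x, p):
    return (p * x).sum()

def crra_utility(wealth, rho=0.7):
    """Constant relative risk aversion utility (rho != 1)."""
    return wealth ** (1 - rho) / (1 - rho)

def crra_value(x, p, initial_wealth=100.0, rho=0.7):
    """Expected CRRA utility of final wealth given an initial endowment."""
    return (p * crra_utility(initial_wealth + x, rho)).sum()

def prospect_value(x, p, alpha=0.88, lam=2.25, gamma=0.61):
    """Prospect-theory value: power value function with loss aversion lam
    and a one-parameter probability-weighting function."""
    w = p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))
    v = np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** alpha)
    return (w * v).sum()

print(expected_value(outcomes, probs))
print(crra_value(outcomes, probs))
print(prospect_value(outcomes, probs))
```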

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
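For reference, the discount functions being compared above can be written out explicitly; the sketch below uses common textbook parameterisations and placeholder parameter values, not the experiment's specifications or estimates.

```python
# Discount functions compared in the time-preference experiment (illustrative).
import numpy as np

def exponential(t, delta=0.95):
    return delta ** t

def hyperbolic(t, k=0.1):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    """Present-bias model: full weight today, a one-off penalty beta afterwards."""
    return np.where(t == 0, 1.0, beta * delta ** t)

def generalized_hyperbolic(t, alpha=1.0, beta=2.0):
    """Loewenstein-Prelec form; nests exponential and hyperbolic as limiting cases."""
    return (1.0 + alpha * t) ** (-beta / alpha)

t = np.arange(0, 13)  # delay in months (illustrative)
for name, d in [("exp", exponential), ("hyp", hyperbolic),
                ("quasi-hyp", quasi_hyperbolic), ("gen-hyp", generalized_hyperbolic)]:
    print(name, np.round(d(t), 3))

# A smaller-sooner payoff x_s at delay t_s is preferred to a larger-later
# payoff x_l at t_l exactly when x_s * D(t_s) > x_l * D(t_l).
```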

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from those predicted by the standard rational model. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone explains. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
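The stylised sketch below conveys the idea behind this field test: a logit demand model whose utility is reference-dependent, so a return to full price after a discount is read as a loss and shifts demand toward the substitute. It is an illustration only, not the retailer model or its estimates, and every number is a placeholder.

```python
# Reference-dependent (loss-averse) utility inside a simple logit demand model.
import numpy as np

def reference_dependent_utility(price, ref_price, beta_price=-1.0, eta=0.5, loss_aversion=2.0):
    """Price disutility plus a gain-loss term that weights losses (price above
    the reference price) more heavily than equivalent gains."""
    gain_loss = ref_price - price
    gl_term = np.where(gain_loss >= 0, gain_loss, loss_aversion * gain_loss)
    return beta_price * price + eta * gl_term

def logit_shares(utilities):
    expu = np.exp(utilities - utilities.max())
    return expu / expu.sum()

# Two goods: the focal item and a close substitute (all numbers illustrative).
prices_during = np.array([6.0, 10.0])   # item on discount
prices_after  = np.array([8.0, 10.0])   # discount removed
ref_initial   = np.array([8.0, 10.0])   # reference prices before the discount
ref_adapted   = np.array([6.0, 10.0])   # reference prices after adapting to the discount

print(logit_shares(reference_dependent_utility(prices_during, ref_initial)))  # discount read as a gain
print(logit_shares(reference_dependent_utility(prices_after, ref_adapted)))   # full price read as a loss; share shifts to the substitute
```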

In future work, BROAD could be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 100.00%

Abstract:

In this thesis we are concerned with finding representations of the algebra of SU(3) vector and axial-vector charge densities at infinite momentum (the "current algebra") to describe the mesons, idealizing the real continua of multiparticle states as a series of discrete resonances of zero width. Such representations would describe the masses and quantum numbers of the mesons, the shapes of their Regge trajectories, their electromagnetic and weak form factors, and (approximately, through the PCAC hypothesis) pion emission or absorption amplitudes.

We assume that the mesons have internal degrees of freedom equivalent to being made of two quarks (one an antiquark) and look for models in which the mass is SU(3)-independent and the current is a sum of contributions from the individual quarks. Requiring that the current algebra, as well as conditions of relativistic invariance, be satisfied turns out to be very restrictive, and, in fact, no model has been found which satisfies all requirements and gives a reasonable mass spectrum. We show that using more general mass and current operators but keeping the same internal degrees of freedom will not make the problem any more solvable. In particular, in order for any two-quark solution to exist it must be possible to solve the "factorized SU(2) problem," in which the currents are isospin currents and are carried by only one of the component quarks (as in the K meson and its excited states).

In the free-quark model the currents at infinite momentum are found using a manifestly covariant formalism and are shown to satisfy the current algebra, but the mass spectrum is unrealistic. We then consider a pair of quarks bound by a potential, finding the current as a power series in 1/m, where m is the quark mass. Here it is found impossible to satisfy the algebra and relativistic invariance with the type of potential tried, because the current contributions from the two quarks do not commute with each other to order 1/m³. However, it may be possible to solve the factorized SU(2) problem with this model.

The factorized problem can be solved exactly in the case where all mesons have the same mass, using a covariant formulation in terms of an internal Lorentz group. For a more realistic, nondegenerate mass there is difficulty in covariantly solving even the factorized problem; one model is described which almost works but appears to require particles of spacelike 4-momentum, which seem unphysical.

Although the search for a completely satisfactory model has been unsuccessful, the techniques used here might eventually reveal a working model. There is also a possibility of satisfying a weaker form of the current algebra with existing models.

Relevance: 100.00%

Abstract:

We use a computational homogenisation approach to derive a nonlinear constitutive model for lattice materials. A representative volume element (RVE) of the lattice is modelled by means of discrete structural elements, and macroscopic stress-strain relationships are numerically evaluated after applying appropriate periodic boundary conditions to the RVE. The influence of the choice of RVE on the predictions of the model is discussed. The model has been used for the analysis of hexagonal and triangulated lattices subjected to large strains. The fidelity of the model has been demonstrated by analysing a plate with a central hole under prescribed in-plane compressive and tensile loads, and then comparing the results from the discrete and homogenised models. © 2013 Elsevier Ltd.
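The conceptual sketch below shows the two homogenisation ingredients named in the abstract: a periodic boundary condition that imposes the macroscopic strain on paired RVE boundary nodes, and recovery of the macroscopic stress as a volume average. The structural (beam-element) solver itself is not reproduced, and all numerical values are illustrative placeholders.

```python
# Conceptual pieces of RVE-based computational homogenisation (illustrative).
import numpy as np

def periodic_constraint(x_plus, x_minus, macro_strain):
    """Prescribed jump in displacement between paired boundary nodes:
    u(x+) - u(x-) = E_macro @ (x+ - x-)."""
    return macro_strain @ (x_plus - x_minus)

def macroscopic_stress(element_stresses, element_volumes):
    """Volume average of the element stress tensors over the RVE."""
    element_stresses = np.asarray(element_stresses)
    element_volumes = np.asarray(element_volumes)
    return (element_stresses * element_volumes[:, None, None]).sum(axis=0) / element_volumes.sum()

# Usage with placeholder numbers:
E_macro = np.array([[0.01, 0.0], [0.0, -0.003]])                 # imposed macroscopic strain
jump = periodic_constraint(np.array([1.0, 0.0]), np.array([0.0, 0.0]), E_macro)
sigma_elems = [np.diag([120.0, -30.0]), np.diag([80.0, -10.0])]  # element stresses from a solver (not shown)
vols = [0.4, 0.6]
print(jump)
print(macroscopic_stress(sigma_elems, vols))
```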

Relevance: 100.00%

Abstract:

Conventional hedonic techniques for estimating the value of local amenities rely on the assumption that households move freely among locations. We show that when moving is costly, the variation in housing prices and wages across locations may no longer reflect the value of differences in local amenities. We develop an alternative discrete-choice approach that models the household location decision directly, and we apply it to the case of air quality in US metro areas in 1990 and 2000. Because air pollution is likely to be correlated with unobservable local characteristics such as economic activity, we instrument for air quality using the contribution of distant sources to local pollution, excluding emissions from local sources, which are the most likely to be correlated with local conditions. Our model yields an estimated elasticity of willingness to pay with respect to air quality of 0.34-0.42. These estimates imply that the median household would pay $149-$185 (in constant 1982-1984 dollars) for a one-unit reduction in average ambient concentrations of particulate matter. These estimates are three times greater than the marginal willingness to pay estimated by a conventional hedonic model using the same data. Our results are robust to a range of covariates, instrumenting strategies, and functional-form assumptions. The findings also confirm the importance of instrumenting for local air pollution. © 2009 Elsevier Inc. All rights reserved.
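The arithmetic behind willingness-to-pay figures of this kind is a ratio of estimated coefficients: in a discrete-choice location model, marginal WTP for cleaner air is the air-quality coefficient divided by the income (or price) coefficient. The sketch below uses placeholder coefficients, not the paper's estimates.

```python
# Marginal willingness to pay (MWTP) from discrete-choice coefficients (illustrative).
beta_pm = -0.009       # hypothetical disutility per unit of ambient particulate matter
beta_income = 0.00005  # hypothetical marginal utility of a dollar of annual income

mwtp_per_unit_pm = -beta_pm / beta_income   # dollars per one-unit PM reduction
print(round(mwtp_per_unit_pm, 2))           # 180.0 with these placeholder values

# An elasticity of WTP with respect to air quality can then be formed by scaling
# MWTP by mean air quality and mean WTP or income, depending on the definition used.
```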

Relevance: 100.00%

Abstract:

During recent decades anthropogenic activities have dramatically impacted the Black Sea ecosystem. High levels of riverine nutrient input during the 1970s and 1980s caused eutrophic conditions, including intense algal blooms resulting in hypoxia and the subsequent collapse of benthic habitats on the northwestern shelf. Intense fishing pressure also depleted stocks of many apex predators, contributing to an increase in planktivorous fish that are now the focus of fishing efforts. Additionally, the Black Sea's ecosystem changed even further with the introduction of exotic species. Economic collapse of the surrounding socialist republics in the early 1990s resulted in decreased nutrient loading, which has allowed the Black Sea ecosystem to start to recover, but under rapidly changing economic and political conditions, future recovery is uncertain. In this study we use a multidisciplinary approach to integrate information from socio-economic and ecological systems to model the effects of future development scenarios on the marine environment of the northwestern Black Sea shelf. The Driver-Pressure-State-Impact-Response framework was used to construct conceptual models, explicitly mapping impacts of socio-economic Drivers on the marine ecosystem. Bayesian belief networks (BBNs), a stochastic modelling technique, were used to quantify these causal relationships, operationalise the models and assess the effects of alternative development paths on the Black Sea ecosystem. BBNs use probabilistic dependencies as a common metric, allowing the integration of quantitative and qualitative information. Under the Baseline Scenario, recovery of the Black Sea appears tenuous as the exploitation of environmental resources (agriculture, fishing and shipping) increases with continued economic development of post-Soviet countries. This results in the loss of wetlands through drainage and reclamation. Water transparency decreases as phytoplankton bloom, and this deterioration in water quality leads to the degradation of coastal plant communities (Cystoseira, seagrass) and also Phyllophora habitat on the shelf. Decomposition of benthic plants results in hypoxia, killing flora and fauna associated with these habitats. Ecological pressure from these factors, along with constant levels of fishing activity, results in target stocks remaining depleted. Of the four Alternative Scenarios, two show improvements on the Baseline ecosystem condition, with improved waste water treatment and reduced fishing pressure, while the other two show a worsening, due to increased natural resource exploitation leading to rapid reversal of any recent ecosystem recovery. From this we conclude that variations in economic policy have significant consequences for the health of the Black Sea, and ecosystem recovery is directly linked to socio-economic choices.
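To make the Bayesian belief network idea concrete, the toy sketch below propagates marginal probabilities through a three-node chain (nutrient load, algal bloom, hypoxia) using conditional probability tables; the structure and every probability are illustrative placeholders, not values from the study.

```python
# Toy Bayesian belief network: driver -> pressure -> state, with CPTs.
import numpy as np

p_high_nutrient_load = np.array([0.7, 0.3])      # P(load = high, low) under some scenario (placeholder)
p_bloom_given_load = np.array([[0.8, 0.2],       # P(bloom = yes, no | load = high)
                               [0.3, 0.7]])      # P(bloom = yes, no | load = low)
p_hypoxia_given_bloom = np.array([[0.6, 0.4],    # P(hypoxia = yes, no | bloom = yes)
                                  [0.1, 0.9]])   # P(hypoxia = yes, no | bloom = no)

p_bloom = p_high_nutrient_load @ p_bloom_given_load   # marginal P(bloom)
p_hypoxia = p_bloom @ p_hypoxia_given_bloom           # marginal P(hypoxia)
print(p_bloom.round(3), p_hypoxia.round(3))

# Changing the scenario node (e.g. improved waste-water treatment lowering
# P(load = high)) propagates to the downstream ecosystem-state nodes.
```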

Relevance: 100.00%

Abstract:

Ecosystem models are often assessed using quantitative metrics of absolute ecosystem state, but these model-data comparisons are disproportionately vulnerable to discrepancies in the location of important circulation features. An alternative method is to demonstrate the model's capacity to represent ecosystem function: the emergence of a coherent natural relationship in a simulation indicates that the model may have an appropriate representation of the ecosystem functions that lead to the emergent relationship. Furthermore, because emergent properties are large-scale properties of the system, model validation with emergent properties is possible even when there is very little or no appropriate data for the region under study, or when the hydrodynamic component of the model differs significantly from that observed in nature at the same location and time. A selection of published meta-analyses is used to establish the validity of a complex marine ecosystem model and to demonstrate the power of validation with emergent properties. These relationships include the phytoplankton community structure, the ratio of carbon to chlorophyll in phytoplankton and particulate organic matter, the ratio of particulate organic carbon to particulate organic nitrogen, and the stoichiometric balance of the ecosystem. These metrics can also inform aspects of the marine ecosystem model not available from traditional quantitative and qualitative methods. For instance, these emergent properties can be used to validate the design decisions of the model, such as the range of phytoplankton functional types and their behaviour, the stoichiometric flexibility with regard to each nutrient, and the choice of fixed or variable carbon-to-nitrogen ratios.