939 results for nonparametric demand model
Abstract:
The aim of this dissertation is to examine, model and estimate firm responses to demand shocks by focusing on specific industries where demand shocks are well identified. Combining reduced-form evidence and structural analysis, this dissertation extends the economics literature by focusing on within-firm responses to two important demand shocks that are identifiable in empirical settings. First, I focus on how firms respond to a decrease in effective demand due to competition shocks arising from globalization. Considering China's accession to the World Trade Organization in 2001 and its impact on the apparel industry, these chapters ask how firms react to the increase in Chinese import competition, what mechanism lies behind these responses, and how important these responses are in explaining the survival of the Peruvian apparel industry. Second, I study how suppliers' survival probability relates to the sudden disruption of their main customer-supplier relationships with downstream manufacturers, conditional on suppliers' own idiosyncratic characteristics such as physical productivity.
Abstract:
Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models as abstractions at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD treats models as first-class artifacts, extending engineers' ability to use concepts from the problem domain of discourse to specify apropos solutions. A key component of MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models into an intermediate artifact in a high-level language (HLL), e.g., Java or C++, and then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM that transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment of resources. At the onset of this research, only one i-DSML had been created using the aforementioned approach, for the user-centric communication domain. This i-DSML is the Communication Modeling Language (CML) and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK as swappable framework extensions. This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid (microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
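As a hedged illustration of the decoupling idea described in this abstract (the class and method names below are hypothetical, not drawn from the CVM or any DSVM code base), a generic model of execution can delegate every domain decision to a swappable domain-specific knowledge extension:

```python
from abc import ABC, abstractmethod


class DomainSpecificKnowledge(ABC):
    """Swappable extension holding the domain semantics (the DSK)."""

    @abstractmethod
    def interpret_change(self, change: dict) -> list:
        """Map a single model change to domain-level commands."""


class CommunicationDSK(DomainSpecificKnowledge):
    def interpret_change(self, change):
        # e.g. a new connection in a CML-like model becomes a connection command
        return [f"createConnection({change['target']})"]


class MicrogridDSK(DomainSpecificKnowledge):
    def interpret_change(self, change):
        # e.g. a load change in an energy model becomes a load-management command
        return [f"adjustLoad({change['target']})"]


class GenericModelOfExecution:
    """Generic MoE: detects model changes and delegates their meaning to the DSK."""

    def __init__(self, dsk):
        self.dsk = dsk

    def synthesize(self, old_model, new_model):
        script = []
        for key, value in new_model.items():
            if old_model.get(key) != value:  # domain-independent change detection
                script += self.dsk.interpret_change({"target": key, "value": value})
        return script


# The same synthesis machinery is reused across domains by swapping the DSK.
comm_engine = GenericModelOfExecution(CommunicationDSK())
grid_engine = GenericModelOfExecution(MicrogridDSK())
print(comm_engine.synthesize({}, {"alice": "connected"}))
print(grid_engine.synthesize({"heater": "on"}, {"heater": "off"}))
```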
Abstract:
English has been taught as a core and compulsory subject in China for decades. Recently, the demand for English in China has increased dramatically, and China now has the world's largest English-learning population. The traditional English-teaching method cannot continue to be the only approach because it focuses merely on reading, grammar and translation, which cannot meet English learners' and users' needs (i.e., communicative competence and skills in speaking and writing). This study investigated whether the Picture-Word Inductive Model (PWIM), a pedagogical method using pictures and inductive thinking, would benefit English learners in China in terms of potentially higher output in speaking and writing. Through the lens of Cognitive Load Theory (CLT), specifically its redundancy effect, I also investigated whether processing words and a picture concurrently would present a cognitive overload for English learners in China. I conducted a mixed-methods research study. A quasi-experiment (pretest, seven-week intervention, and posttest) was conducted with 234 students in four groups in Lianyungang, China (58 fourth graders and 57 seventh graders as an experimental group taught with PWIM, and 59 fourth graders and 60 seventh graders as a control group taught with the traditional method). No significant difference in the effect of PWIM on vocabulary acquisition was found between grade levels. Observations, questionnaires with open-ended questions, and interviews were used to answer the three remaining research questions. A few students felt cognitively overloaded when they encountered too many writing samples, too many new words at one time, repeated words, mismatches between words and pictures, and so on. Many students listed and exemplified numerous strengths of PWIM, while a few mentioned weaknesses. The students expressed the view that PWIM had a positive effect on their English teaching. As integrated inferences, the qualitative findings were used to explain the quantitative result that there were no significant differences in the effects of PWIM between the experimental and control groups at either grade level, in terms of four contextual aspects: time constraints on PWIM implementation, teachers' resistance, how PWIM was used, and the implementation of PWIM in classes of more than 55 students.
Abstract:
This paper develops a simple model of the post-secondary education system in Canada that provides a useful basis for thinking about issues of capacity and access. It uses a supply-demand framework, where demand comes from individuals wanting places in the system, and supply is determined not only by various directives and agreements between education ministries and institutions (and other factors), but also by the money available to universities and colleges through tuition fees. The supply and demand curves are then put together with a stylised tuition-setting rule to describe the "market" for post-secondary schooling. This market determines the number of students in the system and their characteristics, especially as they relate to "ability" and family background, the latter being especially relevant to access issues. Various changes in the system – including tuition fees, student financial aid, government support for institutions, and the returns to schooling – are then discussed in terms of how they affect the number of students and their characteristics, that is, capacity and access.
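As a rough illustrative sketch only, in our own notation rather than the paper's, such a framework can be summarised by a demand curve decreasing in the tuition fee, a supply curve increasing in total institutional funding, and a stylised rule that sets the fee:

```latex
\begin{align*}
  D(f) &= \#\{\, i : B_i(f) \ge 0 \,\}, & \frac{\partial D}{\partial f} &< 0
    && \text{(individuals wanting places at fee } f\text{)}\\
  S(f) &= S\bigl(G + f N\bigr),          & \frac{\partial S}{\partial f} &> 0
    && \text{(places funded by grants } G \text{ plus fee revenue } fN\text{)}\\
  f &= \phi(G,\ \text{policy}),          & N &= \min\{D(f),\, S(f)\}
    && \text{(fee set by rule; enrolment is the short side)}
\end{align*}
```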
Abstract:
Purpose: The purpose of the study is to review studies published from 2007 to 2015 on tourism and hotel demand modeling and forecasting, with a view to identifying emerging topics and methods and to pointing out future research directions in the field. Design/methodology/approach: Articles on tourism and hotel demand modeling and forecasting published in both Science Citation Index (SCI) and Social Science Citation Index (SSCI) journals were identified and analyzed. Findings: The review found that studies focused on hotel demand are relatively scarcer than those on tourism demand. It is also observed that more and more studies have moved away from aggregate tourism demand analysis, while disaggregate markets and niche products have attracted increasing attention. Some studies have gone beyond neoclassical economic theory to seek additional explanations of the dynamics of tourism and hotel demand, such as environmental factors, tourist online behavior and consumer confidence indicators, among others. More sophisticated techniques such as nonlinear smooth transition regression, mixed-frequency modeling and nonparametric singular spectrum analysis have also been introduced to this research area. Research limitations/implications: The main limitation of this review is that the articles included cover only the English-language literature. Future reviews of this kind should also include articles published in other languages. The review provides a useful guide for researchers interested in future research on tourism and hotel demand modeling and forecasting. Practical implications: This review provides important suggestions and recommendations for improving the efficiency of tourism and hospitality management practices. Originality/value: The value of this review is that it identifies the current trends in tourism and hotel demand modeling and forecasting research and points out future research directions.
Abstract:
Pulsatile, or "on-demand", delivery systems have the capability to deliver a therapeutic molecule at the right time/site of action and in the right amount (1). Pulsatile delivery systems present multiple benefits over conventional dosage forms and provide higher patient compliance. The combination of stimuli-responsive materials with the drug delivery capabilities of hydrogel-forming MN arrays (2) opens an interesting area of research. In the present work we describe a stimuli-responsive hydrogel-forming microneedle (MN) array that enables delivery of a clinically relevant model drug (ibuprofen) upon application of UV radiation (Figure 1A). MN arrays were prepared by a micromolding technique using a polymer prepared from 2-hydroxyethyl methacrylate (HEMA) and ethylene glycol dimethacrylate (EGDMA) (Figure 1B). The arrays were loaded with up to 5% (w/w) ibuprofen included in a light-responsive conjugate (3,5-dimethoxybenzoin conjugate) (2). The presence of the conjugate inside the MN arrays was confirmed by Raman spectroscopy measurements. MN arrays were tested in vitro, showing that they were able to deliver up to three doses of 50 mg of ibuprofen after application of an optical trigger (wavelength of 365 nm) over a long period of time (up to 160 hours) (Figure 1C and 1D). The work presented here is a proof of concept, and a modified version of the system should be used in practice, as UV radiation is known to be the major etiologic agent in the development of skin cancers. Consequently, for future applications of this technology an alternative design should be developed. Based on previous research dealing with hydrogel-forming MN arrays, a suitable strategy would be to use hydrogel-forming MN arrays containing a backing layer, made with the material described in this work, as the drug reservoir (2). Finally, a porous layer of a material that blocks UV radiation should be included between the MN array and the drug reservoir, so that radiation can be applied to the system without reaching the skin surface. After such modification, the system described here offers interesting properties as an "on-demand" release system for prolonged periods of time. This technology has potential for use in "on-demand" delivery of a wide range of drugs in a variety of applications relevant to enhanced patient care.
Abstract:
Insect pollination underpins apple production, but the extent to which different pollinator guilds supply this service, particularly across different apple varieties, is unknown. Such information is essential if appropriate orchard management practices are to be targeted and proportional to the potential benefits pollinator species may provide. Here we use a novel combination of pollinator effectiveness assays (floral visit effectiveness), orchard field surveys (flower visitation rate) and pollinator dependence manipulations (pollinator exclusion experiments) to quantify the supply of pollination services provided by four different pollinator guilds to the production of four commercial varieties of apple. We show that not all pollinators are equally effective at pollinating apples, with hoverflies being less effective than solitary bees and bumblebees, and that the relative abundance of different pollinator guilds visiting apple flowers varies significantly among varieties. Based on this, the taxa-specific economic benefits to UK apple production have been established. The contribution of insect pollinators to the economic output of all varieties was estimated to be £92.1M across the UK, with contributions varying widely across taxa: solitary bees (£51.4M), honeybees (£21.4M), bumblebees (£18.6M) and hoverflies (£0.7M). This research highlights the differences in the economic benefits of four insect pollinator guilds to four major apple varieties in the UK. This information is essential to underpin appropriate investment in pollination service management and provides a model that can be used in other entomophilous crops to improve our understanding of crop pollination ecology.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
One of the important problems in machine learning is determining the complexity of the model to be learned. Too much complexity leads to overfitting, which corresponds to finding structures that do not actually exist in the data, while too little complexity leads to underfitting, meaning that the expressiveness of the model is insufficient to capture all of the structures present in the data. For some probabilistic models, model complexity translates into the introduction of one or more latent variables whose role is to explain the generative process of the data. Various approaches exist for identifying the appropriate number of latent variables in a model. This thesis focuses on Bayesian nonparametric methods for determining the number of latent variables to use as well as their dimensionality. The popularization of Bayesian nonparametric statistics within the machine learning community is fairly recent. Their main appeal comes from the fact that they offer highly flexible models whose complexity adjusts in proportion to the amount of available data. In recent years, research on Bayesian nonparametric learning methods has focused on three main aspects: the construction of new models, the development of inference algorithms, and applications. This thesis presents our contributions to these three research topics in the context of learning latent variable models. First, we introduce the Pitman-Yor process mixture of Gaussians, a model for learning infinite mixtures of Gaussians. We also present an inference algorithm for discovering the latent components of the model, which we evaluate on two concrete robotics applications. Our results show that the proposed approach outperforms classical learning approaches in performance and flexibility. Second, we propose the extended cascading Indian buffet process, a model serving as a prior probability distribution over the space of directed acyclic graphs. In the context of Bayesian networks, this prior makes it possible to identify both the presence of latent variables and the network structure among them. A Markov chain Monte Carlo inference algorithm is used for evaluation on structure identification and density estimation problems. Finally, we propose the Indian chefs process, a model more general than the extended cascading Indian buffet process for learning graphs and orders. The advantage of the new model is that it admits connections between observable variables and takes the order of the variables into account. We present a reversible-jump Markov chain Monte Carlo inference algorithm for jointly learning graphs and orders. The evaluation is carried out on density estimation and independence testing problems. This model is the first Bayesian nonparametric model capable of learning Bayesian networks with completely arbitrary structures.
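As a loosely related, hedged illustration (scikit-learn implements a truncated Dirichlet process, the zero-discount special case of the Pitman-Yor process, rather than the thesis's own models and samplers), an "infinite" Gaussian mixture whose effective number of components adapts to the data can be fit as follows:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Toy data: two well-separated Gaussian clusters in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=[-3.0, 0.0], scale=0.5, size=(200, 2)),
    rng.normal(loc=[+3.0, 0.0], scale=0.5, size=(200, 2)),
])

# Truncated Dirichlet-process mixture: n_components is only an upper bound;
# the stick-breaking prior prunes components that the data do not support.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(X)

# Count components that carry non-negligible weight.
effective_k = np.sum(dpgmm.weights_ > 0.01)
print("effective number of components:", effective_k)
```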
Abstract:
We present an IP-based nonparametric (revealed preference) testing procedure for rational consumption behavior in terms of general collective models, which include consumption externalities and public consumption. An empirical application to data drawn from the Russia Longitudinal Monitoring Survey (RLMS) demonstrates the practical usefulness of the procedure. Finally, we present extensions of the testing procedure to evaluate the goodness-of-fit of the collective model subject to testing, and to quantify and improve the power of the corresponding collective rationality tests.
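For orientation only, the sketch below implements the standard unitary (single-decision-maker) revealed preference test, GARP, which collective rationality tests of the kind described above generalize; the function name and toy data are ours, not the paper's.

```python
import numpy as np


def satisfies_garp(prices: np.ndarray, quantities: np.ndarray) -> bool:
    """Check the Generalized Axiom of Revealed Preference for T observations.

    prices, quantities: arrays of shape (T, n_goods).
    """
    expenditure = prices @ quantities.T      # expenditure[t, s] = p_t . q_s
    own = np.diag(expenditure)               # p_t . q_t

    # Direct revealed preference: q_t R q_s if p_t.q_t >= p_t.q_s.
    R = expenditure <= own[:, None]

    # Transitive closure (Floyd-Warshall on the boolean relation).
    for k in range(prices.shape[0]):
        R = R | (R[:, [k]] & R[[k], :])

    # Violation: q_t is (indirectly) revealed preferred to q_s while q_t is
    # strictly cheaper than q_s at prices p_s, i.e. p_s.q_t < p_s.q_s.
    strictly_cheaper = expenditure.T < own[None, :]
    return not np.any(R & strictly_cheaper)


# Tiny consistent example: neither bundle is revealed preferred to the other.
p = np.array([[1.0, 2.0], [2.0, 1.0]])
q = np.array([[3.0, 1.0], [1.0, 3.0]])
print(satisfies_garp(p, q))  # True
```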
Abstract:
In this dissertation I quantify residential behavioral responses to interventions designed to reduce electricity demand at different periods of the day. In the first chapter, I examine the effect of information provision coupled with bimonthly billing, monthly billing, and in-home displays, as well as a time-of-use (TOU) pricing scheme, on consumption over each month of the Irish Consumer Behavior Trial. I find that time-of-use pricing with real-time usage information reduces electricity usage by up to 8.7 percent during peak times at the start of the trial, but the effect decays over the first three months, after which the in-home display group is indistinguishable from the monthly treatment group. Monthly and bimonthly billing treatments are not found to be statistically different from one another. These findings suggest that increasing billing reports to the monthly level may be more cost effective than providing in-home displays for electricity generators who wish to decrease expenses and consumption. In the following chapter, I examine the response of residential households after exposure to time-of-use tariffs at different hours of the day. I find that these treatments reduce electricity consumption during peak hours by almost four percent, significantly lowering demand. Within the model, I find evidence of overall conservation in electricity used. In addition, weekday peak reductions appear to carry over to the weekend when peak pricing is not present, suggesting changes in consumer habits. The final chapter of my dissertation imposes a system-wide time-of-use plan to analyze the potential reduction in carbon emissions from load shifting, based on the Ireland and Northern Ireland Single Electricity Market. I find that CO2 emission savings are highest during the winter months, when load demand is highest and dirtier power plants are scheduled to meet peak demand. TOU pricing allows usage to shift from peak to off-peak periods, and this shifted load can be met with cleaner and cheaper electricity generated from imports, high-efficiency gas units, and hydro units.
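A hedged sketch of the kind of treatment-effect regression such trial data typically support follows; the variable names, the hypothetical input file, and the simple two-way specification are illustrative assumptions, not the dissertation's exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical household-period panel with columns:
#   household_id - household identifier
#   kwh_peak     - electricity use during peak hours
#   tou          - 1 if the household faces the time-of-use tariff
#   post         - 1 for periods after the tariff is introduced
#   ihd          - 1 if the household has an in-home display
df = pd.read_csv("trial_panel.csv")

# Difference-in-differences: the tou:post coefficient is the peak-usage effect.
model = smf.ols("kwh_peak ~ tou * post + ihd", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["household_id"]}
)
print(model.summary())
```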
Abstract:
The share of variable renewable energy in electricity generation has grown exponentially during recent decades, and due to the heightened pursuit of environmental targets, the trend is set to continue at an increased pace. The two most important resources, wind and insolation, both bear the burden of intermittency, creating a need for regulation and posing a threat to grid stability. One possibility for dealing with the imbalance between demand and generation is to store electricity temporarily, which was addressed in this thesis by implementing a dynamic model of adiabatic compressed air energy storage (CAES) in the Apros dynamic simulation software. Based on a literature review, the existing models were found, owing to their simplifications, insufficient for studying transient situations, and despite its importance, the investigation of part-load operation has not yet been possible with satisfactory precision. As a key result of the thesis, the cycle efficiency at the design point was simulated to be 58.7%, which correlated well with values reported in the literature and was validated through analytical calculations. The performance at part load was validated against models reported in the literature, showing good agreement. By introducing wind resource and electricity demand data to the model, grid operation of CAES was studied. In order to enable this dynamic operation, start-up and shutdown sequences were approximated in a dynamic environment for, as far as is known, the first time, and a user component for compressor variable guide vanes (VGV) was implemented. Even in its current state, the modularly designed model offers a framework for numerous studies. The validity of the model is limited by the accuracy of the VGV correlations at part load, and in addition the implementation of heat losses in the thermal energy storage is necessary to enable longer simulations. More extended use of forecasts is one of the important development targets if the system operation is to be optimised in the future.
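For context, the cycle (round-trip) efficiency reported above is conventionally defined as the ratio of electrical energy recovered during discharge to the electrical energy consumed during charging; the generic definition below is our notation, not the thesis's Apros implementation:

```latex
\eta_{\text{cycle}}
  = \frac{\displaystyle\int_{\text{discharge}} P_{\text{turbine}}(t)\,\mathrm{d}t}
         {\displaystyle\int_{\text{charge}} P_{\text{compressor}}(t)\,\mathrm{d}t}
  \approx 0.587 \quad \text{at the design point.}
```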
Abstract:
Sediment oxygen demand (SOD) can be a significant oxygen sink in various types of water bodies, particularly slow-moving waters with substantial organic sediment accumulation. In most settings where SOD is a concern, the prevailing hydraulic conditions are such that the impact of sediment resuspension on SOD is not considered. However, in the case of Bubbly Creek in Chicago, Illinois, the prevailing slack-water conditions are interrupted by infrequent intervals of very high flow rates associated with pumped combined sewer overflow (CSO) during intense hydrologic events. These events can cause resuspension of the highly organic, nutrient-rich bottom sediments, resulting in a precipitous drawdown of dissolved oxygen (DO) in the water column. While many past studies have addressed the dependence of SOD on near-bed velocity and bed shear stress prior to the point of sediment resuspension, limited research has attempted to characterize the complex and dynamic phenomenon of resuspended-sediment oxygen demand. To address this issue, a new in situ experimental apparatus, referred to as the U of I Hydrodynamic SOD Sampler, was designed to achieve a broad range of velocities and associated bed shear stresses. This allowed SOD to be analyzed across the spectrum from no sediment resuspension, associated with low velocity/bed shear stress, through full sediment resuspension, associated with high velocity/bed shear stress. The current study split SOD into two separate components: (1) SODNR, the sediment oxygen demand associated with non-resuspension conditions, which is a surface sink calculated using traditional methods to yield a value with units of g/m2/day; and (2) SODR, the oxygen demand associated with resuspension conditions, which is a volumetric sink most accurately characterized using non-traditional methods and units that reflect suspension in the water column (mg/L/day). In the case of resuspension, the suspended sediment concentration was analyzed as a function of bed shear stress, and a formulation was developed to characterize SODR as a function of suspended sediment concentration, in a form similar to first-order biochemical oxygen demand (BOD) kinetics with a Monod DO term. The results obtained are intended to be implemented in a numerical model containing hydrodynamic, sediment transport, and water quality components to yield oxygen demand varying in both space and time for specific flow events. Such implementation will allow evaluation of proposed Bubbly Creek water quality improvement alternatives that take into account the impact of SOD under various flow conditions. Although the findings were based on experiments specific to the conditions in Bubbly Creek, the techniques and formulations developed in this study should be applicable to similar sites.
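The functional form implied for the resuspension component can be written compactly; the following is a hedged reconstruction in our own symbols (first-order kinetics in suspended sediment concentration with a Monod dissolved-oxygen term), not the exact fitted formulation from the study:

```latex
\mathrm{SOD}_{R} \;=\; k_{R}\, C_{ss}\, \frac{\mathrm{DO}}{K_{\mathrm{DO}} + \mathrm{DO}}
\quad \left[\mathrm{mg\,L^{-1}\,day^{-1}}\right],
\qquad C_{ss} = f(\tau_b),
```

where $k_R$ is a first-order rate constant, $C_{ss}$ is the suspended sediment concentration driven by bed shear stress $\tau_b$, and $K_{\mathrm{DO}}$ is the Monod half-saturation constant; $\mathrm{SOD_{NR}}$ remains an areal flux in g/m2/day.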
Abstract:
A self-organising model of macadamia, expressed using L-systems, was used to explore aspects of canopy management. A small set of parameters controls the basic architecture of the model, with a high degree of self-organisation determining the fate and growth of buds. Light was sensed at the leaf level, used to represent vigour, and accumulated basipetally. Buds also sensed light, providing the demand term for the subsequent redistribution of vigour. Empirical relationships were derived from a set of 24 completely digitised trees after conversion to multiscale tree graphs (MTG) and analysis with the OpenAlea software library. The ability to write MTG files was embedded within the model so that various tree statistics could be exported for each run. To explore the parameter space, a series of runs was completed on a high-throughput computing platform; combined with MTG generation and analysis in OpenAlea, this provided a convenient way to explore thousands of simulations. We allowed the model trees to develop through self-organisation and simulated cultural practices such as hedging, topping, removal of the leader and limb removal within a small representation of an orchard. By coupling the model with a path-tracing program to simulate the light environment, the model provides insight into the impact of these practices on the potential for growth and on the light distribution within the canopy and to the orchard floor. The lessons learnt from this will be applied to other evergreen, tropical fruit and nut trees.
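As a toy illustration only (the production rule below is a generic bracketed L-system example, not the macadamia model's actual parameters), L-system rewriting proceeds by repeatedly substituting each symbol with its production:

```python
def rewrite(axiom: str, rules: dict, iterations: int) -> str:
    """Apply bracketed L-system productions; symbols without a rule are copied."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s


# Classic branching example: F draws an internode, [ and ] push/pop turtle state,
# + and - turn. (Illustrative only; the macadamia model layers bud fate, light
# sensing and vigour redistribution on top of this kind of rewriting.)
rules = {"F": "F[+F]F[-F]F"}
print(rewrite("F", rules, 2))
```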
Abstract:
BACKGROUND: Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model of physicians' decisions on where to locate their offices is developed, covering demand-side factors and a consumption-time function. METHODS: To test the propositions following from the theoretical model, generalized linear models were estimated to explain differences across 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. RESULTS: Evidence in favor of the first three propositions of the theoretical model was found. Specialists show a stronger association with more highly populated districts than GPs do. Although indicators of regional preferences are significantly correlated with physician density, their coefficients are not as large as that of population density. CONCLUSIONS: If regional disparities are to be addressed by political action, the focus should be on counteracting those parameters representing physicians' preferences in over- and undersupplied regions.
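A hedged sketch of the kind of generalized linear model used for such district-level counts follows; the variable names, the hypothetical input file, and the negative binomial family with a population exposure are our assumptions, not the paper's exact specification.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical district-level data with columns such as:
#   specialists   - number of specialist offices in the district
#   population    - district population (used as the exposure term)
#   pop_density   - inhabitants per km^2
#   income, leisure - proxies for physicians' regional preferences
districts = pd.read_csv("districts.csv")

# Count regression with population exposure; coefficients are interpretable as
# effects on physician density (offices per inhabitant).
model = smf.glm(
    "specialists ~ pop_density + income + leisure",
    data=districts,
    family=sm.families.NegativeBinomial(),
    exposure=districts["population"],
).fit()
print(model.summary())
```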