940 results for print on demand


Relevance:

90.00%

Publisher:

Abstract:

Demand forecasting is one of the fundamental managerial tasks. Most companies do not know their future demand, so they have to make plans based on demand forecasts. The literature offers many methods and approaches for producing forecasts. When selecting a forecasting approach, companies need to estimate the benefits provided by particular methods, as well as the resources that applying those methods calls for. Prior literature points out that even though many forecasting methods are available, selecting a suitable approach and then implementing and managing it is a complex cross-functional matter. However, research that focuses on the managerial side of forecasting is relatively rare. This thesis explores the managerial problems involved when demand forecasting methods are applied in a context where a company produces products for other manufacturing companies. Industrial companies have characteristics that differ from consumer companies, e.g. typically fewer customers and closer customer relationships. The research questions of this thesis are: 1. What kind of challenges are there in organizing an adequate forecasting process in the industrial context? 2. What kind of tools of analysis can be utilized to support the improvement of the forecasting process? The main methodological approach in this study is design science, whose main objective is to develop tentative solutions to real-life problems. The research data has been collected from two organizations. Managerial problems in organizing demand forecasting fall into four interlinked areas: 1. defining the operational environment for forecasting, 2. defining the forecasting methods, 3. defining the organizational responsibilities, and 4. defining the forecasting performance measurement process. In all these areas, examples of managerial problems are described, and approaches for mitigating them are outlined.
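The abstract does not name a specific forecasting technique; as a minimal sketch of the kind of method the forecasting literature offers, the snippet below implements simple exponential smoothing. The demand history and smoothing constant are invented for the illustration.

```python
# Simple exponential smoothing: each one-step-ahead forecast blends the
# most recent observation with the previous forecast.
def exponential_smoothing(demand, alpha=0.3):
    """Return one-step-ahead forecasts; forecasts[i] predicts demand[i]."""
    forecasts = [demand[0]]  # initialise with the first observation
    for t in range(1, len(demand)):
        forecasts.append(alpha * demand[t - 1] + (1 - alpha) * forecasts[-1])
    return forecasts

history = [100, 120, 110, 130, 125]  # monthly demand units (hypothetical)
print(exponential_smoothing(history))
```

A larger alpha reacts faster to demand shifts but passes more noise through, which is exactly the kind of method-selection trade-off the abstract describes.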


One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to software as a service. The field of Software Engineering should provide the methods and tools needed to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms that ensure cost-efficient scalability of multi-tier web applications and of an on-demand video transcoding service under different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in increased operational cost, while under-provisioning leads to subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem that we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm, and on-demand video transcoding, which is based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications.
The proposed approach comprises two sub-approaches: a reactive VM provisioning approach called ARVUE, and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution of this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach, which uses Ant Colony System to minimize the under-utilization of virtualized application servers.
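The abstract does not reveal ARVUE's actual algorithm; as a sketch of the generic idea behind reactive VM provisioning only, the snippet below scales the VM count up or down when average utilization crosses thresholds. The threshold values and the function name are invented for the illustration.

```python
# Generic threshold-based reactive VM provisioning (an illustration of the
# idea, not the ARVUE algorithm): scale out under heavy load, consolidate
# under light load, hold steady otherwise. Thresholds are invented.
def provision(current_vms, avg_utilization,
              scale_out_at=0.80, scale_in_at=0.30, min_vms=1):
    if avg_utilization > scale_out_at:
        return current_vms + 1  # add a VM to avoid a subpar service
    if avg_utilization < scale_in_at and current_vms > min_vms:
        return current_vms - 1  # release a VM to cut operational cost
    return current_vms          # utilization within the target band

print(provision(4, 0.90))  # heavy load  -> 5
print(provision(4, 0.10))  # light load  -> 3
print(provision(4, 0.50))  # steady load -> 4
```

A purely reactive controller like this lags behind sudden load spikes, which is why the thesis pairs it with proactive (prediction-based) provisioning and admission control.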


Demand forecasting is one of the fundamental managerial tasks. Most companies do not know their future demand, so they have to make plans based on demand forecasts. The literature offers many methods and approaches for producing forecasts. Prior literature points out that even though many forecasting methods and approaches are available, selecting a suitable approach and then implementing and managing it is a complex cross-functional matter. However, research focusing on the differences in forecasting between consumer and industrial companies is relatively rare. The aim of this thesis is to investigate the potential for improving demand forecasting practices in the B2B and B2C sectors of global supply chains. The business-to-business (B2B) sector produces products for other manufacturing companies, whereas the consumer (B2C) sector provides goods for individual buyers. The industrial sector usually has fewer customers and closer relationships with them. The research questions of this thesis are: 1) What are the main differences and similarities in demand planning between the B2B and B2C sectors? 2) How can forecasting performance for industrial and consumer companies be improved? The main methodological approach in this study is design science, whose main objective is to develop tentative solutions to real-life problems. The research data has been collected from a case company. Evaluation and improvement of demand forecasting organization cover three interlinked areas: 1) the demand planning operational environment, 2) demand forecasting techniques, and 3) demand information sharing scenarios. In this research, current B2B and B2C demand practices are presented, followed by a comparison between the two sectors. It was found that the B2B and B2C sectors have significant differences in demand practices. This research partly fills the theoretical gap in understanding the differences in forecasting between the consumer and industrial sectors.
In all these areas, examples of managerial problems are described, and approaches for mitigating them are outlined.


Human beings have always strived to preserve their memories and spread their ideas. In the beginning this was done through human interpretation, such as telling stories and creating sculptures. Later, technological progress made it possible to create a recording of a phenomenon: first as an analogue recording onto a physical object, and later digitally, as a sequence of bits to be interpreted by a computer. By the end of the 20th century, technological advances had made it feasible to distribute media content over a computer network instead of on physical objects, thus enabling the concept of digital media distribution. Many digital media distribution systems already exist, and their continued, and in many cases increasing, usage indicates high interest in their future enhancement and enrichment. By looking at these digital media distribution systems, we have identified three main areas of possible improvement: network structure and coordination, transport of content over the network, and the encoding used for the content. In this thesis, our aim is to show that improvements in performance, efficiency and availability can be made in conjunction with improvements in software quality and reliability through the use of formal methods: mathematical approaches to reasoning about software that allow us to prove its correctness, together with other desirable properties. We envision a complete media distribution system based on a distributed architecture, such as peer-to-peer networking, in which different parts of the system have been formally modelled and verified. Starting with the network itself, we show how it can be formally constructed and modularised in the Event-B formalism, such that the modelling of one node can be separated from the modelling of the network itself. We also show how the piece selection algorithm in the BitTorrent peer-to-peer transfer protocol can be adapted for on-demand media streaming, and how this can be modelled in Event-B.
Furthermore, we show how modelling one peer in Event-B can give results similar to simulating an entire network of peers. Going further, we introduce a formal specification language for content transfer algorithms, and show that having such a language can make these algorithms easier to understand. We also show how generating Event-B code from this language can result in less complexity than creating the models from written specifications. We also consider the decoding part of a media distribution system by showing how video decoding can be done in parallel. This is based on formally defined dependencies between frames and blocks in a video sequence; we have shown that this step, too, can be performed in a way that is mathematically proven correct. The modelling and proving in this thesis is, for the most part, tool-based. This demonstrates the progress of formal methods as well as their increased reliability, and thus advocates their more widespread usage in the future.
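The thesis's verified Event-B model of the adapted piece-selection algorithm is not reproduced in the abstract; the sketch below only illustrates the general idea behind adapting BitTorrent's rarest-first selection for streaming, namely restricting the choice to a window ahead of the playback position. The function name, window size and availability data are invented.

```python
# Window-constrained rarest-first piece selection for streaming
# (an illustration of the general idea, not the verified Event-B model).
def next_piece(have, availability, playback_pos, window=4):
    """Pick the rarest missing piece inside the playback window, or None."""
    window_pieces = range(playback_pos,
                          min(playback_pos + window, len(availability)))
    candidates = [p for p in window_pieces if p not in have]
    if not candidates:
        return None  # everything in the window is already downloaded
    # rarest-first: fewest peers holding the piece wins
    return min(candidates, key=lambda p: availability[p])

# availability[i] = number of peers holding piece i (invented data)
avail = [5, 2, 7, 1, 6, 3]
print(next_piece(have={0}, availability=avail, playback_pos=0))  # -> 3
```

Bounding the choice by the playback window trades some of rarest-first's swarm-wide efficiency for the in-order delivery that on-demand streaming requires.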


Even though antenatal care is universally regarded as important, determinants of demand for antenatal care have not been widely studied. Evidence concerning which socioeconomic conditions influence whether or not a pregnant woman attends at least one antenatal consultation, and how these factors affect absences from antenatal consultations, is very limited. To generate this evidence, a two-stage analysis was performed with data from the Demographic and Health Survey carried out by Profamilia in Colombia during 2005. The first stage was run as a logit model showing the marginal effects on the probability of attending the first visit, and an ordinary least squares model was estimated for the second stage. It was found that mothers living in the Pacific region, as well as young mothers, seem to have a lower probability of attending the first visit, but these factors are not related to the number of absences from antenatal consultations once the first visit has been achieved. The effect of health insurance was surprising because of the differing effects that the health insurers showed. Some family and personal conditions, such as willingness to have the last child and the number of previous children, proved important in the determination of demand. The mother's educational attainment proved important, whereas the father's educational achievement did not. This paper provides some elements for policy making aimed at increasing the demand for antenatal care, as well as stimulating research on demand for specific health issues.


A cross-sectional survey investigating the contribution of free-range village chickens to household economies was carried out in four administrative districts within 60 km of Accra. Answers were provided by 101 men and 99 women. Nearly all respondents claimed to keep chickens for meat, with a far smaller percentage claiming to keep them for egg production. Over 80% of respondents kept chickens to supplement their incomes. The proportion of the flock eaten varied between administrative areas (p=0.009 and p=0.027), although this was possibly a consequence of differences in consumption patterns associated with the occupation of the respondent, the land area cultivated and the flock size. The proportion of chickens sold varied as a result of differences in flock size (p=0.013), the proportion sold increasing with the number of birds in the flock. Respondents generally agreed that chickens could be sold without difficulty. The majority of chicken sales were from the farm gate, directly to consumers or traders. Sales were on demand or when the owner needed money. Money from a sale was kept by the chicken's owner and spent on personal needs. The proportion of the flock sold varied between administrative areas (p=0.025) and occupation of the respondent (p=0.040). Respondents describing animal production as their main occupation tended to rely more heavily on chicken sales for their income. Consideration is given to estimating the offtake from the flock and the financial contribution to the household.


Deep Brain Stimulation (DBS) has been used successfully throughout the world for the treatment of Parkinson's disease symptoms. To control abnormal spontaneous electrical activity in target brain areas, DBS applies a continuous stimulation signal. This continuous power draw means that its implanted battery power source needs to be replaced every 18–24 months. To prolong the life span of the battery, a technique to accurately recognize and predict the onset of Parkinson's disease tremors in human subjects, and thus implement an on-demand stimulator, is discussed here. The approach is to use a radial basis function neural network (RBFNN) based on particle swarm optimization (PSO) and principal component analysis (PCA), with Local Field Potential (LFP) data recorded via the stimulation electrodes, to predict activity related to tremor onset. To test this approach, LFPs from the subthalamic nucleus (STN), obtained through deep brain electrodes implanted in a patient with Parkinson's disease, are used to train the network. To validate the network's performance, electromyographic (EMG) signals from the patient's forearm are recorded in parallel with the LFPs to accurately determine occurrences of tremor, and these are compared to the output of the network. It has been found that detection accuracies of up to 89% are possible. Performance comparisons have also been made between a conventional RBFNN and a PSO-based RBFNN, which show a marginal decrease in performance but a notable reduction in computational overhead.
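The abstract gives no architecture details; as a minimal sketch of what a radial basis function network computes, the snippet below evaluates the forward pass of a tiny Gaussian-RBF network. The centers, weights and width are invented; in the study they would be learned (e.g. via PSO) from PCA-reduced LFP features.

```python
import math

# Forward pass of a small RBF network with Gaussian basis functions.
# All parameters below are invented for the illustration.
def rbf_output(x, centers, weights, sigma=1.0):
    """Weighted sum of Gaussian activations of input x over the centers."""
    activations = [
        math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                 / (2 * sigma ** 2))
        for c in centers
    ]
    return sum(w * a for w, a in zip(weights, activations))

centers = [(0.0, 0.0), (1.0, 1.0)]  # hypothetical prototype feature vectors
weights = [0.2, 0.9]                # hypothetical output weights
score = rbf_output((1.0, 1.0), centers, weights)
print(round(score, 3))  # high score: input sits on the second center
```

Thresholding such a score is the simplest way a detector could flag tremor-related activity and trigger on-demand stimulation.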


Ants often form mutualistic interactions with aphids, soliciting honeydew in return for protective services. Under certain circumstances, however, ants will prey upon aphids. In addition, in the presence of ants, aphids may increase the quantity or quality of honeydew produced, which is costly. Through these mechanisms, ant attendance can reduce aphid colony growth rates. However, it is unknown whether demand from within the ant colony can affect the ant-aphid interaction. In a factorial experiment, we tested whether the presence of larvae in Lasius niger ant colonies affected the growth rate of Aphis fabae colonies. Other explanatory variables tested were the origin of the ant colonies (two separate colonies were used) and previous diet (sugar only, or sugar and protein). We found that the presence of larvae in the ant colony significantly reduced the growth rate of aphid colonies. Previous diet and colony origin did not affect aphid colony growth rates. Our results suggest that ant colonies balance the flow of two separate resources from aphid colonies, renewable sugars or a protein-rich meal, depending on demand from ant larvae within the nest. Aphid payoffs from the ant-aphid interaction may change on a seasonal basis, as the demand from larvae within the ant colony waxes and wanes.


Six different digital proofing systems, based on three different techniques, have been evaluated with respect to technique, printing quality, economy and usability. Digital proofs on two paper qualities, coated and uncoated, were compared with references printed in offset to see how well they match each other. Only two proofing systems manage to print on the reference paper; the other proofing systems use special paper for digital proofs. Measurements and visual judgement show that the digital proofing systems reproduce the reference pictures with quite good quality. Proofs optimised for coated paper reproduce the colours well. Proofs optimised for uncoated paper show higher quality than the references, which depends on the surface of the proofing paper. Reference paper and proofing paper were compared with respect to differences in colour and paper quality. The digital proofing systems are fully automatic, but demand fairly comprehensive training for correct handling. The purchase price and printing costs vary considerably between the proofing systems.


This thesis describes structures created with polymers on surfaces. The applications range from PMMA and PNIPAM polymer brushes, through the restructuring of polystyrene by solvents, to 3D structures consisting of PAH/PSS polyelectrolyte multilayers. In the first part, poly(methyl methacrylate) (PMMA) brushes are prepared in the ionic liquid 1-butyl-3-methylimidazolium hexafluorophosphate ([Bmim][PF6]) by controlled radical polymerization (ATRP). Kinetic studies showed linear and dense brush growth with a growth rate of 4600 g/mol per nm. The average grafting density was 0.36 µmol/m2. As an application, microdroplets consisting of the ionic liquid, dimethylformamide and the ATRP catalyst were used to deposit polymer brushes on silicon in a defined geometry. In this way, a coating up to 13 nm thick can be produced. This concept is limited by the evaporation of the monomer methyl methacrylate (MMA): from a 1 µl droplet of ionic liquid and MMA (1:1), the MMA evaporates within 100 s. The monomer was therefore added sequentially. The second part focuses on structuring surfaces with a new method: inkjet printing. A piezoelectrically driven drop-on-demand printing system was used to structure polystyrene with 0.4 nl droplets of toluene. The microcraters formed in this way can find application as microlenses. The focal length of the microlenses can be tuned via the number of droplets used for structuring; theoretically and experimentally, focal lengths in the range from 4.5 mm down to 0.21 mm were determined. The second structuring process uses the polyelectrolytes poly(vinylamine hydrochloride) (PAH) and poly(styrene sulfonate) (PSS) to produce 3D structures such as
lines, checkerboards, rings and stacks with a layer-by-layer method. The thickness of one double layer (DL) lies in the range of 0.6 to 1.1 nm when NaCl is used as the electrolyte at a concentration of 0.5 mol/l. The width of the structures is on average 230 µm. The process was extended to coat nanomechanical cantilever sensors (NCS). On an array of eight cantilevers, two cantilevers each were coated quickly and reproducibly with five double layers of PAH/PSS, and two each with ten double layers of PAH/PSS. The mass change for the individual cantilevers was 0.55 ng for five double layers and 1.08 ng for ten double layers. The resulting sensor was exposed to an environment with defined humidity. The cantilevers bend as the coating expands when water diffuses into the polymer. A maximum deflection of 442 nm at 80% humidity was found for the cantilevers coated with ten double layers, corresponding to a water uptake of 35%. In addition, the deflection data showed that the elasticity of the polyelectrolyte multilayers increases when the polymer is swollen. The thermal behaviour in water was investigated next on nanomechanical cantilever sensors coated with poly(N-isopropylacrylamide) (PNIPAM) brushes and plasma-polymerized N,N-diethylacrylamide. The cantilever deflection showed two regimes: at temperatures below the lower critical solution temperature (LCST), the deflection is dominated by the dehydration of the polymer layer, while at temperatures above the LCST, the cantilever sensor responds mainly to relaxation processes within the collapsed polymer layer.
The minimum in the differential deflection was found to coincide with the lower critical solution temperatures of 32 °C and 44 °C of the selected polymers. In the last part of the thesis, µ-reflectivity and µ-GISAXS experiments were introduced as new methods for investigating microstructured samples such as NCS or PEM lines with X-ray scattering. The thickness of each NCS individually coated with PMMA brushes is in the range of 32.9 to 35.2 nm, as determined by µ-reflectivity measurements. This result was confirmed by imaging ellipsometry as a complementary method, with a maximum deviation of 7%. As a second example, a printed polyelectrolyte multilayer of PAH/PSS was investigated. The preparation procedure was modified so that gold nanoparticles were incorporated into the layer structure. Evaluation of a µ-GISAXS experiment identified the incorporation of the particles. A fit with a unified fit model showed that the particles are not agglomerated and are surrounded by a polymer matrix.


While polymers with different functional groups along the backbone have been intensively investigated, the orthogonal functionalization of the end groups remains a challenge. Such well-defined systems are interesting for the preparation of multiblock (co)polymers or polymer networks, for bio-conjugation, or as model systems for examining the end group separation of isolated polymer chains.

Here, Reversible Addition Fragmentation Chain Transfer (RAFT) polymerization was employed as a method to investigate improved techniques for α,ω end group functionalization. RAFT produces polymers terminated in an R group and a dithioester Z group, where R and Z stem from a suitable chain transfer agent (CTA).

For α end group functionalization, a CTA with an activated pentafluorophenyl (PFP) ester R group was designed and used for the polymerization of various methacrylate monomers, N-isopropylacrylamide and styrene, yielding polymers with a PFP ester as the α end group. This allowed the introduction of inert propyl amides, of light-responsive diazo compounds, of the dyes NBD, Texas Red and Oregon Green, and of the hormone thyroxine, and it allowed the formation of multiblocks or peptide conjugates.

For ω end group functionalization, problems of other techniques were overcome through an aminolysis of the dithioester in the presence of a functional methane thiosulfonate (MTS), yielding functional disulfides. These disulfides were stable under ambient conditions and could be cleaved on demand. Using MTS chemistry, terminal methyl disulfides (enabling self-assembly on planar gold surfaces and ligand substitution on gold and semiconductor nanoparticles), butynyl disulfide end groups (allowing the "clicking" of the polymers onto azide-functionalized surfaces and their selective removal through reduction), the bio-target biotin, and the fluorescent dye Texas Red were introduced into polymers.

The α PFP amidation could be performed under mild conditions without substantial loss of the dithioester. In this way, a step-wise synthesis produced polymers with two functional end groups in very high yields.

As examples, polymers with an anchor group for both gold nanoparticles (AuNP) and CdSe/ZnS semiconductor nanoparticles (QD), and with a fluorescent dye end group, were synthesized. They allowed NP decoration and enabled an energy transfer from QD to dye or from dye to AuNP. Water-soluble polymers were prepared with two different bio-target end groups, each capable of selectively recognizing and binding a certain protein. The immobilization of protein-polymer-protein layers on planar gold surfaces was monitored by surface plasmon resonance. Introducing two different fluorescent dye end groups enabled an energy transfer between the end groups of isolated polymer chains and made it possible to monitor the behavior of single polymer chains during a chain collapse.

The versatility of the synthetic technique is very promising for applications beyond this work.


Nowadays, data handling and data analysis in High Energy Physics require a vast amount of computational power and storage. In particular, the world-wide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, has proved to be a game changer in the efficiency of data analyses during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. Recently, the Cloud computing paradigm has been emerging and reaching a considerable level of adoption by many different scientific organizations, and not only scientific ones. Cloud allows access to and use of large, not-owned computing resources shared among many scientific communities. Considering the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds and seeing whether they can provide a complementary approach, or even a valid alternative, to the existing technological solutions based on Grid. In the LHC community, several experiments have been adopting Cloud approaches, and in particular the experience of the CMS experiment is of relevance to this thesis. The LHC Run-II has just started, and Cloud-based solutions are already in production for CMS. However, other approaches to Cloud usage are being considered and are at the prototype level, such as the work done in this thesis. This effort is of paramount importance to equip CMS with the capability to elastically and flexibly access and utilize the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend to on-demand resources dynamically allocated as needed. Moreover, direct access to Cloud resources is presented as a use case suited to the CMS experiment's needs.
Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 presents the Cloud approaches pursued and used within the CMS Collaboration. Chapters 4 and 5 discuss the original and forefront work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources on Clouds, and of HEP Computing "as a Service". The impact of this work on benchmark CMS physics use cases is also demonstrated.


Environmental factors can determine which group size will maximize the fitness of group members. This is particularly important in cooperative breeders, where group members often serve different purposes. Experimental studies are still lacking that test whether an ecologically mediated need for help changes the propensity of dominant group members to accept immigrants. Here, we manipulated the perceived risk of predation for dominant breeders of the cooperatively breeding cichlid fish Neolamprologus pulcher to test their response to unrelated and previously unknown immigrants. Potential immigrants were more readily accepted if groups were exposed to fish predators or egg predators than to herbivorous fish or control situations lacking predation risk. Our data are consistent with both risk dilution and helping effects. Egg predators were presented before spawning, which might suggest that the fish adjust acceptance rates also to a potential future threat. Dominant group members of N. pulcher apparently consider both the present and future need for help based on ecological demand. This suggests that acceptance of immigrants and, more generally, tolerance of group members on demand could be a widespread response to ecological conditions in cooperatively breeding animals.


Objective. To measure the demand for primary care and its associated factors by building and estimating a demand model of primary care in urban settings. Data source. Secondary data from the 2005 California Health Interview Survey (CHIS 2005), a population-based random-digit-dial telephone survey conducted by the UCLA Center for Health Policy Research in collaboration with the California Department of Health Services and the Public Health Institute between July 2005 and April 2006. Study design. A literature review was done to specify the demand model by identifying relevant predictors and indicators. CHIS 2005 data were utilized for demand estimation. Analytical methods. Probit regression was used to estimate the use/non-use equation, and negative binomial regression was applied to the utilization equation with its non-negative integer dependent variable. Results. The model included two equations, in which the use/non-use equation explained the probability of making a doctor visit in the past twelve months, and the utilization equation estimated the demand for primary care conditional on at least one visit. Among the independent variables, wage rate and income did not affect primary care demand, whereas age had a negative effect on demand. People with college and graduate educational levels were associated with 1.03 (p < 0.05) and 1.58 (p < 0.01) more visits, respectively, compared to those with no formal education. Insurance was significantly and positively related to the demand for primary care (p < 0.01). Need-for-care variables exhibited positive effects on demand (p < 0.01). Existence of chronic disease was associated with 0.63 more visits, disability status was associated with 1.05 more visits, and people with poor health status had 4.24 more visits than those with excellent health status. Conclusions. The average probability of visiting a doctor in the past twelve months was 85% and the average number of visits was 3.45.
The study emphasized the importance of need variables in explaining healthcare utilization, as well as the impact of insurance, employment and education on demand. The two-equation model of decision-making, estimated with probit and negative binomial regression methods, was a useful approach to demand estimation for primary care in urban settings.
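The two-equation structure can be sketched numerically: the probit stage maps a linear index through the standard normal CDF to give the probability of any visit, and the count stage uses a log-link mean (the negative binomial conditional expectation) for visits given use. All coefficients below are invented for the illustration, not the CHIS 2005 estimates.

```python
import math

# Two-part demand model sketch: probit use/non-use stage plus a
# log-link conditional mean for the visit count. All coefficients
# are hypothetical, not the estimates reported in the study.
def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_visits(insured, chronic,
                    b_probit=(0.6, 0.5, 0.4),   # intercept, insured, chronic
                    b_count=(0.8, 0.1, 0.3)):   # intercept, insured, chronic
    xb_use = b_probit[0] + b_probit[1] * insured + b_probit[2] * chronic
    p_use = norm_cdf(xb_use)              # probability of at least one visit
    xb_cnt = b_count[0] + b_count[1] * insured + b_count[2] * chronic
    mu = math.exp(xb_cnt)                 # expected visits conditional on use
    return p_use * mu                     # unconditional expected visits

print(round(expected_visits(insured=1, chronic=1), 2))
print(round(expected_visits(insured=0, chronic=0), 2))
```

With positive invented coefficients, an insured person with a chronic disease gets both a higher probability of use and more conditional visits, mirroring the direction of the effects the abstract reports.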


Available on demand as hard copy or computer file from Cornell University Library.