902 results for two stage quantile regression


Relevance: 100.00%

Abstract:

Metals price risk management is a key issue related to financial risk in metal markets because of the uncertainty of commodity price fluctuations, exchange rates and interest rate changes, and the large price risk faced by both metals producers and consumers. Thus, it has been taken into account by all participants in metal markets, including metals producers, consumers, merchants, banks, investment funds, speculators and traders. Managing price risk provides stable income for both metals producers and consumers, so it increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have been around in some form for centuries, their growth has accelerated rapidly during the last 20 years. Nowadays, they are widely used by financial institutions, corporations, professional investors, and individuals. This project is focused on the over-the-counter (OTC) market and its products such as exotic options, particularly Asian options. The first part of the project is a description of basic derivatives and risk management strategies. In addition, this part discusses basic concepts of spot and futures (forward) markets, benefits and costs of risk management, and risks and rewards of positions in the derivative markets. The second part considers valuations of commodity derivatives. In this part, the options pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare theoretical values of the options with their observed market values. Predicting future trends of copper prices is important and would be essential to managing market price risk successfully. Therefore, the third part is a discussion of econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part aims at showing how LME copper prices can be explained by means of a simultaneous equation structural model (two-stage least squares regression) connecting supply and demand variables. A simultaneous econometric model for the copper industry is built:

Q_t^D = e^{-5.0485} \cdot P_{t-1}^{-0.1868} \cdot \mathrm{GDP}_t^{1.7151} \cdot e^{0.0158\,\mathrm{IP}_t}
Q_t^S = e^{-3.0785} \cdot P_{t-1}^{0.5960} \cdot T_t^{0.1408} \cdot P_{\mathrm{OIL}(t)}^{-0.1559} \cdot \mathrm{USDI}_t^{1.2432} \cdot \mathrm{LIBOR}_{t-6}^{-0.0561}
Q_t^D = Q_t^S

Solving the equilibrium condition for the price gives the reduced form

P_{t-1}^{CU} = e^{-2.5165} \cdot \mathrm{GDP}_t^{2.1910} \cdot e^{0.0202\,\mathrm{IP}_t} \cdot T_t^{-0.1799} \cdot P_{\mathrm{OIL}(t)}^{0.1991} \cdot \mathrm{USDI}_t^{-1.5881} \cdot \mathrm{LIBOR}_{t-6}^{0.0717}

where Q_t^D and Q_t^S are world demand for and supply of copper at time t, respectively. P_{t-1} is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, which represents aggregate economic activity. In addition, industrial production should be considered here, so global industrial production growth, denoted IP_t, is included in the model.
T_t is the time variable, which is a useful proxy for technological change. The price of oil at time t, denoted P_{OIL(t)}, serves as a proxy for the cost of energy in producing copper. USDI_t is the U.S. dollar index at time t, an important variable for explaining copper supply and copper prices. Finally, LIBOR_{t-6} is the 1-year London Interbank Offered Rate lagged by six months. Although the model could be applied to other base-metal industries, omitted exogenous variables, such as the price of substitutes or a combined substitute-price variable, have not been considered in this study. Based on this econometric model and using Monte Carlo simulation, the probabilities that the monthly average copper price in 2006 and 2007 will exceed specific option strike prices are estimated. The final part evaluates risk management strategies, including options strategies, metal swaps and simple options, in relation to the simulation results. The basic options strategies, such as bull spreads, bear spreads and butterfly spreads, created using both call and put options for 2006 and 2007, are evaluated. Consequently, each risk management strategy for 2006 and 2007 is analyzed based on the data for the day and on the price prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
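As a rough illustration of the Monte Carlo step described above, the sketch below draws the explanatory variables from assumed distributions and evaluates the reduced-form price equation; only the elasticities come from the abstract, while the distributions, units and strike level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000

# Hypothetical distributions for the explanatory variables (illustrative only).
gdp   = rng.lognormal(np.log(45_000), 0.02, n_sim)   # world GDP
ip    = rng.normal(4.0, 1.0, n_sim)                  # industrial production growth, %
t     = np.full(n_sim, 50.0)                         # time index (technology proxy)
p_oil = rng.lognormal(np.log(60.0), 0.15, n_sim)     # oil price
usdi  = rng.lognormal(np.log(90.0), 0.03, n_sim)     # U.S. dollar index
libor = rng.normal(5.0, 0.5, n_sim)                  # 1-yr LIBOR lagged 6 months, %

# Reduced-form price equation with the elasticities reported in the abstract.
price = (np.exp(-2.5165) * gdp**2.1910 * np.exp(0.0202 * ip)
         * t**-0.1799 * p_oil**0.1991 * usdi**-1.5881 * libor**0.0717)

strike = 1.1 * np.median(price)                      # hypothetical strike level
print(f"P(price > strike) = {(price > strike).mean():.3f}")
print("5th/50th/95th percentiles:", np.percentile(price, [5, 50, 95]).round(1))
```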

Relevance: 100.00%

Abstract:

Accurate seasonal to interannual streamflow forecasts based on climate information are critical for optimal management and operation of water resources systems. Considering that most water supply systems are multipurpose, operating these systems to meet increasing demand under the growing stresses of climate variability and climate change, population and economic growth, and environmental concerns can be very challenging. This study investigated improvements in water resources systems management through the use of seasonal climate forecasts. Hydrological persistence (streamflow and precipitation) and large-scale recurrent oceanic-atmospheric patterns such as the El Niño/Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), the North Atlantic Oscillation (NAO), the Atlantic Multidecadal Oscillation (AMO), the Pacific-North American pattern (PNA), and customized sea surface temperature (SST) indices were investigated for their potential to improve streamflow forecast accuracy and increase forecast lead time in a river basin in central Texas. First, an ordinal polytomous logistic regression approach is proposed as a means of incorporating multiple predictor variables into a probabilistic forecast model. Forecast performance is assessed through a cross-validation procedure, using distributions-oriented metrics, and implications for decision making are discussed. Results indicate that, of the predictors evaluated, only hydrologic persistence and Pacific Ocean sea surface temperature patterns associated with ENSO and PDO provide forecasts which are statistically better than climatology. Second, a class of data mining techniques, known as tree-structured models, is investigated to address the nonlinear dynamics of climate teleconnections and to screen promising probabilistic streamflow forecast models for river-reservoir systems. Results show that the tree-structured models can effectively capture the nonlinear features hidden in the data. Skill scores of probabilistic forecasts generated by both classification trees and logistic regression trees indicate that seasonal inflows throughout the system can be predicted with sufficient accuracy to improve water management, especially in the winter and spring seasons in central Texas. Lastly, a simplified two-stage stochastic economic-optimization model is proposed to investigate improvements in water use efficiency and the potential value of using seasonal forecasts, under the assumption of optimal decision making under uncertainty. Model results demonstrate that incorporating the probabilistic inflow forecasts into the optimization model can provide a significant improvement in seasonal water contract benefits over climatology, with lower average deficits (increased reliability) for a given average contract amount, or improved mean contract benefits for a given level of reliability. The results also illustrate the trade-off between the expected contract amount and reliability, i.e., larger contracts can be signed only at greater risk.
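A minimal sketch of the two-stage stochastic contract decision described above, written as a deterministic-equivalent linear program: the first-stage variable is the seasonal contract amount, the second-stage variables are per-scenario deficits, and the inflow scenarios, probabilities and prices are hypothetical stand-ins for the forecast-derived values used in the study.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical probabilistic inflow forecast (scenario values and probabilities).
inflow = np.array([40.0, 70.0, 110.0, 150.0])   # seasonal inflow scenarios
prob   = np.array([0.2, 0.3, 0.3, 0.2])
benefit, penalty = 10.0, 25.0                   # $/unit contracted, $/unit of deficit

n_s = len(inflow)
# Decision vector: [contract, deficit_1 ... deficit_S]; maximise expected net benefit.
c = np.concatenate(([-benefit], prob * penalty))          # linprog minimises
# Delivery constraint per scenario: contract - deficit_s <= inflow_s
A_ub = np.hstack([np.ones((n_s, 1)), -np.eye(n_s)])
b_ub = inflow
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n_s + 1), method="highs")

contract, deficits = res.x[0], res.x[1:]
print(f"optimal contract = {contract:.1f}, expected deficit = {prob @ deficits:.1f}")
```

Increasing the contract stops paying off once the marginal expected shortfall penalty exceeds the marginal contract benefit, which reproduces the contract-size versus reliability trade-off noted in the abstract.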

Relevance: 100.00%

Abstract:

The desire to promote efficient allocation of health resources and effective patient care has focused attention on home care as an alternative to acute hospital service. In particular, clinical home care is suggested as a substitute for the final days of hospital stay. This dissertation evaluates the relationship between hospital and home care services for residents of British Columbia, Canada, beginning in 1993/94, using data from the British Columbia Linked Health Database. Lengths of stay for patients referred to home care following hospital discharge are compared to those for patients not referred to home care. Ordinary least squares regression analysis adjusts for age, gender, admission severity, comorbidity, complications, income, and other patient, physician, and hospital characteristics. Home care clients tend to have longer stays in hospital than patients not referred to home care (β = 2.54, p = 0.0001). Longer hospital stays are evident for all home care client groups as well as for both older and younger patients. Sensitivity analyses for referral time to direct care and extreme lengths of stay are consistent with these findings. Two-stage regression analysis indicates that selection bias is not significant. Patients referred to clinical home care also have different health service utilization following discharge compared to patients not referred to home care. Home care nursing clients use more medical services to complement home care. Rehabilitation clients initially substitute home care for physiotherapy services but later are more likely to be admitted to residential care. All home care clients are more likely to be readmitted to hospital during the one-year follow-up period. There is also a strong complementary association between direct care referral and homemaker support. Rehabilitation clients have a greater risk of dying during the year following discharge. These results suggest that home care is currently used as a complement rather than a substitute for some acute health services. Organizational and resource issues may contribute to the longer stays by home care clients. Program planning and policies are required if home care is to provide an effective substitute for acute hospital days.
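The dissertation's exact two-stage specification is not given in the abstract; the sketch below illustrates a standard probit-plus-control-term (treatment-effects) check for selection bias on synthetic data, with a hypothetical "distance" variable as the exclusion restriction.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 5000
age = rng.normal(70, 10, n)
severity = rng.normal(0, 1, n)
distance = rng.normal(0, 1, n)          # hypothetical instrument: affects referral, not LOS

u = rng.normal(0, 1, n)                 # selection error, shared with the outcome
referred = (0.02 * (age - 70) + 0.5 * severity - 0.4 * distance + u > 0).astype(int)
los = 6 + 0.05 * (age - 70) + 1.5 * severity + 2.5 * referred + 0.8 * u + rng.normal(0, 1, n)

# Stage 1: probit for home care referral; generalised residual (control term) for both groups.
Z = sm.add_constant(np.column_stack([age, severity, distance]))
xb = Z @ sm.Probit(referred, Z).fit(disp=0).params
ctrl = np.where(referred == 1, norm.pdf(xb) / norm.cdf(xb),
                -norm.pdf(xb) / (1 - norm.cdf(xb)))

# Stage 2: OLS for length of stay including the control term.
X = sm.add_constant(np.column_stack([age, severity, referred, ctrl]))
fit = sm.OLS(los, X).fit()
print(fit.params)   # a significant coefficient on the control term signals selection bias
```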

Relevance: 100.00%

Abstract:

BACKGROUND High-risk prostate cancer (PCa) is an extremely heterogeneous disease. A clear definition of prognostic subgroups is mandatory. OBJECTIVE To develop a pretreatment prognostic model for PCa-specific survival (PCSS) in high-risk PCa based on combinations of unfavorable risk factors. DESIGN, SETTING, AND PARTICIPANTS We conducted a retrospective multicenter cohort study including 1360 consecutive patients with high-risk PCa treated at eight European high-volume centers. INTERVENTION Retropubic radical prostatectomy with pelvic lymphadenectomy. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Two Cox multivariable regression models were constructed to predict PCSS as a function of dichotomization of clinical stage (< cT3 vs cT3-4), Gleason score (GS) (2-7 vs 8-10), and prostate-specific antigen (PSA; ≤ 20 ng/ml vs > 20 ng/ml). The first "extended" model includes all seven possible combinations; the second "simplified" model includes three subgroups: a good prognosis subgroup (one single high-risk factor); an intermediate prognosis subgroup (PSA >20 ng/ml and stage cT3-4); and a poor prognosis subgroup (GS 8-10 in combination with at least one other high-risk factor). The predictive accuracy of the models was summarized and compared. Survival estimates and clinical and pathologic outcomes were compared between the three subgroups. RESULTS AND LIMITATIONS The simplified model yielded an R² of 33% with a 5-yr area under the curve (AUC) of 0.70, with no significant loss of predictive accuracy compared with the extended model (R²: 34%; AUC: 0.71). The 5- and 10-yr PCSS rates were 98.7% and 95.4%, 96.5% and 88.3%, and 88.8% and 79.7% for the good, intermediate, and poor prognosis subgroups, respectively (p = 0.0003). Overall survival, clinical progression-free survival, and histopathologic outcomes significantly worsened in a stepwise fashion from the good to the poor prognosis subgroups. Limitations of the study are the retrospective design and the long study period. CONCLUSIONS This study presents an intuitive and easy-to-use stratification of high-risk PCa into three prognostic subgroups. The model is useful for counseling and decision making in the pretreatment setting.
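A minimal sketch of the kind of Cox model described above, fitted with the lifelines package; the variable names follow the three dichotomized risk factors in the abstract, but the data, effect sizes and censoring mechanism are synthetic assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "cT3_4":    rng.integers(0, 2, n),         # clinical stage cT3-4 vs < cT3
    "GS8_10":   rng.integers(0, 2, n),         # Gleason score 8-10 vs 2-7
    "PSA_gt20": rng.integers(0, 2, n),         # PSA > 20 ng/ml vs <= 20 ng/ml
})
# Synthetic PCa-specific survival times (years), worse with more risk factors.
risk = 0.3 * df["cT3_4"] + 0.6 * df["GS8_10"] + 0.3 * df["PSA_gt20"]
df["time"] = rng.exponential(scale=30 * np.exp(-risk))
df["event"] = (rng.random(n) < 0.3).astype(int)  # PCa death observed vs censored

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # hazard ratios for the three dichotomized risk factors
```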

Relevance: 100.00%

Abstract:

Obesity and diets rich in uric acid-raising components appear to account for the increased prevalence of hyperuricemia in Westernized populations. Prevalence rates of hypertension, diabetes mellitus, CKD, and cardiovascular disease are also increasing. We used Mendelian randomization to examine whether uric acid is an independent and causal cardiovascular risk factor. Serum uric acid was measured in 3315 patients of the Ludwigshafen Risk and Cardiovascular Health Study. We calculated a weighted genetic risk score (GRS) for uric acid concentration based on eight uric acid-regulating single nucleotide polymorphisms. Causal odds ratios and causal hazard ratios (HRs) were calculated using a two-stage regression estimate with the GRS as the instrumental variable to examine associations with cardiometabolic phenotypes (cross-sectionally) and mortality (prospectively) by logistic regression and Cox regression, respectively. Our GRS was not consistently associated with any biochemical marker except for uric acid, arguing against pleiotropy. Uric acid was associated with a range of prevalent diseases, including coronary artery disease. Uric acid and the GRS were both associated with cardiovascular death and sudden cardiac death. In a multivariate model adjusted for factors including medication, causal HRs corresponding to each 1-mg/dl increase in genetically predicted uric acid concentration were significant for cardiovascular death (HR, 1.77; 95% confidence interval, 1.12 to 2.81) and sudden cardiac death (HR, 2.41; 95% confidence interval, 1.16 to 5.00). These results suggest that high uric acid is causally related to adverse cardiovascular outcomes, especially sudden cardiac death.
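A toy version of the two-stage (instrumental variable) estimate described above, on simulated data: stage 1 regresses uric acid on a genetic risk score, and stage 2 relates the outcome to the genetically predicted exposure. All effect sizes here are invented for illustration, and the naive second-stage standard errors ignore the first-stage uncertainty.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 3000
grs = rng.normal(0, 1, n)                      # weighted genetic risk score (instrument)
confounder = rng.normal(0, 1, n)               # unobserved confounding
uric = 5.5 + 0.4 * grs + 0.8 * confounder + rng.normal(0, 1, n)   # serum uric acid, mg/dl
logit_p = -3.0 + 0.3 * uric + 0.9 * confounder
cv_death = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Stage 1: predict uric acid from the GRS.
stage1 = sm.OLS(uric, sm.add_constant(grs)).fit()
uric_hat = stage1.fittedvalues

# Stage 2: outcome model using the genetically predicted uric acid.
stage2 = sm.Logit(cv_death, sm.add_constant(uric_hat)).fit(disp=0)
print("causal OR per 1 mg/dl:", np.exp(stage2.params[1]))
```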

Relevance: 100.00%

Abstract:

Many phase II clinical studies in oncology use a two-stage frequentist design such as Simon's optimal design. However, these designs have a common logistical problem regarding patient accrual at the interim analysis. Strictly speaking, patient accrual may have to be suspended at the end of the first stage until all stage I patients have been observed for the outcome, success or failure. For example, when the study endpoint is six-month progression-free survival, patient accrual has to be suspended until all outcomes from stage I are observed. However, study investigators may be concerned about suspending accrual after the first stage because of the loss of accrual momentum during this hiatus. We propose a two-stage phase II design that resolves the patient accrual problem caused by the interim analysis, and it can be used as an alternative to frequentist two-stage phase II designs in oncology.
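For context on the frequentist designs being discussed, the sketch below computes the operating characteristics of a Simon-type two-stage design (stop after stage 1 if responses ≤ r1; declare the treatment promising if total responses exceed r). The design parameters shown are quoted for illustration of p0 = 0.20 versus p1 = 0.40 and are not taken from any specific study.

```python
from scipy.stats import binom

def simon_two_stage(p, r1, n1, r, n):
    """Probability of declaring the treatment promising when the true response rate is p."""
    prob = 0.0
    for x1 in range(r1 + 1, n1 + 1):                     # continue past stage 1
        prob += binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
    return prob

# Illustrative design: r1/n1 = 3/17, r/n = 10/37 for p0 = 0.20 vs p1 = 0.40.
r1, n1, r, n = 3, 17, 10, 37
alpha = simon_two_stage(0.20, r1, n1, r, n)              # type I error
power = simon_two_stage(0.40, r1, n1, r, n)              # power
pet0  = binom.cdf(r1, n1, 0.20)                          # prob. of early termination under H0
en0   = n1 + (1 - pet0) * (n - n1)                       # expected sample size under H0
print(f"alpha={alpha:.3f}, power={power:.3f}, PET(p0)={pet0:.2f}, EN(p0)={en0:.1f}")
```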

Relevance: 100.00%

Abstract:

The recent hurricanes Katrina, Rita, and Dolly have brought to light the precarious situation populations place themselves in when they are unprepared to face a storm, or do not follow official orders to evacuate when a destructive hurricane is poised to hit the area. Three counties in southern Texas lie within 60 miles of the Gulf of Mexico and along the Mexican border. Determining the barriers to hurricane evacuation in this distinct and highly impoverished area of the United States would help local, state, and federal agencies respond more effectively to persons living there. The aim of this study was to examine intention to comply with mandatory hurricane evacuation orders among persons living in three counties in South Texas by gender, income, education, acculturation and county of residence. A questionnaire was administered to 3,088 households across the three counties using a two-stage cluster sampling strategy, stratified by county. The door-to-door survey was a 73-item instrument that included demographics, reasons for and against evacuation, and preparedness for a hurricane. Weighted data were used for the analyses. Chi-square tests were run to determine whether differences between observed and expected frequencies were statistically significant. A logistic regression model was developed based on that univariate analysis. Results from the logistic regression provided estimated odds ratios and their 95 percent confidence intervals for the independent variables. The logistic regression results indicate that females were less likely than males to follow an evacuation order. Having a higher education meant a greater likelihood of evacuating. Respondents with a higher affiliation with Spanish than English were more likely to follow the evacuation orders. Hidalgo County residents were less likely to evacuate than Cameron or Willacy County residents. Local officials need to implement communication efforts specifically tailored to females, residents with less of an affiliation with Spanish, and Hidalgo County residents to ensure their successful evacuation prior to a strong hurricane's landfall.
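A minimal sketch of a weighted logistic regression of evacuation intention, roughly in the spirit of the analysis described above; the variables, sampling weights and effect sizes are synthetic, and proper design-based standard errors for a two-stage cluster sample would require a dedicated survey estimator.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 3088
df = pd.DataFrame({
    "female":        rng.integers(0, 2, n),
    "higher_edu":    rng.integers(0, 2, n),
    "spanish_affil": rng.integers(0, 2, n),
    "hidalgo":       rng.integers(0, 2, n),
    "weight":        rng.uniform(0.5, 2.0, n),   # hypothetical survey weights
})
logit = -0.1 - 0.3 * df.female + 0.4 * df.higher_edu + 0.3 * df.spanish_affil - 0.4 * df.hidalgo
df["evacuate"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["female", "higher_edu", "spanish_affil", "hidalgo"]])
fit = sm.GLM(df["evacuate"], X, family=sm.families.Binomial(),
             var_weights=df["weight"]).fit()     # weights only; not design-based variance
odds, ci = np.exp(fit.params), np.exp(fit.conf_int())
print(pd.concat([odds.rename("OR"), ci], axis=1))
```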

Relevance: 100.00%

Abstract:

The purpose of this study was to examine, in the context of an economic model of health production, the relationship between inputs (health-influencing activities) and fitness. Primary data were collected from 204 employees of a large insurance company at the time of their enrollment in an industrially based health promotion program. The inputs of production included medical care use, exercise, smoking, drinking, eating, coronary disease history, and obesity. The variables of age, gender and education, known to affect the production process, were also examined. Two estimates of fitness were used: self-report and a physiologic estimate based on exercise treadmill performance. Ordinary least squares and two-stage least squares regression analyses were used to estimate the fitness production functions. In the production of self-reported fitness status, the coefficients for the exercise, smoking, eating, and drinking production inputs, and for the control variable of gender, were statistically significant and possessed theoretically correct signs. In the production of physiologic fitness, exercise, smoking and gender were statistically significant. Exercise and gender were theoretically consistent, while smoking was not. Results are compared with previous analyses of health production.
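The abstract does not give the exact specification; as an illustration of the two-stage least squares step, the sketch below instruments a single endogenous input (exercise) with a hypothetical instrument and substitutes its first-stage fitted values into the fitness production function. Note that second-stage standard errors from this naive substitution are not the correct 2SLS standard errors.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 204
gym_access = rng.normal(0, 1, n)        # hypothetical instrument for exercise
health_taste = rng.normal(0, 1, n)      # unobserved preference (confounder)
exercise = 0.6 * gym_access + 0.7 * health_taste + rng.normal(0, 1, n)
smoking = rng.binomial(1, 0.3, n)
fitness = 50 + 4 * exercise - 3 * smoking + 5 * health_taste + rng.normal(0, 2, n)

# Stage 1: regress the endogenous input on the instrument and exogenous inputs.
Z = sm.add_constant(np.column_stack([gym_access, smoking]))
exercise_hat = sm.OLS(exercise, Z).fit().fittedvalues

# Stage 2: production function with fitted values replacing the endogenous input.
X2 = sm.add_constant(np.column_stack([exercise_hat, smoking]))
print(sm.OLS(fitness, X2).fit().params)
```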

Relevance: 100.00%

Abstract:

This thesis project is motivated by the potential problem of using observational data to draw inferences about a causal relationship in observational epidemiology research when controlled randomization is not applicable. The instrumental variable (IV) method is one of the statistical tools to overcome this problem. A Mendelian randomization study uses genetic variants as IVs in a genetic association study. In this thesis, the IV method, as well as standard logistic and linear regression models, is used to investigate the causal association between risk of pancreatic cancer and circulating levels of soluble receptor for advanced glycation end-products (sRAGE). Higher levels of serum sRAGE were found to be associated with a lower risk of pancreatic cancer in a previous observational study (255 cases and 485 controls). However, such a novel association may be biased by unknown confounding factors. In a case-control study, we aimed to use the IV approach to confirm or refute this observation in a subset of study subjects for whom genotyping data were available (178 cases and 177 controls). A two-stage IV analysis using the generalized method of moments-structural mean models (GMM-SMM) was conducted and the relative risk (RR) was calculated. In the first-stage analysis, we found that the single nucleotide polymorphism (SNP) rs2070600 of the receptor for advanced glycation end-products (AGER) gene meets all three general assumptions for a genetic IV in examining the causal association between sRAGE and risk of pancreatic cancer. The variant allele of SNP rs2070600 of the AGER gene was associated with lower levels of sRAGE, and it was associated neither with risk of pancreatic cancer nor with the confounding factors. It was a potentially strong IV (F statistic = 29.2). However, in the second-stage analysis, the GMM-SMM model failed to converge due to non-concavity, probably because of the small sample size. Therefore, the IV analysis could not support the causality of the association between serum sRAGE levels and risk of pancreatic cancer. Nevertheless, these analyses suggest that rs2070600 was a potentially good genetic IV for testing the causality between the risk of pancreatic cancer and sRAGE levels. A larger sample size is required to conduct a credible IV analysis.
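A small sketch of the first-stage instrument-strength check mentioned above (the F statistic for the SNP in a regression of sRAGE on genotype); the genotype frequency and effect size are invented, and an F above the conventional threshold of 10 is usually read as indicating a reasonably strong instrument.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 355                                    # 178 cases + 177 controls
genotype = rng.binomial(2, 0.25, n)        # rs2070600 variant-allele count (hypothetical frequency)
srage = 1200 - 150 * genotype + rng.normal(0, 300, n)   # serum sRAGE, pg/ml (synthetic)

first_stage = sm.OLS(srage, sm.add_constant(genotype)).fit()
print(f"first-stage F = {first_stage.fvalue:.1f}")       # instrument strength
```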

Relevance: 100.00%

Abstract:

Accurately forecasting the AC power output of a PV plant is important both for plant owners and for electric system operators. Two main categories of PV modeling are available: parametric and nonparametric. In this paper, a methodology using a nonparametric PV model is proposed, using as inputs several forecasts of meteorological variables from a numerical weather prediction model, together with actual AC power measurements from PV plants. The methodology was built in the R environment and uses Quantile Regression Forests as the machine learning tool to forecast AC power with a confidence interval. Real data from five PV plants were used to validate the methodology, and results show that daily production is predicted with an absolute cvMBE lower than 1.3%.
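The paper's workflow is built in R with Quantile Regression Forests; the Python sketch below mimics the idea by taking quantiles across the per-tree predictions of a random forest to obtain a central forecast with an interval, and computes a coefficient-of-variation of the mean bias error (cvMBE) as a skill measure. The NWP-style input features and plant data are synthetic, and per-tree quantiles are only a rough stand-in for true Quantile Regression Forests.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
n = 2000
X = np.column_stack([rng.uniform(0, 1000, n),       # forecast irradiance, W/m2
                     rng.uniform(0, 35, n)])        # forecast air temperature, degC
ac_power = 0.8 * X[:, 0] * (1 - 0.004 * (X[:, 1] - 25)) + rng.normal(0, 40, n)

train, test = slice(0, 1500), slice(1500, None)
rf = RandomForestRegressor(n_estimators=300, min_samples_leaf=5, random_state=0)
rf.fit(X[train], ac_power[train])

# Per-tree predictions -> empirical quantiles for a forecast interval.
tree_preds = np.stack([t.predict(X[test]) for t in rf.estimators_])
p10, p50, p90 = np.percentile(tree_preds, [10, 50, 90], axis=0)

mbe = np.mean(p50 - ac_power[test])
cv_mbe = 100 * abs(mbe) / np.mean(ac_power[test])   # % of mean measured power
print(f"cvMBE = {cv_mbe:.2f}%, mean 10-90% interval width = {np.mean(p90 - p10):.1f}")
```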

Relevance: 100.00%

Abstract:

A two-component mixture regression model that allows simultaneously for heterogeneity and dependency among observations is proposed. By specifying random effects explicitly in the linear predictor of the mixture probability and the mixture components, parameter estimation is achieved by maximising the corresponding best linear unbiased prediction type log-likelihood. Approximate residual maximum likelihood estimates are obtained via an EM algorithm in the manner of a generalised linear mixed model (GLMM). The method can be extended to a g-component mixture regression model with component densities from the exponential family, leading to the development of the class of finite mixture GLMMs. For illustration, the method is applied to analyse neonatal length of stay (LOS). It is shown that identification of pertinent factors that influence hospital LOS can provide important information for health care planning and resource allocation. (C) 2002 Elsevier Science B.V. All rights reserved.
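A bare-bones EM sketch for a two-component mixture of linear regressions (without the random effects and REML refinements described above), showing the alternation between responsibilities and weighted fits; the data and starting values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 10, n)
z = rng.random(n) < 0.4                                   # latent component labels
y = np.where(z, 1.0 + 0.5 * x, 6.0 - 0.3 * x) + rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), x])

beta = [np.array([0.0, 1.0]), np.array([5.0, -1.0])]      # initial coefficients
sigma, pi = [1.0, 1.0], np.array([0.5, 0.5])

def normal_pdf(resid, s):
    return np.exp(-0.5 * (resid / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(200):
    # E-step: posterior probabilities (responsibilities) of each component.
    dens = np.column_stack([pi[k] * normal_pdf(y - X @ beta[k], sigma[k]) for k in range(2)])
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per component, then mixing proportions.
    for k in range(2):
        w = resp[:, k]
        beta[k] = np.linalg.solve((X.T * w) @ X, X.T @ (w * y))
        sigma[k] = np.sqrt((w * (y - X @ beta[k]) ** 2).sum() / w.sum())
    pi = resp.mean(axis=0)

print("mixing proportions:", pi.round(2))
print("component coefficients:", [b.round(2) for b in beta])
```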

Relevance: 100.00%

Abstract:

Correlation and regression are two of the statistical procedures most widely used by optometrists. However, these tests are often misused or interpreted incorrectly, leading to erroneous conclusions from clinical experiments. This review examines the major statistical tests concerned with correlation and regression that are most likely to arise in clinical investigations in optometry. First, the use, interpretation and limitations of Pearson's product moment correlation coefficient are described. Second, the least squares method of fitting a linear regression to data, and methods for testing how well a regression line fits the data, are described. Third, the problems of using linear regression methods in observational studies, when there are errors associated with measuring the independent variable and when predicting a new value of Y for a given X, are discussed. Finally, methods for testing whether a non-linear relationship provides a better fit to the data, and for comparing two or more regression lines, are considered.
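A short illustration of the two core procedures discussed above, using scipy on synthetic data: Pearson's correlation coefficient and an ordinary least squares regression line, together with the r² that summarises how well the line fits. The clinical variables are hypothetical examples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.uniform(-6.0, -1.0, 40)                      # e.g. spherical refractive error (D)
y = 24.5 - 0.35 * x + rng.normal(0, 0.3, 40)         # e.g. axial length (mm), synthetic

r, p_r = stats.pearsonr(x, y)
fit = stats.linregress(x, y)                         # least squares line y = slope*x + intercept
print(f"Pearson r = {r:.2f} (p = {p_r:.3g})")
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}, r^2 = {fit.rvalue**2:.2f}")
```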

Relevance: 100.00%

Abstract:

National guidance and clinical guidelines recommended multidisciplinary teams (MDTs) for cancer services in order to bring specialists in relevant disciplines together, to ensure clinical decisions are fully informed, and to coordinate care effectively. However, the effectiveness of cancer teams was not previously evaluated systematically. A random sample of 72 breast cancer teams in England was studied (548 members in six core disciplines), stratified by region and caseload. Information about team constitution, processes, effectiveness, clinical performance, and members' mental well-being was gathered using appropriate instruments. Two input variables, team workload (P=0.009) and the proportion of breast care nurses (P=0.003), positively predicted overall clinical performance in a multivariate analysis using a two-stage regression model. There were significant correlations between individual team inputs, team composition variables, and clinical performance. Some disciplines consistently perceived their team's effectiveness differently from the mean. Teams with shared leadership of their clinical decision-making were most effective. The mental well-being of team members appeared significantly better than in previous studies of cancer clinicians, the NHS, and the general population. This study established that team composition, working methods, and workloads are related to measures of effectiveness, including the quality of clinical care. © 2003 Cancer Research UK.

Relevance: 100.00%

Abstract:

Our understanding of early spatial vision owes much to contrast masking and summation paradigms. In particular, the deep region of facilitation at low mask contrasts is thought to indicate a rapidly accelerating contrast transducer (e.g. a square-law or greater). In experiment 1, we tapped an early stage of this process by measuring monocular and binocular thresholds for patches of 1 cycle deg⁻¹ sine-wave grating. Threshold ratios were around 1.7, implying a nearly linear transducer with an exponent around 1.3. With this form of transducer, two previous models (Legge, 1984 Vision Research 24 385-394; Meese et al, 2004 Perception 33 Supplement, 41) failed to fit the monocular, binocular, and dichoptic masking functions measured in experiment 2. However, a new model with two stages of divisive gain control fits the data very well. Stage 1 incorporates nearly linear monocular transducers (to account for the high level of binocular summation and slight dichoptic facilitation), and monocular and interocular suppression (to fit the profound dichoptic masking). Stage 2 incorporates steeply accelerating transduction (to fit the deep regions of monocular and binocular facilitation), and binocular summation and suppression (to fit the monocular and binocular masking). With all model parameters fixed from the discrimination thresholds, we examined the slopes of the psychometric functions. The monocular and binocular slopes were steep (Weibull β ≈ 3-4) at very low mask contrasts and shallow (β ≈ 1.2) at all higher contrasts, as predicted by all three models. The dichoptic slopes were steep (β ≈ 3-4) at very low contrasts, and very steep (β > 5.5) at high contrasts (confirming Meese et al, loc. cit.). A crucial new result was that intermediate dichoptic mask contrasts produced shallow slopes (β ≈ 2). Only the two-stage model predicted the observed pattern of slope variation, providing good empirical support for a two-stage process of binocular contrast transduction. [Supported by EPSRC GR/S74515/01]
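The sketch below is an illustrative implementation of a two-stage divisive gain-control model of the general form described above (a nearly linear monocular stage with interocular suppression, followed by a steeply accelerating binocular stage). The parameter values and the response criterion are assumptions chosen for illustration, not the fitted values from the study.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative parameters (assumed, not fitted): stage 1 exponent near 1.3 as in the abstract.
m, S = 1.3, 1.0          # stage 1: monocular exponent and saturation constant
p, q, Z = 8.0, 6.5, 0.08  # stage 2: accelerating transduction with divisive gain control

def response(cL, cR):
    """Two-stage binocular response to left/right eye contrasts (in %)."""
    stage1_L = cL**m / (S + cL + cR)          # monocular transduction + interocular suppression
    stage1_R = cR**m / (S + cR + cL)
    b = stage1_L + stage1_R                   # binocular summation
    return b**p / (Z + b**q)                  # stage 2 divisive gain control

def threshold(mode, criterion=0.2):
    """Contrast at which the model response reaches a fixed criterion."""
    f = {"mono": lambda c: response(c, 0.0) - criterion,
         "bino": lambda c: response(c, c) - criterion}[mode]
    return brentq(f, 1e-4, 100.0)

t_mono, t_bino = threshold("mono"), threshold("bino")
print(f"monocular threshold ~{t_mono:.2f}%, binocular ~{t_bino:.2f}%, "
      f"summation ratio ~{t_mono / t_bino:.2f}")
```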

Relevance: 100.00%

Abstract:

The airline industry is at the forefront of many technological developments and is often a pioneer in adopting such innovations on a large scale. It needs to improve its efficiency, as current trends in input prices and competitive pressures show that airlines will face increasingly challenging market conditions. This paper focuses on the relationship between ICT investments and efficiency in the airline industry and employs a two-stage analytical approach combining DEA, SFA and a Tobit regression model. In this study, we first estimate the productivity of the airline industry using a balanced panel of 17 airlines over the period 1999-2004 with the Data Envelopment Analysis (DEA) and Stochastic Frontier Analysis (SFA) methods. We then evaluate the impacts of the determinants of productivity in the industry, concentrating on ICT. The results suggest that, despite all the negative shocks to the airline industry during the sample period, ICT had a positive effect on productivity during 1999-2004.
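A compact sketch of the two-stage efficiency analysis described above: an input-oriented CCR DEA model solved as a linear program for each airline, followed by a second-stage regression of the efficiency scores on an ICT variable (plain OLS here as a simple stand-in for the Tobit model). All data are synthetic.

```python
import numpy as np
import statsmodels.api as sm
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n = 17                                                 # decision-making units (airlines)
X = rng.uniform(1, 10, size=(n, 2))                    # inputs, e.g. labour and fuel
Y = (X @ np.array([[0.6], [0.4]])) * rng.uniform(0.7, 1.0, size=(n, 1))   # one output
ict = rng.uniform(0, 5, n)                             # hypothetical ICT investment index

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0: min theta s.t. the envelopment constraints."""
    n_dmu, n_in = X.shape
    n_out = Y.shape[1]
    c = np.zeros(n_dmu + 1); c[0] = 1.0                # variables: [theta, lambda_1..lambda_n]
    A_ub, b_ub = [], []
    for i in range(n_in):                              # sum_j lam_j x_ij <= theta * x_i,j0
        A_ub.append(np.concatenate(([-X[j0, i]], X[:, i]))); b_ub.append(0.0)
    for r in range(n_out):                             # sum_j lam_j y_rj >= y_r,j0
        A_ub.append(np.concatenate(([0.0], -Y[:, r]))); b_ub.append(-Y[j0, r])
    bounds = [(None, None)] + [(0, None)] * n_dmu
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds, method="highs")
    return res.x[0]

eff = np.array([dea_ccr_input(X, Y, j) for j in range(n)])

# Second stage: efficiency scores regressed on ICT (OLS used in place of Tobit for brevity).
print(sm.OLS(eff, sm.add_constant(ict)).fit().params)
```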