911 results for logistics regression
Abstract:
The multiple linear regression model plays a key role in statistical inference and has extensive applications in the business, environmental, physical and social sciences. Multicollinearity is a considerable problem in multiple regression analysis: when the regressor variables are multicollinear, it becomes difficult to make precise statistical inferences about the regression coefficients. Several statistical methods can address this problem; those discussed in this thesis are the ridge regression, Liu, two-parameter biased and LASSO estimators. Firstly, an analytical comparison on the basis of risk was made among the ridge, Liu and LASSO estimators under the orthonormal regression model. I found that LASSO dominates the least squares, ridge and Liu estimators over a significant portion of the parameter space in high dimensions. Secondly, a simulation study was conducted to compare the performance of the ridge, Liu and two-parameter biased estimators by the mean squared error criterion. I found that the two-parameter biased estimator performs better than its corresponding ridge regression estimator. Overall, the Liu estimator performs better than both the ridge and two-parameter biased estimators.
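Under an orthonormal design (X'X = I) the estimators compared above have simple closed forms, which makes the risk comparison easy to reproduce in miniature. The following Monte Carlo sketch is illustrative only, not the thesis's experiment; the tuning constants k, d and lam are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
p, sigma, n_rep = 50, 1.0, 2000          # dimension, noise sd, replications
beta = rng.normal(0, 1, p)               # true coefficients
k, d, lam = 0.5, 0.5, 0.5                # illustrative tuning constants

mse = {"ols": 0.0, "ridge": 0.0, "liu": 0.0, "lasso": 0.0}
for _ in range(n_rep):
    b_ols = beta + rng.normal(0, sigma, p)       # OLS estimate when X'X = I
    b_ridge = b_ols / (1 + k)                    # ridge shrinks by 1/(1+k)
    b_liu = (1 + d) / 2 * b_ols                  # Liu estimator under X'X = I
    b_lasso = np.sign(b_ols) * np.maximum(np.abs(b_ols) - lam, 0)  # soft threshold
    for name, b in [("ols", b_ols), ("ridge", b_ridge),
                    ("liu", b_liu), ("lasso", b_lasso)]:
        mse[name] += np.sum((b - beta) ** 2) / n_rep

print(mse)  # empirical risk of each estimator
```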
Abstract:
We experimentally demonstrate a 7-dB reduction of the nonlinearity penalty in 40-Gb/s CO-OFDM transmission over 2000 km using support vector machine regression-based equalization. Simulation in WDM CO-OFDM shows up to a 12-dB enhancement in Q-factor compared with linear equalization.
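The abstract does not detail the equalizer; purely as an illustration of the general idea (learning to undo a nonlinear channel by regression), here is a sketch using scikit-learn's SVR on synthetic 16-QAM symbols. The toy phase-noise channel, the pilot/payload split and the parameter choices are assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
levels = np.array([-3, -1, 1, 3])
tx = rng.choice(levels, 4000) + 1j * rng.choice(levels, 4000)   # 16-QAM symbols

# Toy stand-in channel: amplitude-dependent phase rotation plus Gaussian noise.
rx = tx * np.exp(1j * 0.05 * np.abs(tx) ** 2)
rx += 0.1 * (rng.normal(size=4000) + 1j * rng.normal(size=4000))

X = np.column_stack([rx.real, rx.imag])                  # received I/Q features
eq_i = SVR(kernel="rbf", C=10.0).fit(X[:3000], tx.real[:3000])  # train on pilots
eq_q = SVR(kernel="rbf", C=10.0).fit(X[:3000], tx.imag[:3000])

est = eq_i.predict(X[3000:]) + 1j * eq_q.predict(X[3000:])      # equalize payload
print("post-equalization MSE:", np.mean(np.abs(est - tx[3000:]) ** 2))
```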
Abstract:
Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication, theoretical guarantees and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator ({\em message}) algorithm for solving these issues. The algorithm applies feature selection in parallel for each subset using a regularized regression or Bayesian variable selection method, calculates the `median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments showing excellent performance in feature selection, estimation, prediction, and computation time relative to the usual competitors.
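The four steps translate almost directly into code. A compact sketch, with scikit-learn's LassoCV standing in for the per-subset regularized selection step (the description allows any regularized regression or Bayesian selector), and a majority vote implementing the median of the 0/1 inclusion indicators:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def message(X, y, m):
    """Sketch of MEdian Selection Subset AGgregation with m subsets."""
    n, p = X.shape
    parts = np.array_split(np.arange(n), m)            # horizontal partition

    # Step 1: feature selection on each subset (Lasso as the selector here).
    masks = [LassoCV(cv=3).fit(X[idx], y[idx]).coef_ != 0 for idx in parts]

    # Step 2: 'median' inclusion index -- keep features selected by a majority.
    keep = np.sum(masks, axis=0) >= (m + 1) // 2

    # Steps 3-4: refit the kept features on each subset, then average.
    coefs = [LinearRegression().fit(X[idx][:, keep], y[idx]).coef_
             for idx in parts]
    beta = np.zeros(p)
    beta[keep] = np.mean(coefs, axis=0)
    return beta

# toy usage: 3 true signals among 30 features, 600 samples split 4 ways
rng = np.random.default_rng(2)
X = rng.normal(size=(600, 30))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + rng.normal(size=600)
print(np.round(message(X, y, m=4)[:5], 2))
```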
While sample space partitioning is useful in handling datasets with large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In the thesis, I propose a new embarrassingly parallel framework named {\em DECO} for distributed variable selection and parameter estimation. In {\em DECO}, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does NOT depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
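A schematic version of the pipeline, assuming the decorrelation operator takes the form F = (XX'/p)^{-1/2} with a small ridge term added for numerical stability, and a plain Lasso standing in for "any algorithm designed for high-dimensional problems" (the framework's final refinement stage is omitted here):

```python
import numpy as np
from scipy.linalg import inv, sqrtm
from sklearn.linear_model import LassoCV

def deco(X, y, m, ridge=1e-4):
    """Schematic DECO: decorrelate once, then fit column blocks independently."""
    n, p = X.shape
    G = X @ X.T / p + ridge * np.eye(n)       # ridge term guards ill-conditioning
    F = np.real(inv(sqrtm(G)))                # decorrelation operator (XX'/p)^(-1/2)
    Xd, yd = F @ X, F @ y                     # decorrelated data

    blocks = np.array_split(np.arange(p), m)  # vertical (feature) partition
    beta = np.zeros(p)
    for cols in blocks:                       # each block could run on its own worker
        beta[cols] = LassoCV(cv=3).fit(Xd[:, cols], yd).coef_
    return beta
```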
For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework {\em DEME} (DECO-message) by leveraging both the {\em DECO} and {\em message} algorithms. The new framework first partitions the dataset in the sample space into row cubes using {\em message} and then partitions the feature space of the cubes using {\em DECO}. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted on one computer in parallel. The results are then synthesized via the {\em DECO} and {\em message} algorithms in reverse order to produce the final output. The whole framework is extremely scalable.
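Schematically, the blocking step amounts to cutting the data matrix along both axes; a minimal sketch of just that partitioning (the per-block fitting and reverse-order synthesis would follow the message and DECO sketches above):

```python
import numpy as np

def block_partition(X, y, m_rows, m_cols):
    """Split (X, y) into m_rows x m_cols blocks, each small enough for one machine."""
    row_parts = np.array_split(np.arange(X.shape[0]), m_rows)   # message step
    col_parts = np.array_split(np.arange(X.shape[1]), m_cols)   # DECO step
    return [(X[np.ix_(r, c)], y[r]) for r in row_parts for c in col_parts]
```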
Abstract:
Quantile regression (QR) was first introduced by Roger Koenker and Gilbert Bassett in 1978. It is robust to outliers, which can strongly affect the least squares estimator in linear regression. Instead of modeling the mean of the response, QR provides an alternative way to model the relationship between quantiles of the response and covariates. QR can therefore be widely used to solve problems in econometrics, environmental sciences and health sciences. Sample size is an important factor in the planning stage of experimental designs and observational studies. In ordinary linear regression, sample size may be determined based on either precision analysis or power analysis with closed-form formulas. There are also methods that calculate sample size for QR based on precision analysis, such as Jennen-Steinmetz and Wellek (2005), and a method to estimate sample size for QR based on power analysis was proposed by Shao and Wang (2009). In this paper, a new method is proposed to calculate sample size based on power analysis under a hypothesis test of covariate effects. Even though an error distribution assumption is not necessary for QR analysis itself, researchers have to make assumptions about the error distribution and covariate structure in the planning stage of a study to obtain a reasonable estimate of sample size. In this project, both parametric and nonparametric methods are provided to estimate the error distribution. Since the proposed method is implemented in R, the user can choose either a parametric distribution or nonparametric kernel density estimation for the error distribution. The user also needs to specify the covariate structure and effect size to carry out the sample size and power calculation. The performance of the proposed method is further evaluated using numerical simulation. The results suggest that the sample sizes obtained from our method provide empirical powers that are close to the nominal power level, for example, 80%.
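The thesis implements its method in R; purely as an illustration of the simulation-based power idea, here is a sketch in Python using statsmodels' QuantReg with a Wald-type test of a single covariate effect. The error distribution, covariate structure and effect size below are placeholder assumptions that a user would normally specify:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def qr_power(n, effect=0.5, tau=0.5, n_sim=300, alpha=0.05, seed=0):
    """Empirical power of the tau-th quantile slope test at sample size n."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)                         # assumed covariate structure
        y = effect * x + rng.standard_t(df=3, size=n)  # assumed error distribution
        res = QuantReg(y, sm.add_constant(x)).fit(q=tau)
        if res.pvalues[1] < alpha:                     # Wald test of the slope
            rejections += 1
    return rejections / n_sim

# increase n until the empirical power reaches the nominal 80% level
for n in (50, 100, 200, 400):
    print(n, qr_power(n))
```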
Abstract:
The design of reverse logistics networks has now emerged as a major issue for manufacturers, not only in developed countries where legislation and societal pressures are strong, but also in developing countries where the adoption of reverse logistics practices may offer a competitive advantage. This paper presents a new model for partner selection for reverse logistics centres in green supply chains. The model offers three advantages. Firstly, it enables economic, environmental, and social factors to be considered simultaneously. Secondly, by integrating fuzzy set theory and artificial immune optimization technology, it enables both quantitative and qualitative criteria to be considered simultaneously throughout the whole decision-making process. Thirdly, it extends the flat criteria structure for partner selection evaluation for reverse logistics centres to a more suitable hierarchical structure. The applicability of the model is demonstrated by means of an empirical application based on data from a Chinese electronic equipment and instruments manufacturing company.
Characterising granuloma regression and liver recovery in a murine model of schistosomiasis japonica
Abstract:
In hepatic schistosomiasis, the main pathologies are the egg-induced granulomatous response and the development of extensive fibrosis. We used a Schistosoma japonicum-infected mouse model to characterise the multi-cellular pathways associated with recovery from hepatic fibrosis following clearance of the infection with the anti-schistosomal drug praziquantel. In the recovering liver, splenomegaly, granuloma density and liver fibrosis were all reduced. Inflammatory cell infiltration into the liver remained evident, although the numbers of neutrophils, eosinophils and macrophages were significantly decreased. Transcriptomic analysis revealed the up-regulation of fatty acid metabolism genes and identified peroxisome proliferator-activated receptor alpha as an upstream regulator of liver recovery. The aryl hydrocarbon receptor signalling pathway, which regulates xenobiotic metabolism, was also differentially up-regulated. These findings provide a better understanding of the mechanisms associated with the regression of hepatic schistosomiasis.
Abstract:
This paper identifies areas for future research by addressing accounting issues faced by management accountants practicing in hospitality organizations. Specifically, the article focuses on the use of the uniform system of accounts by operating properties, the usefulness of allocating support costs to operated departments, extending our understanding of operating costs and performance measurement systems, and the certification of practicing accountants.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Purpose – In the field of humanitarianism, cluster thinking has been suggested as a solution to the lack of coordinated disaster response. Clusters for diverse functions, including sheltering, logistics and water and sanitation, can be viewed as an effort to achieve functional coordination. The purpose of this paper is to contribute to a greater understanding of the potential of cluster concepts, using supply chain coordination and inter-cluster coordination. The focus is on the conceptual level rather than on specific means of coordination.
Design/methodology/approach – The cluster concept in humanitarian relief, along with some key empirical issues, is based on a case study. The concept is then compared to the literature on clusters and coordination in order to develop a theoretical framework with propositions on the trade-offs between different types of coordination.
Findings – The results provide important reflections on one of the major trends in the contemporary development of humanitarian logistics. This paper shows that there is a trade-off between different types of coordination, with horizontal coordination inside clusters drawing attention away from important issues of the supply chain as well as from the need to coordinate among the clusters.
Research limitations/implications – There is a need for more in-depth case studies of experiences with clusters in various operations. Various perspectives should be taken into account, including the field, responding agencies, beneficiaries, donors, military and commercial service providers, both during and between disasters.
Practical implications – The paper presents the trade-offs between different types of coordination, in which basic aims such as standardisation through functional coordination must be balanced with cross-functional and vertical coordination in order to more successfully serve users' composite needs.
Originality/value – The focus on possible trade-offs between different types of coordination is an important complement to the literature, which often assumes simultaneously high degrees of horizontal and vertical coordination.
Abstract:
Purpose: There is a need for theory development within the field of humanitarian logistics to understand logistics needs in different stages of a crisis and how to meet them. This paper aims to discuss three dimensions identified in logistics and organization theories and how they relate to three different cases of humanitarian logistics operations: the regional concept of the International Federation of Red Cross and Red Crescent Societies, the development and working of the United Nations Joint Logistics Centre, and the coordination challenges of military logistics in UN-mandated peacekeeping operations. The purpose is to build a framework to be used in further studies.
Design/methodology/approach: A framework for the study of humanitarian logistics along three dimensions is developed, followed by a discussion of the chosen cases in relation to these dimensions. The framework will be used as a basis for the case studies to be undertaken, for the purpose of understanding and identifying new questions and needs for other or revised concepts from theory.
Findings: The paper shows the relevance of a wide literature to the issues pertinent to humanitarian logistics. There is considerable promise in the extant literature on logistics, SCM and coordination, but this needs to be confronted with the particular issues seen in the humanitarian logistics setting to achieve further theory development.
Originality/value: The major contribution of the paper lies in the breadth of theoretical perspectives presented and combined in a preliminary theoretical framework. This is applied more specifically in the three case studies described in the paper.
Abstract:
The purpose of this study is to explore the link between decentralization and the impact of natural disasters through empirical analysis. It addresses the importance of the role of local government in disaster response through different means of decentralization. By studying data available for 50 countries, it develops knowledge on the role of national government in setting policy that allows flexibility and decision making at a local level, and on how this devolution of power influences the outcome of disasters. The study uses Aaron Schneider's definition and rankings of decentralization, the EM-DAT database to identify the number of people affected by disasters on average per year, as well as World Bank indicators and the Human Development Index (HDI), to model the role of local decentralization in mitigating disasters. With a multivariate regression, it explains the number of affected people by fiscal, administrative and political decentralization, government expenses, percentage of urbanization, total population, population density, the HDI and the overall Logistics Performance Index (LPI). The main results are that total population, the overall LPI and fiscal decentralization are all significantly related to the number of people affected by disasters for the countries and period studied. These findings have implications for government policy, indicating that fiscal decentralization, by allowing local governments to control a bigger proportion of their countries' revenues and expenditures, plays a role in reducing the number of people affected by disasters. This can be explained by the fact that local governments understand their own needs better in both disaster prevention and response, which helps in taking the proper decisions to mitigate the number of people affected by a disaster. The reduced involvement of national government might also play a role in shortening reaction times when facing a disaster. The main conclusion of this study is that fiscal control by local governments can help reduce the number of people affected by disasters.
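The regression specification described above can be written compactly; a sketch with statsmodels, where the column names and the data are hypothetical stand-ins for the variables listed (real values would come from Schneider's rankings, EM-DAT, the World Bank and the HDI/LPI indices):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({                      # one synthetic row per country
    "affected_per_year": rng.gamma(2, 1e5, 50),
    "fiscal_dec": rng.uniform(0, 1, 50),
    "admin_dec": rng.uniform(0, 1, 50),
    "political_dec": rng.uniform(0, 1, 50),
    "gov_expenses": rng.uniform(10, 40, 50),
    "urban_pct": rng.uniform(20, 90, 50),
    "total_pop": rng.lognormal(16, 1.5, 50),
    "pop_density": rng.lognormal(4, 1, 50),
    "hdi": rng.uniform(0.4, 0.95, 50),
    "lpi": rng.uniform(2, 4, 50),
})

model = smf.ols(
    "affected_per_year ~ fiscal_dec + admin_dec + political_dec + gov_expenses"
    " + urban_pct + total_pop + pop_density + hdi + lpi",
    data=df,
).fit()
print(model.summary())   # inspect the significance of fiscal_dec, total_pop, lpi
```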
Abstract:
This thesis studies, in collaboration with a Finnish logistics service company, gainsharing and the development of gainsharing models in a logistics outsourcing context. The purpose of the study is to create various gainsharing model variations for the use of a service provider and its customers in order to develop and enhance the customer's processes and operations, create savings and improve collaboration between the companies. The study concentrates on offering gainsharing model alternatives for companies operating in an internal logistics outsourcing context. Additionally, the prerequisites for a gainsharing arrangement are introduced. At the beginning of the study, an extensive literature review is conducted, exploring three main themes: collaboration in an outsourcing context, key account management and the gainsharing philosophy. Customer expectations and experiences are gathered by interviewing the case company's employees and its key customers. The knowledge and experience of customers and other experts are utilized in designing the gainsharing model prototypes. The result of this thesis is five gainsharing model variations based on the empirical and theoretical data. In addition, instructions related to each created model are given to the case company, but these are not available in this paper.