796 results for Empirical Algorithm Analysis
Abstract:
Combinatorial optimization problems are among the most important problem types in operational research. Heuristic and metaheuristic algorithms are widely applied to find good solutions. However, these algorithms do not guarantee that the solution coincides with the optimum; hence, many solutions to real-world OR problems are afflicted with uncertainty about their quality. The main aim of this thesis is to investigate the usability of statistical bounds for evaluating the quality of heuristic solutions to large combinatorial problems. The contributions of this thesis are both methodological and empirical. From a methodological point of view, the usefulness of statistical bounds for p-median problems is thoroughly investigated. The statistical bounds perform well in providing an informative quality assessment under appropriate parameter settings, and they outperform the commonly used Lagrangian bounds. The statistical bounds are also shown to be comparable with deterministic bounds in quadratic assignment problems. As to the empirical research, environmental pollution has become a worldwide problem, and transportation causes a great amount of pollution. A new method for calculating and comparing the CO2 emissions of online and brick-and-mortar retailing is proposed; it leads to the conclusion that online retailing has significantly lower CO2 emissions. Another problem is that the Swedish regional division is under revision, and the effect of borders on public service accessibility concerns both residents and politicians. The analysis shows that borders hinder the optimal location of public services and, consequently, that the highest achievable economic and social utility may not be attained.
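To illustrate the general idea of a statistical bound, the following minimal sketch fits an extreme-value (Weibull) distribution to the objective values of repeated heuristic runs and uses the estimated location parameter as a point estimate of the optimum. The toy heuristic and all names are hypothetical; this sketches the general technique, not the thesis's exact procedure.

```python
import numpy as np
from scipy.stats import weibull_min

def statistical_bound(heuristic, n_runs=50, seed=0):
    """Estimate the unknown optimum of a minimization problem from repeated
    heuristic runs via an extreme-value (Weibull) fit to the run results."""
    rng = np.random.default_rng(seed)
    values = np.array([heuristic(rng) for _ in range(n_runs)])
    # The location parameter of the fitted Weibull serves as a point estimate
    # of the optimum; the gap to the best run found indicates solution quality.
    shape, loc, scale = weibull_min.fit(values)
    return loc, float(values.min())

def toy_heuristic(rng):
    """Hypothetical heuristic: a noisy random search on a toy objective."""
    x = rng.uniform(-5, 5, size=10)
    return float(np.sum(x**2) * 0.01 + rng.gamma(2.0, 1.0))

est_optimum, best_found = statistical_bound(toy_heuristic)
print(f"estimated optimum (statistical bound): {est_optimum:.3f}")
print(f"best heuristic value found:            {best_found:.3f}")
```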
Abstract:
Background: Previous assessment methods for PG recognition used sensor mechanisms that may cause discomfort. To avoid the stress of applying wearable sensors, computer vision (CV) based diagnostic systems for PG recognition have been proposed. The main constraints in these methods are the laboratory setup procedures: specially designed colored dresses for the patients were required to segment the test body from a specifically colored background. Objective: To develop an image processing tool for home assessment of Parkinson gait (PG) by analyzing motion cues extracted during the gait cycles. Methods: The system is based on the idea that a normal body attains equilibrium during the gait by aligning the body posture with the axis of gravity. Due to rigidity in muscular tone, persons with PD fail to align their bodies with the axis of gravity: the leaning posture of PD patients appears to fall forward, whereas a normal posture remains erect throughout the gait. Patients with PD walk with a shortened stride angle (less than 15 degrees on average) and high variability in stride frequency, whereas a normal gait exhibits a constant stride frequency with an average stride angle of 45 degrees. To analyze PG, levodopa-responsive patients and normal controls were videotaped over several gait cycles. First, the test body is segmented in each frame of the gait video based on the pixel contrast from the background to form a silhouette. Next, the center of gravity of this silhouette is calculated. The silhouette is then skeletonized to extract the motion cues. The two motion cues were the stride frequency, based on the cyclic leg motion, and the lean frequency, based on the angle between the leaning torso tangent and the axis of gravity. The differences in the peaks of the stride and lean frequencies between PG and normal gait are calculated using cosine similarity measurements. Results: High cosine dissimilarity was observed in the stride and lean frequencies between PG and normal gait. High variation was found in the stride intervals of PG, whereas constant stride intervals were found in the normal gait. Conclusions: We propose this algorithm as a means to eliminate laboratory constraints and discomfort during PG analysis. Installing the tool on a home computer with a webcam allows assessment of gait in the home environment.
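As an illustration of the final comparison step only, a minimal cosine-similarity computation between two hypothetical, already-extracted frequency feature vectors might look like this:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1 = identical direction)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical frequency-domain features (e.g., spectral magnitudes at selected gait frequencies).
normal_gait = np.array([0.9, 0.1, 0.05, 0.02])   # energy concentrated at one stride frequency
pd_gait     = np.array([0.4, 0.3, 0.25, 0.2])    # spread-out spectrum: variable stride intervals

similarity = cosine_similarity(normal_gait, pd_gait)
print(f"cosine similarity: {similarity:.3f}, dissimilarity: {1 - similarity:.3f}")
```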
Abstract:
A customer is presumed to gravitate to a facility according to the distance to it and its attractiveness. However, when locating the facility, the presumption is that the customer opts for the shortest route to the nearest facility. This paradox was recently resolved by the introduction of the gravity p-median model. The model is yet to be implemented and tested empirically. We implemented the model in an empirical problem of locating locksmiths, vehicle inspections, and retail stores of vehicle spare parts, and we compared the solutions with those of the p-median model. We found the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to those of the p-median model or gives unstable solutions due to a non-concave objective function.
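For illustration, a minimal sketch of a gravity p-median style objective is shown below, assuming a Huff-type exponential distance decay with parameter beta; the exact functional form used in the paper may differ, and all names and data are hypothetical.

```python
import numpy as np

def gravity_p_median_cost(dist, demand, attract, chosen, beta=0.1):
    """Demand-weighted expected travel distance when customers split their
    patronage among the chosen facilities with Huff-type (gravity) probabilities.

    dist:    (n_customers, n_sites) distance matrix
    demand:  (n_customers,) demand weights
    attract: (n_sites,) facility attractiveness
    chosen:  indices of the p open facilities
    """
    d = dist[:, chosen]                              # distances to the open sites
    util = attract[chosen] * np.exp(-beta * d)       # gravity utilities
    prob = util / util.sum(axis=1, keepdims=True)    # Huff choice probabilities
    expected_dist = (prob * d).sum(axis=1)           # expected distance per customer
    return float(np.dot(demand, expected_dist))

# Toy example: 4 customers, 3 candidate sites, open p = 2 facilities.
rng = np.random.default_rng(1)
dist = rng.uniform(1, 10, size=(4, 3))
demand = np.ones(4)
attract = np.array([1.0, 2.0, 1.5])
print(gravity_p_median_cost(dist, demand, attract, chosen=[0, 2]))
```

In the classic p-median model, by contrast, each customer patronizes only the nearest open facility, so the objective reduces to the demand-weighted sum of nearest distances.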
Abstract:
This paper examines the effects of the establishment of Ikea stores in Kalmar and Karlstad on trade and retail within the two cities, as well as on trade and retail in the nearby neighboring municipalities and in more peripheral municipalities in both regions. After the establishment of the Ikea stores, Kalmar and Karlstad experienced significant growth in trade and retail. The questions, however, are: how large is this growth in the two cities; how have locations at different distances from Ikea been affected; what impact was there on different retail segments and business branches; and how large is the catchment area of the emerging new large-scale retail locations? These questions, in addition to a few others, are investigated in this paper. The thesis starts with an introduction chapter containing the background of the topic, the problem description, the investigated questions, the purpose, and the outline of the paper. The next chapter presents the frame of reference, which consists of a literature review and a theoretical framework about external shopping centers and their impact on retail and regional trade development; it also includes information gathered from previous studies, technical reports, and other available sources on the subject. The third chapter describes the methods used to collect the primary and secondary data needed for the purpose of this study. Then follows the empirical framework, which presents the results of the conducted research, followed by the analysis and, finally, the discussion and conclusions. Mixed methods are used as the research strategy in this thesis: the primary (qualitative) data were collected through telephone interviews, and the secondary (quantitative) data through documents and desk research. The gathered data are analyzed and organized in a way that allows a comparative analysis to present the findings and draw conclusions. The results showed that a newly established Ikea retail store outside the city boundaries has many effects on the city center as well as on the neighboring municipalities. The city center does not seem to be affected negatively; on the contrary, positive effects were observed in both regions, linked to the increased inflow of customers from the external retail area, which is known as the spillover effect. The neighboring towns and municipalities, on the other hand, are more negatively affected, especially in the trade of non-convenience goods, as consumers in these towns and municipalities start to travel to the Ikea area and the large external retail center to do their purchasing; the substitution effect is then said to occur. Moreover, the more distant municipalities do not seem to be significantly affected by the establishment of Ikea. These effects, whether positive or negative, can be monitored through a few trade parameters, such as turnover, sales index, and consumer expenditure, which are very useful for measuring developments and changes in trade and retail in a given place.
Abstract:
Aim. The aim of this study was to describe, explore and explain the concept of sustainability in nursing. Background. Although researchers in nursing and medicine have emphasised the issue of sustainability and health, the concept of sustainability in nursing is undefined and poorly researched. A need exists for theoretical and empirical studies of sustainability in nursing. Design. Concept analysis as developed by Walker and Avant. Method. Data were derived from dictionaries, international healthcare organisations and literature searches in the CINAHL and MEDLINE databases. Inclusive years for the search ranged from 1990 to 2012. A total of fourteen articles were found that referred to sustainability in nursing. Results. Sustainability in nursing involves six defining attributes: ecology, environment, future, globalism, holism and maintenance. Antecedents of sustainability include climate change, environmental impact and awareness, confidence in the future, responsibility and a willingness to change. Consequences of sustainability in nursing include education in the areas of ecology, environment and sustainable development, as well as sustainability as part of nursing academic programs and of the description of the academic subject of nursing. Sustainability should also be part of national and international healthcare organisations. The concept was clarified herein by giving it a definition. Conclusion. Sustainability in nursing was explored and found to contribute to sustainable development, with the ultimate goal of maintaining an environment that does not harm current and future generations' opportunities for good health. This concept analysis provides recommendations for the healthcare sector to incorporate sustainability and provides recommendations for future research.
Abstract:
In all applications of clone detection it is important to have precise and efficient clone identification algorithms. This paper proposes and outlines a new algorithm, KClone, for clone detection that incorporates a novel combination of lexical and local dependence analysis to achieve precision while retaining speed. The paper also reports the initial results of a case study using an implementation of KClone with which we have been experimenting. The results indicate the ability of KClone to find type-1, type-2, and type-3 clones compared to token-based and PDG-based techniques. The paper also reports the results of an initial empirical study of the performance of KClone compared to CCFinderX.
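The KClone algorithm itself is not reproduced here. As a hedged illustration of the lexical side of clone detection only (token normalization plus window hashing, roughly how token-based detectors find type-1/type-2 clones), a minimal sketch might look like the following; all names and window sizes are hypothetical, and the dependence-analysis phase is omitted.

```python
import hashlib
import re
from collections import defaultdict

def normalize(line):
    """Crude type-2 normalization: replace identifiers and numeric literals with placeholders."""
    line = re.sub(r"\b[A-Za-z_]\w*\b", "ID", line)   # identifiers -> ID
    return re.sub(r"\b\d+\b", "NUM", line)           # numeric literals -> NUM

def find_clone_pairs(lines, window=4):
    """Group line offsets whose normalized 'window'-line blocks hash identically."""
    index = defaultdict(list)
    for i in range(len(lines) - window + 1):
        block = "\n".join(normalize(l) for l in lines[i:i + window])
        digest = hashlib.sha1(block.encode()).hexdigest()
        index[digest].append(i)
    return [offs for offs in index.values() if len(offs) > 1]

code = """x = 1
y = x + 2
print(y)
z = 3
a = 10
b = a + 20
print(b)
c = 30""".splitlines()
print(find_clone_pairs(code, window=3))   # reports the renamed (type-2) clone blocks, e.g. [0, 4]
```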
Abstract:
Applying optimization algorithms to PDE-based models of groundwater remediation can greatly reduce remediation cost. However, groundwater remediation analysis requires computationally expensive simulations; therefore, effective parallel optimization could greatly reduce the computational expense. The optimization algorithm used in this research is the Parallel Stochastic Radial Basis Function (RBF) method, which is designed for global optimization of computationally expensive functions with multiple local optima and does not require derivatives. In each iteration of the algorithm, an RBF is updated based on all the evaluated points in order to approximate the expensive function. The new RBF surface is then used to generate the next set of points, which are distributed to multiple processors for evaluation. The criteria for selecting the next evaluation points are the estimated function value and the distance from all previously evaluated points. Algorithms created for serial computing are not necessarily efficient in parallel, so the Parallel Stochastic RBF is a different algorithm from its serial ancestor. The algorithm is applied to two groundwater Superfund remediation sites: the Umatilla Chemical Depot and the former Blaine Naval Ammunition Depot. In this study, the formulation adopted treats pumping rates as decision variables in order to remove the plume of contaminated groundwater. Groundwater flow and contaminant transport are simulated with MODFLOW-MT3DMS. For both problems, computation takes a large amount of CPU time, especially for the Blaine problem, which requires nearly fifty minutes of simulation for a single set of decision variables. Thus, an efficient algorithm and powerful computing resources are essential in both cases. The results are discussed in terms of parallel computing metrics, i.e., speedup and efficiency. We find that, with the use of up to 24 parallel processors, the results of the Parallel Stochastic RBF algorithm are excellent, with speedup efficiencies close to or exceeding 100%.
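A minimal sketch of the candidate-selection step of a stochastic RBF method is shown below, using scipy's RBFInterpolator as the surrogate, uniformly random candidate points, and an equal weighting of the two selection criteria. This illustrates the general idea only and is not the published parallel algorithm; all names and settings are assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.spatial.distance import cdist

def select_batch(X_eval, y_eval, bounds, n_candidates=500, batch=4, w=0.5, seed=0):
    """Pick the next batch of evaluation points for parallel workers by scoring
    random candidates on (a) predicted RBF value and (b) distance to known points."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    cand = rng.uniform(lo, hi, size=(n_candidates, len(lo)))

    surrogate = RBFInterpolator(X_eval, y_eval)          # fit RBF surface to evaluated points
    pred = surrogate(cand)                               # estimated function values
    dist = cdist(cand, X_eval).min(axis=1)               # distance to nearest evaluated point

    # Normalize both criteria to [0, 1]; low predicted value and large distance are preferred.
    v = (pred - pred.min()) / (np.ptp(pred) + 1e-12)
    d = 1.0 - (dist - dist.min()) / (np.ptp(dist) + 1e-12)
    score = w * v + (1 - w) * d
    return cand[np.argsort(score)[:batch]]               # one point per processor

# Toy usage on a 2-D problem.
X = np.random.default_rng(1).uniform(-2, 2, size=(10, 2))
y = (X**2).sum(axis=1)
print(select_batch(X, y, bounds=np.array([[-2, 2], [-2, 2]])))
```

For reference, parallel speedup is commonly defined as S(p) = T(1)/T(p) and efficiency as E(p) = S(p)/p, so efficiencies above 100% indicate superlinear speedup.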
Abstract:
Existing distributed hydrologic models are too complex and computationally demanding to be used as rapid-forecasting policy-decision tools, or even as classroom educational tools. In addition, platform dependence, specific input/output data structures, and non-dynamic data interaction with pluggable software components inside the existing proprietary frameworks restrict these models to specialized user groups. RWater is a web-based hydrologic analysis and modeling framework that utilizes the commonly used R software within the HUBzero cyberinfrastructure of Purdue University. RWater is designed as an integrated framework for distributed hydrologic simulation, along with subsequent parameter optimization and visualization schemes. RWater provides a platform-independent web-based interface, flexible data integration capacity, grid-based simulations, and user extensibility. RWater uses RStudio to simulate hydrologic processes on raster-based data obtained through conventional GIS pre-processing. The program integrates the Shuffled Complex Evolution (SCE) algorithm for parameter optimization. Moreover, RWater enables users to produce descriptive statistics and visualizations of the outputs at different temporal resolutions. The applicability of RWater is demonstrated on two watersheds in Indiana for multiple rainfall events.
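As a rough illustration of the calibration step only, the sketch below calibrates a toy linear-reservoir runoff model against synthetic observations, with scipy's differential evolution standing in for the SCE algorithm and Python standing in for RWater's R environment; all names and data are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

def linear_reservoir(rainfall, k, s0=0.0):
    """Toy runoff model: storage S receives rainfall and drains at rate k*S per step."""
    s, flows = s0, []
    for r in rainfall:
        s += r
        q = k * s
        s -= q
        flows.append(q)
    return np.array(flows)

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 2.0, size=100)                         # synthetic rainfall series
observed = linear_reservoir(rain, k=0.35) + rng.normal(0, 0.1, 100)

def rmse(params):
    """Objective for calibration: root-mean-square error of simulated vs observed flow."""
    simulated = linear_reservoir(rain, k=params[0])
    return float(np.sqrt(np.mean((simulated - observed) ** 2)))

result = differential_evolution(rmse, bounds=[(0.01, 0.99)], seed=1)
print("calibrated k:", result.x[0], "RMSE:", result.fun)
```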
Abstract:
After more than forty years of studying growth, two classes of growth models have emerged: exogenous and endogenous growth models. Since both try to mimic the same set of long-run stylized facts, they are observationally equivalent in some respects. Our goals in this paper are twofold. First, we discuss the time-series properties of growth models in a way that is useful for assessing their fit to the data. Second, we investigate whether these two models successfully conform to U.S. post-war data. We use cointegration techniques to estimate and test long-run capital elasticities, exogeneity tests to investigate the exogeneity status of TFP, and Granger-causality tests to examine the temporal precedence of TFP with respect to infrastructure expenditures. The empirical evidence is robust in confirming the existence of a unit long-run capital elasticity. The analysis of TFP reveals that it is not weakly exogenous in the exogenous growth model. The Granger-causality test results show unequivocally that there is no evidence that TFP, in either model, precedes infrastructure expenditures without being preceded by it. On the contrary, we find some evidence that infrastructure investment precedes TFP. Our estimated impact of infrastructure on TFP lies roughly in the interval (0.19, 0.27).
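For illustration, a Granger-causality test of the kind described can be run with statsmodels, here on simulated series rather than the paper's data; variable names are hypothetical.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200
infrastructure = np.cumsum(rng.normal(0, 1, n))                       # simulated I(1) series
tfp = 0.3 * np.roll(infrastructure, 2) + np.cumsum(rng.normal(0, 1, n))  # lagged toy dependence

# Test whether the second column (infrastructure) Granger-causes the first (TFP).
data = pd.DataFrame({"tfp": tfp, "infrastructure": infrastructure})
results = grangercausalitytests(data[["tfp", "infrastructure"]], maxlag=4, verbose=False)
for lag, res in results.items():
    fstat, pvalue = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {fstat:.2f}, p = {pvalue:.3f}")
```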
Abstract:
We compare three frequently used volatility modelling techniques: GARCH, Markovian switching, and cumulative daily volatility models. Our primary goal is to highlight a practical and systematic way to measure the relative effectiveness of these techniques. The evaluation comprises an analysis of the validity of the statistical requirements of the various models and of their performance in simple options hedging strategies; the latter puts them to the test in a "real life" application. Though there was not much difference between the three techniques, a tendency in favour of the cumulative daily volatility estimates, based on tick data, seems clear. As the improvement is not very big, the message for the practitioner - based on the restricted evidence of our experiment - is that he will probably not be losing much by working with the Markovian switching method. This highlights that, in terms of volatility estimation, no clear winner exists among the more sophisticated techniques.
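For illustration, a GARCH(1,1) fit of the kind compared in the paper can be obtained with the third-party arch package, here on simulated returns rather than the paper's data:

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=6, size=1000) * 0.8        # simulated daily returns (in %)

# GARCH(1,1) with a constant mean; the fitted conditional volatility path could then be
# compared against alternatives such as Markov-switching or cumulative (tick-based) measures.
model = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1)
result = model.fit(disp="off")
print(result.summary())
print(result.conditional_volatility[:5])               # first few one-step conditional volatilities
```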
Abstract:
Empirical evidence suggests that real exchange rates are characterized by the presence of a near-unit root and additive outliers. Recent studies have found evidence in favor of PPP reversion by using the quasi-differencing unit root tests of Elliott et al. (1996) (ERS), which are more efficient against local alternatives but are still based on least squares estimation. Unit root tests based on the least squares method usually tend to bias inference towards stationarity when additive outliers are present. In this paper, we incorporate quasi-differencing into M-estimation to construct a unit root test that is robust not only to a near-unit root but also to the non-Gaussian behavior provoked by additive outliers. We revisit the PPP hypothesis and find less evidence in favor of PPP reversion when the non-Gaussian behavior of real exchange rates is taken into account.
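The quasi-differenced (ERS/DF-GLS) unit root test referred to above can be run, for illustration, with the arch package's DFGLS class on a simulated near-unit-root series contaminated with additive outliers. This shows the standard least-squares version only, not the paper's robust M-estimation variant.

```python
import numpy as np
from arch.unitroot import DFGLS

rng = np.random.default_rng(0)
n = 400
y = np.zeros(n)
for t in range(1, n):                                   # near-unit-root AR(1): rho = 0.97
    y[t] = 0.97 * y[t - 1] + rng.normal()
outlier_idx = rng.choice(n, size=5, replace=False)
y[outlier_idx] += rng.choice([-8.0, 8.0], size=5)       # additive outliers

test = DFGLS(y)                                         # ERS quasi-differenced unit root test
print(test.summary())                                   # statistic, p-value, critical values
```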
Abstract:
This manuscript empirically assesses the effects of political institutions on economic growth. It analyzes how political institutions affect economic growth at different stages of democratization and economic development by means of dynamic panel estimation with interaction terms. The new empirical results show that political institutions work as a substitute for democracy in promoting economic growth. In other words, political institutions are important for increasing economic growth mainly when democracy is not consolidated. Moreover, political institutions are extremely relevant to economic outcomes in periods of transition to democracy and in poor countries with high ethnic fractionalization.
Abstract:
Despite the commonly held belief that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of this feature of the data. We use exhaustive Monte Carlo simulations to investigate the importance of the restrictions implied by common cyclical features for estimates and forecasts based on vector autoregressive models. First, we show that the "best" empirical model developed without common cycle restrictions need not nest the "best" model developed with those restrictions, due to possible differences in the lag lengths chosen by model selection criteria for the two alternative models. Second, we show that the costs of ignoring common cyclical features in vector autoregressive modelling can be high, both in terms of forecast accuracy and efficient estimation of variance decomposition coefficients. Third, we find that the Hannan-Quinn criterion performs best among model selection criteria in simultaneously selecting the lag length and rank of vector autoregressions.
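A hedged illustration of lag selection by the Hannan-Quinn criterion, using statsmodels' VAR on simulated data (not the paper's Monte Carlo design):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 300
e = rng.normal(size=(n, 2))
y = np.zeros((n, 2))
for t in range(2, n):                                   # simulate a simple bivariate VAR(2)
    y[t] = 0.5 * y[t - 1] + 0.2 * y[t - 2] + e[t]

data = pd.DataFrame(y, columns=["y1", "y2"])
selection = VAR(data).select_order(maxlags=8)
print(selection.summary())                              # AIC, BIC, FPE and HQIC comparisons
print("lag chosen by Hannan-Quinn:", selection.selected_orders["hqic"])
```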
Abstract:
In this work we investigate the small-sample properties and robustness of DSGE model parameter estimates. We take the Smets and Wouters (2007) model as a baseline and evaluate the performance of two estimation procedures: the Simulated Method of Moments (SMM) and Maximum Likelihood (ML). We examine the empirical distribution of the parameter estimates and its implications for impulse-response and variance-decomposition analyses under correct specification and misspecification. Our results point to a poor performance of SMM and to some patterns of bias in the impulse-response and variance-decomposition analyses based on ML estimates in the misspecification cases considered.
Abstract:
Empirical evidence shows that larger firms pay higher wages than smaller ones. This wage premium is called the firm size wage effect. The firm size effect on wages may be attributed to many factors, such as productivity differentials, efficiency wages, the prevention of union formation, or rent sharing. The present study uses quantile regression to investigate the firm size wage effect. By offering insight into who benefits from the wage premium, quantile regression helps eliminate and refine possible explanations. The estimated results are consistent with the hypothesis that the higher wages paid by large firms can be explained by the difference in monitoring costs that large firms face. The results also suggest that more highly skilled workers are more often found at larger firms.
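A minimal sketch of a quantile-regression wage equation of this kind, using statsmodels on simulated data with hypothetical variable names:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "log_firm_size": rng.normal(4, 1.5, n),
    "skill": rng.normal(0, 1, n),
})
# Simulated log wages with a firm-size premium that varies across the wage distribution.
df["log_wage"] = (1.5 + 0.06 * df["log_firm_size"] + 0.3 * df["skill"]
                  + rng.normal(0, 0.4, n) * (1 + 0.2 * df["log_firm_size"]))

for q in (0.1, 0.5, 0.9):          # estimate the firm-size premium at different quantiles
    fit = smf.quantreg("log_wage ~ log_firm_size + skill", df).fit(q=q)
    print(f"quantile {q}: firm-size coefficient = {fit.params['log_firm_size']:.3f}")
```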