14 results for Error-location numbers
in Dalarna University College Electronic Archive
Abstract:
The aim of this study is to evaluate the variation in solar radiation data between different data sources that are freely available at the Solar Energy Research Center (SERC). The comparison between data sources is carried out for two locations: Stockholm, Sweden and Athens, Greece. For each location, data is gathered for different tilt angles: 0°, 30°, 45° and 60°, facing south. The full dataset is available in two Excel files: “Stockholm annual irradiation” and “Athens annual irradiation”. The World Radiation Data Center (WRDC) is taken as the reference for the comparison with the other datasets, because it has the longest recorded time span for Stockholm (1964–2010) and Athens (1964–1986), in the form of average monthly irradiation, expressed in kWh/m2. The indicator used for the data comparison is the estimated standard deviation. The mean bias error (MBE) and the root mean square error (RMSE) were also used as statistical indicators for the horizontal solar irradiation data. The variation in solar irradiation data falls into three categories: natural or inter-annual variability, variation due to different data sources, and variation due to different calculation models. The inter-annual variation is 140.4 kWh/m2, or 14.4%, for Stockholm and 124.3 kWh/m2, or 8.0%, for Athens. The estimated deviation for horizontal solar irradiation is 3.7% for Stockholm and 4.4% for Athens. This estimated deviation is 4.5% and 3.6% for Stockholm and Athens, respectively, at 30° tilt, 5.2% and 4.5% at 45° tilt, and 5.9% and 7.0% at 60° tilt. NASA’s SSE, SAM and RETScreen exhibited the highest deviation from the WRDC data for Stockholm, and Satel-light for Athens. The main source of variation is the difference in horizontal solar irradiation. The variation increases by 1-2% per degree of tilt when different calculation models, as used in PVSYST and Meteonorm, are applied.
The location and altitude of the data source did not directly influence the variation relative to the WRDC data. Further examination is suggested in order to improve the methodology for selecting locations; to examine the functional dependence of ground-reflected radiation on ambient temperature; to study the variation of ambient temperature and its impact on different solar energy systems; and to assess the impact of variation in solar irradiation and ambient temperature on system output.
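The two statistical indicators named in the abstract, the mean bias error (MBE) and the root mean square error (RMSE), can be sketched in a few lines of Python. The monthly irradiation values below are made up for illustration and are not taken from the WRDC dataset or from the study's Excel files:

```python
import math

def mbe(estimates, reference):
    """Mean bias error: average signed deviation from the reference."""
    return sum(e - r for e, r in zip(estimates, reference)) / len(reference)

def rmse(estimates, reference):
    """Root mean square error against the reference."""
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, reference)) / len(reference))

# Illustrative monthly horizontal irradiation (kWh/m2); not WRDC data.
reference = [10, 25, 60, 105, 150, 165, 160, 125, 80, 40, 15, 8]
candidate = [12, 24, 63, 100, 155, 160, 158, 130, 78, 42, 14, 9]

print(round(mbe(candidate, reference), 2))   # → 0.17
print(round(rmse(candidate, reference), 2))  # → 3.27
```

A near-zero MBE with a larger RMSE, as here, indicates that the candidate dataset scatters around the reference without a systematic offset.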
Abstract:
This report presents a study in control engineering: the speed of a DC motor is controlled by three controllers, PID, pole placement and fuzzy, and the advantages and disadvantages of each controller are discussed for loaded and unloaded scenarios, using Matlab. The brushless series-wound DC motor is very popular in industrial applications and control systems because of its high torque density, high efficiency and small size. First, suitable equations are developed for the DC motor. A PID controller is developed and tuned to obtain a faster step response. The simulation results of the PID controller are very good, and the controller is tuned further to decrease the overshoot error that is common in PID controllers. It is also argued that in an industrial environment these controllers are preferable to others, as PID controllers are easy to tune and cheap. The pole placement controller is a classic example of control engineering. Adding an integrator reduced the noise disturbances in the pole placement controller, which makes it a good choice for industrial applications. The fuzzy controller is introduced with a DC chopper to make the speed control of the DC motor smooth, and almost no steady-state error is observed. The simulations of the three controllers are compared, and the results show that the fuzzy controller outperforms the PID controller in terms of steady-state error and smoothness of the step response. The pole placement controller offers the widest range of control, because the designer can shape the step response according to the nature of the control system: the pole locations change the step response, and if the poles are near the origin the step response of the motor is fast.
Finally, a GUI for the three controllers is developed, which allows the user to select any controller and change its parameters according to the situation.
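As a rough illustration of the PID part of the abstract, here is a minimal discrete-time PID loop driving a first-order DC-motor model in Python (the thesis itself used Matlab). The motor constants and the gains are assumed for the sketch, not taken from the report; with integral action the speed settles at the setpoint with essentially no steady-state error:

```python
# Minimal discrete-time sketch of PID speed control for a first-order
# DC-motor model. Motor constants and PID gains are assumed for
# illustration; they are not the values tuned in the report.

def simulate_pid(kp, ki, kd, setpoint=100.0, dt=0.01, steps=1000):
    """Simulate a PID loop around a first-order motor; return the final speed."""
    tau, gain = 0.5, 1.0            # assumed motor time constant and DC gain
    speed, integral, prev_error = 0.0, 0.0, setpoint
    for _ in range(steps):
        error = setpoint - speed
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative
        speed += dt * (-speed + gain * u) / tau   # plant: tau*ds/dt = -s + gain*u
        prev_error = error
    return speed

# After 10 simulated seconds the speed sits essentially at the setpoint.
print(simulate_pid(kp=2.0, ki=5.0, kd=0.05))
```

The integral term is what removes the steady-state error; dropping `ki` to 0 would leave a permanent offset proportional to 1/(1 + kp·gain).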
Abstract:
Solutions to combinatorial optimization problems, such as problems of locating facilities, frequently rely on heuristics to minimize the objective function. The optimum is sought iteratively and a criterion is needed to decide when the procedure (almost) attains it. Pre-setting the number of iterations dominates in OR applications, which implies that the quality of the solution cannot be ascertained. A small, almost dormant, branch of the literature suggests using statistical principles to estimate the minimum and its bounds as a tool for deciding upon stopping and for evaluating the quality of the solution. In this paper we examine the functioning of statistical bounds obtained from four different estimators by using simulated annealing on p-median test problems taken from Beasley’s OR-library. We find the Weibull estimator and the 2nd-order jackknife estimator preferable, and the required sample size to be about 10, which is much less than the current recommendation. However, reliable statistical bounds are found to depend critically on a sample of heuristic solutions of high quality, and we give a simple statistic useful for checking that quality. We end the paper with an illustration of using statistical bounds in a problem of locating some 70 distribution centers of the Swedish Post in one Swedish region.
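Two of the minimum estimators named in the abstract can be sketched from order statistics of a sample of heuristic solution values. The formulas below, a second-order jackknife and a simple order-statistic estimator of the Weibull location parameter, are standard forms from this literature; the exact estimators evaluated in the paper may differ:

```python
# Point estimates of the unknown minimum from a sample of heuristic
# solution values. These are standard order-statistic forms from the
# literature on statistical bounds; the paper's estimators may differ.

def jackknife_2nd_order(values):
    """Second-order jackknife estimate of the minimum."""
    x = sorted(values)
    return 3 * x[0] - 3 * x[1] + x[2]

def weibull_location(values):
    """Order-statistic estimate of the Weibull location (minimum) parameter."""
    x = sorted(values)
    x1, x2, xn = x[0], x[1], x[-1]
    return (x1 * xn - x2 ** 2) / (x1 + xn - 2 * x2)

# Ten illustrative objective values from repeated heuristic runs.
sample = [1012, 1015, 1019, 1024, 1030, 1031, 1040, 1046, 1052, 1060]
print(jackknife_2nd_order(sample))          # → 1010, just below the best run
print(round(weibull_location(sample), 1))   # → 1011.8
```

Both estimates fall slightly below the best solution found (1012), which is the behaviour that makes them usable as a lower bound on the unknown optimum.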
Abstract:
Solutions to combinatorial optimization problems, such as p-median problems of locating facilities, frequently rely on heuristics to minimize the objective function. The minimum is sought iteratively and a criterion is needed to decide when the procedure (almost) attains it. However, pre-setting the number of iterations dominates in OR applications, which implies that the quality of the solution cannot be ascertained. A small branch of the literature suggests using statistical principles to estimate the minimum and to use the estimate either for stopping or for evaluating the quality of the solution. In this paper we use test problems taken from Beasley's OR-library and apply simulated annealing to these p-median problems. We do this for the purpose of comparing suggested methods of minimum estimation and, eventually, providing a recommendation for practitioners. The paper ends with an illustration: the problem of locating some 70 distribution centers of the Swedish Post in a region.
Abstract:
Combinatorial optimization problems are one of the most important types of problems in operational research. Heuristic and metaheuristic algorithms are widely applied to find good solutions. However, a common problem is that these algorithms do not guarantee that the solution coincides with the optimum; hence, many solutions to real-world OR problems are afflicted with uncertainty about their quality. The main aim of this thesis is to investigate the usability of statistical bounds for evaluating the quality of heuristic solutions to large combinatorial problems. The contributions of this thesis are both methodological and empirical. From a methodological point of view, the usefulness of statistical bounds on p-median problems is thoroughly investigated. The statistical bounds perform well in providing informative quality assessments under appropriate parameter settings, and they outperform the commonly used Lagrangian bounds. The statistical bounds are also shown to be comparable with deterministic bounds in quadratic assignment problems. As to the empirical research, environmental pollution has become a worldwide problem, and transportation can cause a great amount of pollution. A new method for calculating and comparing the CO2 emissions of online and brick-and-mortar retailing is proposed, leading to the conclusion that online retailing has significantly lower CO2 emissions. Another problem is that the Swedish regional division is under revision, and the effect of borders on the accessibility of public services concerns both residents and politicians. The analysis shows that borders hinder the optimal location of public services, and consequently the highest achievable economic and social utility may not be attained.
Abstract:
The rapid development of data transfer through the internet has made it easier to send data accurately and quickly to its destination. There are many transmission media for transferring data, such as e-mail; at the same time, it may be easy to modify and misuse valuable information through hacking. To transfer data securely to the destination without modification, there are approaches such as cryptography and steganography. This paper deals with image steganography as well as with different security issues, and gives a general overview of cryptography, steganography and digital watermarking. The problem of copyright violation of multimedia data has increased with the enormous growth of computer networks, which provide fast and error-free transmission of any unauthorized, and possibly manipulated, duplicate of multimedia information. To be effective for copyright protection, a digital watermark must be robust, i.e., difficult to remove from the object in which it is embedded despite a variety of possible attacks. To send the message safely and securely, we use watermarking: an invisible watermark embeds the message using the LSB (Least Significant Bit) steganographic technique. The standard LSB technique embeds the message in every pixel, but the proposed watermarking scheme embeds the message only along the image edges. Even if an attacker knows that the system uses the LSB technique, the correct message cannot be recovered. To make the system robust and secure, we add a cryptographic algorithm, the Vigenère square, so the message is transmitted as ciphertext, which is an added advantage of the proposed system. The standard Vigenère square works with either lower-case or upper-case letters; the proposed algorithm extends the Vigenère square with numbers as well, so the crypto key can be a combination of characters and numbers.
With these modifications to the existing algorithm, and the combination of cryptography and steganography, we develop a secure and strong watermarking method. The performance of this watermarking scheme has been analyzed by evaluating the robustness of the algorithm with PSNR (Peak Signal to Noise Ratio) and MSE (Mean Square Error) against the quality of the image for a large amount of data. The proposed scheme achieves a high PSNR of 89 dB with a small MSE of 0.0017. This suggests that the proposed watermarking system is secure and robust for hiding sensitive information in any digital system, because it combines the properties of both steganography and cryptography.
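The extended Vigenère square described above, operating over letters plus digits so that the key can mix characters and numbers, might look as follows. The alphabet ordering and the pass-through handling of symbols outside the alphabet are assumptions of this sketch, not details taken from the paper:

```python
import string

# Extended Vigenere over A-Z plus 0-9 (36 symbols), so keys can mix
# characters and numbers. Alphabet ordering and the pass-through of
# out-of-alphabet symbols are assumptions of this sketch.
ALPHABET = string.ascii_uppercase + string.digits
INDEX = {c: i for i, c in enumerate(ALPHABET)}
N = len(ALPHABET)

def vigenere(text, key, decrypt=False):
    """Shift each symbol by the matching key symbol, modulo 36."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text.upper()):
        if ch not in INDEX:               # leave spaces etc. untouched
            out.append(ch)
            continue
        k = INDEX[key.upper()[i % len(key)]]
        out.append(ALPHABET[(INDEX[ch] + sign * k) % N])
    return "".join(out)

cipher = vigenere("MEET AT 9PM", "KEY42")
print(vigenere(cipher, "KEY42", decrypt=True))  # → MEET AT 9PM
```

The ciphertext produced by such a key is then what the LSB step would embed along the image edges.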
Abstract:
Wholesale trade has an intermediate position between manufacturing and retail in the distribution channel. In modern economies, consumers buy few, if any, products directly from the manufacturer or producer. Instead, it is the wholesaler who is in direct contact with producers, buying goods in larger quantities and selling them in smaller quantities to retailers. Traditionally, the main function of a wholesaler has been to push goods along the distribution channel from producer to retailer or other non-end user. However, the function of wholesalers usually goes beyond the physical distribution of goods. Wholesalers also arrange storage, perform market analyses, promote trade and provide technical support to consumers (Riemers 1998). The existence of wholesalers (and other intermediaries) in the distribution channel is based on the effective and efficient performance of distribution services that are needed by producers and other members of the supply chain. When it comes to providing distribution services, producers usually do not enjoy the economies of scale that they have in production (Rosenbloom 2007), and this creates a space for wholesalers and other intermediaries. Even though recent developments in the distribution channel indicate that traditional wholesaling activities now compete with other supply chain organizations, wholesaling remains an important activity in many economies (Quinn and Sparks, 2007). In 2010, the Swedish wholesale trade sector consisted of approximately 46,000 firms and generated an annual turnover of 1,300 billion SEK (Företagsstatistiken, Statistics Sweden). In terms of turnover, wholesaling accounts for 20% of the gross domestic product and is thereby the third largest industry, behind manufacturing and a composite group of firms in other sectors of the service industry but ahead of retailing. This indicates that the wholesale trade sector is an important part of the Swedish economy.
The position of wholesaling is further reinforced when measuring productivity growth. Measured in terms of value added per employee, wholesaling experienced the largest productivity growth of all industries in the Swedish economy during the years 2000 through 2010. The fact that wholesale trade is an important part of a modern economy, and the positive development of the Swedish wholesale trade sector in recent decades, lead to several questions related to industry dynamics. The three topics examined in this thesis are firm entry, firm relocation and firm growth. The main question to be answered is: what factors influence new firm formation, firm relocation and firm growth in the Swedish wholesale trade sector?
Abstract:
Location models are used for planning the location of multiple service centers in order to serve a geographically distributed population. A cornerstone of such models is the measure of distance between the service center and a set of demand points, viz. the locations of the population (customers, pupils, patients and so on). Theoretical as well as empirical evidence supports the current practice of using the Euclidean distance in metropolitan areas. In this paper, we argue and provide empirical evidence that such a measure is misleading once location models are applied to rural areas with heterogeneous transport networks. This paper stems from the problem of finding an optimal allocation of a pre-specified number of hospitals in a large Swedish region with a low population density. We conclude that the Euclidean distance and network distances based on a homogeneous network (equal travel costs in the whole network) give approximately the same optima. However, network distances calculated from a heterogeneous network (different travel costs in different parts of the network) give widely different optima as the number of hospitals increases. In terms of accessibility, we find that the recent closure of hospitals and the non-optimal location of the remaining ones have increased the average travel distance for the population by 75%. Finally, aggregating the population misplaces the hospitals by 10 km on average.
Abstract:
In this paper, the p-median model is used to find the locations of retail stores that minimize CO2 emissions from consumer travel. The optimal locations are then compared with the existing retail locations, and the excess CO2 emissions relative to the optimal solution are calculated. The results show that by using the environmentally optimal locations, CO2 emissions from consumer travel could be reduced by approximately 25 percent.
Abstract:
The p-median problem is often used to locate p service centers by minimizing their distances to a geographically distributed demand (n). The optimal locations are sensitive to the geographical context, such as the road network and the demand points, especially when these are asymmetrically distributed in the plane. Most studies focus on evaluating the performance of the p-median model when p and n vary. To our knowledge, how the solutions change when the road network is altered is not a well-studied problem, especially in a real-world context. The aim of this study is to analyze how the optimal location solutions of the p-median model vary when the density of the road network is altered. The investigation is conducted by means of a case study in Dalecarlia, a region in Sweden with an asymmetrically distributed population (15,000 weighted demand points). To locate 5 to 50 service centers we use the national transport administration's official road network (NVDB), which consists of 1.5 million nodes. To find the optimal locations we start with 500 candidate nodes in the network and increase the number of candidate nodes in steps up to 67,000, using a simulated annealing algorithm with adaptive tuning of the temperature. The results show that there is limited improvement in the optimal solutions when the number of nodes in the road network increases and p is low. When p is high, the improvements are larger. The results also show that the choice of the best network depends on p: the larger p is, the denser the network needs to be.
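A toy version of the simulated annealing heuristic mentioned in the abstract, applied to a tiny p-median instance, can be sketched as follows. The swap neighbourhood and the geometric cooling schedule are common textbook choices; the study's adaptive temperature tuning is not reproduced here:

```python
import math
import random

# Toy simulated annealing for a p-median instance: swap one median at a
# time, accept worse moves with probability exp(-delta/t), cool geometrically.

def pmedian_cost(medians, demand, dist):
    """Total demand-weighted distance from every point to its nearest median."""
    return sum(w * min(dist[i][m] for m in medians) for i, w in enumerate(demand))

def simulated_annealing(p, demand, dist, iters=2000, t0=10.0, cooling=0.995, seed=0):
    rng = random.Random(seed)
    n = len(demand)
    current = rng.sample(range(n), p)
    cur_cost = pmedian_cost(current, demand, dist)
    best, best_cost = list(current), cur_cost
    t = t0
    for _ in range(iters):
        cand = list(current)
        cand[rng.randrange(p)] = rng.choice([j for j in range(n) if j not in current])
        cand_cost = pmedian_cost(cand, demand, dist)
        delta = cand_cost - cur_cost
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = list(current), cur_cost
        t *= cooling
    return sorted(best), best_cost

# 10 demand points on a line with unit weights; locate p = 2 centers.
demand = [1] * 10
dist = [[abs(i - j) for j in range(10)] for i in range(10)]
medians, cost = simulated_annealing(p=2, demand=demand, dist=dist)
print(medians, cost)
```

In a realistic setting, as in the study, `dist` would hold network distances between 15,000 demand points and the candidate nodes rather than distances on a line.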
Abstract:
This thesis consists of a summary and four self-contained papers. Paper [I] Following the 1987 report by the World Commission on Environment and Development, genuine saving has come to play a key role in the context of sustainable development, and the World Bank regularly publishes numbers for genuine saving on a national basis. However, these numbers are typically calculated as if the tax system were non-distortionary. This paper presents an analogue to genuine saving in a second-best economy, where the government raises revenue by means of distortionary taxation. We show how the social cost of public debt, which depends on the marginal excess burden, ought to be reflected in the genuine saving. We also illustrate by presenting calculations for Greece, Japan, Portugal, the U.K., the U.S. and the OECD average, showing that the numbers published by the World Bank are likely to be biased and may even give incorrect information as to whether an economy is locally sustainable. Paper [II] This paper examines the relationships among per capita CO2 emissions, per capita GDP and international trade, based on panel data spanning the period 1960-2008 for 150 countries. A distinction is also made between OECD and non-OECD countries to capture differences in this relationship between developed and developing economies. We apply panel unit root and cointegration tests, and estimate a panel error correction model. The results from the error correction model suggest that there are long-term relationships between the variables for the whole sample and for non-OECD countries. Finally, Granger causality tests show that there is bi-directional short-term causality between per capita GDP and international trade for the whole sample, and between per capita GDP and CO2 emissions for OECD countries.
Paper [III] Fundamental questions in economics are why some regions are richer than others, why their growth rates differ, whether their growth rates tend to converge, and what key factors explain economic growth. This paper deals with average income growth, net migration, and changes in unemployment rates at the municipal level in Sweden. The aim is to explore in depth the effects of possible underlying determinants, with a particular focus on local policy variables. The analysis is based on a three-equation model. Our results show, among other things, that increases in local public expenditure and in the income tax rate have negative effects on subsequent income growth. In addition, the results show conditional convergence, i.e. that the average income among the municipal residents tends to grow more rapidly in relatively poor local jurisdictions than in initially “richer” jurisdictions, conditional on the other explanatory variables. Paper [IV] This paper explores the relationship between income growth and income inequality using data at the municipal level in Sweden for the period 1992-2007. We estimate a fixed effects panel data growth model, where the within-municipality income inequality is one of the explanatory variables. Different inequality measures (the Gini coefficient, top income shares, and measures of inequality in the lower and upper parts of the income distribution) are examined. We find a positive and significant relationship between income growth and income inequality measured as the Gini coefficient and top income shares, respectively. In addition, while inequality in the upper part of the income distribution is positively associated with the income growth rate, inequality in the lower part of the income distribution seems to be negatively related to income growth.
Our findings also suggest that increased income inequality enhances growth more in municipalities with a high level of average income than in municipalities with a low level of average income.
Abstract:
The p-median problem is often used to locate p service facilities in a geographically distributed population. Important for the performance of such a model is the distance measure, which can vary with the accuracy of the road network. The first aim of this study is to analyze how the optimal location solutions of the p-median model vary when the road network is altered. It is hard to find an exact optimal solution for p-median problems; therefore, two heuristics are applied in this study, simulated annealing and a classic heuristic. The second aim is to compare the optimal location solutions obtained by the different algorithms for large p-median problems. The investigation is conducted by means of a case study in Dalecarlia, a rural region with an asymmetrically distributed population. The study shows that using a more accurate road network gives better solutions for the optimal locations, regardless of which algorithm is used and of how many service facilities are optimized for. It is also shown that the simulated annealing algorithm is not only much faster than the classic heuristic used here but also gives better location solutions in most cases.
Abstract:
Transportation is seen as one of the major sources of CO2 emissions today, and the impact of increased transport in retailing should not be underestimated. Most previous studies have focused on transportation and the underlying trips in general, while very few studies have addressed the specific effects that, for instance, intra-city shopping trips generate. Furthermore, most existing methods for estimating emissions are based on macro-data designed to generate national or regional inventory projections. There is a lack of studies using micro-data based methods that can distinguish between driver behaviour and the locational effects induced by shopping trips, which is an important precondition for energy-efficient urban planning. The aim of this study is to implement a micro-data method to estimate and compare the CO2 emissions induced by intra-urban car travel to retail destinations for durable goods (DG) and non-durable goods (NDG). We estimate the emissions from the perspectives of travel behaviour and store location. The study is conducted by means of a case study in the city of Borlänge, where GPS tracking data on intra-urban car travel was collected from 250 households. We find that a behavioural change towards CO2-optimal car travel has the potential to decrease emissions to 36% (DG) and to 25% (NDG) of the emissions induced by car-based shopping trips today. There is also a potential to reduce the CO2 emissions induced by poorly located intra-urban shopping trips by 54%, and if consumers selected the closest of the 8 existing stores, the CO2 emissions would be reduced by 37% of the current emissions induced by NDG shopping trips.
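The micro-data idea, computing per-trip emissions from the driven distance times an emission factor and comparing actual trips against hypothetical trips to the closest store, can be illustrated as follows. The emission factor and the trip distances are invented for the sketch and are not the study's GPS data:

```python
# Per-trip CO2 approximated as driven distance times an emission factor,
# comparing actual trips with hypothetical trips to the closest store.
# The factor and the trip distances are invented for illustration.

EMISSION_G_PER_KM = 180  # assumed average car emission factor, g CO2/km

def trip_emissions(distance_km):
    """CO2 in grams for one car trip of the given length."""
    return distance_km * EMISSION_G_PER_KM

# (actual driven distance, distance to the closest existing store), in km
trips = [(6.2, 2.1), (3.5, 3.5), (8.0, 1.9), (4.4, 2.6)]

actual = sum(trip_emissions(a) for a, _ in trips)
closest = sum(trip_emissions(c) for _, c in trips)
print(f"actual: {actual:.0f} g, closest-store: {closest:.0f} g "
      f"({100 * closest / actual:.0f}% of actual)")
```

A study-grade version would replace the constant factor with speed- and vehicle-dependent factors estimated from the GPS traces, which is what makes the micro-data approach sensitive to driver behaviour as well as to location.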