951 results for "Worst-case execution-time"


Relevance: 100.00%

Abstract:

General Summary. Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. The thesis can roughly be divided into two parts: the first, corresponding to the first two chapters, investigates the link between trade and the environment; the second, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and fourth chapters explore the productivity effects of agglomeration: the computed spillover effects between different sectors indicate how cluster formation might be productivity enhancing. The last chapter is not about how to better understand the world but about how to measure it, and it was a great pleasure to work on. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much, and how fast, did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize the results in Google Earth. A short summary of each of the five chapters is provided below.

The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH: comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE: comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients provided by the World Bank (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) and use a gravity-type framework to isolate the two effects. Our study covers 48 countries, classified into 29 Southern and 19 Northern countries, and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and of similar magnitude. However, when looking at world trade, the effects become very small because of the high share of North-North trade, for which we have no a priori expectations about the signs of these effects. Popular fears about the trade effects of differences in environmental regulations might therefore be exaggerated.

The second chapter is entitled "Is Trade Bad for the Environment? Decomposing Worldwide SO2 Emissions, 1990-2000". We first construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit of labour that vary across countries, periods and manufacturing sectors. We then use these original data (covering 31 developed and 31 developing countries) to decompose worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition). We find that the positive scale effect (+9.5%) and the negative technique effect (-12.5%) are the main driving forces of emission changes. Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. In a first experiment, we construct a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual world and this no-trade world allows us (ignoring price effects) to compute a static first-order trade effect. This effect increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, it is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels are obtained by reallocating labour across sectors within each country, subject to country-employment and world industry-production constraints. Using linear programming techniques, we show that actual emissions are 90% below the worst case, but that they could still be reduced by another 80% if emissions were minimized. The findings of this chapter go together with those of chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past.

Turning to the economic geography part of the thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", is a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework of Ciccone (2002) but extends it to include sectoral disaggregation and a temporal dimension. This allows us to formally write present productivity as a function of past productivity and other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Dynamic panel techniques allow us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. At the sectoral level, positive cross-sector and negative own-sector externalities appear to be present in manufacturing, while financial services display strong positive own-sector effects.

The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of the center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia.

To sum up, this thesis makes three main contributions. First, it provides new estimates of the orders of magnitude of the role of trade in the globalisation-and-environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
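
As a rough illustration of the center-of-mass idea used in the last chapter, the sketch below weights city locations by an economic mass, averages them as 3-D vectors, and projects the interior center of mass radially back onto the Earth's surface (one common convention; the chapter's exact construction may differ). The cities and weights are invented placeholders, not the data behind the chapter's results.

```python
# Minimal sketch: world economic center of gravity as a 3-D center of mass.
# City coordinates and GDP weights are illustrative placeholders.
import math

cities = [
    # (latitude, longitude, economic weight) -- hypothetical values
    (40.7, -74.0, 1.5),   # New York
    (51.5,  -0.1, 1.0),   # London
    (35.7, 139.7, 1.4),   # Tokyo
    (31.2, 121.5, 0.9),   # Shanghai
]

def center_of_gravity(points):
    """Weighted center of mass on the unit sphere, projected back to lat/lon."""
    x = y = z = w_sum = 0.0
    for lat, lon, w in points:
        phi, lam = math.radians(lat), math.radians(lon)
        x += w * math.cos(phi) * math.cos(lam)
        y += w * math.cos(phi) * math.sin(lam)
        z += w * math.sin(phi)
        w_sum += w
    x, y, z = x / w_sum, y / w_sum, z / w_sum
    # The center of mass lies inside the Earth; project it radially outward.
    lat = math.degrees(math.atan2(z, math.hypot(x, y)))
    lon = math.degrees(math.atan2(y, x))
    return lat, lon

print(center_of_gravity(cities))
```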

Relevance: 100.00%

Abstract:

Debris flows are among the most dangerous processes in mountainous areas due to their rapid rate of movement and long runout zone. Sudden and rather unexpected impacts not only damage buildings and infrastructure but also threaten human lives. Medium- to regional-scale susceptibility analyses allow the identification of the most endangered areas and suggest where further detailed studies have to be carried out. Since data availability is usually the key limiting factor for larger regions, empirical models with low data requirements are suitable for first overviews. In this study, a susceptibility analysis was carried out for the Barcelonnette Basin, situated in the southern French Alps. A worst-case scenario was first modelled by means of a methodology based on empirical rules for source identification and on the empirical angle-of-reach concept for the 2-D runout computation. In a second step, scenarios for high, medium and low frequency events were developed. A comparison with the footprints of a few mapped events indicates reasonable results but suggests a high dependency on the quality of the digital elevation model. This emphasises the need for careful interpretation of the results, while remaining conscious of the inherent assumptions of the model used and of the quality of the input data.
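
For intuition, the angle-of-reach concept caps the runout of a flow with a single empirical angle: measured from the source, the flow is assumed to stop where a line drawn at that angle intersects the terrain. A minimal sketch, with an illustrative angle rather than one calibrated for the Barcelonnette Basin:

```python
# Minimal sketch of the empirical angle-of-reach (Fahrboeschung) concept.
# The 11 degree angle below is an illustrative value, not from the study.
import math

def max_runout_distance(drop_height_m: float, reach_angle_deg: float) -> float:
    """Horizontal travel distance L satisfying tan(angle) = H / L."""
    return drop_height_m / math.tan(math.radians(reach_angle_deg))

# Example: a source 400 m above the valley floor with an 11 degree reach angle.
print(f"{max_runout_distance(400.0, 11.0):.0f} m")  # ~2058 m
```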

Relevance: 100.00%

Abstract:

In this work, the calcium-induced aggregation of phosphatidylserine liposomes is probed by analysing the kinetics of the process as well as the aggregate morphology. This novel characterization of liposome aggregation involves the use of static and dynamic light-scattering techniques to obtain kinetic exponents and fractal dimensions. For salt concentrations larger than 5 mM, a diffusion-limited aggregation regime is observed and the Brownian kernel properly describes the time evolution of the diffusion coefficient. For slow kinetics, a slightly modified multiple-contact kernel is required. In both cases, a time-evolution model based on the numerical resolution of Smoluchowski's equation is proposed in order to establish a theoretical description of the aggregating system. Such a model provides an alternative procedure for determining the dimerization constant, which might supply valuable information about the interaction mechanisms between phospholipid vesicles.
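
The following is a minimal sketch of how such a time-evolution model can be set up: Smoluchowski's coagulation equation, truncated at a maximum aggregate size and integrated with explicit Euler steps. A constant kernel stands in for the Brownian kernel here, and all rate and step values are illustrative, not fitted to the liposome data.

```python
# Minimal sketch of a numerical resolution of Smoluchowski's coagulation
# equation, truncated at aggregate size KMAX. Illustrative values only.
import numpy as np

K = 0.5          # constant aggregation kernel (illustrative units)
KMAX = 50        # largest aggregate size tracked
DT = 0.01
STEPS = 2_000

n = np.zeros(KMAX + 1)
n[1] = 1.0       # start from monomers only (normalised concentration)

for _ in range(STEPS):
    dn = np.zeros_like(n)
    for k in range(1, KMAX + 1):
        # gain: i + (k - i) coalescence events; the 1/2 avoids double counting
        gain = 0.5 * sum(K * n[i] * n[k - i] for i in range(1, k))
        # loss: k-mers consumed by aggregating with any other cluster
        loss = n[k] * K * n[1:].sum()
        dn[k] = gain - loss
    n += DT * dn  # explicit Euler step

# Analytical check for the constant kernel: the total cluster concentration
# should decay roughly as 1 / (1 + K * t / 2).
print("total clusters:", n[1:].sum())
```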

Relevance: 100.00%

Abstract:

This study was precipitated by several failures of flexible pipe culverts due to apparent inlet flotation. A survey of Iowa County Engineers revealed 31 culvert failures on pipes greater than 72 in. in diameter in eight Iowa counties within the past five years. No particular hydrologic, topographic or geotechnical environment appeared to be especially susceptible to failure; however, most failures seemed to occur on pipes flowing in inlet control. Geographically, most of the failures were in the southern and western sections of Iowa. The forces acting on a culvert pipe are quantified, and a worst-case scenario, in which the pipe is completely plugged, is evaluated to determine the magnitude of the forces that must be resisted by a tie-down or headwall. Concrete headwalls or slope collars are recommended for most pipes over 4 feet in diameter.
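
A back-of-the-envelope version of the worst-case force balance reads as follows: a completely plugged, air-filled pipe submerged in water displaces its full volume, and the tie-down or headwall must resist the displaced weight minus the pipe's own weight. The dimensions and unit weights in this sketch are illustrative, not taken from the report.

```python
# Minimal sketch of the worst-case uplift check for a plugged, air-filled
# culvert pipe. Geometry and weights below are illustrative assumptions.
import math

GAMMA_W = 62.4        # unit weight of water, lb/ft^3

def net_uplift(diameter_ft: float, length_ft: float, pipe_weight_plf: float) -> float:
    """Net buoyant force (lb) a tie-down or headwall must resist."""
    displaced = GAMMA_W * math.pi * diameter_ft**2 / 4.0 * length_ft
    return displaced - pipe_weight_plf * length_ft

# Example: a 72 in. (6 ft) pipe, 60 ft long, weighing 15 lb per linear foot.
print(f"{net_uplift(6.0, 60.0, 15.0):,.0f} lb")
```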

Relevance: 100.00%

Abstract:

Basic acceptance testing is an essential part of monitoring the maturity of an S60 platform release candidate. Basic acceptance testing is also performed to verify that the software is fit for release. Test results are always wanted as quickly as possible. Moreover, the testing team's workload has gradually grown, because there are more projects and because more correction-containing and customised test sets are being tested. This Master's thesis investigates whether automating part of the test set would shorten test execution time and ease the testers' workload. The investigation is carried out by automating part of the test set, and the experiences are presented in this thesis.

Relevance: 100.00%

Abstract:

This paper presents a novel image classification scheme for benthic coral reef images that can be applied to both single-image and composite mosaic datasets. The proposed method can be configured to the characteristics of individual datasets (e.g., the size of the dataset, number of classes, resolution of the samples, color information availability and class types). It uses completed local binary pattern (CLBP), grey level co-occurrence matrix (GLCM), Gabor filter response, and opponent angle and hue channel color histograms as feature descriptors. For classification, either k-nearest neighbor (KNN), neural network (NN), support vector machine (SVM) or probability density weighted mean distance (PDWMD) is used. The combination of features and classifiers that attains the best results is presented, together with guidelines for selection. The accuracy and efficiency of the proposed method are compared with other state-of-the-art techniques using three benthic and three texture datasets. The proposed method achieves the highest overall classification accuracy of any of the tested methods and has moderate execution time. Finally, the proposed classification scheme is applied to a large-scale image mosaic of the Red Sea to create a completely classified thematic map of the reef benthos.
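
A minimal sketch of one feature/classifier pairing from this menu, GLCM texture descriptors fed to a KNN classifier, is shown below. It omits the CLBP, Gabor and color features of the full scheme, and it assumes scikit-image (0.19 or later, where the function is named graycomatrix) and scikit-learn are available; the training data here is random noise standing in for annotated benthic patches.

```python
# Minimal sketch: GLCM texture features + k-nearest-neighbour classification.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(patch: np.ndarray) -> np.ndarray:
    """Contrast/homogeneity/energy/correlation at 4 orientations (8-bit patch)."""
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# train_patches / train_labels would come from an annotated benthic dataset;
# random placeholders are used here so the sketch runs standalone.
rng = np.random.default_rng(0)
train_patches = rng.integers(0, 256, size=(20, 64, 64), dtype=np.uint8)
train_labels = rng.integers(0, 3, size=20)          # e.g. coral / algae / sand

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(np.array([glcm_features(p) for p in train_patches]), train_labels)
print(clf.predict([glcm_features(train_patches[0])]))
```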

Relevance: 100.00%

Abstract:

Background: Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed-memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results: In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed with Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications or to deploy PTC on different HPC environments. Conclusions: The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of T-Coffee that could not be aligned on a single machine due to memory and execution-time constraints. The web portal provides a user-friendly solution.

Relevance: 100.00%

Abstract:

Venous cannula orifice obstruction is an underestimated problem during augmented cardiopulmonary bypass (CPB), which can potentially be reduced with redesigned, virtually wall-less cannulas as compared with traditional percutaneous venous control cannulas. A bench model was developed that allows for simulation of the vena cava with various affluent orifices, venous collapse and a worst-case scenario with regard to cannula position. Flow (Q) was measured sequentially for right atrial + hepatic + renal + iliac drainage scenarios, using a centrifugal pump and an experimental bench set-up (afterload 60 mmHg). At 1500, 2000 and 2500 RPM in the atrial position, the Q values were 3.4, 6.03 and 8.01 versus 0.77*, 0.43* and 0.58* l/min (*p<0.05) for the wall-less and the Biomedicus® cannula, respectively; the corresponding pressure values were -15.18, -31.62 and -74.53 versus -46.0*, -119.94* and -228.13* mmHg. At the hepatic position, the Q values were 3.34, 6.67 and 9.26 versus 2.3*, 0.42* and 0.18* l/min, and the pressure values were -10.32, -20.25 and -42.83 versus -23.35*, -119.09* and -239.38* mmHg. At the renal position, the Q values were 3.43, 6.56 and 8.64 versus 2.48*, 0.41* and 0.22* l/min, and the pressure values were -9.64, -20.98 and -63.41 versus -20.87, -127.68* and -239* mmHg, respectively. At the iliac position, the Q values were 3.43, 6.01 and 9.25 versus 1.62*, 0.55* and 0.58* l/min; the pressure values were -9.36, -33.57 and -44.18 versus -30.6*, -120.27* and -228* mmHg, respectively. Our experimental evaluation demonstrates that the redesigned, virtually wall-less cannulas, which allow direct venous drainage at practically all intra-venous orifices, outperform the commercially available control cannula, with superior flow at reduced suction levels for all scenarios tested.

Relevance: 100.00%

Abstract:

Sudoku problems are among the best-known and most enjoyed pastimes, with a never-diminishing popularity, but over the last few years these problems have gone from an entertainment to an interesting research area, and a twofold interesting area, in fact. On the one side, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are being actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior they can be used as benchmark problems for refining and testing solving algorithms and approaches. Also, thanks to their rich inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for solving and modeling Sudoku problems, namely Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. To study the empirical hardness of GSP, we define a series of instance generators that differ in the level of balancing they guarantee between the constraints of the problem, by finely controlling how the holes are distributed over the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP. Finally, we provide a study of the correlation between backbone variables (variables that take the same value in all solutions of an instance) and the hardness of GSP.
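
To make the GSP definition concrete, here is a minimal sketch of a solver for instances with rectangular m x n block regions (side N = m * n), using plain backtracking over the row, column and block all-different constraints. It is a toy: the experiments described above rely on dedicated CSP and SAT solvers.

```python
# Minimal backtracking solver for Generalized Sudoku with m x n blocks.
def solve_gsp(grid, m, n):
    N = m * n

    def ok(r, c, v):
        if any(grid[r][j] == v for j in range(N)): return False
        if any(grid[i][c] == v for i in range(N)): return False
        br, bc = r - r % m, c - c % n          # top-left cell of the block
        return all(grid[br + i][bc + j] != v
                   for i in range(m) for j in range(n))

    for r in range(N):
        for c in range(N):
            if grid[r][c] == 0:                 # 0 marks a hole
                for v in range(1, N + 1):
                    if ok(r, c, v):
                        grid[r][c] = v
                        if solve_gsp(grid, m, n):
                            return True
                        grid[r][c] = 0
                return False                    # no value fits: backtrack
    return True                                 # no holes left: solved

# A 6x6 instance with 2x3 blocks; solution existence is not guaranteed in GSP.
puzzle = [[0] * 6 for _ in range(6)]
puzzle[0][0] = 1
print(solve_gsp(puzzle, 2, 3))
```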

Relevance: 100.00%

Abstract:

Random problem distributions have played a key role in the study and design of algorithms for constraint satisfaction and Boolean satisfiability, as well as in our understanding of problem hardness beyond standard worst-case complexity. We consider random problem distributions from a highly structured problem domain that generalizes the Quasigroup Completion problem (QCP) and Quasigroup with Holes (QWH), a widely used domain that captures the structure underlying a range of real-world applications. Our problem domain is also a generalization of the well-known Sudoku puzzle: we consider Sudoku instances of arbitrary order, with the additional generalization that the block regions can have rectangular shape, in addition to the standard square shape. We evaluate the computational hardness of Generalized Sudoku instances for different parameter settings. Our experimental hardness results show that we can generate instances that are considerably harder than QCP/QWH instances of the same size. More interestingly, we show the impact of different balancing strategies on problem hardness. We also provide insights into backbone variables in Generalized Sudoku instances and how they correlate with problem hardness.
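
A minimal sketch of the hole-punching idea behind such instance generation, contrasting fully random placement with a simple row-balanced strategy, is shown below; the balancing strategies studied in the paper are finer-grained than this illustration.

```python
# Minimal sketch: random versus row-balanced hole punching for a completed
# N x N grid (the holes become the cells the solver must fill).
import random

def punch_random(N, holes, rng):
    cells = [(r, c) for r in range(N) for c in range(N)]
    return set(rng.sample(cells, holes))

def punch_balanced(N, holes, rng):
    """Give every row the same number of holes (assumes N divides holes)."""
    per_row = holes // N
    punched = set()
    for r in range(N):
        for c in rng.sample(range(N), per_row):
            punched.add((r, c))
    return punched

rng = random.Random(0)
print(len(punch_random(9, 36, rng)), len(punch_balanced(9, 36, rng)))
```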

Relevance: 100.00%

Abstract:

The strength properties of a paper coating layer are very important in converting and printing operations. Too high or too low a coating strength can cause several problems in printing. One such problem is cracking at the fold. After printing, the paper is folded into its final form and the pages are stapled together. In folding, the coating can crack, causing aesthetic damage to the printed image; in the worst case, the centre sheet can fall off during stapling. When the paper is folded, one side undergoes tensile stresses and the other compressive stresses. If the difference between these stresses is too high, the coating can crack at the fold. To better predict and prevent cracking at the fold, it is useful to know the strength properties of the coating layer. The tensile strength of the coating layer has been measured before, but not its compressive strength.

This study sought a way to measure the compressive strength of the coating layer and investigated how different coatings behave in compression. The short-span crush test, which is used to measure the in-plane compressive strength of paperboard, was applied to the coating layer. In this method the free span of the specimen is very small, which prevents buckling. The compressive strength of free coating films as well as of coated paper was measured, along with the tensile strength and the Bendtsen air permeance of the coating films.

The results showed that the shape of the pigment has a great effect on the strength of the coating. A platy pigment gave much higher strength than round or needle-like pigments. On the other hand, calcined kaolin, which is also platy but whose particles are aggregated, decreased the strength substantially. The difference in strength can be explained by the packing of the particles, which affects the porosity and thus the strength: platy kaolin packs much better than the others and creates a less porous structure. The results also showed that the binder properties have a great effect on the compressive strength of the coating layer. Both the amount of latex and its glass transition temperature, Tg, affect the strength. As the amount of latex increases, the strength of the coating increases as well: a larger amount of latex binds the pigment particles together better and decreases the porosity. Compressive strength also increased with increasing Tg, because a hard latex gives a stiffer and less elastic film than a soft latex.

Relevance: 100.00%

Abstract:

The purpose of this work is to optimise the COMPSs workflow management system by characterising the behaviour of different memory devices in terms of energy consumption and execution time. To this end, a cache service has been implemented for COMPSs to make it aware of the memory hierarchy, and multiple experiments have been carried out to characterise the memory devices and the resulting performance improvements.

Relevance: 100.00%

Abstract:

In power electronics, knowledge of the thermal behaviour of a circuit board plays a very important role, and ever-increasing power densities underline its importance. On the board, the power components and the copper layers carrying large currents act as heat sources. The insulation material between the copper layers limits the maximum temperature of the board, which in turn limits the maximum current that can be carried through the board. In this work, the maximum current densities for a worst-case scenario of a board were investigated. For this purpose, a test board was designed and a thermal model of it was constructed. The effects of cooling were also investigated. The accuracy of the model was verified with measurements.
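
As a simplified illustration of the kind of limit such a thermal model yields (not the model built in this work), the sketch below equates the Joule heating of a single copper trace with the temperature rise the insulation material allows, using invented geometry and thermal-resistance values.

```python
# Minimal sketch: worst-case current limit of a copper trace from Joule
# heating versus the maximum allowed temperature. All geometry, thermal
# resistance and temperature limits are illustrative assumptions.
RHO_CU = 1.72e-8      # copper resistivity, ohm*m (room temperature)

def max_current(width_m, thickness_m, length_m, r_th_K_per_W,
                t_ambient_C=25.0, t_max_C=105.0):
    """Largest current whose I^2 * R loss keeps the trace below t_max_C."""
    r_trace = RHO_CU * length_m / (width_m * thickness_m)
    p_allowed = (t_max_C - t_ambient_C) / r_th_K_per_W
    return (p_allowed / r_trace) ** 0.5

# Example: 5 mm wide, 35 um thick, 50 mm long trace, 20 K/W to ambient.
print(f"{max_current(5e-3, 35e-6, 50e-3, 20.0):.1f} A")
```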

Relevance: 100.00%

Abstract:

This Master's thesis examines the organisation of human resources in the response to a major ship-borne oil spill. The aim is to design, for the authorities responsible for oil spill response on the Gulf of Finland coast, an optimal recruitment strategy for the worst-case probable ship oil spill. The study also examines what kind of employment contract can be concluded with oil spill response workers. In addition, it explores how the workforce can be retained. The theoretical part of this qualitative study was carried out as a literature review. The empirical material was collected by interviewing nine experts during autumn 2009. The interviews were semi-structured theme interviews. According to the results, the most significant need for additional labour arises in manual shoreline clean-up work. Especially if the clean-up is prolonged, the rescue authorities need outside labour to assist them. Recruitment takes place within a few weeks of the occurrence of the oil spill. The regional rescue department carries out the recruitment using effective recruitment communication channels that reach a wide audience (e.g., newspapers, TV and the Internet). A fixed-term employment contract complying with the Finnish municipal collective agreement (Kunnallinen virka- ja työehtosopimus) is concluded with the workers. The most important factors motivating the clean-up workers are seen to be the significance of the work to society, a clearly defined and achievable goal, and feedback on the work done.

Relevance: 100.00%

Abstract:

Centrifugal pumps are widely used in industrial and municipal applications, and they are an important end-use application of electric energy. However, in many cases centrifugal pumps operate with a significantly lower energy efficiency than they actually could, which typically increases the pump's energy consumption and the resulting energy costs. Typical reasons for this are the incorrect dimensioning of the pumping system components and the inefficiency of the applied pump control method. Besides increasing the energy costs, inefficient operation may increase the risk of a pump failure and thereby the maintenance costs; in the worst case, a pump failure may lead to a process shutdown, incurring additional costs. Nowadays, centrifugal pumps are often controlled by adjusting their rotational speed, which affects the resulting flow rate and output pressure of the pumped fluid. Typically, the speed control is realised with a frequency converter that allows the control of the rotational speed of an induction motor. Since a frequency converter can estimate the motor's rotational speed and shaft torque without external measurement sensors on the motor shaft, it also allows the development and use of sensorless methods for estimating the pump operation. Still today, however, the monitoring of pump operation is based on additional measurements and visual check-ups, which may not suffice to determine the energy efficiency of the pump operation.

This doctoral thesis concentrates on methods that allow the use of a frequency converter as a monitoring and analysis device for a centrifugal pump. Firstly, the determination of energy-efficiency- and reliability-based limits for the recommendable operating region of a variable-speed-driven centrifugal pump is discussed, with a case study for the laboratory pumping system. Then, three model-based estimation methods for the pump operating location are studied, and their accuracy is determined by laboratory tests. In addition, a novel method to detect the occurrence of cavitation or flow recirculation in a centrifugal pump with a frequency converter is introduced. Its sensitivity compared with known cavitation detection methods is evaluated, and its applicability is verified by laboratory measurements on three different pumps and with two different frequency converters. The main focus of this thesis is on radial-flow end-suction centrifugal pumps, but the studied methods can also be feasible for mixed- and axial-flow centrifugal pumps, if allowed by their characteristics.
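
As a rough illustration of one family of such model-based methods (not necessarily the exact formulation studied in the thesis), the sketch below turns the frequency converter's speed and torque estimates into shaft power and locates it on the pump's power curve, scaled to the present speed with the affinity laws. The curve data is invented, not from a real pump datasheet.

```python
# Minimal sketch of a sensorless operating-point estimate: shaft power from
# converter speed/torque estimates, located on a power curve scaled with the
# affinity laws (Q ~ n, P ~ n^3). Curve values below are illustrative.
import numpy as np

N_NOM = 1450.0                                   # nominal speed, rpm
Q_NOM = np.array([0.0, 10.0, 20.0, 30.0, 40.0])  # flow rate, l/s
P_NOM = np.array([2.1, 3.0, 3.8, 4.4, 4.7])      # shaft power, kW

def estimate_flow(speed_rpm: float, torque_Nm: float) -> float:
    """Flow estimate (l/s) from the converter's speed and torque estimates."""
    p_shaft = torque_Nm * speed_rpm * 2 * np.pi / 60 / 1000   # kW
    ratio = speed_rpm / N_NOM
    # Scale the nominal-speed curve to the present speed (affinity laws).
    q_curve = Q_NOM * ratio
    p_curve = P_NOM * ratio**3
    return float(np.interp(p_shaft, p_curve, q_curve))

print(f"{estimate_flow(1305.0, 24.0):.1f} l/s")
```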