Abstract:
As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity, and they do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the tight constraints of embedded systems, e.g. on size, power consumption and price, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture (TTA). The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc.(Tech) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hardware/software codesign and simulation, and an extendable library of automatically configured reusable hardware blocks.
Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. As a test case for the environment, a simulation model of a processor for TCP/IP packet validation was designed and tested.
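To illustrate the transport-triggered execution model underlying TACO (this is a conceptual sketch, not code from the thesis, and all names and the move format are invented), the snippet below simulates a tiny TTA-style datapath: the program consists only of data moves, and writing to a functional unit's trigger port is what starts the operation.

```python
class AdderFU:
    """A functional unit with an operand port, a trigger port and a
    result port. Writing to the trigger port starts the operation."""
    def __init__(self):
        self.operand = 0
        self.result = 0

def run(moves, fu, regs):
    """Execute (source, destination) transport moves: in a TTA the
    program specifies only data transports, never explicit opcodes."""
    for src, dst in moves:
        if src == "add.result":
            value = fu.result
        elif isinstance(src, str):
            value = regs[src]
        else:
            value = src                       # an immediate value
        if dst == "add.operand":
            fu.operand = value
        elif dst == "add.trigger":
            fu.result = fu.operand + value    # this move triggers the add
        else:
            regs[dst] = value

fu, regs = AdderFU(), {"r1": 2, "r2": 3}
run([("r1", "add.operand"),   # move the operand into the FU
     ("r2", "add.trigger"),   # this move starts the addition
     ("add.result", "r3")],   # move the result back to a register
    fu, regs)
print(regs["r3"])  # 5
```

Because operations are side effects of moves, instruction decoding reduces to routing data between ports, which is what gives TTA its simplified decoding and high modularity.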
Abstract:
All social surveys suffer from different types of errors, of which one of the most studied is non-response bias. Non-response bias is a systematic error that occurs because individuals differ in their accessibility and propensity to participate in a survey according to their own characteristics as well as those of the survey itself. The extent of the problem depends heavily on the correlation between response mechanisms and key survey variables. However, non-response bias is difficult to measure or correct for due to the lack of relevant data about the whole target population or sample. In this paper, non-response follow-up surveys are considered as a possible source of information about non-respondents. Non-response follow-ups, however, suffer from two methodological issues: they themselves operate through a response mechanism that can cause potential non-response bias, and they pose a problem of measurement comparability, mostly because the survey design differs between the main survey and the non-response follow-up. In order to detect possible bias, the survey variables included in non-response surveys have to be related to the mechanism of participation, but not be sensitive to measurement effects due to the different designs. Based on the accumulated experience of four similar non-response follow-ups, we studied the survey variables that fulfill these conditions. We differentiated socio-demographic variables, which are measurement-invariant but have a lower correlation with non-response, from variables that measure attitudes, such as trust, social participation, or integration in the public sphere, which are more sensitive to measurement effects but potentially more appropriate for accounting for the non-response mechanism. Our results show that education level, work status, and living alone, as well as political interest, satisfaction with democracy, and trust in institutions, are pertinent variables to include in non-response follow-ups of general social surveys.
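The role of a non-response follow-up can be sketched numerically. Assuming, as the paper itself cautions one must, that follow-up respondents stand in for all non-respondents (a strong assumption), the bias of a respondent-only mean can be estimated as below. The function and the score values are illustrative inventions, not data from the study.

```python
from statistics import mean

def nonresponse_bias(resp_values, followup_values, response_rate):
    """Bias of the respondent-only mean, assuming follow-up respondents
    are representative of all non-respondents (a strong assumption)."""
    r_mean, nr_mean = mean(resp_values), mean(followup_values)
    # True mean under the proxy assumption: a mixture of the two groups
    # weighted by the main-survey response rate.
    full_mean = response_rate * r_mean + (1 - response_rate) * nr_mean
    return r_mean - full_mean

# Invented political-interest scores (0-10): main-survey respondents
# report more interest than follow-up respondents, so the naive
# respondent-only mean overstates interest in the population.
bias = nonresponse_bias([7, 6, 8, 5], [4, 3, 5], response_rate=0.6)
print(round(bias, 3))  # 1.0
```

The expression reduces to (1 - response_rate) * (respondent mean - non-respondent mean), which makes explicit why bias grows both with lower response rates and with stronger correlation between the variable and participation.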
Abstract:
Internationalization and the ensuing rapid growth have created the need to consolidate the IT systems of many small-to-medium-sized production companies. Enterprise Resource Planning (ERP) systems are a common solution for such companies. Deployment of these ERP systems consists of many steps, one of which is the implementation of the same shared system at all international subsidiaries. From the IT point of view, this is also one of the most important steps in the internationalization strategy of the company. The mechanical process of creating the required connections for the off-shore sites is the easiest and best-documented step along the way, but the actual value of the system, once operational, is perceived in its operational reliability. The operational reliability of an ERP system is a combination of many factors. These factors range from hardware- and connectivity-related issues to administrative tasks and communication between decentralized administrative units and sites. To accurately analyze the operational reliability of such a system, one must take into consideration the full functionality of the system, including not only the mechanical and systematic processes but also the users and their administration. Operational reliability in an international environment relies heavily on hardware and telecommunication adequacy, so it is imperative to have resources dimensioned with regard to planned usage. Still, with poorly maintained communication and administration schemes, no amount of bandwidth or memory will be enough to maintain a productive level of reliability. This thesis analyzes the implementation of a shared ERP system at an international subsidiary of a Finnish production company. The system is Microsoft Dynamics Ax, currently being introduced at a Slovakian facility, a subsidiary of Peikko Finland Oy. The primary task is to create a feasible basis of analysis against which the operational reliability of the system can be evaluated precisely. With a solid analysis, the aim is to give recommendations on how future implementations should be managed.
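One simple way to formalise the point that operational reliability combines many factors is a serial availability model: the system is usable only when every dependent component (network link, server, application) is up, so individual availabilities multiply. This is a generic reliability sketch with invented figures, not a model or data from the thesis.

```python
from math import prod

def serial_availability(availabilities):
    """A chain of serially dependent components is up only when every
    component is up, so the availabilities multiply."""
    return prod(availabilities)

# Illustrative figures: even individually good components compound
# into a noticeably lower end-to-end availability.
parts = {"wan_link": 0.995, "server": 0.999, "erp_app": 0.998}
print(round(serial_availability(parts.values()), 4))  # 0.992
```

The multiplicative structure is why the thesis's point holds: improving one over-provisioned component (more bandwidth, more memory) cannot compensate for a poorly maintained one, since the weakest factor bounds the product.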
Abstract:
This thesis consists of four articles and an introductory section. The main research questions in all the articles concern proportionality and party success in Europe, at the European, national or district level. Proportionality in this thesis denotes the proximity of the seat shares parties receive to their respective vote shares, after the electoral system's allocation process. This proportionality can be measured through numerous indices that illustrate either the overall proportionality of an electoral system or that of a particular election. The correspondence of a single party's seat share to its vote share can also be measured. Overall proportionality is essential in three of the articles (1, 2 and 4), where the system's performance is studied by means of plots. In article 3, minority party success is measured by advantage ratios that reveal a single party's winnings or losses in the votes-to-seats allocation process. The first article asks how proportional the European Parliament (EP) electoral systems are, how they compare with results from earlier studies, and how the EP electoral systems treat parties of different sizes. The reasons for the different outcomes are sought in the explanations given by traditional electoral studies, i.e. electoral system variables. The countries studied (EU15) apply electoral systems that vary in many important respects, even though a certain amount of uniformity has been aspired to for decades. Since the electoral systems of the EP elections closely resemble those of the national elections, the same kinds of profiles emerge as in the national elections. The electoral systems indeed treat the parties differently, and six different profile types can be found. The counting method seems to determine the profile group to some extent, but the strongest variables determining the shape of a country's profile appear to be the average district magnitude and the number of seats allocated to each country.
The second article also focuses on the overall proportionality performance of an electoral system, but here the focus is on the impact of electoral system changes. I have developed a new method of visualizing some previously used indices, and some new indices, for this purpose. The aim is to draw a comparable picture of these electoral system changes and their effects. The cases that illustrate this method are four electoral systems in which a change occurred in one of the system variables while the rest remained unchanged. The studied cases are the French, Greek and British European parliamentary systems and the Swedish national parliamentary system. The changed variables are the electoral system type (plurality changed to PR in the UK), district magnitude (France splitting the nationwide district into eight smaller districts), legal threshold (Greece introducing a three percent threshold) and counting method (d'Hondt changed to modified Sainte-Laguë in Sweden). Radar plots from the elections before and after the changes are drawn for all country cases. To quantify the change, the change in the area enclosed by the plot has also been calculated. Using these radar plots we can observe that the changes in electoral system type and district magnitude, and to some extent legal threshold, had an effect on overall proportionality and on accessibility for small parties, while the change between the two highest-averages counting methods had none. The third article studies the success minority parties have had in nine electoral systems in heterogeneous European countries. This article aims to add further motivation as to why we should care how parties of different sizes are treated by electoral systems. Since many of the parties that aspire to represent minorities in European countries are small, the possibilities for small parties are highlighted.
The theory of consociational (or power-sharing) democracy suggests that, in heterogeneous societies, a proportional electoral system will provide the fairest treatment of minority parties. The OSCE Lund Recommendations propose a number of electoral system features that would improve minority representation. In this article some party variables, namely the unity of the minority parties and the geographical concentration of the minorities, were included among the possible explanations. The conclusion is that the central factors affecting minority success were indeed these non-electoral-system variables rather than the electoral system itself. Moreover, the size of the party was a major factor governing success in all the systems investigated; large parties benefited in all the studied electoral systems. In the fourth article the proportionality profiles are again applied, but this time to district-level results in Finnish parliamentary elections. The level of proportionality distortion is also studied by means of indices. The average magnitudes during the studied period range from 7.5 to 26.2 in the Finnish electoral districts, which opens up unequal opportunities for parties in different districts and affects the shape of the profiles. The intra-country case allows the focus to be placed on the effect of district magnitude, since all other electoral system variables are kept constant in an intra-country study. The time span of the study is from 1962 to 2007, i.e. the period during which the districts have largely remained geographically the same. The plots and indices tell the same story: district magnitude and electoral alliances matter. District magnitude is connected to the overall proportionality of the electoral districts according to both indices, and the profiles are, as expected, also closer to perfect proportionality in large districts.
Alliances have helped some small parties gain a much higher seat share than their respective vote share, and these successes affect some of the profiles. The profiles also show a consistent pattern of benefits for small parties that ally with larger parties.
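The allocation mechanics discussed in these articles can be made concrete. The sketch below implements the two highest-averages methods named in article 2 (d'Hondt and modified Sainte-Laguë) and the advantage ratio used in article 3, i.e. seat share divided by vote share. The vote counts are invented for illustration; this is not the thesis's code or data.

```python
def dhondt(votes, seats):
    """d'Hondt highest-averages: each seat goes to the party with the
    largest quotient v / (s + 1), where s is its seats so far."""
    alloc = {p: 0 for p in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return alloc

def sainte_lague_modified(votes, seats):
    """Modified Sainte-Lague: divisors 1.4, 3, 5, 7, ... (as in Sweden)."""
    alloc = {p: 0 for p in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (1.4 if alloc[p] == 0
                                                      else 2 * alloc[p] + 1))
        alloc[winner] += 1
    return alloc

def advantage_ratio(votes, alloc):
    """Seat share divided by vote share; > 1 means over-representation."""
    v_tot, s_tot = sum(votes.values()), sum(alloc.values())
    return {p: (alloc[p] / s_tot) / (votes[p] / v_tot) for p in votes}

# Invented vote counts for an 8-seat district.
votes = {"A": 42000, "B": 31000, "C": 15000, "D": 12000}
print(dhondt(votes, 8))                 # the largest party takes 4 seats
print(sainte_lague_modified(votes, 8))  # one seat shifts from A to B
print(advantage_ratio(votes, dhondt(votes, 8)))
```

With these figures d'Hondt gives A four seats while modified Sainte-Laguë gives it three, illustrating how the counting method alone can tilt the advantage ratio of the largest party above or toward 1.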
Abstract:
This research addresses the challenges that the energy transition poses for territorial public action, a transition now raised to the rank of a priority by the French and Swiss authorities, as well as by European authorities more broadly. It draws on an analysis of the territorial energy planning initiatives carried out between 2007 and 2014 in the Franco-Valdo-Genevan territory (the "Grand Genève" agglomeration). Considered as laboratories for experimenting with the territorialisation of energy policies, these initiatives are examined here from an institutionalist and pragmatist perspective that aims to bring to light the elements involved in delimiting the field of possibilities for territorial energy public action. This position stems from the developments observed in the Franco-Valdo-Genevan territory during the study period (Chapter 1). More precisely, it stems from the observation that certain sticking points recurred both in the energy planning initiatives themselves and in the methodological work carried out alongside them with the aim of refining their technical and organisational implementation tools. Thus, the starting point of this research is the observation that it is just as difficult to construct energy solutions that the actors of the territories concerned can appropriate and implement as it is to reconfigure the tools for producing those solutions. From this observation follows the interest taken in the institutional frameworks that govern these territorial energy planning exercises.
Defined as the set of reference points, formal and informal, that both enable and constrain territorialised interactions between actors, these institutional frameworks are placed at the heart of the grid for (re)reading the territorial energy planning experiments established in Chapter 2 of the thesis. In line with the institutionalist and pragmatist concepts on which it draws, this grid leads us to view these experiments as so many inquiries contributing, through the work of mobilising and constructing territorial representations to which they give rise, to the socio-cognitive equipment of a specific territorial field of intervention. Starting from the hypothesis that the potentialities as well as the limits associated with the socio-cognitive equipment of this field orient the possibilities for collective action, the analysis applies this grid to some thirty territorial energy planning experiments. This application proceeds in two stages, corresponding to two levels of reading of these initiatives. The first concerns the organisational arrangements and the modalities of interaction between the cultures of action that they bring together (Chapter 3). The second concentrates on the cognitive supports (territorial representations) around which these interactions are structured (Chapter 4). Presented in the final chapter of the thesis (Chapter 5), the lessons drawn from this re-examination of the Franco-Valdo-Genevan territorial energy planning initiatives are of two kinds. They concern, first, the characteristics of the existing institutional frameworks, the way these frameworks orient the initiatives and delimit the possible evolutions in the associated modes of collective action, and of public action in particular.
But they also concern the potentials for change associated with these initiatives, and the avenues that could be pursued to make better use of these potentials, whose activation requires profound changes in the institutional systems in place. -- In France as in Switzerland, local authorities stand out as leading players in the energy transition, a transition that requires an important renewal of the instruments of public intervention. It is the stakes and the conditions of such a renewal that the present work aims to examine, based on the territorial energy planning experiments carried out in the Franco-Valdo-Genevan cross-border territory. Conceived as initiatives for relocalising the energy supply system, these energy planning initiatives are examined through an institutionalist and pragmatist "reading template". This "reading template" consists of seeing these energy planning initiatives as pragmatist inquiries aiming, through a collective work of cognitive equipment of the Franco-Valdo-Genevan territorial field of intervention, at the reconstruction of the means of coordination between people with respect to their material, organizational and political territory. It opens onto a double reading of the energy planning initiatives. The first concentrates on the organizational dimension of these inquiries, i.e. on the cultures of action that they gather and the modalities of interaction between them, whereas the second focuses on the cognitive substance that constitutes the medium of the interactions. This double reading provides insights at various levels. The first concerns the (cognitive) territorial field of intervention that these energy planning experiments help to delineate. A field which, although increasingly well characterized in its technical dimensions, remains at the same time limited and "deformed", in that it favours the fossil energy systems from which we want to free ourselves over the renewable ones with which we would like to replace them.
The second lesson concerns the processes of production of territorial knowledge (PPTK) that preside over the demarcation and "equipment" of the territorial field of intervention. Examined through the institutional norms and the cultures of action at stake in them, these PPTK turn out to create a sociocognitive "cross-border" area, the kind of area that could shelter the desired reconfigurations...on the condition that it is first correctly "equipped", in cognitive and also in organizational terms. The determining factors for the quality of this equipment are gathered in the third category of lessons. Starting with the opportunities created by these energy planning experiments for the renewal of the instruments of public intervention, these elements also allow us to take a new look at the urban area project under construction in this cross-border territory, a project that proves to be closely linked to the energy experiments through a shared challenge of territorialisation.
Abstract:
Research question: International and national sport federations as well as their member organisations are key actors within the sport system and have a wide range of relationships outside the sport system (e.g. with the state, sponsors, and the media). They are currently facing major challenges such as growing competition in top-level sports, democratisation of sports with 'sports for all' and sports as the answer to social problems. In this context, professionalising sport organisations seems to be an appropriate strategy to face these challenges and current problems. We define the professionalisation of sport organisations as an organisational process of transformation leading towards organisational rationalisation, efficiency and business-like management. This has led to a profound organisational change, particularly within sport federations, characterised by the strengthening of institutional management (managerialism) and the implementation of efficiency-based management instruments and paid staff. Research methods: The goal of this article is to review the current international literature and establish a global understanding of and theoretical framework for analysing why and how sport organisations professionalise and what consequences this may have. Results and findings: Our multi-level approach based on the social theory of action integrates the current concepts for analysing professionalisation in sport federations. We specify the framework for the following research perspectives: (1) forms, (2) causes and (3) consequences, and discuss the reciprocal relations between sport federations and their member organisations in this context. Implications: Finally, we work out a research agenda and derive general methodological consequences for the investigation of professionalisation processes in sport organisations.
Abstract:
Adolescence is an important time for acquiring a high peak bone mass. Physical activity is known to be beneficial to bone development; the effect of estrogen-progestin contraceptives (EPC) is still controversial. Altogether 142 adolescent women (52 gymnasts, 46 runners, and 42 controls) participated in this study, which is based on two 7-year (n = 142), one 6-year (n = 140) and one 4-year (n = 122) follow-ups. Information on physical activity, menstrual history, sexual maturation, nutrition, living habits and health status was obtained through questionnaires and interviews. The bone mineral density (BMD) and content (BMC) of the lumbar spine (LS) and femoral neck (FN) were measured by dual-energy X-ray absorptiometry. Calcaneal sonographic measurements were also made. The physical activity of the athletes participating in this study decreased after the 3-year follow-up. High-impact exercise was beneficial to bones: LS and FN BMC were higher in gymnasts than in controls during the follow-up. A reduction in physical activity had negative effects on bone mass. LS and FN BMC increased less in the group that had reduced their physical activity by more than 50% than in those continuing at the previous level (1.69 g, p = 0.021; 0.14 g, p = 0.015, respectively). The amount of physical activity was the only significant parameter accounting for the calcaneal sonography measurements at the 6-year follow-up (11.3%), and a reduced activity level was associated with lower sonographic values. Long-term low-dose EPC use seemed to prevent normal bone mass acquisition: there was a significant trend towards a smaller increase in LS and FN BMC among long-term EPC users. In conclusion, this study confirms that high-impact exercise is beneficial to bones and that the benefits are partly maintained, at least for 4 years, even after a clear reduction in training level. Continued exercise is needed to retain all the acquired benefits.
The bone mass gained and maintained can possibly be maximized in adolescence by implementing high-impact exercise for youngsters. The peak bone mass of the young women participating in the study may be reached before the age of 20. Use of low-dose EPCs seems to suppress normal bone mass acquisition.
Abstract:
The primary objective is to identify the critical factors that affect the performance measurement system. In a complex business environment it is important to make correct decisions about measurement systems, and the performance measurement system itself combines many complex, non-linear factors. The Six Sigma methodology is seen as one potential approach at every organisational level. It is linked to performance and financial measurement as well as to the analytical thinking on which the management viewpoint depends. The complex-systems perspective is connected to the customer relationship study. The primary contribution is a new, well-defined performance measurement structure, supported by an analytical multifactor system. At the same time, these critical factors should also be seen as a business innovation opportunity. This master's thesis is divided into two theoretical parts. The empirical part combines action-oriented and constructive research approaches with an empirical case study. The secondary objective is to seek a competitive advantage factor using a new analytical tool and Six Sigma thinking. Process and product capabilities are linked to the contribution of the complex system, and the critical barriers are identified through the performance measurement system. The secondary contribution is the product and process cost efficiencies, which are achieved through a management advantage. The performance measurement potential is related to different productivity analyses; productivity can be seen as an essential part of the competitive advantage factor.
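As a small concrete anchor for the Six Sigma terminology used here, the conventional conversion from an observed defect rate to a short-term sigma level (including the customary 1.5-sigma shift between short- and long-term performance) can be computed as follows. This is standard Six Sigma arithmetic, not a tool or result from the thesis.

```python
from statistics import NormalDist

def sigma_level(defects, opportunities):
    """Short-term sigma level from an observed defect rate, using the
    conventional 1.5-sigma shift: sigma = z(yield) + 1.5."""
    dpmo = defects / opportunities * 1_000_000   # defects per million
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# By convention, 3.4 defects per million opportunities is "six sigma".
print(round(sigma_level(3.4, 1_000_000), 2))  # 6.0
```

The formula makes the methodology's non-linearity tangible: moving from four to five sigma requires roughly a 25-fold reduction in defects, far more than the step from three to four.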
Abstract:
BACKGROUND: Rivaroxaban has become an alternative to vitamin K antagonists (VKA) for stroke prevention in non-valvular atrial fibrillation (AF) patients due to its favourable risk-benefit profile in the restrictive setting of a large randomized trial. However, in the primary care setting, physicians' motivation to start rivaroxaban, treatment satisfaction and the clinical event rate after the initiation of rivaroxaban are not known. METHODS: Prospective data collection by 115 primary care physicians in Switzerland on consecutive non-valvular AF patients with newly established rivaroxaban anticoagulation, with 3-month follow-up. RESULTS: We enrolled 537 patients (73 ± 11 years, 57% men) with mean CHADS2 and HAS-BLED scores of 2.2 ± 1.3 and 2.4 ± 1.1, respectively: 301 (56%) were switched from VKA to rivaroxaban (STR group) and 236 (44%) were VKA-naïve (VN group). Absence of routine coagulation monitoring (68%) and fixed-dose once-daily treatment (58%) were the most frequent criteria for physicians to initiate rivaroxaban. In the STR group, patient satisfaction increased from 3.6 ± 1.4 under VKA to 5.5 ± 0.8 points (P < 0.001), and overall physician satisfaction from 3.9 ± 1.3 to 5.4 ± 0.9 points (P < 0.001) at 3 months of rivaroxaban therapy (scores from 1 to 6, with higher scores indicating greater satisfaction). In the VN group, both patient satisfaction (5.4 ± 0.9) and physician satisfaction (5.5 ± 0.7) at follow-up were comparable to the STR group. During follow-up, 1 (0.19%; 95% CI, 0.01-1.03%) ischemic stroke, 2 (0.37%; 95% CI, 0.05-1.34%) major non-fatal bleeding and 11 (2.05%; 95% CI, 1.03-3.64%) minor bleeding complications occurred. Rivaroxaban was stopped in 30 (5.6%) patients, with side effects being the most frequent reason. CONCLUSION: Initiation of rivaroxaban for patients with non-valvular AF by primary care physicians was associated with a low clinical event rate and with high overall patient and physician satisfaction.
Abstract:
One of the global targets for non-communicable diseases is to halt, by 2025, the rise in the age-standardised adult prevalence of diabetes at its 2010 level. We aimed to estimate worldwide trends in diabetes, how likely it is for countries to achieve the global target, and how changes in prevalence, together with population growth and ageing, are affecting the number of adults with diabetes. We pooled data from population-based studies that had collected data on diabetes through measurement of its biomarkers. We used a Bayesian hierarchical model to estimate trends in diabetes prevalence (defined as a fasting plasma glucose of 7.0 mmol/L or higher, a history of diagnosis with diabetes, or use of insulin or oral hypoglycaemic drugs) in 200 countries and territories in 21 regions, by sex, from 1980 to 2014. We also calculated the posterior probability of meeting the global diabetes target if post-2000 trends continue. We used data from 751 studies including 4,372,000 adults from 146 of the 200 countries for which we make estimates. Global age-standardised diabetes prevalence increased from 4.3% (95% credible interval 2.4-7.0) in 1980 to 9.0% (7.2-11.1) in 2014 in men, and from 5.0% (2.9-7.9) to 7.9% (6.4-9.7) in women. The number of adults with diabetes in the world increased from 108 million in 1980 to 422 million in 2014 (28.5% due to the rise in prevalence, 39.7% due to population growth and ageing, and 31.8% due to the interaction of these two factors). Age-standardised adult diabetes prevalence in 2014 was lowest in northwestern Europe, and highest in Polynesia and Micronesia, at nearly 25%, followed by Melanesia and the Middle East and north Africa. Between 1980 and 2014 there was little change in age-standardised diabetes prevalence in adult women in continental western Europe, although crude prevalence rose because of ageing of the population. By contrast, age-standardised adult prevalence rose by 15 percentage points in men and women in Polynesia and Micronesia.
In 2014, American Samoa had the highest national prevalence of diabetes (>30% in both sexes), with age-standardised adult prevalence also higher than 25% in some other islands in Polynesia and Micronesia. If post-2000 trends continue, the probability of meeting the global target of halting the rise in the prevalence of diabetes by 2025 at the 2010 level worldwide is lower than 1% for men and 1% for women. Only nine countries for men and 29 countries for women, mostly in western Europe, have a 50% or higher probability of meeting the global target. Since 1980, age-standardised diabetes prevalence in adults has increased, or at best remained unchanged, in every country. Together with population growth and ageing, this rise has led to a near quadrupling of the number of adults with diabetes worldwide. The burden of diabetes, both in terms of prevalence and the number of adults affected, has increased faster in low-income and middle-income countries than in high-income countries. Funding: Wellcome Trust.
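The age-standardisation underlying these comparisons can be illustrated simply: age-specific prevalences are weighted by a fixed standard population, so that countries with different age structures become comparable. The age groups, prevalences and weights below are invented for illustration and are not the study's data or its actual standard population.

```python
def age_standardised_prevalence(age_specific, standard_weights):
    """Weight age-group prevalences by a fixed standard population so
    that populations with different age structures are comparable."""
    assert abs(sum(standard_weights.values()) - 1.0) < 1e-9
    return sum(age_specific[g] * w for g, w in standard_weights.items())

# Invented age-specific prevalences and standard-population weights.
prev = {"18-39": 0.03, "40-59": 0.10, "60+": 0.20}
weights = {"18-39": 0.45, "40-59": 0.35, "60+": 0.20}
print(round(age_standardised_prevalence(prev, weights), 4))  # 0.0885
```

This weighting is why crude prevalence can rise in an ageing population (more people move into high-prevalence age groups) even while the age-standardised figure stays flat, as reported for women in continental western Europe.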
Abstract:
This article introduces a simplified model for the theoretical study of the physical adsorption of gaseous He on the (100) and (111) planes of a solid Xe matrix, whose crystalline structure is face-centered cubic (fcc). The ab initio calculations were carried out at the MP2 level of theory, employing basis sets obtained through the Generator Coordinate Method, with the core electrons represented by a pseudopotential. The calculated adsorption energies for the (100) and (111) faces are 5.39 and 4.18 kJ/mol, respectively. This simplified model is expected to be suitable for treating complex systems of applied interest.
Abstract:
This thesis deals with a hardware accelerated Java virtual machine, named REALJava. The REALJava virtual machine is targeted at resource constrained embedded systems. The goal is to attain increased computational performance with reduced power consumption. While these objectives are often seen as trade-offs, in this context both of them can be attained simultaneously by using dedicated hardware. The target level of the computational performance of the REALJava virtual machine is initially set to be as fast as the currently available full custom ASIC Java processors. As a secondary goal, all of the components of the virtual machine are designed so that the resulting system can be scaled to support multiple co-processor cores. The virtual machine is designed using the hardware/software co-design paradigm. The partitioning between the two domains is flexible, allowing customizations to the resulting system; for instance, the floating point support can be omitted from the hardware in order to decrease the size of the co-processor core. The communication between the hardware and the software domains is encapsulated into modules. This allows the REALJava virtual machine to be easily integrated into any system, simply by redesigning the communication modules. Besides the virtual machine and the related co-processor architecture, several performance enhancing techniques are presented. These include techniques related to instruction folding, stack handling, method invocation, constant loading and control in the time domain. The REALJava virtual machine is prototyped using three different FPGA platforms. The original pipeline structure is modified to suit the FPGA environment. The performance of the resulting Java virtual machine is evaluated against existing Java solutions in the embedded systems field. The results show that the goals are attained, both in terms of computational performance and power consumption.
The computational performance in particular is evaluated thoroughly, and the results show that REALJava is more than twice as fast as the fastest full-custom ASIC Java processor. In addition to standard Java virtual machine benchmarks, several new Java applications are designed both to verify the results and to broaden the spectrum of the tests.
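Instruction folding, one of the performance-enhancing techniques mentioned above, is a general Java-processor technique in which a short sequence of stack-manipulating bytecodes is collapsed into a single operation. The sketch below illustrates the classic load-load-op-store folding pattern; it is a hedged illustration of the general idea, not REALJava's actual folding rules, and all names in it are hypothetical.

```python
# Hedged illustration of instruction folding: a (iload a, iload b, op,
# istore c) bytecode window is folded into one register-style tuple
# (op, a, b, c). This is the textbook pattern, not REALJava's rule set.

FOLDABLE = {"iadd", "isub", "imul"}  # arithmetic ops we fold (illustrative)

def fold(bytecodes):
    """Fold load-load-op-store windows; pass other bytecodes through."""
    out, i = [], 0
    while i < len(bytecodes):
        w = bytecodes[i:i + 4]
        if (len(w) == 4
                and w[0][0] == "iload" and w[1][0] == "iload"
                and w[2][0] in FOLDABLE and w[3][0] == "istore"):
            # (op, src1, src2, dst) replaces four stack operations
            out.append((w[2][0], w[0][1], w[1][1], w[3][1]))
            i += 4
        else:
            out.append(bytecodes[i])
            i += 1
    return out
```

On a stack machine this removes the intermediate pushes and pops entirely, which is why folding is attractive for hardware Java execution.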
Resumo:
The importance of the regional level in research has risen in the last few decades, and a vast literature in fields such as evolutionary and institutional economics, network theories, innovation and learning systems, as well as sociology, has focused on regional-level questions. Recently, policy makers and regional actors have also begun to pay increasing attention to the knowledge economy and its needs in general, and to the connectivity and support structures of regional clusters in particular. Nowadays knowledge is generally considered the most important source of competitive advantage, but even the most specialised forms of knowledge are becoming a short-lived resource, for example due to the accelerating pace of technological change. This emphasizes the need for foresight activities at the national, regional and organizational levels, and for the integration of foresight and innovation activities. In a regional setting this development poses great challenges, especially in regions that have no university and thus usually very limited resources for research activities. The research problem of this dissertation is likewise related to the need to better incorporate the information produced by a foresight process so that it facilitates, and can be used in, regional practice-based innovation processes. This dissertation is a constructive case study, the case being the Lahti region and the network-facilitating innovation policy adopted in that region. The dissertation consists of a summary and five articles, and during the research process a construct, i.e. a conceptual model for solving this real-life problem, has been developed. It is also being implemented as part of the network-facilitating innovation policy in the Lahti region.
Resumo:
A rotating machine usually consists of a rotor and the bearings that support it. Non-idealities in these components may excite vibration of the rotating system. Uncontrolled vibrations may lead to excessive wear of the components of the rotating machine or reduce process quality. Vibrations may be harmful even when amplitudes are seemingly low, as is usually the case in superharmonic vibration, which takes place below the first critical speed of the rotating machine. Superharmonic vibration is excited when the rotational velocity of the machine is a fraction of the natural frequency of the system. In such a situation, a part of the machine's rotational energy is transformed into vibration energy. The amount of vibration energy should be minimised in the design of rotating machines. The superharmonic vibration phenomena can be studied by analysing the coupled rotor-bearing system employing a multibody simulation approach. This research is focused on the modelling of hydrodynamic journal bearings and rotor-bearing systems supported by journal bearings. In particular, the non-idealities affecting the rotor-bearing system and their effect on the superharmonic vibration of the rotating system are analysed. A comparison of computationally efficient journal bearing models is carried out in order to validate one model for further development. The selected bearing model is improved in order to take the waviness of the shaft journal into account. The improved model is implemented and analysed in a multibody simulation code. A rotor-bearing system that consists of a flexible tube roll, two journal bearings and a supporting structure is analysed employing the multibody simulation technique. The modelled non-idealities are the shell thickness variation in the tube roll and the waviness of the shaft journal in the bearing assembly. Both modelled non-idealities may cause subharmonic resonance in the system.
In multibody simulation, the coupled effect of the non-idealities can be captured in the analysis. Additionally, one non-ideality is presented that does not excite vibrations itself but affects the response of the rotor-bearing system, namely the waviness of the bearing bushing, which is the non-rotating part of the bearing system. The modelled system is verified with measurements performed on a test rig. In the measurements the waviness of the bearing bushing was not measured, and therefore its effect on the response was not verified. In conclusion, the selected modelling approach is an appropriate method for analysing the response of the rotor-bearing system. When comparing the simulated results to the measured ones, the overall agreement between the results is concluded to be good.
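The excitation condition described above (rotational velocity equal to an integer fraction of a natural frequency) can be made concrete with a small sketch. This is a generic illustration of the resonance condition, not a calculation from the thesis; the example frequency is hypothetical.

```python
# Hedged sketch: candidate superharmonic resonance speeds are the
# rotational speeds f_n / k at which the k-th harmonic of a
# rotation-synchronous excitation coincides with the natural
# frequency f_n. The 60 Hz value below is purely illustrative.

def superharmonic_speeds(natural_freq_hz, max_order=6):
    """Return {k: f_n / k} for k = 2..max_order, the rotational
    speeds at which the k-th harmonic hits the natural frequency."""
    return {k: natural_freq_hz / k for k in range(2, max_order + 1)}

speeds = superharmonic_speeds(60.0)  # e.g. a 60 Hz first natural frequency
# running the machine near 30, 20, 15, 12 or 10 Hz may excite
# superharmonic vibration below the first critical speed
```

This is why the harmful vibration appears below the first critical speed: the machine never needs to reach the natural frequency itself for one of its excitation harmonics to do so.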
Resumo:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. In this thesis, two new distance transforms for gray-level images are presented. As a new application for distance transforms, they are applied to gray-level image compression. The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer numbers (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3–10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms, such as GRAYMAT, find the minimum path joining two points by the smallest sum of gray levels or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way. The DTOCS gives a weighted version of the chessboard distance map. The weights are not constant but gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way.
It propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Commonly, distance transforms are used for feature extraction in pattern recognition and learning. Their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. the compression ratio. Also a new morphological image decompression scheme is presented, the 8 kernels' method. Several decompressed images are presented. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
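The two-pass, two-buffer structure described in the abstract can be sketched as follows. This is a minimal chamfer-style sketch assuming a DTOCS-like local step cost of |gray difference| + 1 between 8-neighbours, with the binary image marking the region of calculation and pixels outside it acting as zero-distance seeds; it is an illustration of the two-pass scheme, not the thesis's exact algorithm.

```python
import numpy as np

def dtocs_sketch(gray, region, max_iter=10):
    """Two-pass gray-level distance transform sketch (DTOCS-like).

    gray   : 2-D array of gray values.
    region : boolean 2-D array; True marks the region of calculation,
             False pixels are zero-distance seeds (assumption).
    Step cost between 8-neighbours is |gray difference| + 1.
    The two raster passes are repeated until no value changes
    (the abstract notes typically 3-10 rounds suffice).
    """
    h, w = gray.shape
    d = np.where(region, np.inf, 0.0)           # only two buffers needed
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]  # neighbours already visited
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]      # mirror set for reverse scan
    for _ in range(max_iter):
        changed = False
        for ys, xs, nbrs in (
            (range(h), range(w), fwd),                       # forward pass
            (range(h - 1, -1, -1), range(w - 1, -1, -1), bwd),  # backward pass
        ):
            for y in ys:
                for x in xs:
                    for dy, dx in nbrs:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            step = abs(float(gray[y, x]) - float(gray[ny, nx])) + 1
                            if d[ny, nx] + step < d[y, x]:
                                d[y, x] = d[ny, nx] + step
                                changed = True
        if not changed:
            break
    return d
```

On a constant gray image the step cost reduces to 1 everywhere, so the result degenerates to the ordinary chessboard distance, which matches the abstract's description of the DTOCS as a weighted chessboard distance map.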