69 resultados para level set method


Relevância:

30.00%

Publicador:

Resumo:

Fatigue life assessment of welded structures is commonly based on the nominal stress method, but more flexible and accurate methods have been introduced. In general, the assessment accuracy is improved as more localized information about the weld is incorporated. The structural hot spot stress method includes the influence of macro geometric effects and structural discontinuities on the design stress but excludes the local features of the weld. In this thesis, the limitations of the structural hot spot stress method are discussed and a modified structural stress method with improved accuracy is developed and verified for selected welded details. The fatigue life of structures in the as-welded state consists mainly of crack growth from pre-existing cracks or defects. The crack growth rate depends on the crack geometry and the stress state on the crack face plane. This means that the stress level and the shape of the stress distribution along the assumed crack path govern the total fatigue life. In many structural details the stress distribution is similar, and adequate fatigue life estimates can be obtained just by adjusting the stress level based on a single stress value, i.e., the structural hot spot stress. There are, however, cases for which the structural stress approach is less appropriate because the stress distribution differs significantly from the more common cases. Plate edge attachments and plates on elastic foundations are some examples of structures with this type of stress distribution. The importance of fillet weld size and weld load variation on the stress distribution is another central topic in this thesis. Structural hot spot stress determination is generally based on a procedure that involves extrapolation of plate surface stresses.
Other possibilities for determining the structural hot spot stress are to extrapolate stresses through the thickness at the weld toe or to use Dong's method, which includes through-thickness extrapolation at some distance from the weld toe. Both of these latter methods are less sensitive to the FE mesh used. Structural stress based on surface extrapolation is sensitive to the extrapolation points selected and to the FE mesh used near these points. Rules for proper meshing, however, are well defined and not difficult to apply. To improve the accuracy of the traditional structural hot spot stress, a multi-linear stress distribution is introduced. The magnitude of the weld toe stress after linearization depends on the weld size, weld load and plate thickness. Simple equations have been derived by comparing assessment results based on the local linear stress distribution with LEFM-based calculations. The proposed method is called the modified structural hot spot stress (MSHS) method, since the structural hot spot stress (SHS) value is corrected using information on weld size and weld load. The correction procedure is verified using fatigue test results found in the literature. In addition, a test case was conducted comparing the proposed method with other local fatigue assessment methods.
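The surface-extrapolation and through-thickness linearization procedures discussed above can be sketched numerically. The following is a minimal illustration, assuming the common two-point surface extrapolation rule (read-out points at 0.4t and 1.0t from the weld toe) and a simple trapezoidal through-thickness linearization; the function names are illustrative, not the thesis's own implementation.

```python
import numpy as np

def hot_spot_stress(sigma_04t, sigma_10t):
    """Two-point surface extrapolation to the weld toe (the common
    rule with read-out points at 0.4t and 1.0t from the toe):
        sigma_hs = 1.67 * sigma(0.4t) - 0.67 * sigma(1.0t)
    """
    return 1.67 * sigma_04t - 0.67 * sigma_10t

def _trapz(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def membrane_bending(sigma, x, t):
    """Linearize a through-thickness stress distribution sigma(x),
    x measured from the weld-toe surface over plate thickness t,
    into membrane and bending components (as in through-thickness
    structural stress methods)."""
    m = _trapz(sigma, x) / t
    b = 6.0 / t**2 * _trapz(sigma * (t / 2.0 - x), x)
    return m, b
```

For a purely linear through-thickness distribution the linearization returns the distribution's own membrane and bending parts exactly, which is a convenient sanity check.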

Relevância:

30.00%

Publicador:

Resumo:

The objective of this work was to introduce the emerging non-contacting spray coating process and compare it to existing coating techniques. Particular emphasis was given to the details of the spraying process of paper coating colour and the base paper requirements set by the new coating method. Spraying technology itself is nothing new, but the atomisation process of paper coating colour is quite unknown to the paper industry. The differences between the rheology of painting and coating colours make it very difficult to utilise the existing information from spray painting research. Based on the trials, some basic conclusions can be made. The results of this study suggest that the Brookfield viscosity of spray coating colour should be as low as possible; presently a level of 50 mPa·s is regarded as optimum. For paper quality and coater runnability, the solids level should be as high as possible. However, the graininess of the coated paper surface and nozzle wear limit the maximum solids level to 60 % at the moment. Most likely due to the low solids and low viscosity of the coating colour, the low shear Brookfield viscosity correlates very well with the paper and spray fan qualities. High shear viscosity is also important, but less significant than the low shear viscosity. Droplet size should be minimized, and besides keeping the Brookfield viscosity low, this can be helped by using a surfactant or dispersing agent in the coating colour formula. Increasing the spraying pressure in the nozzle can also reduce the droplet size. A small droplet size also improves the coating coverage, since there is hardly any levelling taking place after the impact with the base paper. Because of the lack of shear forces after the application, the pigment particles do not orient along the paper surface. Therefore the study indicates that, based on present know-how, no quality improvements can be obtained by the use of platy pigments.
Their other disadvantage is the rapid deterioration of the nozzle lifetime. Further research in both coating colour rheology and nozzle design may change this in the future, but so far only round pigments, such as calcium carbonate typically is, can be used with spray coating. The low water retention characteristics of spray coating, enhanced by the low solids and low viscosity, challenge the base paper absorption properties. The filler level has to be low so as not to increase the number of small pores, which have a great influence on the absorption properties of the base paper. Hydrophobic sizing reduces this absorption and prevents binder migration efficiently. High surface roughness and especially poor formation of the base paper deteriorate the spray coated paper properties. However, pre-calendering of the base paper does not contribute anything to the finished paper quality, at least at coating colour solids levels below 60 %. When targeting a standard offset LWC grade, spray coating produces similar quality to film coating, with blade coating remaining at a slightly better level. However, because of the savings in both investment and production costs, spray coating may have an excellent future ahead. The porous nature of the spray coated surface offers an optimum substrate for the coldset printing industry to utilise the potential of high quality papers in their business.

Relevância:

30.00%

Publicador:

Resumo:

The objective of the thesis is to structure and model the factors that contribute to and can be used in evaluating project success. The purpose of this thesis is to enhance the understanding of three research topics. The goal setting process, success evaluation and the decision-making process are studied in the context of a project, a business unit and its business environment. To achieve the objective, three research questions are posed: 1) how to set measurable project goals, 2) how to evaluate project success and 3) how to affect project success with managerial decisions. The main theoretical contribution comes from deriving a synthesis of these research topics, which have mostly been discussed apart from each other in prior research. The research strategy of the study has features of at least the constructive, nomothetic and decision-oriented research approaches. This strategy guides the theoretical and empirical parts of the study. Relevant concepts and a framework are composed on the basis of prior research contributions within the problem area. A literature review is used to derive constructs of factors within the framework. They are related to project goal setting, success evaluation and decision making. On this basis, the case study method is applied to complement the framework. The empirical data includes one product development program, three construction projects, as well as one organization development, one hardware/software and one marketing project in their contexts. In two of the case studies the analytic hierarchy process is used to formulate a hierarchical model that returns a numerical evaluation of the degree of project success. It has its origin in the solution idea, which in turn has its foundation in the notion of project success. The achieved results are condensed in the form of a process model that integrates project goal setting, success evaluation and decision making.
The process of project goal setting is analysed as a part of an open system that includes a project, the business unit and its competitive environment. Four main constructs of factors are suggested. First, the project characteristics and requirements are clarified. The second and third constructs comprise the components of client/market segment attractiveness and sources of competitive advantage. Together they determine the competitive position of a business unit. Fourth, the relevant goals and the situation of a business unit are clarified to stress their contribution to the project goals. Empirical evidence is gained on the exploitation of increased knowledge and on the reaction to changes in the business environment during a project to ensure project success. The relevance of a successful project to a company or a business unit tends to increase the higher the reference level of project goals is set. However, normal performance, or sometimes performance below this normal level, is intentionally accepted. Success measures make project success quantifiable. There are result-oriented, process-oriented and resource-oriented success measures. The study also links result measurements to enablers that portray the key processes. The success measures can be classified into success domains determining the areas on which success is assessed. Empirical evidence is gained on six success domains: strategy, project implementation, product, stakeholder relationships, learning situation and company functions. However, some project goals, like safety, can be assessed using success measures that belong to two success domains. For example, a safety index is used for assessing occupational safety during a project, which is related to project implementation. Product safety requirements, in turn, are connected to the product characteristics and thus to the product-related success domain. Strategic success measures can be used to weave the project phases together.
Empirical evidence on their static nature is gained. In order-oriented projects the project phases are often contractually divided among different suppliers or contractors. A project from the supplier's perspective can represent only a part of the 'whole project' viewed from the client's perspective. Therefore static success measures are mostly used within the contractually agreed project scope and duration. Proof is also acquired on the dynamic use of operational success measures. They help to focus on the key issues during each project phase. Furthermore, it is shown that the original success domains and success measures, their weights and target values can change dynamically. New success measures can replace the old ones to correspond better with the emphasis of the particular project phase. This adjustment concentrates on the key decision milestones. As a conclusion, the study suggests a combination of static and dynamic success measures. Their linkage to an incentive system can make project management proactive, enable fast feedback and enhance the motivation of the personnel. It is argued that the sequence of effective decisions is closely linked to the dynamic control of project success. According to the definition used, effective decisions aim at adequate decision quality and decision implementation. The findings support the view that project managers construct and use a chain of key decision milestones to evaluate and affect success during a project. These milestones can be seen as a part of the business processes. Different managers prioritise the key decision milestones to a varying degree. Divergent managerial perspectives, power, responsibilities and involvement during a project offer some explanation for this. Finally, the study introduces the use of Hard Gate and Soft Gate decision milestones. The managers may use the former milestones to provide decision support on result measurements and ad hoc critical conditions. At the latter milestones they may also make intermediate success evaluations on the basis of other types of success measures, such as process and resource measures.
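The hierarchical evaluation mentioned above, in which the analytic hierarchy process returns a numerical degree of project success, can be sketched as follows. This is a generic AHP sketch and not the thesis's actual model; the matrix values and function names are illustrative.

```python
import numpy as np

def ahp_weights(A, iters=200):
    """Priority weights from a pairwise comparison matrix A
    (A[i][j] = how strongly criterion i outweighs criterion j,
    with A[j][i] = 1/A[i][j]), via power iteration on the
    principal eigenvector."""
    A = np.asarray(A, dtype=float)
    w = np.ones(A.shape[0])
    for _ in range(iters):
        w = A @ w          # one power-iteration step
        w /= w.sum()       # keep the weights normalized to 1
    lam = float((A @ w / w).mean())   # principal eigenvalue estimate
    return w, lam

def success_score(weights, measure_values):
    """Aggregate normalized success-measure values (0..1) into a
    single degree-of-project-success figure."""
    return float(np.dot(weights, measure_values))
```

The eigenvalue estimate also supports a consistency check: for an n-by-n matrix, (lam - n)/(n - 1) divided by the tabulated random index gives the usual consistency ratio.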

Relevância:

30.00%

Publicador:

Resumo:

The application of forced unsteady-state reactors to the selective catalytic reduction of nitrogen oxides (NOx) with ammonia (NH3) is motivated by the fact that favorable temperature and composition distributions, which cannot be achieved in any steady-state regime, can be obtained by means of unsteady-state operation. In normal operation the low exothermicity of the selective catalytic reduction (SCR) reaction (usually carried out in the range of 280-350°C) is not enough to sustain the chemical reaction by itself. A normal mode of operation usually requires a supply of supplementary heat, increasing in this way the overall process operation cost. Through forced unsteady-state operation, the main advantage that can be obtained when exothermic reactions take place is the possibility of trapping, besides the ammonia, the moving heat wave inside the catalytic bed. Unsteady-state operation enables the exploitation of the thermal storage capacity of the catalytic bed. The catalytic bed acts as a regenerative heat exchanger, allowing auto-thermal behaviour when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behavior are highly important steps in configuring a proper device for industrial applications. The Reverse Flow Reactor (RFR), a forced unsteady-state reactor, corresponds to the above mentioned characteristics and may be employed as an efficient device for the treatment of dilute pollutant mixtures. Beside its advantages, the main disadvantage of the RFR is the 'wash out' phenomenon: emissions of unconverted reactants at every switch of the flow direction. As a consequence, our attention was focused on finding an alternative reactor configuration to the RFR which is not affected by uncontrollable emissions of unconverted reactants. In this respect the Reactor Network (RN) was investigated.
Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the reactant feeding position. In the RN the flow direction is maintained, ensuring uniform catalyst exploitation, and at the same time the 'wash out' phenomenon is eliminated. The simulated moving bed (SMB) can operate in transient mode, giving a practically constant exit concentration and high conversion levels. The main advantage of reactor network operation is emphasized by the possibility of obtaining auto-thermal behavior with nearly uniform catalyst utilization. However, the reactor network presents only a small range of switching times which allow an ignited state to be reached and maintained. Even so, a proper study of the complex behavior of the RN may give the information necessary to overcome the difficulties that can appear in RN operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interaction between heat and mass transport phenomena. Such complex interactions can give rise to remarkably complex dynamic behavior characterized by a set of spatio-temporal patterns, chaotic changes in concentration and traveling waves of heat or chemical reactivity. The main efforts of current research studies concern the improvement of contact modalities between reactants, the possibility of thermal wave storage inside the reactor and the improvement of the kinetic activity of the catalyst used. Paying attention to the above mentioned aspects is important when high activity even at low feed temperatures and low emissions of unconverted reactants are the main operation concerns.
Also, the prediction of the reactor pseudo- or steady-state performance (regarding conversion, selectivity and thermal behavior) and the dynamic reactor response during exploitation are important aspects in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of an adapted reactor requires knowledge about the influence of its operating conditions on the overall process performance and a precise evaluation of the range of operating parameters for which a sustained dynamic behavior is obtained. An a priori estimation of the system parameters reduces the computational effort. Usually the convergence of unsteady-state reactor systems requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation models and thermal transfer strategies gives reliable means of obtaining recuperative and regenerative devices which are capable of maintaining auto-thermal behavior in the case of low exothermic reactions. In the present research work a gradual analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was realized. The investigation covers the presentation of the general problems related to the effect of noxious emissions on the environment, the analysis of suitable catalyst types for the process, the mathematical analysis approach for modeling and finding the system solutions, and the experimental investigation of the device found to be most suitable for the present process. In order to gain information about forced unsteady-state reactor design, operation, important system parameters and their values, mathematical description, mathematical methods for solving systems of partial differential equations and other specific aspects in a fast and easy way, a case-based reasoning (CBR) approach has been used.
This approach, using the experience of past similar problems and their adapted solutions, may provide a method for gaining information and solutions for new problems related to forced unsteady-state reactor technology. As a consequence, a CBR system was implemented and a corresponding tool was developed. Further on, giving up the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was taken into account because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity and adsorptive capacity to improve the operation, but it is possible to change the operation regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia, from the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the above mentioned two devices was realized. The assumption of isothermal conditions at the beginning of the forced unsteady-state investigation simplified the analysis, making it possible to focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance. The non-isothermal system approach was then investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and RN as recuperative and regenerative devices and the possibility of achieving sustained auto-thermal behavior in the case of the low exothermic SCR of NOx with ammonia and low temperature gas feeding.
Besides the influence of the thermal effect, the influence of the principal operating parameters, such as the switching time, inlet flow rate and initial catalyst temperature, has been stressed. This analysis is important not only because it allows a comparison between the two devices and optimisation of the operation, but also because the switching time is the main operating parameter: an appropriate choice of this parameter enables the fulfilment of the process constraints. The level of the conversions achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation establish the RN as a much more suitable device for the SCR of NOx with ammonia, both in usual operation and in the perspective of control strategy implementation. Simplified theoretical models have also been proposed in order to describe the performance of forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics taking into account perspectives that have not yet been analyzed. The experimental investigation of the RN revealed a good agreement between the data obtained by model simulation and those obtained experimentally.
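The heat-trapping effect of flow reversal described above can be illustrated with a deliberately simplified toy model: pure upwind advection of the bed temperature with a cold feed, with and without periodic flow reversal. All parameter values are illustrative, the model omits reaction kinetics entirely, and it is in no way the thesis's reactor model; it only shows that reversal keeps the thermal wave inside the bed while unidirectional flow washes it out.

```python
import numpy as np

def step(T, c, forward, T_in=0.0):
    """One explicit upwind advection step of the bed temperature T;
    c = w*dt/dz is the CFL number of the slow thermal front."""
    Tn = T.copy()
    if forward:                        # flow from left to right
        Tn[1:] += c * (T[:-1] - T[1:])
        Tn[0] += c * (T_in - T[0])     # cold feed at the left inlet
    else:                              # reversed flow
        Tn[:-1] += c * (T[1:] - T[:-1])
        Tn[-1] += c * (T_in - T[-1])   # cold feed at the right inlet
    return Tn

def simulate(n=200, steps=4000, c=0.5, t_switch=None):
    """Advect an initially hot zone through a cold-fed bed.
    With periodic flow reversal (t_switch steps between switches)
    the thermal wave is trapped inside the bed; without reversal
    the wave is washed out through the outlet."""
    T = np.zeros(n)
    T[90:110] = 1.0                    # ignited zone mid-bed
    forward = True
    for k in range(steps):
        if t_switch and k > 0 and k % t_switch == 0:
            forward = not forward      # reverse the flow direction
        T = step(T, c, forward)
    return T
```

Running `simulate(t_switch=100)` leaves a clearly hot zone in the bed, while `simulate(t_switch=None)` flushes the bed back to the feed temperature.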

Relevância:

30.00%

Publicador:

Resumo:

This thesis studies gray-level distance transforms, particularly the Distance Transform on Curved Space (DTOCS). The transform is produced by calculating distances on a gray-level surface. The DTOCS is improved by defining more accurate local distances and by developing a faster transformation algorithm. The Optimal DTOCS enhances the locally Euclidean Weighted DTOCS (WDTOCS) with local distance coefficients, which minimize the maximum error from the Euclidean distance in the image plane and produce more accurate global distance values. Convergence properties of the traditional mask operation, or sequential local transformation, and the ordered propagation approach are analyzed and compared to the new, efficient priority pixel queue algorithm. The Route DTOCS algorithm developed in this work can be used to find and visualize shortest routes between two points, or two point sets, along a varying height surface. In a digital image there can be several paths sharing the same minimal length, and the Route DTOCS visualizes them all. A single optimal path can be extracted from the route set using a simple backtracking algorithm. A new extension of the priority pixel queue algorithm produces the nearest neighbor transform, or Voronoi or Dirichlet tessellation, simultaneously with the distance map. The transformation divides the image into regions so that each pixel belongs to the region surrounding the reference point which is nearest according to the distance definition used. Applications and application ideas for the DTOCS and its extensions are presented, including obstacle avoidance, image compression and surface roughness evaluation.
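A priority pixel queue (Dijkstra-style) computation of a gray-level distance map, together with the nearest-seed labels of the Voronoi-like tessellation mentioned above, can be sketched as follows. The local distance |Δg| + 1 between 8-neighbors is the basic DTOCS definition; the code is an illustrative sketch, not the thesis's optimized algorithm.

```python
import heapq
import numpy as np

def dtocs(gray, seeds):
    """Gray-level distance map via a priority pixel queue.

    The local distance between 8-neighbours p and q is
    |g(p) - g(q)| + 1 (the basic DTOCS definition). 'seeds' is a
    list of (row, col) reference points. Returns (dist, label):
    the distance map and the nearest-seed index of every pixel."""
    g = np.asarray(gray, dtype=float)
    dist = np.full(g.shape, np.inf)
    label = np.full(g.shape, -1, dtype=int)
    heap = []
    for i, (r, c) in enumerate(seeds):
        dist[r, c] = 0.0
        label[r, c] = i
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                    # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                rr, cc = r + dr, c + dc
                if 0 <= rr < g.shape[0] and 0 <= cc < g.shape[1]:
                    nd = d + abs(g[rr, cc] - g[r, c]) + 1.0
                    if nd < dist[rr, cc]:
                        dist[rr, cc] = nd
                        label[rr, cc] = label[r, c]
                        heapq.heappush(heap, (nd, rr, cc))
    return dist, label
```

On a flat image the transform reduces to the chessboard distance, and on a gray-level ramp each step additionally pays the gray-level difference, which is the "distance on a curved surface" idea in miniature.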

Relevância:

30.00%

Publicador:

Resumo:

The research problem of this work was to clarify, investigate and analyze the factors and possibilities of dynamic pricing in the pricing of product variants. To address the research problem, eight objectives were set for the work:
- find out why the pricing of product variants is problematic
- present how the prices of product variants should theoretically be set
- identify the dimensions of product variant pricing and determine the advantages of dynamic pricing over static pricing
- present an analysis framework for assessing the state of pricing
- identify the opportunities that dynamic pricing offers for product variants
- find pricing methods suitable for the dynamic pricing of product variants
- analyze the pricing of a company selling product variants
- identify and evaluate the benefits of dynamic pricing for the company

Several research methods were used in this Master's thesis. Basic information was gathered through a literature review and supplemented with interviews. The research process began with a study of product variant pricing, and on the basis of the literature a viewpoint and general development directions were created for more detailed study. The two most important dimensions of product variant pricing were identified, and a four-field model was created for analyzing them. Based on the literature review and a closer examination of the case company, dynamic product line pricing is the target state for the dynamic pricing of product variants. The four-field model was used to assess the state of the case company's pricing, and the greatest benefits of dynamic pricing were identified. The main results of the study are:
- the dynamism of product variant pricing should be examined together with the intelligence and sophistication of pricing
- the target state of product variant pricing is dynamic product line pricing
- developing pricing from static to dynamic brings considerable benefits
- the most important benefit is better control of prices and the possibility to manage prices effectively; accordingly, the pricing analysis showed clearly increased profits
- raising the intelligence of pricing benefits the company and produces an increase in profits

Relevância:

30.00%

Publicador:

Resumo:

The goal of this work was to survey alternative methods to the traditional time-domain modeling of angle stability in operational security analysis. Alternative methods were studied through the literature, and one method was selected for testing in the Nordic interconnected power system. The requirements set for the alternative method included faster computation, a reliable ability to screen stable and unstable cases, and an index of the degree of stability or instability provided by the method. Most of the methods studied assess only transient stability; the SIME method was also suitable for assessing dynamic stability. In Finland, dynamic stability may in the future play a significant role in operational security analyses. The SIME method was tested with a partly simplified Nordel network model, and the results obtained were promising. The method met the requirements set for a new method, although some problems also occurred. Further development of the method used in the testing and testing of the method with a complete network model are recommended.
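The core idea behind SIME-style transient stability screening, reducing the system to a one-machine-infinite-bus equivalent and checking whether the rotor angle passes the unstable equilibrium, can be illustrated with a single-machine toy example. The parameter values are illustrative, and this sketch omits the multimachine reduction that the SIME method actually performs.

```python
import math

def swing_stable(Pm=0.8, Pmax=1.5, M=0.1, t_clear=0.2,
                 dt=1e-3, t_end=3.0):
    """Integrate the one-machine-infinite-bus swing equation
        M * d2(delta)/dt2 = Pm - Pe(delta),
    with Pe = 0 during the fault and Pe = Pmax*sin(delta) after
    clearing. Returns True if the rotor angle stays below the
    unstable equilibrium pi - delta_s, i.e. an equal-area-style
    first-swing stability check."""
    delta = math.asin(Pm / Pmax)       # stable pre-fault equilibrium
    delta_u = math.pi - delta          # unstable equilibrium angle
    omega = 0.0
    t = 0.0
    while t < t_end:
        Pe = 0.0 if t < t_clear else Pmax * math.sin(delta)
        omega += dt * (Pm - Pe) / M    # semi-implicit (symplectic) Euler
        delta += dt * omega
        if delta > delta_u:
            return False               # passed the unstable point
        t += dt
    return True
```

A short fault is ridden through, while a long one accelerates the rotor past the unstable equilibrium; the margin between the two cases is the kind of stability/instability index such screening methods report.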

Relevância:

30.00%

Publicador:

Resumo:

The main objective of this study was to assess whether the unit prices of education confirmed by the Ministry of Education can be regarded as a measure of economy. The study focused on the calculated per-pupil unit prices of state subsidies for comprehensive schools and upper secondary schools, and on the calculated per-teaching-hour unit prices for music institutes and adult education centres. Public finances also aim at solutions that are as efficient and economical as possible. Determining the economy of an activity requires indicators and comparative data for analysis. The objectives of the study were pursued with an action-analytical research approach, though concept-analytical features are also present. When the state funding of education became calculatory, the state administration repealed old rules; this was referred to as 'deregulation'. The earlier state subsidy was based on realized costs. The reform was intended to emphasize the municipalities' independence, economic self-responsibility and efficiency in arranging services. In addition, cost savings were expected to arise and accrue to the municipalities, since the state subsidy was no longer determined by approved expenditure. Economy means identifying and analyzing unit costs. A municipality's financial objectives must be measured with key figures, for which acceptable target values must be set. The values of the key figures do not as such tell how good or poor the result of an activity is; only their comparison and analysis give a picture of this. The per-pupil unit price is well suited as a measure of the economy of comprehensive schools and upper secondary schools, and likewise the per-teaching-hour unit price as a measure of the economy of music institutes and adult education centres. Cost comparison and analysis between municipalities provide a good basis for finding the level of cost economy.

Relevância:

30.00%

Publicador:

Resumo:

Objective of the study: The objective of the study is to describe and evaluate the change process of an organization in which an agency-type organization is being transformed into a state enterprise form better suited to competitive markets. The study examines how the transition from a budget economy to an earnings economy affects the management of the organization under study, the attitudes and working methods of the personnel, and the functioning of the whole organization. The research problem is formulated as follows: What motivates and commits personnel who worked during the agency era to a change in which the 'safe' agency model is gradually being replaced by the uncertainty of competitive markets? Research method: A quantitative survey was chosen as the research strategy, with a questionnaire as the data collection method. The theoretical framework of the study consists of earlier research dealing with the effects of change on business, commitment to change, change management, the position of the individual in change, and change in the public sector. Results and conclusions: Based on this study, the agency-era personnel of the organization under study are highly committed to their organization and highly motivated to work for its success also in the future. What most motivates the agency-era personnel to work in the new circumstances is the opportunity to learn new things and the feeling of well-being produced by successfully completed work. The study also revealed that the degree of organizational commitment and the attitude towards the change are directly proportional to organizational position: the higher a person's position in the organization, the more committed he or she is to the organization and the more positive his or her attitude towards the change. The organizational change has been a difficult process in the organization studied and has affected every individual in it. The personnel's workload, the demands placed on work, internal competition, time pressure and, as a result of all these, stress have increased since the agency era. Personnel resources have also had to be cut as a consequence of the change. As a whole, the personnel consider the organizational change to have been a rather positive thing and see it as having improved the efficiency of the organization's operations.

Relevância:

30.00%

Publicador:

Resumo:

As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the powerful expression of high level programming languages and the hardware oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high speed networks often share the same tight constraints on, e.g., size, power consumption and price with embedded systems, but also have very demanding real time and quality of service requirements that are difficult to satisfy with general purpose processors. Dedicated hardware blocks of an application specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hw/sw codesign and simulation and an extendable library of automatically configured reusable hardware blocks.
Other topics that are covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and compilation of a SystemC model into synthesizable VHDL with Celoxica Agility SystemC Compiler. A simulation model for a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
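The transport triggered principle underlying TACO can be illustrated with a toy sketch: in a TTA the only instruction is a data move, and writing an operand to a functional unit's trigger port is what starts the operation. The sketch below is an illustrative Python analogy of that idea, not part of the SystemC-based TACO simulator.

```python
class FunctionalUnit:
    """Toy transport-triggered functional unit (here, an adder).

    Writing the trigger operand starts the operation; the instruction
    stream of a TTA consists only of such data transports."""
    def __init__(self):
        self.operand = 0      # plain operand register
        self.result = None

    def write_operand(self, value):
        self.operand = value  # a plain move; nothing executes yet

    def write_trigger(self, value):
        # The transport to the trigger port is what fires the operation.
        self.result = self.operand + value

# A TTA "program" is just a sequence of moves between unit ports:
adder = FunctionalUnit()
adder.write_operand(40)   # move 40 -> adder.operand
adder.write_trigger(2)    # move 2  -> adder.trigger (starts the add)
```

Because operations are exposed as individual transports, the compiler (rather than decode hardware) schedules the parallelism, which is one source of the simplified instruction decoding mentioned above.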

Relevância:

30.00%

Publicador:

Resumo:

This thesis deals with a hardware-accelerated Java virtual machine, named REALJava. The REALJava virtual machine is targeted at resource-constrained embedded systems. The goal is to attain increased computational performance with reduced power consumption. While these objectives are often seen as trade-offs, in this context both can be attained simultaneously by using dedicated hardware. The target level for the computational performance of the REALJava virtual machine is initially set to match the currently available full-custom ASIC Java processors. As a secondary goal, all components of the virtual machine are designed so that the resulting system can be scaled to support multiple co-processor cores. The virtual machine is designed using the hardware/software co-design paradigm. The partitioning between the two domains is flexible, allowing customization of the resulting system; for instance, floating point support can be omitted from the hardware in order to decrease the size of the co-processor core. The communication between the hardware and software domains is encapsulated into modules, which allows the REALJava virtual machine to be easily integrated into any system simply by redesigning the communication modules. Besides the virtual machine and the related co-processor architecture, several performance-enhancing techniques are presented. These include techniques related to instruction folding, stack handling, method invocation, constant loading and control in the time domain. The REALJava virtual machine is prototyped on three different FPGA platforms. The original pipeline structure is modified to suit the FPGA environment. The performance of the resulting Java virtual machine is evaluated against existing Java solutions in the embedded systems field. The results show that the goals are attained, both in terms of computational performance and power consumption.
The computational performance in particular is evaluated thoroughly, and the results show that REALJava is more than twice as fast as the fastest full-custom ASIC Java processor. In addition to standard Java virtual machine benchmarks, several new Java applications are designed both to verify the results and to broaden the spectrum of the tests.
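Instruction folding, one of the techniques listed above, replaces a sequence of stack-machine operations with a single combined operation so that operands bypass the operand stack. The following is a minimal, hypothetical sketch of the idea on a symbolic bytecode; the opcode names and the single folding pattern are illustrative assumptions, not REALJava's actual folding logic.

```python
def fold(bytecode):
    """Fold push-push-add-store patterns into one three-operand pseudo-op.

    Each instruction is a tuple, e.g. ("push", "a") or ("add",).
    A matched window collapses into ("add_rrr", src1, src2, dst)."""
    folded, i = [], 0
    while i < len(bytecode):
        window = bytecode[i:i + 4]
        if (len(window) == 4
                and window[0][0] == window[1][0] == "push"
                and window[2] == ("add",)
                and window[3][0] == "store"):
            # Four stack operations become one register-style operation.
            folded.append(("add_rrr", window[0][1], window[1][1], window[3][1]))
            i += 4
        else:
            folded.append(bytecode[i])
            i += 1
    return folded

program = [("push", "a"), ("push", "b"), ("add",), ("store", "c"), ("nop",)]
folded = fold(program)
```

A folded sequence needs only one dispatch instead of four, which is where the performance gain of hardware folding comes from.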

Relevância:

30.00%

Publicador:

Resumo:

Construction of multiple sequence alignments is a fundamental task in bioinformatics. Multiple sequence alignments are used as a prerequisite in many bioinformatics methods, and consequently the quality of such methods can depend critically on the quality of the alignment. However, automatic construction of a multiple sequence alignment for a set of remotely related sequences does not always produce biologically relevant alignments. Therefore, there is a need for an objective approach to evaluating the quality of automatically aligned sequences. The profile hidden Markov model is a powerful approach in comparative genomics. In a profile hidden Markov model, the symbol probabilities are estimated at each conserved alignment position. This can increase the dimension of the parameter space and cause an overfitting problem. Both of these research problems are related to conservation. We have developed statistical measures for quantifying the conservation of multiple sequence alignments. Two types of methods are considered: those identifying conserved residues in an alignment position, and those calculating positional conservation scores. The positional conservation score was exploited in a statistical prediction model for assessing the quality of multiple sequence alignments. The residue conservation score was used as part of the emission probability estimation method proposed for profile hidden Markov models. The predicted alignment quality scores correlated highly with the correct alignment quality scores, indicating that our method is reliable for assessing the quality of any multiple sequence alignment. The comparison of the emission probability estimation method with the maximum likelihood method showed that the number of estimated parameters in the model decreased dramatically, while the same level of accuracy was maintained.
To conclude, we have shown that conservation can be successfully used both in the statistical model for alignment quality assessment and in the estimation of emission probabilities in profile hidden Markov models.
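A positional conservation score of the general kind described above can be sketched, for illustration, as one minus the normalized Shannon entropy of the residue distribution in an alignment column. The scoring formula, the gap handling and the alphabet size below are assumptions made for the sketch, not the measures actually developed in the thesis.

```python
import math
from collections import Counter

def positional_conservation(column, alphabet_size=20):
    """Illustrative conservation score for one alignment column.

    Returns 1.0 for a fully conserved column and values near 0.0
    when residues are close to uniformly distributed."""
    counts = Counter(residue for residue in column if residue != "-")  # ignore gaps
    total = sum(counts.values())
    if total == 0:
        return 0.0  # all-gap column carries no conservation signal
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return 1.0 - entropy / math.log2(alphabet_size)

# Column-wise scores over a toy three-sequence alignment
alignment = ["ACDA", "ACDC", "ACEG"]
scores = [positional_conservation(col) for col in zip(*alignment)]
```

Scores of this shape can then feed a downstream quality model: columns with high scores contribute evidence that the alignment placed homologous residues together.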

Relevância:

30.00%

Publicador:

Resumo:

The study examines the cultural expectations produced for motherhood, observed in two arenas of public discussion. It analyses, on the one hand, the talk of professionals working in child protection family work and, on the other, media talk about motherhood. The aim is to make visible alternative ways of constructing motherhood as good or inadequate, and to prompt reflection on the grounds and consequences of different interpretations in child protection work. Cultural expectations concerning motherhood also influence how motherhood is experienced at the personal level. The cultural definition of motherhood is analysed from two text corpora. The first consists of group discussions among family work professionals in child protection, collected in connection with the Family Work Project carried out at Stakes in 1999. The second consists of interviews with mothers collected from Finnish women's and family magazines (Kotiliesi, Anna, Kaksplus) published during the project period. The study asks: 1) What does the professionals' talk of concern about mothers attach to, and what kinds of cultural expectations of motherhood does it construct? 2) What kinds of expectations of motherhood do the media interviews with mothers construct? 3) What kind of horizon of expectations for motherhood do these discursive practices jointly produce? The theoretical and methodological cornerstones of the analysis are social constructionism and a feminist conception of knowledge. The method is a qualitative, data-driven reading with a feminist and critical orientation, drawing on the ideas and concepts of thematic analysis, discourse analysis and feminist methodology. In the discussions analysed, motherhood is constructed from the perspective of the child's needs (professionals) and of the woman's needs (media). The professionals talk about situations in which mothers' actions break the cultural image of the good mother, endanger the child's well-being and call for professional intervention in motherhood.
The professionals' interpretations reflect skilled assessment made from the perspective of the child's best interest, focused on mothers with their individual characteristics and traits. At the centre of the professional talk of concern are the mother's interaction relationships as well as her feelings, behaviour and attitudes. Adequate motherhood is constructed through creating a home, building an attachment relationship and putting the child first. By contrast, motherhood appears to be assessed hardly at all in relation to the mother's other identities or to the context in which motherhood is practised. In places, the professionals' interpretations also reflect stereotypical and idealistic expectations against which motherhood is evaluated. Such features may indicate that mothers' needs for help go unrecognized and unmet in child protection work. Media talk about motherhood takes place in the context of providing models of womanhood and motherhood. At its centre are the choices and behaviour, related to becoming a mother and practising motherhood, of women who have gained media publicity. Media talk is talk about breaking cultural and professional expectations of motherhood, reinterpreting them and reshaping them to suit oneself. In media talk, good motherhood is constructed through the mother's independence and time of her own, a rich social life, a professional identity and personal choices. Together the two corpora build up a spectrum of diverse and contradictory cultural expectations that pull mothers in different directions. The expectations are organized along four dimensions: 1) devoted to the child vs. realizing oneself, 2) an emotional bond vs. a rational task, 3) fulfilling expectations vs. self-directed, 4) independent vs. sharing motherhood. Practising motherhood culturally "correctly" means balancing between these expectations. The double messages conveyed by these dimensions can undermine mothers' self-esteem, produce feelings of inadequacy or incite mothers to perform motherhood as an achievement.
Professional support for motherhood likewise requires balancing, so that mothers are neither idealized nor blamed against cultural expectations.

Relevância:

30.00%

Publicador:

Resumo:

This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray level images are presented, and as a new application they are applied to gray level image compression. Both new distance transforms extend the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been adapted to calculate a chessboard-like distance transform with integer values (the DTOCS) and a real-valued distance transform (the EDTOCS) on gray level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3 to 10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance algorithms, such as GRAYMAT, find the minimum path joining two points by the smallest sum of gray levels or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map whose weights are not constant but are the gray value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray level differences in a different way: it propagates local Euclidean distances inside a kernel.
Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare, and this thesis introduces a new application area for them. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. of the compression ratio. Also a new morphological image decompression scheme, the 8 kernels' method, is presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
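A two-pass, gray-weighted transform in the spirit of the DTOCS can be sketched as follows. The local step cost of 1 plus the absolute gray level difference, and the forward/backward raster sweeps with causal half-neighborhoods, are an illustrative reading of the description above rather than the thesis's exact algorithm.

```python
INF = float("inf")

def gray_weighted_distance(gray, seeds, rounds=1):
    """Sketch of a two-pass DTOCS-like distance transform.

    gray   : 2D list of gray values
    seeds  : set of (row, col) source points with distance 0
    rounds : forward+backward sweep pairs; complicated images may need more

    The step to an 8-neighbor costs 1 + |gray difference| (illustrative)."""
    h, w = len(gray), len(gray[0])
    dist = [[0 if (y, x) in seeds else INF for x in range(w)] for y in range(h)]
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # causal half of the 8-neighborhood
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]       # anti-causal half
    sweeps = [(fwd, range(h), range(w)),
              (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1))]
    for _ in range(rounds):
        for mask, ys, xs in sweeps:
            for y in ys:
                for x in xs:
                    for dy, dx in mask:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] < INF:
                            step = 1 + abs(gray[y][x] - gray[ny][nx])
                            if dist[ny][nx] + step < dist[y][x]:
                                dist[y][x] = dist[ny][nx] + step
    return dist
```

On a constant-gray image every step costs 1, so the result reduces to the plain chessboard distance, which matches the characterization of the DTOCS as a gray-difference-weighted chessboard map.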

Relevância:

30.00%

Publicador:

Resumo:

The main objective of this Master's thesis is to identify ways to improve the cost-efficiency of the distribution logistics of Woikoski Oy. The main objective can be divided into sub-objectives: developing inventory control and improving the efficiency of distribution transport operations. The current state of warehousing was assessed with an ABC analysis. Based on the results, class-specific inventory control rules and inventory turnover targets were defined, which together constitute an inventory policy for the company. The problems identified culminated in the lack of continuous inventory monitoring, high inventory levels and shortcomings in demand management. As recommended actions, the company should invest in an effective warehouse management system and begin systematic demand forecasting. The proposed transport-related actions are the optimization of trunk and distribution routes and of distribution areas, and the deployment of the resulting distribution model. It is recommended that the current pricing of distribution freight be refined so that distribution costs and delivery volumes are itemized in more detail and freight costs are allocated to customers on realistic grounds. In addition, logistics operations should be harmonized across the company's sites, and performance measurement must be strengthened in the organization.
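An ABC analysis of the kind used to assess the warehousing situation can be sketched as follows: items are ranked by annual consumption value and assigned classes by cumulative share of total value. The 80 %/95 % cut-offs and the item data below are illustrative assumptions, not figures from the thesis.

```python
def abc_classify(annual_values, a_cut=0.80, b_cut=0.95):
    """Sketch of an ABC analysis over {item: annual consumption value}.

    Items are sorted by value in descending order; class A covers the
    items up to a_cut of cumulative value, B up to b_cut, the rest C."""
    total = sum(annual_values.values())
    ranked = sorted(annual_values.items(), key=lambda kv: kv[1], reverse=True)
    classes, cumulative = {}, 0.0
    for item, value in ranked:
        cumulative += value / total
        classes[item] = "A" if cumulative <= a_cut else \
                        "B" if cumulative <= b_cut else "C"
    return classes

# Hypothetical annual consumption values (EUR) for gas products
items = {"oxygen": 500_000, "nitrogen": 220_000, "argon": 90_000,
         "acetylene": 40_000, "helium": 15_000, "co2": 10_000}
classes = abc_classify(items)
```

Class-specific control rules then follow from the classification, e.g. tighter monitoring and higher turnover targets for A items than for C items, which is the structure of the inventory policy described above.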