977 results for Worst-case dimensioning
Abstract:
A basic prerequisite for in vivo X-ray imaging of the lung is the exact determination of radiation dose. Achieving resolutions of the order of micrometres may become particularly challenging owing to increased dose, which in the worst case can be lethal for the imaged animal model. A framework for linking image quality to radiation dose in order to optimize experimental parameters with respect to dose reduction is presented. The approach may find application for current and future in vivo studies to facilitate proper experiment planning and radiation risk assessment on the one hand and exploit imaging capabilities on the other.
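To make the dose-resolution tension concrete, here is a toy sketch (my own illustration, not the paper's framework) based purely on Poisson counting statistics, where the per-pixel signal-to-noise ratio is sqrt(N); all numbers are hypothetical:

```python
# Toy dose-resolution trade-off under pure Poisson statistics:
# SNR per pixel = sqrt(N), so the photon fluence needed for a
# target SNR grows as 1/pixel_area.
def required_fluence(target_snr: float, pixel_size_um: float) -> float:
    """Photons per um^2 so each (pixel_size x pixel_size) pixel
    collects N = target_snr**2 photons."""
    photons_per_pixel = target_snr ** 2
    pixel_area_um2 = pixel_size_um ** 2
    return photons_per_pixel / pixel_area_um2

for dx in (10.0, 5.0, 1.0):  # micrometre-scale pixel sizes
    print(f"{dx:5.1f} um pixels -> {required_fluence(5.0, dx):8.2f} photons/um^2")
# Halving the pixel size quadruples the fluence (and hence dose) for equal SNR,
# which is why micrometre resolutions push in vivo dose toward lethal levels.
```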
Abstract:
Around 11.5 × 10⁶ m³ of rock detached from the eastern slope of the Santa Cruz valley (San Juan province, Argentina) in the first fortnight of January 2005. The rockslide–debris avalanche blocked the river course, leading to the development of a lake with a maximum length of around 3.5 km. An increase in the inflow rate, from 47,000–74,000 m³/d between April and October to 304,000 m³/d between late October and the first fortnight of November, accelerated the growth of the lake. On 12 November 2005 the dam failed, releasing 24.6 × 10⁶ m³ of water. The resulting outburst flood caused damage mainly to infrastructure and affected the facilities of a hydropower dam that was under construction 250 km downstream of the source area. In this work we describe the causes and consequences of the natural dam's formation and failure, and we dynamically model the 2005 rockslide–debris avalanche with DAN3D. Additionally, as a volume of ~24 × 10⁶ m³ of rock still remains unstable on the slope, we use the results of the back-analysis to forecast the formation of a future natural dam. We analysed two potential scenarios: a partial slope failure of 6.5 × 10⁶ m³ and a worst case in which all the unstable volume remaining on the slope fails. The spreading of these potential events shows that a new blockage of the Santa Cruz River is likely to occur. According to their modelled morphometry and the watershed contributing upstream of the blockage area, these dams, like that of 2005, would also be unstable. This study shows the importance of back-analysis and forward modelling, which can provide critical information for land-use planning, hazard mitigation, and emergency management.
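As a rough illustration of the inflow figures quoted above, a back-of-the-envelope sketch using only numbers from the abstract (not the DAN3D model) shows how the inflow surge shortened the time needed to impound the volume the dam eventually released:

```python
# Days needed to impound the ~24.6e6 m3 eventually released,
# at each of the reported inflow rates.
RELEASED_M3 = 24.6e6

for label, inflow_m3_per_day in [("Apr-Oct low", 47_000),
                                 ("Apr-Oct high", 74_000),
                                 ("Nov surge", 304_000)]:
    days = RELEASED_M3 / inflow_m3_per_day
    print(f"{label:12s}: {inflow_m3_per_day:7,} m3/d -> ~{days:4.0f} days to impound")
# The ~4-6x jump in inflow cuts the filling time by the same factor,
# consistent with the accelerated lake growth reported before failure.
```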
Abstract:
General Summary Although the chapters of this thesis address a variety of issues, the principal aim is common: test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and making use of the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first one, corresponding to the first two chapters, investigates the link between trade and the environment; the second one, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and the fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might be productivity enhancing. The last chapter is not about how to better understand the world but how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much and how fast did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize our results in Google Earth. A short summary of each of the five chapters is provided below. The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH, comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE, comparative advantage in dirty, capital intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries that can be classified into 29 Southern and 19 Northern countries and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and being of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, where we have no a priori expectations about the signs of these effects. Therefore popular fears about the trade effects of differences in environmental regulations might be exaggerated. The second chapter is entitled "Is trade bad for the Environment? Decomposing worldwide SO2 emissions, 1990-2000". First we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit labor that vary across countries, periods and manufacturing sectors. Then we use these original data (covering 31 developed and 31 developing countries) to decompose the worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition effect).
We find that the positive scale (+9.5%) and the negative technique (-12.5%) effect are the main driving forces of emission changes. Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual and this no-trade world allows us (under the omission of price effects) to compute a static first-order trade effect. The latter now increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, this effect is smaller (3.5%) in 2000 than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour accordingly across sectors within each country (under the country-employment and the world industry-production constraints). Using linear programming techniques, we show that emissions are reduced by 90% with respect to the worst case, but that they could still be reduced further by another 80% if emissions were to be minimized. The findings from this chapter go together with those from chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past. Turning now to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", consists of a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework by Ciccone (2002) but extends it in order to include sectoral disaggregation and a temporal dimension. This allows us to formally write present productivity as a function of past productivity and other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it seems that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects. The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions.
First, it provides new estimates of orders of magnitude for the role of trade in the globalisation and environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
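The center-of-gravity computation in the last chapter lends itself to a compact sketch. The following is an assumed implementation of the physical center-of-mass idea (the city list and weights are hypothetical, not the thesis data): weight each city's location by an economic mass, average in 3-D Cartesian coordinates, and project the interior mean point back onto the sphere.

```python
import math

def center_of_gravity(cities):
    """cities: iterable of (lat_deg, lon_deg, weight), e.g. city GDP."""
    x = y = z = total = 0.0
    for lat, lon, w in cities:
        phi, lam = math.radians(lat), math.radians(lon)
        x += w * math.cos(phi) * math.cos(lam)
        y += w * math.cos(phi) * math.sin(lam)
        z += w * math.sin(phi)
        total += w
    x, y, z = x / total, y / total, z / total
    # Project the (interior) mean point back to the surface.
    lat = math.degrees(math.atan2(z, math.hypot(x, y)))
    lon = math.degrees(math.atan2(y, x))
    return lat, lon

# Toy weights for three cities (hypothetical numbers):
print(center_of_gravity([(51.5, -0.1, 3.0),     # London
                         (40.7, -74.0, 4.0),    # New York
                         (35.7, 139.7, 4.5)]))  # Tokyo
```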
Abstract:
Debris flows are among the most dangerous processes in mountainous areas due to their rapid rate of movement and long runout zone. Sudden and rather unexpected impacts produce not only damage to buildings and infrastructure but also threaten human lives. Medium- to regional-scale susceptibility analyses allow the identification of the most endangered areas and suggest where further detailed studies have to be carried out. Since data availability for larger regions is mostly the key limiting factor, empirical models with low data requirements are suitable for first overviews. In this study a susceptibility analysis was carried out for the Barcelonnette Basin, situated in the southern French Alps. By means of a methodology based on empirical rules for source identification and the empirical angle of reach concept for the 2-D runout computation, a worst-case scenario was first modelled. In a second step, scenarios for high, medium and low frequency events were developed. A comparison with the footprints of a few mapped events indicates reasonable results but suggests a high dependency on the quality of the digital elevation model. This fact emphasises the need for a careful interpretation of the results while remaining conscious of the inherent assumptions of the model used and the quality of the input data.
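The angle-of-reach concept used here for the 2-D runout computation reduces to a one-line relation, tan(α) = H/L, where H is the elevation drop from the source to the distal end of the deposit and α is an empirically calibrated angle. A minimal sketch with hypothetical numbers:

```python
import math

def runout_length(drop_h_m: float, reach_angle_deg: float) -> float:
    """Horizontal runout length L implied by the angle of reach:
    tan(alpha) = H / L  =>  L = H / tan(alpha)."""
    return drop_h_m / math.tan(math.radians(reach_angle_deg))

# Hypothetical 400 m drop, with reach angles spanning frequent (steeper)
# to worst-case (flatter) debris-flow scenarios:
for alpha in (30.0, 20.0, 11.0):
    print(f"alpha = {alpha:4.1f} deg -> L = {runout_length(400.0, alpha):6.0f} m")
# A flatter angle of reach spreads the hazard footprint much further downslope.
```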
Abstract:
This study was precipitated by several failures of flexible pipe culverts due to apparent inlet flotation. A survey of Iowa County Engineers revealed 31 culvert failures on pipes greater than 72" in diameter in eight Iowa counties within the past five years. No particular hydrologic, topographic, or geotechnical environment appeared to be more susceptible to failure; however, most failures seemed to occur on pipes flowing under inlet control. Geographically, most of the failures were in the southern and western sections of Iowa. The forces acting on a culvert pipe are quantified. A worst-case scenario, in which the pipe is completely plugged, is evaluated to determine the magnitude of the forces that must be resisted by a tie-down or headwall. Concrete headwalls or slope collars are recommended for most pipes over 4 feet in diameter.
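A hedged sketch of the worst-case force estimate described above, assuming simple hydrostatics (the formulation and numbers are mine, not the report's): a plugged, air-filled barrel displaces water, and hydrostatic pressure on the plugged inlet face adds a thrust that the tie-down or headwall must resist.

```python
import math

RHO_G = 9810.0  # unit weight of water, N/m^3

def plugged_pipe_forces(diameter_m: float, length_m: float, head_m: float):
    """Buoyant uplift on the air-filled barrel and hydrostatic thrust
    on the plugged inlet face."""
    area = math.pi * diameter_m ** 2 / 4.0
    buoyant_uplift_n = RHO_G * area * length_m  # weight of displaced water
    inlet_thrust_n = RHO_G * head_m * area      # pressure force on the plug
    return buoyant_uplift_n, inlet_thrust_n

# Hypothetical 72-inch (1.83 m) pipe, 20 m long, under 2 m of headwater:
up, thrust = plugged_pipe_forces(1.83, 20.0, 2.0)
print(f"uplift ~ {up/1e3:.0f} kN, inlet thrust ~ {thrust/1e3:.0f} kN")
# Pipe self-weight and soil cover offset only part of this, which is why
# headwalls or slope collars are recommended for larger diameters.
```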
Abstract:
BACKGROUND: The long-term outcome of antiretroviral therapy (ART) is not assessed in controlled trials. We aimed to analyse trends in the population effectiveness of ART in the Swiss HIV Cohort Study over the last decade. METHODS: We analysed the odds of stably suppressed viral load (ssVL: three consecutive values <50 HIV-1 RNA copies/mL) and of CD4 cell count exceeding 500 cells/μL for each year between 2000 and 2008 in three scenarios: an open cohort; a closed cohort ignoring the influx of new participants after 2000; and a worst-case closed cohort retaining lost or dead patients as virological failures in subsequent years. We used generalized estimating equations with sex, age, risk, non-White ethnicity and era of starting combination ART (cART) as fixed co-factors. Time-updated co-factors included type of ART regimen, number of new drugs and adherence to therapy. RESULTS: The open cohort included 9802 individuals (median age 38 years; 31% female). From 2000 to 2008, the proportion of participants with ssVL increased from 37 to 64% [adjusted odds ratio (OR) per year 1.16 (95% CI 1.15-1.17)] and the proportion with CD4 count >500 cells/μL increased from 40 to >50% [OR 1.07 (95% CI 1.06-1.07)]. Similar trends were seen in the two closed cohorts. Adjustment did not substantially affect time trends. CONCLUSIONS: There was no relevant dilution effect through new participants entering the open clinical cohort, and the increase in virological/immunological success over time was not an artefact of the study design of open cohorts. This can partly be explained by new treatment options and other improvements in medical care.
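A sketch of the kind of generalized-estimating-equation analysis described, with hypothetical column names (this is not the study's code): a logit model of stable suppression, repeated measures clustered by patient, and the worst-case closed cohort implemented by carrying lost or dead patients forward as failures.

```python
import statsmodels.api as sm
import pandas as pd

df = pd.read_csv("cohort.csv")  # assumed columns: ssvl, year, sex, age, ...

# Worst-case closed cohort: retain lost/dead patients as virological
# failures in subsequent years (assumes an indicator column prepared upstream).
df.loc[df["lost_or_dead"] == 1, "ssvl"] = 0

model = sm.GEE.from_formula(
    "ssvl ~ year + sex + age + risk_group + non_white + cart_era",
    groups="patient_id",                     # repeated measures per patient
    data=df,
    family=sm.families.Binomial(),           # logit link -> odds ratios
    cov_struct=sm.cov_struct.Exchangeable()  # within-patient correlation
)
result = model.fit()
print(result.summary())  # exp(coef of 'year') ~ adjusted per-year odds ratio
```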
Abstract:
Venous cannula orifice obstruction is an underestimated problem during augmented cardiopulmonary bypass (CPB); it can potentially be reduced with redesigned, virtually wall-less cannulas as compared with traditional percutaneous venous control cannulas. A bench model was developed, allowing for simulation of the vena cava with various affluent orifices, venous collapse and a worst-case scenario with regard to cannula position. Flow (Q) was measured sequentially for right atrial + hepatic + renal + iliac drainage scenarios, using a centrifugal pump and an experimental bench set-up (afterload 60 mmHg). At 1500, 2000 and 2500 RPM and the atrial position, the Q values were 3.4, 6.03 and 8.01 versus 0.77*, 0.43* and 0.58* l/min (p<0.05*) for the wall-less and the Biomedicus(®) cannula, respectively. The corresponding pressure values were -15.18, -31.62 and -74.53 versus -46.0*, -119.94* and -228.13* mmHg. At the hepatic position, the Q values were 3.34, 6.67 and 9.26 versus 2.3*, 0.42* and 0.18* l/min; the pressure values were -10.32, -20.25 and -42.83 versus -23.35*, -119.09* and -239.38* mmHg. At the renal position, the Q values were 3.43, 6.56 and 8.64 versus 2.48*, 0.41* and 0.22* l/min; the pressure values were -9.64, -20.98 and -63.41 versus -20.87, -127.68* and -239* mmHg, respectively. At the iliac position, the Q values were 3.43, 6.01 and 9.25 versus 1.62*, 0.55* and 0.58* l/min; the pressure values were -9.36, -33.57 and -44.18 versus -30.6*, -120.27* and -228* mmHg, respectively. Our experimental evaluation demonstrates that the redesigned, virtually wall-less cannulas, which allow for direct venous drainage at practically all intra-venous orifices, outperform the commercially available control cannula, with superior flow at reduced suction levels for all scenarios tested.
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, with a never-diminishing popularity, but for the last few years these problems have gone from an entertainment to an interesting research area — a twofold interesting area, in fact. On the one side, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are being actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior, they can be used as benchmark problems for refining and testing solving algorithms and approaches. Also, thanks to their highly structured nature, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. For studying the empirical hardness of GSP, we define a series of instance generators that differ in the level of balance they guarantee among the constraints of the problem, by finely controlling how the holes are distributed over the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP. Finally, we provide a study of the correlation between backbone variables (variables that take the same value in all solutions of an instance) and the hardness of GSP.
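As a minimal illustration of GSP as a structured search problem, here is a toy backtracking solver for grids of order m·n with rectangular m × n block regions (illustrative only; the work itself uses full CSP and SAT encodings, not this solver):

```python
def solve_gsp(grid, m, n):
    """grid: size x size list of lists, 0 = hole; blocks are m rows x n cols.
    Fills grid in place; returns True iff a solution exists."""
    size = m * n

    def consistent(r, c, v):
        if any(grid[r][j] == v for j in range(size)):  # row constraint
            return False
        if any(grid[i][c] == v for i in range(size)):  # column constraint
            return False
        br, bc = (r // m) * m, (c // n) * n             # block origin
        return all(grid[i][j] != v
                   for i in range(br, br + m)
                   for j in range(bc, bc + n))

    for r in range(size):
        for c in range(size):
            if grid[r][c] == 0:
                for v in range(1, size + 1):
                    if consistent(r, c, v):
                        grid[r][c] = v
                        if solve_gsp(grid, m, n):
                            return True
                        grid[r][c] = 0  # undo and try next value
                return False  # no value fits this hole
    return True  # no holes left: solved

# Order-6 GSP with rectangular 2 x 3 blocks, starting from an empty grid:
empty = [[0] * 6 for _ in range(6)]
print(solve_gsp(empty, 2, 3))
for row in empty:
    print(row)
```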
Abstract:
Random problem distributions have played a key role in the study and design of algorithms for constraint satisfaction and Boolean satisfiability, as well as in our understanding of problem hardness, beyond standard worst-case complexity. We consider random problem distributions from a highly structured problem domain that generalizes the Quasigroup Completion problem (QCP) and Quasigroup with Holes (QWH), a widely used domain that captures the structure underlying a range of real-world applications. Our problem domain is also a generalization of the well-known Sudoku puzzle: we consider Sudoku instances of arbitrary order, with the additional generalization that the block regions can have rectangular shape, in addition to the standard square shape. We evaluate the computational hardness of Generalized Sudoku instances, for different parameter settings. Our experimental hardness results show that we can generate instances that are considerably harder than QCP/QWH instances of the same size. More interestingly, we show the impact of different balancing strategies on problem hardness. We also provide insights into backbone variables in Generalized Sudoku instances and how they correlate to problem hardness.
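A sketch of the hole-punching idea behind such instance generators, under the simplifying assumption that "balanced" means spreading the holes evenly across rows (the actual balancing strategies studied are more refined and also balance columns and blocks):

```python
import random

def punch_holes_balanced(solution, holes):
    """Blank `holes` cells of a full Generalized Sudoku solution,
    with (approximately) equally many holes per row."""
    size = len(solution)
    puzzle = [row[:] for row in solution]
    per_row, extra = divmod(holes, size)
    for r in range(size):
        k = per_row + (1 if r < extra else 0)
        for c in random.sample(range(size), k):
            puzzle[r][c] = 0
    return puzzle

# A full solution can come, e.g., from the toy solver sketched after the
# previous abstract, run on an empty grid.
```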
Abstract:
The strength properties of a paper coating layer are very important in converting and printing operations. Both too high and too low coating strength can cause several problems in printing. One problem related to the strength of the coating is cracking at the fold. After printing, the paper is folded into its final form and the pages are stapled together. In folding, the coating can crack, causing aesthetic damage to the printed image or, in the worst case, allowing the centre sheet to fall out at stapling. When the paper is folded, one side undergoes tensile stresses and the other side compressive stresses. If the difference between these stresses is too high, the coating can crack at the fold. To better predict and prevent cracking at the fold, it is useful to know the strength properties of the coating layer. The tensile strength of coating layers has been measured before, but not the compressive strength. In this study, the aim was to find a way to measure the compressive strength of the coating layer and to investigate how different coatings behave in compression. The short-span crush test, which is used to measure the in-plane compressive strength of paperboards, was used to measure the compressive strength of the coating layer. In this method the free span of the specimen is very short, which prevents buckling. The compressive strength of free coating films as well as coated paper was measured, along with the tensile strength and the Bendtsen air permeance of the coating films. The results showed that the shape of the pigment has a great effect on the strength of the coating. A platy pigment gave much better strength than round or needle-like pigments. On the other hand, calcined kaolin, which is also platy but whose particles are aggregated, decreased the strength substantially. The difference in strength can be explained by the packing of the particles, which affects the porosity and thus the strength. Platy kaolin packs much more densely than the others and creates a less porous structure. The results also showed that the binder properties have a great effect on the compressive strength of the coating layer. Both the amount of latex and its glass transition temperature, Tg, affect the strength. As the amount of latex increases, the strength of the coating also increases: a larger amount of latex binds the pigment particles together better and decreases the porosity. Compressive strength increased with increasing Tg, because a hard latex gives a stiffer and less elastic film than a soft latex.
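A small worked example of the bending argument above, assuming classical beam bending with the neutral axis at the sheet's mid-plane (my own illustration, not a measurement from the study): a sheet of thickness t folded to radius r sees surface strains of roughly ±t/(2r), and the coating cracks once this exceeds its failure strain.

```python
def surface_strain(thickness_um: float, fold_radius_um: float) -> float:
    """Bending strain at the outer (tensile) or inner (compressive) surface,
    assuming the neutral axis sits at the mid-plane."""
    return thickness_um / (2.0 * fold_radius_um)

# Hypothetical coated paper: 100 um thick, folded to a 150 um radius.
eps = surface_strain(100.0, 150.0)
print(f"surface strain ~ {eps:.0%}")  # ~33%: a tight fold strains the
                                      # coating far beyond typical failure
                                      # strains, hence cracking at the fold.
```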
Abstract:
The purpose of this study was to investigate the effects of information and communication technology (ICT) on school from teachers' and students' perspectives. The focus was on three main subject matters: on ICT use and competence, on teacher and school community, and on learning environment and teaching practices. The study is closely connected to the national educational policy, which has aimed strongly at supporting the implementation of ICT in pedagogical practices at all institutional levels. The phenomena were investigated using a mixed methods approach. The qualitative data from three case studies and the quantitative data from three statistical studies were combined. In this study, mixed methods were used to investigate the complex phenomena from various stakeholders' points of view, and to support validation by combining different perspectives in order to give a fuller and more complete picture of the phenomena. The data were used in a complementary manner. The results indicate that the technical resources for using ICT both at school and at home are very good. In general, students are capable and motivated users of new technology; these skills and attitudes are mainly based on home resources and leisure-time use. Students have the skills to use new kinds of applications and new forms of technology, and their ICT skills are wide, although not necessarily adequate; their working habits might be ineffective and even wrong. Some students have a special kind of ICT-related adaptive expertise, which develops in a beneficial interaction between school guidance and challenges, and individual interest and activity. Teachers' skills are more heterogeneous. The large majority of teachers have sufficient skills for everyday and routine working practices, but many of them still have difficulties in finding a meaningful pedagogical use for technology. The intensive case study indicated that for the majority of teachers, intensive ICT projects offer a possibility for learning new skills and competences intertwined with their work, often also supported by external experts and a collaborative teacher community — a possibility that "ordinary" teachers usually do not have. Further, teachers' good ICT competence helps them to adopt new pedagogical practices and integrate ICT in a meaningful way. The genders differ in their use of and skills in ICT: males show better skills especially in purely technical issues, also in schools and classrooms, whereas female students and younger female teachers use ICT in their ordinary practices quite naturally. With time, the technology has become less technical and its communication and creation affordances have become stronger, easier to use, more popular and motivating, all of which has increased female interest in the technology. There is a generation gap in ICT use and competence between teachers and students. This is apparent especially in the ICT-related pedagogical practices in the majority of schools. The new digital affordances not only replace some previous practices; the new functionalities change many of our existing conceptions, values, attitudes and practices. The very different conceptions that generations have about technology lead, in the worst case, to a digital gap in education: the technology used in school is boring and ineffective compared to ICT use outside school, and it does not provide the competence needed for using advanced technology in learning.
The results indicate that in schools which have special ICT projects ("ICT pilot schools") for improving pedagogy, these have led to true changes in teaching practices. Many teachers adopted student-centred and collaborative, inquiry-oriented teaching practices, as well as practices that supported students' authentic activities, independent work, knowledge building, and students' responsibility. This is, indeed, strongly dependent on the ICT-related pedagogical competence of the teacher. However, the daily practices of some teachers still reflected a rather traditional teacher-centred approach. As a matter of fact, very few teachers ever relied solely on, for example, the knowledge-building approach; teachers used various approaches or mixed them, based on the situation, the teaching and learning goals, and their pedagogical and technical competence. In general, changes towards pedagogical improvement, even in well-organised developmental projects, are slow. As a result, there are two kinds of ICT stories: successful "ICT pilot schools" with pedagogical innovations related to ICT and with school-community-level agreement about the visions and aims, and "ordinary schools", which have no particular interest in or external support for using ICT for improvement, and in which ICT is used in a more routine way, and as a tool for individual teachers, not for the school community.
Abstract:
In power electronics, knowledge of the thermal behaviour of a printed circuit board plays a very important role, and ever-increasing power densities underline its importance. On the board, the power components and the copper layers carrying large currents act as heat sources. The insulation material between the copper layers limits the maximum temperature of the board, which in turn limits the maximum current that can be carried through the board. In this work, the maximum current densities of a board were investigated in a worst-case scenario. For this purpose a test board was designed and a thermal model of it was constructed. The effects of cooling were also investigated. The accuracy of the model was verified with measurements.
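A lumped-parameter sketch of the current limit implied by a maximum board temperature, assuming a single copper trace with a known thermal resistance to ambient (a deliberate simplification of the thesis's thermal model; all values are hypothetical):

```python
import math

RHO_CU = 1.72e-8  # ohm*m, copper resistivity near room temperature

def max_current(width_m, thickness_m, length_m, r_th_k_per_w,
                t_ambient_c=25.0, t_max_c=105.0):
    """Largest DC current keeping the trace below t_max_c, given
    Joule heating P = I^2 * R and a lumped thermal resistance R_th."""
    r_elec = RHO_CU * length_m / (width_m * thickness_m)  # trace resistance
    p_allow = (t_max_c - t_ambient_c) / r_th_k_per_w      # allowable power
    return math.sqrt(p_allow / r_elec)

# Hypothetical 2 mm x 35 um trace, 50 mm long, 30 K/W to ambient:
print(f"I_max ~ {max_current(2e-3, 35e-6, 50e-3, 30.0):.1f} A")
# Better cooling (lower R_th) raises the allowable power and hence
# the worst-case current limit, as the cooling experiments suggest.
```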
Abstract:
This master's thesis examines the organization of human resources in the response to a major ship-sourced oil spill. The aim is to design, for the use of the authorities responsible for oil-spill response on the Gulf of Finland coast, an optimal recruitment strategy for the worst probable ship-sourced oil spill. The study also examines what kind of employment contract can be concluded with oil-spill response workers, and, in addition, how the workforce can be retained. The theoretical part of this qualitative study was carried out as a literature review. The empirical material was collected by interviewing nine experts during autumn 2009. The interviews were semi-structured theme interviews. According to the results, the most significant need for additional workforce arises in manual shoreline clean-up. Especially if the clean-up work is prolonged, the rescue authorities need external workforce to assist them. Recruitment is carried out within a few weeks of the occurrence of the oil spill. The regional rescue department carries out the recruitment using efficient recruitment-communication channels that reach a wide target group (e.g. newspapers, TV and the Internet). A fixed-term employment contract following the Finnish municipal collective agreement (Kunnallinen virka- ja työehtosopimus) is concluded with the workers. The most important factors motivating the clean-up workers are considered to be the significance of the work to society, a clearly defined and achievable goal, and feedback on the work done.
Abstract:
The number of immigrants in Finland has increased significantly only in the 1990s and 2000s. In 2010, nearly 170,000 foreign citizens lived in Finland. People most commonly move to Finland because of marriage, return migration or refugee status; a small though growing group moves for work or study. The range of nationalities, educational backgrounds, occupations and so on among the migrants is also wide. With the growing number of foreign citizens, Finland has had to face many kinds of challenges related to immigration, of which questions related to employment are not the least. This study examines the employment and early careers in Finland of immigrants who completed a higher education degree in their country of origin. The purpose of the study is to find out how highly educated immigrants have found employment in Finland, what the beginnings of their careers in Finland have been like, and how they have succeeded in utilizing, in the new country, the education they acquired in their country of origin. In addition, the study examines how the immigrants' different life situations, circumstances and choices in the country of origin have influenced the formation of their careers in Finland. The data consist of a survey and of interviews. The purpose of the survey data (n=99) is to give a quantitative picture of the employment of highly educated immigrants in Finland. Behind the numbers, however, the immigrants' individual experiences of employment and of career formation in the new country remain invisible. Through the biographical interview data (n=20) used as the second data set, it is possible to make visible the individual work- and education-related choices that the immigrants made both in their country of origin and in Finland, as well as the situations and circumstances in which they lived in the country of origin and from which they came to Finland and to the Finnish labour market. The immigrants in the data had mainly come to Finland because of marriage, return migration or refugee status; only a few had come for work. The labour-market position of immigrants is often explained by their resources, such as language skills, education, work experience, and the quality and quantity of social relations and networks. Discriminatory and prejudiced attitudes towards immigrants also play a central role in employment. As education is one of the most important determinants of labour-market position, educated immigrants should end up in positions corresponding to their degrees. The study found, however, that finding employment was difficult for the immigrants despite their good education. Of the survey respondents, only a few (6%) were employed by the end of the year in which they moved to Finland; after three years in Finland a good third (35%) were employed; and at the time of data collection, in 2004, 38% of the respondents were employed. Employment relationships were most often fixed-term and short. In addition, the early careers included considerable unemployment as well as participation in training. A positive finding, however, was that when highly educated immigrants did manage to find work in Finland, the job often corresponded either fully or at least partly to the higher education degree they had obtained.
The early careers of highly educated immigrants in Finland can be classified into three groups, each of which was further divided into two subgroups, yielding six career-start types in total: a stable career and a stabilizing career corresponding to one's education; a mixed career and a descending career partly corresponding to one's education; and an entry-level career and an unemployed career not corresponding to one's education. The interview data are used to examine the individual life courses of the highly educated immigrants, starting from the career and occupational choices made in the country of origin, continuing through the move to Finland, and ending with the formation of their careers in Finland. The central themes distinguishing the interviews from one another were, on the one hand, getting along in Finland and in the Finnish labour market and, on the other hand, how life had taken shape in the country of origin, in particular the career and occupational choices made there and the related experiences and life situations. On the basis of these criteria the data fell into three groups, which were named the copers, the drifters and the strugglers. The copers' stories formed a kind of positive circle: both the occupational choices made in the country of origin and the formation of the career in Finland took place relatively effortlessly. In most cases their jobs in Finland corresponded to the education acquired in the country of origin, and they were later also satisfied with their own career choices. For the drifters, finding their own place was more difficult. Characteristic of this group were a certain difficulty in making choices and a dissatisfaction with earlier decisions. Some regretted the career choices they had made when young so much that they decided to acquire an entirely new occupation in Finland. For most of them, moving to Finland meant a decline in occupational status. The strugglers told, from the very start, a story that differed greatly from the other two groups: almost their entire life in the country of origin had been coloured by war and unrest. This was also reflected in their occupational choices; a place of study might have been chosen, for example, on the basis of where it was safe to study at the time. Their move to Finland also differed from that of the two other groups, as the departure from the former home country had often taken place very suddenly and without prior planning, when the situation escalated rapidly. Entering working life in Finland was difficult for all the strugglers, and at the time of the interviews it was still at a very early stage for several of them. Even a good education does not always guarantee immigrants a job in a new country, because a degree and competence acquired elsewhere are not easy to transfer from one country to another. In the worst case, a higher education degree completed in a foreign country can be completely nullified in the new country, and the competence acquired along with it can lose its value entirely. This means wasting the resources of both the individual and society, in a situation where educated immigrants permanently residing in the country work, in one way or another, in unstable jobs that do not correspond to their education, at the margins of the labour market, or remain outside the labour market altogether.
Abstract:
Atherosclerosis is a life-long vascular inflammatory disease and the leading cause of death in Finland and in other western societies. The development of atherosclerotic plaques is progressive, and they form when lipids begin to accumulate in the vessel wall. This accumulation triggers the migration of inflammatory cells that is a hallmark of vascular inflammation. Often a plaque becomes unstable, forming a vulnerable plaque that may rupture, causing thrombosis and, in the worst case, myocardial infarction or stroke. Identification of these vulnerable plaques before they rupture could save lives. At present, no appropriate, non-invasive method for their identification exists in the clinic. The aim of this thesis was to evaluate novel positron emission tomography (PET) probes for the detection of vulnerable atherosclerotic plaques and to characterize two mouse models of atherosclerosis. These studies were performed by using ex vivo and in vivo imaging modalities. The vulnerability of atherosclerotic plaques was evaluated as the expression of active inflammatory cells, namely macrophages. Age and the duration of the high-fat diet had a drastic impact on the development of atherosclerotic plaques in mice. In imaging of atherosclerosis, 6-month-old mice kept on a high-fat diet for 4 months showed mature, metabolically active atherosclerotic plaques. [18F]FDG and 68Ga accumulated in areas representative of vulnerable plaques. However, the slow clearance of 68Ga limits its use for plaque imaging. The novel synthesized [68Ga]DOTA-RGD and [18F]EF5 tracers demonstrated efficient uptake in plaques as compared with the healthy vessel wall, but the pharmacokinetic properties of these tracers were not optimal in the models used. In conclusion, these studies resulted in the identification of new strategies for the assessment of plaque stability and of mouse models of atherosclerosis which could be used for plaque imaging. In the probe panel used, [18F]FDG was the best tracer for plaque imaging. However, further studies are warranted to clarify the applicability of [18F]EF5 and [68Ga]DOTA-RGD for imaging of atherosclerosis with other experimental models.