41 results for NOES - Nose Only Exposure System
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The general trend towards increasing efficiency and energy density drives industry to high-speed technologies. Active Magnetic Bearings (AMBs) are one of the technologies that allow contactless support of a rotating body. Theoretically, there are no limitations on the rotational speed. The absence of friction, low maintenance cost, micrometer precision, and programmable stiffness have made AMBs a viable choice for highly demanding applications. Along with advances in power electronics, such as significantly improved reliability and cost, AMB systems have gained wide adoption in industry. The AMB system is a complex, open-loop unstable system with multiple inputs and outputs. For normal operation, such a system requires feedback control. To meet the high demands for performance and robustness, model-based control techniques should be applied. These techniques require an accurate plant model description and uncertainty estimations. The advanced control methods require more effort at the commissioning stage. In this work, a methodology is developed for the automatic commissioning of a subcritical, rigid gas blower machine. The commissioning process includes open-loop tuning of separate parts such as sensors and actuators. The next step is to apply a system identification procedure to obtain a model for the controller synthesis. Finally, a robust model-based controller is synthesized and experimentally evaluated over the full operating range of the system. The commissioning procedure is developed using only the available system components and a priori knowledge, without any additional hardware. Thus, the work provides an intelligent system with a self-diagnostics feature and automatic commissioning.
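The system identification step mentioned above can be illustrated with a minimal frequency-response estimate: excite the plant with a sinusoid and correlate input and output against a complex exponential to get one point of G(jω). This is a generic sketch under assumed values (the first-order example plant, excitation frequency, and sample rate are invented for illustration), not the actual AMB identification procedure of the thesis.

```python
import numpy as np

def estimate_frf(u, y, freqs, fs):
    """Estimate the frequency response G(jw) = Y/U at the given
    frequencies by correlating input and output against e^{-j2*pi*f*t}."""
    t = np.arange(len(u)) / fs
    frf = []
    for f in freqs:
        base = np.exp(-2j * np.pi * f * t)
        U = np.dot(u, base)   # Fourier coefficient of the input
        Y = np.dot(y, base)   # Fourier coefficient of the output
        frf.append(Y / U)
    return np.array(frf)

# Demo: a known first-order plant G(s) = 1/(s+1), excited at 0.5 Hz.
# The steady-state output is generated analytically for the test.
fs, n, f0 = 1000.0, 20000, 0.5            # 10 full excitation periods
t = np.arange(n) / fs
w0 = 2 * np.pi * f0
gain, phase = 1 / np.sqrt(1 + w0**2), -np.arctan(w0)
u = np.sin(w0 * t)
y = gain * np.sin(w0 * t + phase)

G = estimate_frf(u, y, [f0], fs)[0]
print(abs(G), np.angle(G))   # should recover the plant's gain and phase
```

Because the record contains an integer number of excitation periods, the correlation recovers gain and phase essentially exactly; in practice one would average over several frequencies and repetitions to suppress noise.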
Abstract:
A high-speed and high-voltage solid-rotor induction machine provides beneficial features for natural gas compressor technology. The mechanical robustness of the machine enables its use in an integrated motor-compressor. The technology uses a centrifugal compressor, which is mounted on the same shaft as the high-speed electrical machine driving it. No gearbox is needed, as the speed is determined by the frequency converter. Cooling is provided by the process gas, which flows through the motor and is capable of transferring the heat away from it. The technology has been used in compressors in the natural gas supply chain in central Europe. New areas of application include natural gas compressors working at the wellheads of subsea gas reservoirs. A key challenge for the design of such a motor is the resistance of the stator insulation to the raw natural gas from the well. The gas contains water and heavy hydrocarbon compounds, and it is far harsher than the sales gas in the natural gas supply network. The objective of this doctoral thesis is to discuss the resistance of the insulation to the raw natural gas and the phenomena degrading the insulation. The presence of partial discharges is analyzed in this doctoral dissertation. The breakdown voltage of the gas is measured as a function of pressure and gap distance. The partial discharge activity is measured on small samples representing the windings of the machine. The electric field behavior is also modeled by finite element methods. Based on the measurements, it has been concluded that the discharges are expected to disappear at gas pressures above 4–5 bar. The disappearance of discharges is caused by the breakdown strength of the gas, which increases as the pressure increases. Based on the finite element analysis, the physical length of a discharge seen in the PD measurements at atmospheric pressure was approximated to be 40–120 µm.
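The pressure dependence of breakdown voltage described above is commonly modeled with Paschen's law. A quick numerical sketch (using illustrative textbook constants for air, not the natural-gas values measured in the thesis) shows the breakdown voltage rising with pressure on the right-hand branch of the Paschen curve, which is the behavior that makes the discharges disappear at higher pressure:

```python
import numpy as np

# Paschen's law: V_b = B*p*d / ln( A*p*d / ln(1 + 1/gamma) )
# Illustrative textbook constants for air (NOT the gas of the thesis):
A = 15.0       # 1/(Torr*cm), saturation ionization coefficient
B = 365.0      # V/(Torr*cm)
GAMMA = 0.01   # secondary-emission coefficient

def paschen_vb(p_torr, d_cm):
    """Breakdown voltage (V) as a function of pressure and gap distance."""
    pd = p_torr * d_cm
    return B * pd / np.log(A * pd / np.log(1 + 1 / GAMMA))

d = 0.1                                   # 1 mm gap
pressures = np.array([50.0, 760.0, 3800.0])  # Torr: low, 1 atm, ~5 atm
vb = paschen_vb(pressures, d)
print(vb)   # breakdown voltage grows with pressure at fixed gap
```

On the right branch of the curve (p·d above the Paschen minimum) the breakdown strength grows roughly linearly with pressure, consistent with the measured disappearance of discharges above a few bar.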
The chemical aging of the insulation when exposed to raw natural gas is discussed based on a vast set of experimental tests with a gas mixture representing the real gas mixture at the wellhead. The mixture was created by mixing dry hydrocarbon gas, heavy hydrocarbon compounds, monoethylene glycol, and water. The mixture was made more aggressive by increasing the amount of liquid substances. Furthermore, the temperature and pressure were increased, which resulted in accelerated test conditions. The time required to detect severe degradation was thus decreased. The test program included a comparison of materials, an analysis of the effects of different compounds in the gas mixture, namely water and heavy hydrocarbons, on the aging, an analysis of the effects of temperature and exposure duration, and also an analysis of the effect of sudden pressure changes on the degradation of the insulating materials. It was found in the tests that an insulation consisting of mica, glass, and epoxy resin can tolerate the raw natural gas, but it experiences some degradation. The key material in the composite insulation is the resin, which largely defines the performance of the insulation system. The degradation of the insulation is mostly determined by the amount of gas mixture diffused into it. The diffusion was seen to follow Fick's second law, but the coefficients were not accurately defined. The diffusion was not sensitive to temperature, but it was dependent upon the thermodynamic state of the gas mixture, in other words, the amounts of liquid components in the gas. The weight increase observed was mostly related to heavy hydrocarbon compounds, which act as plasticizers in the epoxy resin. The diffusion of these compounds is determined by the crosslink density of the resin. Water causes slight changes in the chemical structure, but these changes do not significantly contribute to the aging phenomena.
Sudden changes in pressure can lead to severe damage in the insulation, because the motion of the diffused gas is able to create internal cracks. Thus, while diffusion by itself only reduces the mechanical strength of the insulation, the ultimate breakdown can be caused by a sudden drop in the pressure of the process gas.
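Fick's second law, which the measured diffusion was seen to follow, can be sketched numerically with an explicit finite-difference scheme for one-dimensional absorption into a slab whose surfaces are held at the saturated concentration. The diffusion coefficient and geometry below are illustrative assumptions, not the thesis measurements:

```python
import numpy as np

# Explicit finite differences for Fick's second law,
#   dC/dt = D * d2C/dx2,
# for 1-D absorption into a slab with both surfaces held at C = 1
# (saturated gas mixture). D and geometry are illustrative only.
D, L, N = 1e-11, 1e-3, 51          # m^2/s, slab thickness (m), grid points
dx = L / (N - 1)
dt = 0.4 * dx**2 / D               # stable: needs D*dt/dx^2 <= 0.5
C = np.zeros(N)
C[0] = C[-1] = 1.0                 # surface concentration fixed

def step(C):
    """One explicit time step of the diffusion equation."""
    Cn = C.copy()
    Cn[1:-1] = C[1:-1] + D * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2])
    return Cn

uptake = []                        # mean concentration ~ mass absorbed
for _ in range(2000):
    C = step(C)
    uptake.append(C.mean())

print(uptake[-1])                  # partial saturation after 2000 steps
```

The mean concentration (proportional to the weight gain measured gravimetrically in sorption tests) rises monotonically toward saturation, with the familiar square-root-of-time behavior at early times.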
Abstract:
In the electrical industry, the 50 Hz electric and magnetic fields are often higher than in the average working environment. The electric and magnetic fields can be studied by measuring or by calculating the fields in the environment. For example, the electric field under a 400 kV power line is 1 to 10 kV/m, and the magnetic flux density is 1 to 15 µT. The electric and magnetic fields of a power line induce a weak electric field and electric currents in the exposed body. The average current density in a human being standing under a 400 kV line is 1 to 2 mA/m². The aim of this study is to find out the possible effects of short-term exposure to the electric and magnetic fields of electricity power transmission on workers' health, in particular the cardiovascular effects. The study consists of two parts; Experiment I: influence on extrasystoles, and Experiment II: influence on heart rate. In Experiment I two groups, 26 voluntary men (Group 1) and 27 transmission-line workers (Group 2), were measured. Their electrocardiogram (ECG) was recorded with an ambulatory recorder both in and outside the field. In Group 1 the fields were 1.7 to 4.9 kV/m and 1.1 to 7.1 µT; in Group 2 they were 0.1 to 10.2 kV/m and 1.0 to 15.4 µT. In the ECG analysis the only significant observation was a decrease in the heart rate after field exposure (Group 1). The drop could not be explained with the first measuring method; therefore Experiment II was carried out. In Experiment II two groups were used; Group 1 (26 male volunteers) was measured under real field exposure, Group 2 (15 male volunteers) in "sham" fields. The subjects of Group 1 spent 1 h outside the field, then 1 h in the field under a 400 kV transmission line, and then again 1 h outside the field. Under the 400 kV line the field strength varied from 3.5 to 4.3 kV/m, and the flux density from 1.4 to 6.6 µT. Group 2 spent the entire test period (3 h) in a 33 kV outdoor testing station in a "sham" field.
ECG, blood pressure, and electroencephalogram (EEG) were measured by ambulatory methods. Before and after the field exposure, the subjects performed cardiovascular autonomic function tests. The analysis of the results (Experiments I and II) showed that extrasystoles or arrhythmias were as frequent in the field (below 4 kV/m and 4 µT) as outside it. In Experiment II no decrease in the heart rate was detected, and the systolic and diastolic blood pressure stayed nearly the same. No health effects were found in this study.
Abstract:
The purpose of this thesis is to examine the implementation of an ERP system and to provide a roadmap for carrying it out successfully. In addition, the thesis surveys the advantages and benefits Konecranes gained when the company introduced an ERP system. The implementation project, its phases, and the other significant stages of ERP projects are described in detail. ERP system implementation is first examined on the basis of the literature; later it is examined in light of the author's own experiences and observations of an ERP implementation, comparing practice with theory. ERP systems are expensive and their implementation is time-consuming. Over the past decade, companies have increasingly begun to adopt them. ERP systems have gained growing popularity, among other reasons, because they integrate and streamline operations and provide up-to-date information in real time. Even successful ERP projects leave room for improvement. When measuring the success of an ERP project, both quantitative and qualitative metrics should be used. It is easy to rely on quantitative metrics alone, yet qualitative matters are often more important. People must be committed to a common goal through communication. Poor first impressions are hard to change. Even if an ERP project appears successful when everything looks good "on paper", in the end the people using the system decide the success of the project. The go-live moment of the system should be regarded as only the first phase of the ERP project.
Abstract:
Deregulation and the opening of competition have caused a considerable change in the operating environment of the Nordic electricity markets. The reforms of the Nordic electricity market acts removed the formal barriers to competition, enabling the formation of active electricity markets both nationally and across the Nordic countries. The change from a regulated electricity supply system to a competitive electricity market exposed the market participants to risks that had not previously occurred in the industry. Risks that had earlier arisen in the business could be transferred directly to customers, which was no longer possible in the same way in a competitive environment. Thus, active monitoring of the electricity markets and organized risk management of electricity trading became prerequisites for success in the new environment. This study examines the implementation of risk management in an electricity company with back-pressure generation. The market risk threatening the business is treated in the study as a financial risk arising from any change in value on the electricity markets. The greatest risk factor in the electricity markets can be considered to be uncertainty about the future price level of electricity, which always materializes either as a short-term cash-flow risk or as a long-term cost risk. The electricity trading of a company with its own generation involves both binding obligations and opportunities for flexible operation. The trading of a company with supply-obligation procurement and sales is constrained by the fixed prices of the deliveries while the market price of electricity varies considerably. On the other hand, a fixed-price electricity supply acts as a price hedge against the market price of electricity. The central uncertainty in electricity trading thus concerns balancing short- and long-term price hedging, and thereby choosing the level and timing at which to close the open position exposed to price fluctuations.
Risk management should not aim at eliminating risks but at adjusting them to match the expected return and risk tolerance of the business. A central requirement of risk management is modelling the exposure to risks. In the study, risk-management guidelines were defined for the electricity trading of a generation-oriented energy company, and an operating model was proposed for implementing them. The core content of a risk policy is to define the risk tolerance of the business as well as the authority and framework for operational risk management. The purpose of trading authorizations and limits is to prevent the risk tolerance from being exceeded. To ensure clarity and transparency in monitoring operations, the operating model must define the segregation of responsibilities as well as detailed arrangements for supervision and reporting. The increased need for risk management has also increased the supply of services in the electricity markets. New players specializing in risk management have entered the markets, offering services based on their expertise as well as modelling and management tools. A central choice in implementing a risk-management operating model for electricity trading must therefore be made between outsourcing and in-house operation, based on the company's own resources and strategic objectives. In electricity trading with small resources and turnover, one should not aim to master all areas of risk management in-house. Based on the study, when the trading volume of a generation-oriented electricity company is small and its business strategy focuses on hedging, most areas of its risk management can be handled cost-effectively and reliably in-house. Developing risk management is a continuous process, and the risk-management operating model must therefore be adapted to changes in both the business strategy and the business environment.
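The open-position idea at the core of the hedging discussion above can be made concrete with a small numeric sketch: the volume of fixed-price delivery obligations not covered by own generation or financial hedges is exposed to the spot price. All numbers below are hypothetical illustrations, not figures from the study.

```python
# Hypothetical volumes (MWh) for one delivery period:
sales_mwh      = 100_000   # fixed-price delivery obligations
generation_mwh =  60_000   # own back-pressure generation
hedged_mwh     =  25_000   # financially hedged volume (e.g. futures)

# The open position is the volume that must be bought at spot price:
open_position = sales_mwh - generation_mwh - hedged_mwh   # MWh exposed

contract_price = 40.0                  # EUR/MWh agreed with customers
spot_scenarios = [20.0, 40.0, 80.0]    # EUR/MWh market outcomes

# Cash-flow impact of buying the open volume at the spot price:
for spot in spot_scenarios:
    pnl = open_position * (contract_price - spot)
    print(f"spot {spot:5.1f} EUR/MWh -> margin on open volume {pnl:+,.0f} EUR")
```

The asymmetry of the scenario results is exactly the short-term cash-flow risk the abstract describes: closing the open position earlier (hedging more) narrows the spread of outcomes at the cost of giving up the upside.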
Abstract:
The Third Generation Partnership Project (3GPP) is an organization that defines and maintains the standards for third-generation mobile networks. The organization was created by several standardization bodies when it was recognized that a global third-generation mobile technology could not be defined without broad cooperation. In 3GPP, the standardization work is divided among several Technical Specification Groups, each tasked with developing specifications and reports in its own area of responsibility. Specification work in 3GPP proceeds in parallel across the Technical Specification Groups. This requires strict rules for the creation, approval, and maintenance of specifications; only in this way is it possible to manage the changes made to the specifications and the total amount of work required. This master's thesis describes the UMTS technology specified by 3GPP, focusing in particular on the structure of the 3GPP organization, the preparation of specifications, and the working methods. The thesis shows what kind of organization and rules are required to develop a worldwide mobile telephone system.
Abstract:
In this study the performance measurement, a part of the research and development of the RNC, was improved by implementing counter testing in the Nokia Automation System. The automation of counter testing is a feature the customer ordered, because performing counter testing manually is rather complex. The objective was to implement an automated counter testing system which, once configured correctly, would run the testing and perform the analysis. The requirements for the counter testing were first studied. Whether automating the feature was feasible was investigated in meetings with the customer, and the basic functionality required for the automation was drawn up. The technologies used in the architecture of the Nokia Automation System were studied. Based on the results of the study, a new technology, wxWidgets, was introduced; it was necessary to facilitate implementing the required feature. Finally, the implementation of the counter testing was defined and carried out. The result of this study was the automated counter testing method developed as a new feature for the Nokia Automation System. The feature meets the specifications and requirements set by the customer. The counter testing is fully automated; only the configuration of the test cases is done by the user. The customer has presented new requests to further develop the feature, and the Nokia Automation System developers plan to implement them in the near future. The study describes the implementation of the counter testing feature and gives guidelines for developing it further.
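The analysis step of counter testing boils down to comparing the counters measured from the system under test against expected values and reporting every mismatch. The sketch below shows that core idea; the function name, counter names, and data shapes are invented for illustration and are not the actual Nokia Automation System API.

```python
def compare_counters(expected, measured):
    """Return {counter: (expected, measured)} for every mismatch.

    A counter missing from the measured set is reported as None,
    so silently absent counters also fail the test.
    """
    mismatches = {}
    for name, exp_val in expected.items():
        got = measured.get(name)          # None if counter not reported
        if got != exp_val:
            mismatches[name] = (exp_val, got)
    return mismatches

# Hypothetical counters for one test case:
expected = {"RRC_CONN_SETUP": 10, "RRC_CONN_FAIL": 0, "HO_ATTEMPTS": 4}
measured = {"RRC_CONN_SETUP": 10, "RRC_CONN_FAIL": 2}

print(compare_counters(expected, measured))
```

In an automated setup, an empty mismatch dictionary marks the test case as passed; anything else is logged for the analysis report.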
Abstract:
Internationalization and the rapid growth that follows it have created the need to consolidate the IT systems of many small-to-medium-sized production companies. Enterprise Resource Planning (ERP) systems are a common solution for such companies. Deployment of these ERP systems consists of many steps, one of which is the implementation of the same shared system at all international subsidiaries. From the IT point of view, this is also one of the most important steps in the internationalization strategy of the company. The mechanical process of creating the required connections for the offshore sites is the easiest and best documented step along the way, but the actual value of the system, once operational, is perceived in its operational reliability. The operational reliability of an ERP system is a combination of many factors, varying from hardware- and connectivity-related issues to administrative tasks and communication between decentralized administrative units and sites. To accurately analyze the operational reliability of such a system, one must take into consideration the full functionality of the system, including not only the mechanical and systematic processes but also the users and their administration. Operational reliability in an international environment relies heavily on hardware and telecommunication adequacy, so it is imperative to have resources dimensioned with regard to planned usage. Still, with poorly maintained communication and administration schemes, no amount of bandwidth or memory will be enough to maintain a productive level of reliability. This thesis analyzes the implementation of a shared ERP system at an international subsidiary of a Finnish production company. The system is Microsoft Dynamics AX, currently being introduced at a Slovakian facility, a subsidiary of Peikko Finland Oy. The primary task is to create a feasible basis of analysis against which the operational reliability of the system can be evaluated precisely.
With a solid analysis, the aim is to give recommendations on how future implementations should be managed.
Abstract:
The main target of the study was to examine how Fortum's tax reporting system could be developed so that it collects the required information in a form that is also easily transferable to the financial statements. This included examining the disclosure requirements for income taxes under IFRS and US GAAP. By benchmarking selected Finnish, European, and US companies, the purpose was to gain perspective on the extent to which they present tax information in their financial statements. The existence of material weaknesses was also examined. The research method was qualitative, descriptive, and normative. The research material included articles and literature on tax reporting and the standards relating to it; the interviews conducted were of notable significance. The study pointed out that Fortum's tax reporting is in good shape and does not require big changes. The biggest renewal of the tax reporting system is that there is only one model for all Fortum's companies. It is also more automated, quicker, and more efficient, and its layout more closely resembles the notes to the financial statements. In addition, it has more internal controls to improve the quality and efficiency of the reporting process.
Abstract:
Dreaming is a pure form of phenomenality, created by the brain untouched by external stimulation or behavioral activity, yet including a full range of phenomenal contents. Thus, it has been suggested that the dreaming brain could be used as a model system in a biological research program on consciousness (Revonsuo, 2006). In the present thesis, the philosophical view of biological realism is accepted, and thus dreaming is considered a natural biological phenomenon, explainable in naturalistic terms. The major theoretical contribution of the present thesis is that it explores dreaming from a multidisciplinary perspective, integrating information from various fields of science, such as dream research, consciousness research, evolutionary psychology, and cognitive neuroscience. Further, it places dreaming into a multilevel framework and investigates the constitutive, etiological, and contextual explanations for dreaming. Currently, the only theory offering a full multilevel explanation for dreaming, that is, a theory including constitutive, etiological, and contextual level explanations, is the Threat Simulation Theory (TST) (Revonsuo, 2000a; 2000b). The empirical significance of the present thesis lies in the tests conducted of this specific theory, put forth to explain the form, content, and biological function of dreaming. The first step in the empirical testing of the TST was to define exact criteria for what constitutes a 'threatening event' in dreams, and then to develop a detailed and reliable content analysis scale with which threatening events in dreams can be empirically explored and quantified. The second step was to seek answers to the following questions derived from the TST: How frequent are threatening events in dreams? What kinds of qualities do these events have? How do threatening events in dreams relate to the most recently encoded or the most salient memory traces of threatening events experienced in waking life?
What are the effects of exposure to severe waking-life threat on dreams? The results reveal that threatening events are relatively frequent in dreams and that the simulated threats are realistic. The most common threats involve aggression, are targeted mainly against the dream self, and include simulations of relevant and appropriate defensive actions. Further, real threat experiences activate the threat simulation system in a unique manner, and dream content is modulated by the activation of long-term episodic memory traces with the highest negative saliency. To sum up, most of the predictions of the TST tested in this thesis received considerable support. The TST presents a strong argument that explains the specific design of dreams as threat simulations. The TST also offers a plausible explanation for why dreaming would have been selected for: because dreaming interacted with the environment in such a way that it enhanced the fitness of ancestral humans. By referring to a single threat simulation mechanism, it furthermore manages to explain a wide variety of dream content data that already exists in the literature, and to predict the overall statistical patterns of threat content in different samples of dreams. The TST and the empirical tests conducted on the theory are a prime example of what a multidisciplinary approach to mental phenomena can accomplish. Thus far, dreaming seems to have always resided on the periphery of science, never regarded as worth studying by the mainstream. Nevertheless, when brought into the spotlight, the study of dreaming can greatly benefit from ideas in diverse branches of science. Vice versa, knowledge gained from the study of dreaming can be applied in various disciplines. The main contribution of the present thesis lies in putting dreaming back where it belongs, that is, into the spotlight at the crossroads of various disciplines.
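The kind of quantification a content-analysis scale enables can be sketched minimally: code each dream report as a list of threat-event categories, then tally frequencies per dream and per category. The category codes and reports below are invented examples, not the actual scale developed in the thesis.

```python
from collections import Counter

# Each dream report coded as a list of threat-event categories
# (hypothetical codes for illustration only):
coded_reports = [
    ["aggression", "aggression", "accident"],
    [],                               # a dream with no threatening events
    ["aggression", "failure"],
]

threats_per_dream = [len(r) for r in coded_reports]
category_counts = Counter(c for r in coded_reports for c in r)

mean_threats = sum(threats_per_dream) / len(coded_reports)
print(mean_threats)                   # mean threatening events per dream
print(category_counts.most_common(1)) # dominant threat category
```

Aggregates like these (threat frequency per dream, distribution of categories, share of threats targeting the dream self) are the kinds of statistics against which the theory's predictions can be tested across dream samples.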
Abstract:
The Large Hadron Collider, constructed at the European Organization for Nuclear Research (CERN), is the world's largest single measuring instrument and currently the most powerful particle accelerator in existence. The Large Hadron Collider includes six different experiment stations, one of which is the Compact Muon Solenoid (CMS). The main purpose of the CMS is to track and study residue particles from proton-proton collisions. Among the detectors utilized in the CMS are resistive plate chambers (RPCs). To obtain data from these detectors, a link system has been designed. The main idea of the link system is to receive data from the detector front-end electronics in parallel form and to transmit it onwards in serial form via an optical fiber. The system is mostly ready and in place. However, a problem has occurred with the innermost RPC detectors, located in the sector labeled RE1/1: the transmission lines for parallel data suffer from signal integrity issues over long distances. As a solution, a new version of the link system has been devised, one that fits in a smaller space and can be located within the CMS, closer to the detectors. This RE1/1 link system has so far been completed only partially, with just the mechanical design and casing done. In this thesis, the link system electronics for the RE1/1 sector has been designed by modifying the existing link system concept to better meet the requirements of the RE1/1 sector. In addition to completing the prototype of the RE1/1 link system electronics, some testing has also been done to ensure the functionality of the design.
Abstract:
The purpose of the work was to realize a high-speed digital data transfer system for the RPC muon chambers in the CMS experiment at CERN's new LHC accelerator. This large-scale system took many years and many stages of prototyping to develop, and required the participation of tens of people. The system interfaces to the Front-end Boards (FEBs) at the 200,000-channel detector and to the trigger and readout electronics in the control room of the experiment. The distance between these two is about 80 metres, and the speed required of the optical links was pushing the limits of available technology when the project was started. Here, as in many other aspects of the design, it was assumed that the features of readily available commercial components would develop in the course of the design work, just as they did. By choosing a high speed it was possible to multiplex the data from some of the chambers into the same fibres to reduce the number of links needed. Further reduction was achieved by employing zero suppression and data compression, and a total of only 660 optical links were needed. Another requirement, which conflicted somewhat with choosing the components as late as possible, was that the design needed to be radiation tolerant to an ionizing dose of 100 Gy and to have a moderate tolerance to Single Event Effects (SEEs). This required some radiation test campaigns, and eventually led to ASICs being chosen for some of the critical parts. The system was made to be as reconfigurable as possible. The reconfiguration needs to be done from a distance, as the electronics is not accessible except during some short and rare service breaks once the accelerator starts running. Therefore reconfigurable logic is used extensively, and the firmware development for the FPGAs constituted a sizable part of the work. Some special techniques were needed there too, to achieve the required radiation tolerance.
The system has been demonstrated to work in several laboratory and beam tests, and we are now waiting to see it in action when the LHC starts running in the autumn of 2008.
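The zero-suppression idea mentioned above can be sketched in a few lines: instead of shipping the full hit pattern of every channel, transmit only the addresses of the channels that fired, and reconstruct the pattern at the receiving end. The frame format below is invented for illustration and is not the actual CMS link-system protocol.

```python
def zero_suppress(channels):
    """channels: list of 0/1 hit flags -> list of hit channel addresses."""
    return [addr for addr, hit in enumerate(channels) if hit]

def expand(addresses, n_channels):
    """Inverse operation at the receiving end: rebuild the hit pattern."""
    channels = [0] * n_channels
    for addr in addresses:
        channels[addr] = 1
    return channels

# A sparse event: two strips fired out of 96 in one chamber partition.
hits = [0] * 96
hits[12] = hits[47] = 1

packet = zero_suppress(hits)      # only two addresses sent, not 96 bits
print(packet)

assert expand(packet, 96) == hits  # lossless round trip
```

Because muon chamber occupancy per bunch crossing is low, most events compress to a handful of addresses, which is what makes multiplexing several chambers into one fibre feasible.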
Abstract:
This master's thesis focuses on active magnetic bearing control, specifically robust control realized with a mixed H2/H∞ controller. The goal of this work is to design such a controller using the Robust Control Toolbox™ in MATLAB and to compare its performance and robustness with the characteristics of an H∞ robust controller. Only a one-degree-of-freedom controller is considered.
Abstract:
Background: Measurement of serum cotinine, a major metabolite of nicotine, provides a valid marker for quantifying exposure to tobacco smoke. Exposure to tobacco smoke causes vascular damage through multiple mechanisms, and it has been acknowledged as a risk factor for atherosclerosis. Multifactorial atherosclerosis begins in childhood, but the relationship between exposure to tobacco smoke and the arterial changes related to early atherosclerosis had not been studied in children. Aims: The aim of the present study was to evaluate exposure to tobacco smoke with a biomarker, serum cotinine concentration, and its associations with markers of subclinical atherosclerosis and the lipid profile in school-aged children and adolescents. Subjects and Methods: Serum cotinine concentration was measured using a gas chromatographic method annually between the ages of 8 and 13 years in 538-625 children participating since infancy in a randomized, prospective atherosclerosis prevention trial, STRIP (Special Turku coronary Risk factor Intervention Project). Conventional atherosclerosis risk factors were measured repeatedly. Vascular ultrasound studies were performed on 402 healthy 11-year-old children and on 494 adolescents aged 13 years. Results: According to the serum cotinine measurements, a notable number of the school-aged children and adolescents were exposed to tobacco smoke, but the exposure levels were only moderate. Exposure to tobacco smoke was associated with decreased endothelial function as measured by flow-mediated dilation of the brachial artery, decreased elasticity of the aorta, and increased carotid and aortic intima-media thickness. Longitudinal exposure to tobacco smoke was also related to increased apolipoprotein B and triglyceride levels in 13-year-old adolescents whose body mass index and nutrient intakes did not differ.
Conclusions: These findings suggest that exposure to tobacco smoke in childhood may play a significant role in the development of early atherosclerosis. Key Words: arterial elasticity, atherosclerosis, children, cotinine, endothelial function, environmental tobacco smoke, intima-media thickness, risk factors, ultrasound