931 results for implementation methods
Abstract:
BACKGROUND: Surveillance of multiple congenital anomalies is considered to be more sensitive for the detection of new teratogens than surveillance of all or isolated congenital anomalies. The current literature proposes manual review of all cases for classification into isolated or multiple congenital anomalies. METHODS: Multiple anomalies were defined as two or more major congenital anomalies, excluding sequences and syndromes. A computer algorithm for classifying major congenital anomaly cases in the EUROCAT database according to International Classification of Diseases, 10th revision (ICD-10) codes was programmed, further developed, and implemented for one year's data (2004) from 25 registries. The group of cases classified as potential multiple congenital anomalies was manually reviewed by three geneticists to reach a final agreed classification as "multiple congenital anomaly" cases. RESULTS: A total of 17,733 cases with major congenital anomalies were reported, giving an overall prevalence of major congenital anomalies of 2.17%. The computer algorithm classified 10.5% of all cases as "potentially multiple congenital anomalies". After manual review of these cases, 7% were agreed to have true multiple congenital anomalies. Furthermore, the algorithm classified 15% of all cases as having chromosomal anomalies, 2% as monogenic syndromes, and 76% as isolated congenital anomalies. The proportion of multiple anomalies varies by congenital anomaly subgroup, reaching up to 35% of cases for bilateral renal agenesis. CONCLUSIONS: The implementation of the EUROCAT computer algorithm is a feasible, efficient, and transparent way to improve the classification of congenital anomalies for surveillance and research.
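The algorithm itself is not reproduced in the abstract; the sketch below only illustrates the general shape of such rule-based ICD-10 triage. The chromosomal range Q90-Q99 matches ICD-10, but the syndrome prefix, the exclusion list and the crude organ-system grouping are placeholders, not EUROCAT's actual rules.

```python
# Hypothetical sketch of rule-based ICD-10 case triage in the spirit of the
# EUROCAT algorithm; the code lists below are illustrative placeholders.

CHROMOSOMAL = tuple(f"Q9{i}" for i in range(10))  # Q90-Q99: chromosomal anomalies
MONOGENIC = ("Q87",)       # placeholder prefix for named monogenic syndromes
EXCLUDED = ("Q381",)       # placeholder for minor anomalies/sequences to ignore

def classify(icd10_codes):
    """Classify one case from its list of ICD-10 congenital anomaly codes."""
    codes = [c for c in icd10_codes if not c.startswith(EXCLUDED)]
    if any(c.startswith(CHROMOSOMAL) for c in codes):
        return "chromosomal"
    if any(c.startswith(MONOGENIC) for c in codes):
        return "monogenic syndrome"
    # Two or more major anomalies in distinct groups -> flag for manual review.
    if len(codes) >= 2 and len({c[:2] for c in codes}) >= 2:  # crude grouping
        return "potential multiple congenital anomaly"
    return "isolated"

print(classify(["Q210", "Q790"]))  # -> potential multiple congenital anomaly
```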
Abstract:
Dose kernel convolution (DK) methods have been proposed to speed up absorbed dose calculations in molecular radionuclide therapy. Our aim was to evaluate the impact of tissue density heterogeneities (TDH) on dosimetry when using a DK method and to propose a simple density-correction method. METHODS: This study was conducted on 3 clinical cases: case 1, non-Hodgkin lymphoma treated with (131)I-tositumomab; case 2, a neuroendocrine tumor treatment simulated with (177)Lu-peptides; and case 3, hepatocellular carcinoma treated with (90)Y-microspheres. Absorbed dose calculations were performed using a direct Monte Carlo approach accounting for TDH (3D-RD) and a DK approach (VoxelDose, or VD). For each individual voxel, the VD absorbed dose, D(VD), calculated assuming uniform density, was corrected for density, giving D(VDd). The average 3D-RD absorbed dose values, D(3DRD), were compared with D(VD) and D(VDd) using the relative difference Δ(VD/3DRD). At the voxel level, density-binned Δ(VD/3DRD) and Δ(VDd/3DRD) were plotted against density ρ and fitted with a linear regression. RESULTS: The D(VD) calculations showed good agreement with D(3DRD). Δ(VD/3DRD) was less than 3.5%, except for the tumor of case 1 (5.9%) and the renal cortex of case 2 (5.6%). At the voxel level, the Δ(VD/3DRD) range was 0%-14% for cases 1 and 2, and -3% to 7% for case 3. All 3 cases showed a linear relationship between voxel bin-averaged Δ(VD/3DRD) and density ρ: case 1 (Δ = -0.56ρ + 0.62, R(2) = 0.93), case 2 (Δ = -0.91ρ + 0.96, R(2) = 0.99), and case 3 (Δ = -0.69ρ + 0.72, R(2) = 0.91). The density correction improved the agreement of the DK method with the Monte Carlo approach (Δ(VDd/3DRD) < 1.1%), although to a lesser extent for the tumor of case 1 (3.1%). At the voxel level, the Δ(VDd/3DRD) range decreased for all 3 clinical cases (case 1, -1% to 4%; case 2, -0.5% to 1.5%; case 3, -1.5% to 2%). No linear relationship with density remained for cases 2 and 3, in contrast to case 1 (Δ = 0.41ρ - 0.38, R(2) = 0.88), although the slope for case 1 was less pronounced. CONCLUSION: This study shows a small influence of TDH in the abdominal region for 3 representative clinical cases. A simple density-correction method was proposed, which improved the agreement of the absorbed dose calculations when using our voxel S value implementation.
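The abstract does not spell out the correction formula; one plausible reading, sketched below under that assumption, rescales the kernel-convolution dose voxel-by-voxel by the ratio of the kernel's reference (water) density to the local CT-derived density.

```python
import numpy as np

def density_corrected_dose(dose_vd, density, rho_ref=1.0):
    """Voxel-wise density correction of a dose-kernel (VD-style) dose map.

    dose_vd -- 3-D array: absorbed dose computed assuming uniform density
    density -- 3-D array: local tissue density from CT (g/cm^3)
    rho_ref -- density of the kernel's reference medium (water)

    Assumed first-order correction: absorbed dose is energy per unit mass,
    so each voxel is rescaled by rho_ref / rho. The paper's exact formula
    may differ from this sketch.
    """
    return dose_vd * (rho_ref / np.clip(density, 1e-3, None))
```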
Abstract:
New methods and devices for pursuing performance enhancement through altitude training were developed in Scandinavia and the USA in the early 1990s. At present, several forms of hypoxic training and/or altitude exposure exist: traditional 'live high-train high' (LHTH), contemporary 'live high-train low' (LHTL), intermittent hypoxic exposure during rest (IHE) and intermittent hypoxic exposure during training sessions (IHT). Although substantial differences exist between these methods of hypoxic training and/or exposure, all have the same goal: to induce an improvement in athletic performance at sea level. They are also used in preparation for competition at altitude and/or for the acclimatization of mountaineers. The underlying mechanisms behind the effects of hypoxic training are widely debated. Although the popular view is that altitude training may lead to an increase in haematological capacity, this may not be the main, or the only, factor involved in the improvement of performance. Other central (such as ventilatory, haemodynamic or neural adaptation) or peripheral (such as muscle buffering capacity or economy) factors play an important role. LHTL has been shown to be an efficient method. The optimal altitude for living high has been defined as 2200-2500 m to provide an optimal erythropoietic effect and up to 3100 m for non-haematological parameters. The optimal duration at altitude appears to be 4 weeks for inducing accelerated erythropoiesis, whereas <3 weeks (i.e. 18 days) is long enough for beneficial changes in economy, muscle buffering capacity, the hypoxic ventilatory response or Na(+)/K(+)-ATPase activity. One critical point is the daily dose of altitude. A natural altitude of 2500 m for 20-22 h/day (in fact, travelling down to the valley only for training) appears sufficient to increase erythropoiesis and improve sea-level performance. 'Longer is better' as regards haematological changes, since additional benefits have been shown as hypoxic exposure increases beyond 16 h/day. The minimum daily dose for stimulating erythropoiesis seems to be 12 h/day. For non-haematological changes, a much shorter duration of exposure seems feasible. Athletes could take advantage of IHT, which seems more beneficial than IHE for performance enhancement. The intensity of hypoxic exercise might play a role in adaptations at the molecular level in skeletal muscle tissue. There is clear evidence that intense exercise at high altitude stimulates muscle adaptations to a greater extent for both aerobic and anaerobic exercise and limits the decrease in power. So although IHT induces no increase in VO(2max) owing to the low 'altitude dose', improvement in athletic performance is likely with high-intensity exercise (i.e. above the ventilatory threshold) owing to an increase in mitochondrial efficiency and pH/lactate regulation. We propose a new combination of hypoxic methods (which we suggest naming Living High-Training Low and High, interspersed; LHTLHi) combining LHTL (five nights at 3000 m and two nights at sea level) with training at sea level except for a few (2.3 per week) IHT sessions of supra-threshold training. This review also provides a rationale for how to combine the different hypoxic methods and suggests advances in both their implementation and their periodization during the yearly training programme of athletes competing in endurance, glycolytic or intermittent sports.
Abstract:
One of the most central tasks in the statistical analysis of mathematical models is the estimation of the models' unknown parameters. This master's thesis is concerned with the distributions of the unknown parameters and with numerical methods suitable for constructing them, especially in cases where the model is nonlinear with respect to the parameters. Among the various numerical methods, the main emphasis is on Markov chain Monte Carlo (MCMC) methods. These computationally intensive methods have recently grown in popularity, mainly owing to increased computing power. The theory of both Markov chains and Monte Carlo simulation is presented to the extent needed to justify the validity of the methods. Of the recently developed methods, adaptive MCMC methods in particular are examined. The approach of the thesis is practical, and various issues related to the implementation of MCMC methods are emphasized. In the empirical part of the thesis, the distributions of the unknown parameters of five example models are examined using the methods presented in the theoretical part. The models describe chemical reactions and are expressed as systems of ordinary differential equations. The models were collected from chemists at Lappeenranta University of Technology and Åbo Akademi University in Turku.
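As a concrete illustration of the adaptive MCMC methods the thesis emphasizes, here is a minimal adaptive Metropolis sampler in the style of Haario et al.; the log-posterior callable, the initial proposal covariance and the adaptation schedule are all assumptions, not the thesis's own code.

```python
import numpy as np

def adaptive_metropolis(log_post, theta0, n_iter=10000, adapt_start=500):
    """Minimal adaptive Metropolis sampler (Haario-style covariance adaptation)."""
    theta = np.asarray(theta0, dtype=float)
    d = theta.size
    chain = np.empty((n_iter, d))
    lp = log_post(theta)
    cov = 0.1 * np.eye(d)              # initial proposal covariance (a guess)
    scale = 2.4**2 / d                 # standard adaptive-Metropolis scaling
    for i in range(n_iter):
        prop = np.random.multivariate_normal(theta, cov)
        lp_prop = log_post(prop)
        if np.log(np.random.rand()) < lp_prop - lp:
            theta, lp = prop, lp_prop  # accept the proposal
        chain[i] = theta
        if i >= adapt_start:           # adapt covariance to the chain history
            cov = scale * np.cov(chain[: i + 1].T) + 1e-8 * np.eye(d)
    return chain
```

For the chemical-kinetics examples, log_post would wrap an ODE solver and a sum-of-squares misfit between simulated and measured concentrations.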
Abstract:
Superheater corrosion causes vast annual losses for power companies. With a reliable corrosion prediction method, plants can be designed accordingly, and knowledge of fuel selection and determination of process conditions may be utilized to minimize superheater corrosion. Growing interest in using recycled fuels creates additional demands for the prediction of corrosion potential. Models depending on corrosion theories will fail if the relations between the inputs and the output are poorly known. A prediction model based on fuzzy logic and an artificial neural network is able to improve its performance as the amount of data increases. The corrosion rate of a superheater material can most reliably be detected with a test done in a test combustor or in a commercial boiler. The steel samples can be located in a special temperature-controlled probe and exposed to the corrosive environment for a desired time. These tests give information about the average corrosion potential in that environment. Samples may also be cut from superheaters during shutdowns. The analysis of samples taken from probes or superheaters after exposure to a corrosive environment is a demanding task: if the corrosive contaminants can be reliably analyzed, the corrosion chemistry can be determined and an estimate of the material lifetime can be given. In cases where the reason for corrosion is not clear, determining the corrosion chemistry and estimating the lifetime are more demanding. In order to provide a laboratory tool for the analysis and prediction, a new approach was chosen. During this study, the following tools were generated:
· A model for the prediction of superheater fireside corrosion, based on fuzzy logic and an artificial neural network, built upon a corrosion database developed from fuel and bed material analyses and measured corrosion data. The developed model predicts superheater corrosion with high accuracy at the early stages of a project.
· An adaptive corrosion analysis tool based on image analysis, constructed as an expert system. This system supports the implementation of user-defined algorithms, which allows the development of an artificially intelligent system for the task. Based on the results of the analyses, several new rules were developed for determining the degree and type of corrosion.
By combining these two tools, a user-friendly expert system for the prediction and analysis of superheater fireside corrosion was developed. This tool may also be used to minimize corrosion risks in the design of fluidized bed boilers.
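As a rough stand-in for such a data-driven predictor (the fuzzy-logic part of the thesis's hybrid is omitted here), a plain feed-forward network trained on a corrosion database could look like the sketch below; every feature, value and unit is invented for illustration.

```python
# Illustrative only: a plain MLP in place of the thesis's fuzzy-logic/ANN
# hybrid; features, data and units are all made up.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# columns: fuel Cl %, fuel S %, alkali (Na+K) %, bed temp (C), steam temp (C)
X = np.array([[0.3, 0.1, 0.8, 850.0, 520.0],
              [0.6, 0.2, 1.1, 870.0, 540.0],
              [0.1, 0.4, 0.5, 830.0, 500.0]])
y = np.array([0.12, 0.35, 0.05])   # measured corrosion rate (invented)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,),
                                   max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[0.4, 0.2, 0.9, 860.0, 530.0]]))
```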
Abstract:
Pulse-width-modulated (PWM) rectifier technology is increasingly used in industrial applications like variable-speed motor drives, since it offers several desired features such as sinusoidal input currents, controllable power factor, bidirectional power flow and high-quality DC output voltage. To achieve these features, however, an effective control system with fast and accurate current and DC voltage responses is required. Of the various control strategies proposed to meet these control objectives, most apply the commonly known principle of synchronous-frame current vector control along with some space-vector PWM scheme. Recently, however, new control approaches analogous to the well-established direct torque control (DTC) method for electrical machines have also emerged to implement a high-performance PWM rectifier. In this thesis the concepts of classical synchronous-frame current control and DTC-based PWM rectifier control are combined and a new converter-flux-based current control (CFCC) scheme is introduced. To achieve sufficient dynamic performance and to ensure stable operation, the proposed control system is thoroughly analysed and simple rules for the controller design are suggested. Special attention is paid to the estimation of the converter flux, which is the key element of converter-flux-based control. Discrete-time implementation is also discussed. Line-voltage-sensorless reactive power control methods for the L- and LCL-type line filters are presented. For the L-filter an open-loop control law for the d-axis current reference is proposed. In the case of the LCL-filter a combination of open-loop and feedback control is proposed. The influence of erroneous filter parameter estimates on the accuracy of the developed control schemes is also discussed. A new zero-vector selection rule for suppressing the zero-sequence current in parallel-connected PWM rectifiers is proposed. With this method truly standalone and independent control of the converter units is achieved, and traditional transformer-isolation and synchronised-control-based solutions are avoided. The implementation requires only one additional current sensor. The proposed schemes are evaluated by simulations and laboratory experiments. Satisfactory performance and good agreement between theory and practice are demonstrated.
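The CFCC scheme itself is not detailed in the abstract; as background, a minimal sketch of the classical synchronous-frame (dq) PI current-control step that the thesis builds on might look as follows, with the gains and the decoupling/feedforward terms purely illustrative.

```python
# Minimal sketch of one synchronous-frame (dq) PI current-control step for a
# PWM rectifier; gains and decoupling/feedforward terms are illustrative only.
class DqCurrentController:
    def __init__(self, kp, ki, L, omega, ts):
        self.kp, self.ki = kp, ki      # PI gains
        self.L, self.omega = L, omega  # filter inductance, grid frequency (rad/s)
        self.ts = ts                   # sampling period (s)
        self.int_d = self.int_q = 0.0  # integrator states

    def step(self, id_ref, iq_ref, id_meas, iq_meas, ed_grid):
        err_d, err_q = id_ref - id_meas, iq_ref - iq_meas
        self.int_d += self.ki * self.ts * err_d
        self.int_q += self.ki * self.ts * err_q
        # PI action + cross-coupling decoupling + grid-voltage feedforward
        ud = self.kp * err_d + self.int_d - self.omega * self.L * iq_meas + ed_grid
        uq = self.kp * err_q + self.int_q + self.omega * self.L * id_meas
        return ud, uq                  # voltage references for the PWM modulator
```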
Abstract:
Direct torque control (DTC) has become an accepted vector control method beside current vector control. The DTC was first applied to asynchronous machines and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct-torque-controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models, and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least-squares optimization method is needed in the procedure. A commonly used current vector control scheme is minimum current control. In the DTC the stator flux linkage reference is usually kept constant. Achieving the minimum current requires control of the reference. An on-line method to perform the minimization of the current by controlling the stator flux linkage reference is presented. The control of the reference above the base speed is also considered. A new flux linkage estimator is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller's stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
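The core of such a drive is the stator flux estimate. As a hedged illustration of why "improved integration methods" matter, the fragment below Euler-integrates the voltage model psi' = u - R*i and uses a simple leaky integrator against drift; the thesis's adaptive correction is more elaborate than this sketch.

```python
def flux_step(psi, u, i, R, dt, tau=0.05):
    """One discrete step of a voltage-model stator flux-linkage estimate.

    psi, u, i are complex space vectors (alpha + j*beta); R is the stator
    resistance. Pure integration of u - R*i drifts with offset errors, so a
    first-order 'leak' (time constant tau, a placeholder value) stands in
    for the improved integration methods analysed in the thesis.
    """
    psi_open = psi + dt * (u - R * i)   # Euler step of psi' = u - R*i
    return psi_open * (1.0 - dt / tau)  # leaky integration suppresses drift
```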
Abstract:
The need for high performance, high precision, and energy saving in rotating machinery demands alternatives to traditional bearings. Because of their contactless operating principle, rotating machines employing active magnetic bearings (AMBs) provide many advantages over traditional ones. Advantages such as contamination-free operation, low maintenance costs, high rotational speeds, low parasitic losses, programmable stiffness and damping, and vibration insulation come at the expense of high cost and a complex technical solution. All these properties make AMBs appropriate primarily for specific and highly demanding applications. High-performance, high-precision control requires model-based control methods and accurate models of the flexible rotor. In turn, complex models lead to high-order controllers and impose a considerable computational burden. Fortunately, in the last few years advancements in signal processing devices have provided a new perspective on the real-time control of AMBs. The design and the real-time digital implementation of high-order LQ controllers, with a focus on fast execution times, are the subjects of this work. In particular, control design and implementation in field programmable gate array (FPGA) circuits are investigated. The optimal design is guided by the physical constraints of the system when selecting the weighting matrices. The plant model is complemented by augmenting appropriate disturbance models. Compensation of the force-field nonlinearities is proposed for decreasing the uncertainty of the actuator. A disturbance-observer-based unbalance compensation for cancelling the magnetic-force vibrations or the vibrations in the measured positions is presented. The theoretical studies are verified by practical experiments on a custom-built laboratory test rig. The test rig uses a prototyping control platform developed in the scope of this work. To sum up, the work takes a step towards an embedded single-chip FPGA-based controller for AMBs.
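The LQ gain computation at the heart of such a controller can be condensed to a few lines; the sketch below solves the discrete-time Riccati equation with SciPy and is only a generic illustration, the thesis's weighting-matrix selection and FPGA realization being the actual contributions. The toy plant and weights are invented.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A, B, Q, R):
    """Discrete-time LQ state-feedback gain for u[k] = -K @ x[k]."""
    P = solve_discrete_are(A, B, Q, R)                # Riccati solution
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K

# Toy double-integrator plant; Q and R are the design knobs such a work
# tunes against the physical constraints of the AMB system.
dt = 1e-4
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, Q=np.diag([1e6, 1.0]), R=np.array([[1.0]]))
print(K)
```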
Abstract:
This thesis gives an overview of the use of level set methods in the field of image science. The related fast marching method is discussed for comparison, and the narrow band and particle level set methods are also introduced. The level set method is a numerical scheme for representing, deforming and recovering structures in arbitrary dimensions. It approximates and tracks moving interfaces, dynamic curves and surfaces. The level set method does not define how and why a boundary is advancing the way it is; it simply represents and tracks the boundary. The principal idea of the level set method is to represent an N-dimensional boundary in N+1 dimensions. This gives the generality to represent even complex boundaries. Level set methods can be powerful tools for representing dynamic boundaries, but they can require a lot of computing power. In particular, the basic level set method carries a considerable computational burden. This burden can be alleviated with more sophisticated versions of the level set algorithm, such as the narrow band level set method, or with a programmable hardware implementation. A parallel approach can also be used in suitable applications. It is concluded that these methods can be used in quite a broad range of image applications, such as computer vision and graphics and scientific visualization, and also to solve problems in computational physics. Level set methods, and methods derived from and inspired by them, will remain at the front line of image processing in the future.
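As a minimal concrete example of the idea (not taken from the thesis), the fragment below represents a circle as the zero level set of a signed distance function on a grid and advances it one explicit step under the level set equation phi_t + F*|grad phi| = 0; grid size, speed and time step are arbitrary.

```python
import numpy as np

n, F, dt = 128, 1.0, 0.01
x, y = np.meshgrid(np.linspace(-2, 2, n), np.linspace(-2, 2, n))
phi = np.hypot(x, y) - 1.0        # zero level set: the unit circle
h = x[0, 1] - x[0, 0]             # grid spacing

gy, gx = np.gradient(phi, h)      # central-difference gradient
phi -= dt * F * np.hypot(gx, gy)  # one step of phi_t + F*|grad phi| = 0
# The zero level set is now approximately the circle of radius 1 + F*dt.
```

A narrow band variant would apply the same update only near the zero level set, which is where the computational savings come from.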
Abstract:
Background: Diverse projects and guidelines to assist hospitals towards the attainment of comprehensive smoke-free policies have been developed. In 2006, the Spanish government passed a new smoking ban that reinforced tobacco control policies and completely banned smoking in hospitals. This study assesses the progression of tobacco control policies in the Catalan Network of Smokefree Hospitals before and after a comprehensive national smoking ban. Methods: We used the Self-Audit Questionnaire of the European Network for Smoke-free Hospitals to score compliance with 9 policy standards (global score = 102). We used two cross-sectional surveys to evaluate tobacco control policies before (2005) and after (2007) the implementation of the national smoking ban in 32 hospitals of Catalonia, Spain. We compared the means of the overall score in 2005 and 2007 according to the type of hospital, the number of beds, the prevalence of tobacco consumption, and the number of years as a smoke-free hospital. Results: The mean implementation score for tobacco control policies was 52.4 (95% CI: 45.4-59.5) in 2005 and 71.6 (95% CI: 67.0-76.2) in 2007, an increase of 36.7% (p < 0.01). The hospitals with the greatest improvement were general hospitals (48% increase; p < 0.01), hospitals with > 300 beds (41.1% increase; p < 0.01), hospitals with an employee tobacco consumption prevalence of 35-39% (72.2% increase; p < 0.05) and hospitals that had recently implemented smoke-free policies (74.2% increase; p < 0.01). Conclusion: The national smoking ban, combined with other non-bylaw initiatives such as the Smoke-free Hospital Network, appears to increase tobacco control activities in hospitals.
Abstract:
The variability observed in drug exposure has a direct impact on the overall response to a drug. The largest part of the variability between dose and drug response resides in the pharmacokinetic phase, i.e. in the dose-concentration relationship. Among the possibilities offered to clinicians, Therapeutic Drug Monitoring (TDM; the monitoring of drug concentration measurements) is one of the useful tools to guide pharmacotherapy. TDM aims at optimizing treatments by individualizing dosage regimens based on blood drug concentration measurements. Bayesian calculations, relying on the population pharmacokinetic approach, currently represent the gold-standard TDM strategy. However, they require expertise and computational assistance, thus limiting their large-scale implementation in routine patient care. The overall objective of this thesis was to implement robust tools to provide Bayesian TDM to clinicians in modern routine patient care. To that end, the aims were (i) to elaborate an efficient and ergonomic computer tool for Bayesian TDM: EzeCHieL; (ii) to provide algorithms for Bayesian forecasting of drug concentrations and software validation, relying on population pharmacokinetics; and (iii) to address some relevant issues encountered in clinical practice, with a focus on neonates and drug adherence. First, the current state of existing software was reviewed, which allowed specifications to be established for the development of EzeCHieL. Then, in close collaboration with software engineers, a fully integrated software package, EzeCHieL, was elaborated. EzeCHieL provides population-based predictions and Bayesian forecasting through an easy-to-use interface. It makes it possible to assess how expected an observed concentration is in a patient compared with the whole population (via percentiles), to assess the suitability of the predicted concentration relative to the targeted concentration, and to provide dosing adjustment. It thus allows both a priori and a posteriori Bayesian individualization of drug dosing. Implementation of Bayesian methods requires characterizing drug disposition and quantifying variability through the population approach. Population pharmacokinetic analyses were performed and Bayesian estimators were provided for candidate drugs in populations of interest: anti-infective drugs administered to neonates (gentamicin and imipenem). The developed models were implemented in EzeCHieL and also served as a validation tool by comparing EzeCHieL concentration predictions against predictions from the reference software (NONMEM®). The models used need to be adequate and reliable. For instance, extrapolation from adults or children to neonates is not possible. Therefore, this work proposes models for neonates based on the concept of developmental pharmacokinetics. Patients' adherence is also an important concern for the development of drug models and for a successful outcome of pharmacotherapy. A last study attempts to assess the impact of routine patient adherence measurement on model definition and TDM interpretation. In conclusion, our results offer solutions to assist clinicians in interpreting blood drug concentrations and to improve the appropriateness of drug dosing in routine clinical practice.
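To make the Bayesian step concrete, here is a deliberately simplified maximum a posteriori (MAP) estimation for a one-compartment IV-bolus model with log-normal priors; all population values, the error model and the observation are invented, and the actual models behind tools like EzeCHieL are richer.

```python
# Toy MAP Bayesian individualization; every number here is invented.
import numpy as np
from scipy.optimize import minimize

pop_mu = np.log([5.0, 40.0])     # prior means of CL (L/h) and V (L), log scale
pop_sd = np.array([0.30, 0.25])  # between-subject SDs (log-normal priors)
sigma = 0.5                      # additive residual error (mg/L)
dose, t_obs, c_obs = 500.0, 6.0, 4.2   # IV bolus; one measured concentration

def neg_log_posterior(log_theta):
    cl, v = np.exp(log_theta)
    c_pred = dose / v * np.exp(-cl / v * t_obs)     # one-compartment model
    prior = np.sum(((log_theta - pop_mu) / pop_sd) ** 2)
    lik = ((c_obs - c_pred) / sigma) ** 2
    return 0.5 * (prior + lik)

fit = minimize(neg_log_posterior, pop_mu)           # MAP = posterior mode
cl_i, v_i = np.exp(fit.x)
print(f"individual CL = {cl_i:.2f} L/h, V = {v_i:.1f} L")
```

The individualized CL and V then drive the a posteriori dose adjustment toward the target concentration.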
Abstract:
The aim of this thesis was to describe and deploy a method for calculating the batch-level profitability of sawing at a sawmill, and to build a calculation model to support the method. After the basic concepts of sawing, the thesis presents the sawmill's production process. The production process is described on the basis of the literature and interviews with experts. Next, the benefits and effects expected from the calculation method were surveyed. The theory of cost accounting was reviewed using literature sources, with this particular calculation method in mind. In addition, the calculation and information systems used at the Uimaharju sawmill and related to the calculation are presented. At present the sawmill has no method of any kind for calculating the batch-level result of sawing. With small changes to the sawmill's information system and process machinery, a sawing batch can be carried through the process so that production data can be assigned to it at every stage of the process. By using the data obtained from the different stages, the products that a sawing batch yielded and the amount of production resources consumed in producing them can be determined accurately. Production data and cost data are fed into the calculation model, which returns the economic result of the sawing batch. As a follow-up action, further study of the automatic collection of production data is proposed in order to eliminate manual work and errors. With relatively small investments, the production data for every sawing batch can be collected fully automatically. In addition, the calculation model I developed should be replaced with an application that would make better use of the existing information systems and eliminate the manual stage in the calculation.
Abstract:
The constantly growing number of mobile phone users and the development of the internet into a general source of information and entertainment have created a need for a service that connects a mobile workstation to computer networks. GPRS is a new technology that offers a faster, more efficient and more economical connection to packet data networks, such as the internet and intranets, than the existing mobile networks (e.g. NMT and GSM). The goal of this work was to implement the communication drivers needed for testing the GPRS Packet Control Unit (PCU) in a workstation environment. Real mobile networks are too expensive and do not provide enough log output to be used for GPRS testing in the early stages of software development. For this reason, the PCU software is tested in a more flexible and more easily controlled environment that does not impose hard real-time requirements. The new operating environment and connection media required a new implementation of the communication drivers, the parts of the program that handle the connections between the PCU and the other units of the GPRS network. The result of this work was the workstation versions of the required communication drivers. The work examines different data transfer methods and protocols from the point of view of the requirements of the software under test, the implemented drivers, and testing. For each driver, the interface it implements and the degree of implementation, i.e. which functions were implemented and which were left out, are presented. The structure and operation of the drivers are explained to the extent that is essential to the operation of the program.
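The thesis's drivers themselves are not shown; purely to illustrate the testing idea, a workstation stub could expose the same send/receive interface as the real link driver and loop frames back locally. All names in this sketch are hypothetical.

```python
# Hypothetical loopback stub standing in for a real link driver during
# workstation testing of PCU software; names are illustrative only.
import queue

class LoopbackDriver:
    """Echoes sent frames back through a local queue instead of a radio link."""

    def __init__(self):
        self._frames = queue.Queue()

    def send(self, frame: bytes) -> None:
        self._frames.put(frame)          # real driver: write to the link medium

    def receive(self, timeout: float = 1.0) -> bytes:
        return self._frames.get(timeout=timeout)

driver = LoopbackDriver()
driver.send(b"\x01PCU-test-frame")
assert driver.receive() == b"\x01PCU-test-frame"
```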
Abstract:
Agile software development methods are currently in vogue, and many software development organizations have already implemented agile methods or are planning to do so. The objective of this thesis is to define how agile software development methods can be implemented in a small organization. The agile methods covered in this thesis are Scrum and XP. The key practices of both methods are analysed and compared with the waterfall method. The thesis also defines an implementation strategy and the concrete actions by which agile methods are introduced in a small organization. In practice, the organization must prepare well, and all the necessary metrics must be defined before the implementation starts. Three sample projects in which agile methods were implemented are introduced. Experiences from these projects were encouraging, although the sample set of projects was too small to yield reliable results.