Abstract:
The purpose of this master's thesis was to identify cost-effective ways to reduce extractives in birch kraft pulp. Extractives can cause problems by forming deposits on process equipment. The deposits cause blockages and measurement disturbances, and when they detach they also degrade pulp quality. If they end up in the final product, they can additionally cause odour and taste defects, which are particularly significant when manufacturing, for example, food-grade board. This work was carried out at Stora Enso's pulp mill, Enocell Oy, in Uimaharju. The theoretical part discussed the composition of extractives and the problems they cause in pulp and paper mills. In addition, physical and chemical means of reducing birch extractives reported in earlier mill trials were compiled. The areas examined were wood handling, cooking, washing and bleaching. In the experimental part, preliminary experiments were carried out at laboratory and mill scale in order to gain practical knowledge for the final mill-scale trial. The laboratory experiments investigated, among other things, the effect of the cooking kappa number, additives and rosin soap on birch extractives. Laboratory experiments on the acid (A) and peracetic acid (Paa) stages were also performed. At mill scale, the effects of, among others, the cooking kappa number, the washing temperature, the A stage, and the peroxide and Paa stages of bleaching on birch extractives were examined. The extractive-removal efficiency of the different methods was compared both quantitatively and in monetary terms. Measured by extractive-removal efficiency, the reference stage was most effective at the end of washing and at the beginning of bleaching. At the end of washing the extractive-removal reductions were about 30%, and at the beginning of bleaching about 40%. The peroxide stage was most effective when used at the end of bleaching, with a reduction of about 40%. Measured by cost-effectiveness, the A stage combined with the peroxide stage proved the most effective. Savings compared with the reference period were about 0.3 €/ADt.
In addition, this combination proved to be a good way to keep the extractive level below the maximum limit on fibre line 2 while reinforcement pulp was simultaneously being produced on fibre line 1.
Abstract:
The phonebook is one of the most used features of a mobile phone. The phonebook must therefore be usable as quickly as possible in all situations. This requires efficient data structures and sorting algorithms from the phonebook server. In Nokia mobile phones, the phonebook server uses sorted arrays as its search structure. The goal of this work was to make the sorting of the phonebook server's search arrays as fast as possible. Several sorting algorithms were compared and their running times were analysed in different situations. Insertion sort was found to be the fastest algorithm for sorting nearly ordered arrays. Based on the analysis, quicksort sorts randomly ordered arrays fastest. A quicksort/insertion-sort hybrid algorithm was found to be the best sorting algorithm for sorting the phonebook. With suitable parameters, this algorithm is fast for randomly ordered data. It can exploit any pre-existing order in the data to be sorted. The algorithm does not significantly increase memory consumption. Thanks to the new algorithm, the sorting of the search arrays becomes up to several tens of percent faster.
Abstract:
(S)-2-(4-Bromo-2,4′-bithiazole)-1-(tert-butoxycarbonyl)pyrrolidine ((S)-1) was obtained as a single enantiomer and in high yield by means of a two-step modified Hantzsch thiazole synthesis when bromoketone 3 and thioamide (S)-4 were used. Further conversion of (S)-1 into the trimethyltin derivative (S)-2 broadens the scope for further cross-coupling reactions.
Abstract:
A Wiener system is a linear time-invariant filter, followed by an invertible nonlinear distortion. Assuming that the input signal is an independent and identically distributed (iid) sequence, we propose an algorithm for estimating the input signal only by observing the output of the Wiener system. The algorithm is based on minimizing the mutual information of the output samples, by means of a steepest descent gradient approach.
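For concreteness, a Wiener system of this kind can be simulated as below. The filter taps and the cubic distortion are arbitrary illustrative choices (the distortion is strictly increasing, hence invertible, as the model requires); this sketches the model only, not the paper's estimation algorithm.

```python
import random

def wiener_system(x, taps=(1.0, 0.5, -0.2)):
    """Apply an FIR filter followed by an invertible memoryless distortion."""
    # LTI part: y[n] = sum_k h[k] * x[n-k]
    lin = [sum(h * x[n - k] for k, h in enumerate(taps) if n - k >= 0)
           for n in range(len(x))]
    # Nonlinear part: v + 0.3*v^3 is strictly increasing, so invertible
    return [v + 0.3 * v ** 3 for v in lin]

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(1000)]   # iid input sequence
y = wiener_system(x)                                # only y is observed
```

The estimation problem the paper addresses is to recover `x` given only `y`, by driving the mutual information between output samples of the inverse system to a minimum.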
Abstract:
This paper proposes a very simple method for increasing the algorithm speed for separating sources from PNL mixtures or inverting Wiener systems. The method is based on a pertinent initialization of the inverse system, whose computational cost is very low. The nonlinear part is roughly approximated by pushing the observations to be Gaussian; this provides a surprisingly good approximation even when the basic assumption is not fully satisfied. The linear part is initialized so that the outputs are decorrelated. Experiments show the impressive speed improvement.
Abstract:
There are many different views of corporate identity, and no generally accepted definition exists. This study discusses several of these views. Although corporate identity is not easy to measure, several methods have nevertheless been developed for this purpose. Communicating the identity requires strategic decisions before any communication can take place. Integration of communications plays a key role in identity communication. A well-managed and well-communicated corporate identity can bring an organisation numerous benefits. These benefits do not, however, appear very quickly, because communicating corporate identity is a long-term process.
Abstract:
This work presents new, efficient Markov chain Monte Carlo (MCMC) simulation methods for statistical analysis in various modelling applications. When using MCMC methods, the model is simulated repeatedly to explore the probability distribution describing the uncertainties in model parameters and predictions. In adaptive MCMC methods based on the Metropolis-Hastings algorithm, the proposal distribution needed by the algorithm learns from the target distribution as the simulation proceeds. Adaptive MCMC methods have been the subject of intensive research lately, as they open the way to an essentially easier use of the methodology; the lack of user-friendly computer programs has been a major obstacle to wider acceptance of the methods. This work provides two new adaptive MCMC methods: DRAM and AARJ. The DRAM method has been built especially to work in high-dimensional and non-linear problems. The AARJ method is an extension of DRAM to model selection problems, where the mathematical formulation of the model is uncertain and we want to fit several different models to the same observations simultaneously. The methods were developed with the needs of modelling applications typical of the environmental sciences in mind, and the development work was pursued within several application projects. The applications presented in this work are: a winter-time oxygen concentration model for Lake Tuusulanjärvi and adaptive control of the aerator; a nutrition model for Lake Pyhäjärvi and lake management planning; and validation of the algorithms of the GOMOS ozone remote sensing instrument on board the Envisat satellite of the European Space Agency, together with a study of the effects of aerosol model selection on the GOMOS algorithm.
Abstract:
INTRODUCTION: The decline of malaria and the scale-up of rapid diagnostic tests call for a revision of IMCI. A new algorithm (ALMANACH) running on mobile technology was developed based on the latest evidence. The objective was to ensure that ALMANACH was safe while keeping a low rate of antibiotic prescription. METHODS: Consecutive children aged 2-59 months with acute illness were managed using ALMANACH (2 intervention facilities) or standard practice (2 control facilities) in Tanzania. The primary outcomes were the proportion of children cured at day 7 and the proportion who received antibiotics on day 0. RESULTS: 130/842 (15.4%) in the ALMANACH arm and 241/623 (38.7%) in the control arm were diagnosed with an infection in need of antibiotics, while 3.8% and 9.6% had malaria. 815/838 (97.3%; 96.1-98.4%) were cured at D7 using ALMANACH versus 573/623 (92.0%; 89.8-94.1%) using standard practice (p<0.001). Of the 23 children not cured at D7 using ALMANACH, 44% had skin problems, 30% pneumonia, 26% upper respiratory infection and 13% likely viral infection at D0. Secondary hospitalization occurred for one child using ALMANACH and for one child, who eventually died, using standard practice. At D0, antibiotics were prescribed to 15.4% (12.9-17.9%) using ALMANACH versus 84.3% (81.4-87.1%) using standard practice (p<0.001); 2.3% (1.3-3.3%) versus 3.2% (1.8-4.6%) received an antibiotic secondarily. CONCLUSION: Management of children using ALMANACH improved clinical outcome and reduced antibiotic prescription by 80%. This was achieved through more accurate diagnoses and hence better identification of the children who did or did not need antibiotic treatment. Building on mobile technology allows easy access and rapid updating of the decision chart. TRIAL REGISTRATION: Pan African Clinical Trials Registry PACTR201011000262218.
Abstract:
OBJECTIVE: To review the available knowledge on the epidemiology and diagnosis of acute infections in children aged 2 to 59 months in the primary care setting, and to develop an electronic algorithm for the Integrated Management of Childhood Illness that reaches optimal clinical outcome and rational use of medicines. METHODS: A structured literature review in Medline, Embase and the Cochrane Database of Systematic Reviews looked for available estimates of disease prevalence in outpatients aged 2-59 months, and for available evidence on i) the accuracy of clinical predictors, and ii) the performance of point-of-care tests for the targeted diseases. A new algorithm for the management of childhood illness (ALMANACH) was designed based on the evidence retrieved and the results of a study on the etiologies of fever in Tanzanian child outpatients. FINDINGS: The major changes in ALMANACH compared to IMCI (2008 version) are the following: i) assessment of 10 danger signs; ii) classification of non-severe children into febrile and non-febrile illness, the latter receiving no antibiotics; iii) classification of pneumonia based on a respiratory rate threshold of 50, assessed twice, for febrile children aged 12-59 months; iv) a malaria rapid diagnostic test performed for all febrile children; and, in the absence of an identified source of fever at the end of the assessment, v) a urine dipstick performed for febrile children <2 years to consider urinary tract infection; vi) classification of 'possible typhoid' for febrile children >2 years with abdominal tenderness; and lastly vii) classification of 'likely viral infection' in case of negative results. CONCLUSION: This smartphone-run algorithm, based on new evidence and two point-of-care tests, should improve the quality of care of children under 5 years and lead to a more rational use of antimicrobials.
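The decision flow in points i)-vii) could be sketched, very roughly, as below. Field names, the evaluation order, and the reduction of each point to a single boolean are hypothetical simplifications for illustration, not the actual ALMANACH specification.

```python
def classify(child):
    """Toy classifier following the spirit of ALMANACH points i)-vii).

    `child` is a dict; every key below is a hypothetical field name.
    """
    if child["danger_signs"]:                     # i) any of 10 danger signs
        return "severe: refer"
    if not child["fever"]:                        # ii) non-febrile branch
        return "non-febrile illness: no antibiotic"
    # iii) respiratory rate >= 50, measured twice, ages 12-59 months
    if child["age_months"] >= 12 and child["resp_rate_twice"] >= 50:
        return "pneumonia"
    if child["malaria_rdt_positive"]:             # iv) RDT for all febrile
        return "malaria"
    # v)-vii) apply when no source of fever has been identified
    if child["age_months"] < 24 and child["urine_dipstick_positive"]:
        return "urinary tract infection"
    if child["age_months"] >= 24 and child["abdominal_tenderness"]:
        return "possible typhoid"
    return "likely viral infection"               # vii) all tests negative
```

The key design point survives even in this caricature: antibiotics are reserved for the branches with positive evidence of bacterial infection, and the default for a fully negative work-up is "likely viral infection" with no antibiotic.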
Abstract:
BACKGROUND: Several guidelines recommend computed tomography scans for populations at high risk of lung cancer. The number of individuals evaluated for peripheral pulmonary lesions (PPL) will probably increase, and with it the number of non-surgical biopsies. Associating a guidance method with a target confirmation technique has been shown to achieve the highest diagnostic yield, but the utility of bronchoscopy with radial probe endobronchial ultrasound using fluoroscopy as guidance, without a guide sheath, has not been reported. METHODS: We conducted a retrospective analysis of bronchoscopy procedures with radial probe endobronchial ultrasound using fluoroscopy for the investigation of PPL, performed by experienced bronchoscopists with no specific previous training in this particular technique. Operator learning curves and radiological predictors were assessed for all consecutive patients examined during the first year of application of the technique. RESULTS: Fifty-one PPL were investigated. The diagnostic yield and visualization yield were 72.5% and 82.3%, respectively. The diagnostic yield was 64.0% for PPL ≤20 mm, and 80.8% for PPL >20 mm. No false-positive results were recorded. The learning curve across all diagnostic tools showed a diagnostic yield of 72.7% for the first sub-group of patients, 81.8% for the second, 72.7% for the third, and 81.8% for the last. CONCLUSION: Bronchoscopy with radial probe endobronchial ultrasound using fluoroscopy as guidance is safe and simple to perform, even without specific prior training, and the diagnostic yield is high for PPL both >20 mm and ≤20 mm. Based on these findings, this method could be introduced as a first-line procedure for the investigation of PPL, particularly in centers with limited resources.
Abstract:
Recent advances in machine learning increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question; however, their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers the development of kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in question in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account the positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to the task of information retrieval, and to more general ranking problems, than the cost functions designed for regression and classification. We also consider other applications of the kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions.
We also design a fast cross-validation algorithm for regularized least-squares types of learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used efficiently in algorithms.
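The fast cross-validation idea for regularized least-squares can be illustrated with the well-known exact leave-one-out shortcut for ridge regression: with H = X (XᵀX + λI)⁻¹ Xᵀ, the i-th leave-one-out residual equals (yᵢ - ŷᵢ) / (1 - Hᵢᵢ), so n refits collapse into one fit. This is a sketch under that classical formula, not the thesis implementation; the two-feature setup (intercept plus slope) keeps the linear algebra hand-rolled.

```python
def fit_ridge(rows, y, lam):
    """Solve (X^T X + lam*I) w = X^T y for rows of the form [1, x]."""
    a = sum(r[0] * r[0] for r in rows) + lam
    b = sum(r[0] * r[1] for r in rows)
    d = sum(r[1] * r[1] for r in rows) + lam
    det = a * d - b * b                      # invert the 2x2 system by hand
    inv = ((d / det, -b / det), (-b / det, a / det))
    t0 = sum(r[0] * yi for r, yi in zip(rows, y))
    t1 = sum(r[1] * yi for r, yi in zip(rows, y))
    w = (inv[0][0] * t0 + inv[0][1] * t1,
         inv[1][0] * t0 + inv[1][1] * t1)
    return w, inv

def loo_residuals_fast(rows, y, lam):
    """All leave-one-out residuals from a single fit via the hat-matrix trick."""
    w, inv = fit_ridge(rows, y, lam)
    out = []
    for r, yi in zip(rows, y):
        yhat = w[0] * r[0] + w[1] * r[1]
        # leverage H_ii = x_i^T (X^T X + lam*I)^{-1} x_i
        h = (r[0] * (inv[0][0] * r[0] + inv[0][1] * r[1])
             + r[1] * (inv[1][0] * r[0] + inv[1][1] * r[1]))
        out.append((yi - yhat) / (1.0 - h))
    return out
```

The shortcut is exact (a Sherman-Morrison identity), which is why this style of cross-validation costs essentially one training run instead of n.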
Abstract:
Coherent anti-Stokes Raman scattering (CARS) is a powerful laser spectroscopy method in which significant successes have been achieved. However, the non-linear nature of CARS complicates the analysis of the measured spectra. The objective of this thesis is to develop a new phase-retrieval algorithm for CARS. It utilizes the maximum entropy method and a new wavelet approach for spectroscopic background correction of the phase function. The method was designed to be easily automated and applied to large numbers of spectra of different substances. The algorithm was successfully tested on experimental data.
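One generic way to implement a wavelet-style background correction (a hypothetical illustration of the idea, not the thesis algorithm) is to keep only a coarse Haar approximation of the spectrum as the background estimate and subtract it:

```python
def haar_background(signal, levels=4):
    """Estimate a slowly varying background via a coarse Haar approximation.

    Pairs of samples are averaged `levels` times (the Haar approximation
    coefficients), then the coarse signal is upsampled back to full length.
    Sharp spectral lines are mostly averaged away, leaving the baseline.
    """
    approx = list(signal)
    lengths = []
    for _ in range(levels):
        if len(approx) < 2:
            break
        lengths.append(len(approx))
        if len(approx) % 2:                # pad odd lengths by repeating end
            approx = approx + [approx[-1]]
        approx = [(approx[i] + approx[i + 1]) / 2
                  for i in range(0, len(approx), 2)]
    for n in reversed(lengths):            # nearest-neighbour upsampling back
        approx = [v for v in approx for _ in (0, 1)][:n]
    return approx

def correct_baseline(signal, levels=4):
    """Subtract the coarse-scale background from the signal."""
    bg = haar_background(signal, levels)
    return [s - b for s, b in zip(signal, bg)]
```

The number of decomposition levels plays the role of a smoothness parameter: more levels give a flatter background estimate, fewer levels let it follow the spectrum more closely.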
Abstract:
Worldwide, about half the adult population is considered overweight, as defined by a body mass index (BMI, body weight divided by height squared) in excess of 25 kg/m². Of these individuals, half are clinically obese (with a BMI in excess of 30), and these numbers are still increasing, notably in developing countries such as those of the Middle East region. Obesity is a disorder characterised by an increased mass of adipose tissue (excessive fat accumulation) that results from a systemic imbalance between food intake and energy expenditure. Although factors such as family history, sedentary lifestyle, urbanisation, income and family diet patterns determine obesity prevalence, the main underlying causes are poor knowledge about food choice and lack of physical activity. Current obesity treatments include dietary restriction, pharmacological interventions and, ultimately, bariatric surgery. The beneficial effects of physical activity on weight loss, through increased energy expenditure and appetite modulation, are also firmly established. Another viable option to induce a negative energy balance is to incorporate hypoxia per se, or to combine it with exercise, in an individual's daily schedule. This article presents recent evidence suggesting that combining hypoxic exposure and exercise training might provide a cost-effective strategy for reducing body weight and improving cardio-metabolic health in obese individuals. The efficacy of this approach is further reinforced by epidemiological studies using large-scale databases, which show a negative relationship between altitude of habitation and obesity. In the United States, for instance, obesity prevalence is inversely associated with altitude of residence and urbanisation, after adjusting for temperature, diet, physical activity, smoking and demographic factors.