964 results for "Simplified and advanced calculation methods"


Relevance: 100.00%

Abstract:

BACKGROUND: Adverse effects of combination antiretroviral therapy (CART) commonly result in treatment modification and poor adherence. METHODS: We investigated predictors of toxicity-related treatment modification during the first year of CART in 1318 antiretroviral-naive human immunodeficiency virus (HIV)-infected individuals from the Swiss HIV Cohort Study who began treatment between January 1, 2005, and June 30, 2008. RESULTS: The total rate of treatment modification was 41.5 (95% confidence interval [CI], 37.6-45.8) per 100 person-years. Of these, switches or discontinuations because of drug toxicity occurred at a rate of 22.4 (95% CI, 19.5-25.6) per 100 person-years. The most frequent toxic effects were gastrointestinal tract intolerance (28.9%), hypersensitivity (18.3%), central nervous system adverse events (17.3%), and hepatic events (11.5%). In the multivariate analysis, combined zidovudine and lamivudine (hazard ratio [HR], 2.71 [95% CI, 1.95-3.83]; P < .001), nevirapine (1.95 [1.01-3.81]; P = .050), comedication for an opportunistic infection (2.24 [1.19-4.21]; P = .01), advanced age (1.21 [1.03-1.40] per 10-year increase; P = .02), female sex (1.68 [1.14-2.48]; P = .009), nonwhite ethnicity (1.71 [1.18-2.47]; P = .005), higher baseline CD4 cell count (1.19 [1.10-1.28] per 100/microL increase; P < .001), and HIV-RNA of more than 5.0 log(10) copies/mL (1.47 [1.10-1.97]; P = .009) were associated with higher rates of treatment modification. Almost 90% of individuals with treatment-limiting toxic effects were switched to a new regimen, and 85% achieved virologic suppression to less than 50 copies/mL at 12 months compared with 87% of those continuing CART (P = .56). CONCLUSIONS: Drug toxicity remains a frequent reason for treatment modification; however, it does not affect treatment success. Close monitoring and management of adverse effects and drug-drug interactions are crucial for the durability of CART.
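The incidence figures above are simple person-time rates. A minimal sketch of how such a rate and an approximate Poisson confidence interval are computed (the event count and person-year denominator below are hypothetical, chosen only to reproduce a rate near the reported 22.4 per 100 person-years; the study's own CI method may differ):

```python
import math

def rate_per_100_py(events, person_years):
    """Incidence rate per 100 person-years with an approximate 95%
    Poisson confidence interval (normal approximation on the log scale)."""
    rate = events / person_years * 100
    se_log = 1 / math.sqrt(events)          # log-scale SE for a Poisson count
    lo = rate * math.exp(-1.96 * se_log)
    hi = rate * math.exp(1.96 * se_log)
    return rate, lo, hi

# Hypothetical numbers: 324 toxicity-related modifications over
# 1446 person-years give a rate close to the reported 22.4/100 PY.
rate, lo, hi = rate_per_100_py(324, 1446)
print(f"{rate:.1f} per 100 person-years (95% CI {lo:.1f}-{hi:.1f})")
```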

Relevance: 100.00%

Abstract:

The improvement of the dynamics of flexible manipulators like log cranes often requires advanced control methods. This thesis discusses vibration problems in the cranes used in commercial forestry machines. Two control methods, adaptive filtering and semi-active damping, are presented. The adaptive filter uses a fraction of the lowest natural frequency of the crane as its filtering frequency. The payload estimation algorithm, the filtering of the control signal and the algorithm for calculating the lowest natural frequency of the crane are presented. The semi-active damping method is based on pressure feedback: the pressure vibration, scaled with a suitable gain, is added to the control signal of the valve of the lift cylinder to suppress vibrations. The adaptive filter cuts off high-frequency impulses coming from the operator, and semi-active damping suppresses the crane's oscillation, which is often caused by an external disturbance. In field tests performed on the crane, a correctly tuned (25% tuning) adaptive filter reduced pressure vibration by 14-17% and semi-active damping correspondingly by 21-43%. Applying these methods requires auxiliary transducers installed at specific points in the crane, and electronically controlled directional control valves.
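The adaptive-filter idea above can be sketched as a first-order low-pass filter whose cutoff is a tuned fraction of the crane's estimated lowest natural frequency. This is an illustrative sketch under stated assumptions, not the thesis implementation; the filter order, tuning value and sampling rate are assumptions:

```python
import math

def lowpass_filter(signal, fs, f_natural, tuning=0.25):
    """First-order low-pass filtering of an operator control signal.
    The cutoff is a tuned fraction of the crane's estimated lowest
    natural frequency, as in the adaptive-filter scheme described in
    the abstract (filter order and tuning value are illustrative)."""
    fc = tuning * f_natural                       # adaptive cutoff [Hz]
    alpha = 1 - math.exp(-2 * math.pi * fc / fs)  # discrete smoothing factor
    out, y = [], signal[0]
    for x in signal:
        y += alpha * (x - y)                      # exponential smoothing
        out.append(y)
    return out

# Usage: a step command sampled at 100 Hz, crane natural frequency 2 Hz;
# the sharp edge of the step is smoothed before it reaches the valve.
filtered = lowpass_filter([0.0] * 10 + [1.0] * 90, fs=100, f_natural=2.0)
```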

Relevance: 100.00%

Abstract:

This thesis was carried out at the Naantali plant of Finnfeeds Finland Oy. Its main subject was the reduction, control, measurement and processing of odour effluents by various methods; legislation, marketing issues and the environmental requirements for reducing odour effluents are also considered. The literature review introduces odour complications, legislation and various odour-removal methods, together with a review of methods for detecting and measuring volatile organic compounds. The experimental section consists of TD-GC-MS measurements and extensive measurements with an electronic nose, a new solution for recognizing and measuring industrial odours. In this thesis the electronic nose was adapted into a reliable recognition and measurement method. Measurements with the electronic nose were made in the betaine factory, the main targets being the odour-removal process and other odours from the factory. The TD-GC-MS measurements identified 2- and 3-methylbutanal and dimethyl disulfide, whose odour is sweet and stale, as the main odour compounds. The extensive study with the electronic nose identified many subjects for development. With odour balance measurements of the factory and adjustment of the odour-removal process made after the calculations, the overall odour effluent to the environment will be reduced by 25%.

Relevance: 100.00%

Abstract:

Characterizing the geological features and structures of inaccessible rock cliffs in three dimensions is needed to assess natural hazards such as rockfalls and rockslides, and also to perform investigations aimed at mapping geological contacts and building stratigraphic and fold models. Indeed, detailed 3D data such as LiDAR point clouds make it possible to study accurately the hazard processes and the structure of geologic features, in particular in vertical and overhanging rock slopes. Thus, 3D geological models have great potential for application to a wide range of geological investigations, both in research and in applied geology projects such as mines, tunnels and reservoirs. Recent developments in ground-based remote sensing techniques (LiDAR, photogrammetry and multispectral / hyperspectral imaging) are revolutionizing the acquisition of morphological and geological information. As a consequence, there is great potential for improving the modeling of geological bodies, as well as of failure mechanisms and stability conditions, by integrating detailed remote sensing data. During the past ten years several large rockfall events occurred along important transportation corridors where millions of people travel every year (Switzerland: Gotthard motorway and railway; Canada: Sea to Sky Highway between Vancouver and Whistler). These events show that there is still a lack of knowledge concerning the detection of potential rockfalls, leaving mountain residential settlements and roads exposed to high risk. It is necessary to understand the main factors that destabilize rocky outcrops even where inventories are lacking and no clear morphological evidence of rockfall activity is observed. In order to improve the forecasting of potential future landslides, it is crucial to understand the evolution of rock slope stability.
Defining the areas theoretically most prone to rockfalls can be particularly useful for simulating trajectory profiles and for generating hazard maps, which are the basis for land use planning in mountainous regions. The most important questions to address in order to assess rockfall hazard are: Where are the most probable sources of future rockfalls located? What are the frequencies of occurrence of these rockfalls? I characterized the fracturing patterns in the field and with LiDAR point clouds. Afterwards, I developed a model to compute the failure mechanisms on terrestrial point clouds in order to assess the susceptibility to rockfalls at the cliff scale. Similar procedures were already available to evaluate rockfall susceptibility based on aerial digital elevation models. This new model makes it possible to detect the most susceptible rockfall sources with unprecedented detail in vertical and overhanging areas. The computed most probable rockfall source areas in granitic cliffs of Yosemite Valley and the Mont-Blanc massif were then compared to inventoried rockfall events to validate the calculation methods. Yosemite Valley was chosen as a test area because it has particularly strong rockfall activity (about one rockfall every week), which leads to a high rockfall hazard. The west face of the Dru was also chosen for its significant rockfall activity, and especially because it was affected by some of the largest rockfalls that occurred in the Alps during the last 10 years. Moreover, both areas were suitable because of their huge vertical and overhanging cliffs, which are difficult to study with classical methods. Limit equilibrium models were applied to several case studies to evaluate the effects of different parameters on the stability of rock slope areas. The impact of the degradation of rock bridges on the stability of large compartments in the west face of the Dru was assessed using finite element modeling.
In particular, I conducted a back-analysis of the large rockfall event of 2005 (265,000 m³) by integrating field observations of joint conditions, characteristics of the fracturing pattern and results of geomechanical tests on the intact rock. These analyses improved our understanding of the factors that influence the stability of rock compartments and were used to define the most probable future rockfall volumes at the Dru. Terrestrial laser scanning point clouds were also successfully employed to perform geological mapping in 3D, using the intensity of the backscattered signal. Another technique for obtaining vertical geological maps is to combine a triangulated TLS mesh with 2D geological maps. At El Capitan (Yosemite Valley) we built a georeferenced vertical map of the main plutonic rocks that was used to investigate the reasons for the preferential rockwall retreat rate. Additional efforts to characterize the erosion rate were made at Monte Generoso (Ticino, southern Switzerland), where I attempted to improve the estimation of long-term erosion by also taking into account the volumes of the unstable rock compartments. Finally, the following points summarize the main outputs of my research: The new model to compute failure mechanisms and rockfall susceptibility with 3D point clouds makes it possible to define accurately the most probable rockfall source areas at the cliff scale. The analysis of the rock bridges at the Dru shows the potential of integrating detailed measurements of the fractures into geomechanical models of rock mass stability. The correction of the LiDAR intensity signal makes it possible to classify a point cloud according to rock type and then use this information to model complex geologic structures. The integration of these results on rock mass fracturing and composition with existing methods can improve rockfall hazard assessments and enhance the interpretation of the evolution of steep rock slopes.
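As a rough illustration of what a failure-mechanism computation on orientation data involves, a Markland-type kinematic test for planar sliding can be written as follows. This is a simplified stand-in for the point-cloud model described above; the friction angle and the dip-direction tolerance are illustrative assumptions:

```python
def planar_sliding_possible(slope_dip, slope_dipdir,
                            joint_dip, joint_dipdir, phi=30.0):
    """Markland-type kinematic test for planar sliding: a discontinuity
    can fail if it dips less steeply than the slope face (daylights),
    more steeply than the friction angle phi, and in roughly the same
    direction as the face (within 20 degrees here, an illustrative
    tolerance). All angles in degrees."""
    daylight = joint_dip < slope_dip
    frictional = joint_dip > phi
    # smallest angular difference between the two dip directions
    ddir = abs((joint_dipdir - slope_dipdir + 180) % 360 - 180)
    aligned = ddir <= 20
    return daylight and frictional and aligned

# A 70-degree face dipping towards 090 with a joint set 45/085:
print(planar_sliding_possible(70, 90, 45, 85))   # potential planar failure
```

In a point-cloud setting the slope orientation would come from local surface normals and the joint sets from fitted discontinuity planes; the test above is then evaluated point by point.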

Relevance: 100.00%

Abstract:

Forensic anthropology and bioarchaeology studies depend critically on the accuracy and reliability of age-estimation techniques. In this study we evaluated two adult age-estimation methods based on the pubic symphysis (Suchey-Brooks) and the auricular surface (Buckberry-Chamberlain) in a current sample of 139 individuals (67 women and 72 men) from Madrid, in order to verify the accuracy of both methods when applied to a sample of innominate bones from the central Iberian Peninsula. Based on the overall results of this study, the Buckberry-Chamberlain method seems to provide the better estimates in terms of accuracy (percentage of hits) and absolute difference from the chronological age for the total sample. The percentage of hits and mean absolute difference of the Buckberry-Chamberlain and Suchey-Brooks methods are 97.3% and 11.24 years, and 85.7% and 14.38 years, respectively. However, this apparently greater applicability of the Buckberry-Chamberlain method is mainly due to the broad age ranges it provides. The results indicate that the Suchey-Brooks method is more appropriate for populations with a majority of young individuals, whereas the Buckberry-Chamberlain method is recommended for populations with a higher percentage of individuals in the 60-70 year range. These different age-estimation methodologies significantly influence the resulting demographic profile, consequently affecting the reconstruction of the biological characteristics of the samples to which they are applied.
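The two evaluation measures used above, percentage of hits and mean absolute difference, can be sketched as follows (the age intervals and chronological ages below are toy data, not the Madrid sample; taking the interval midpoint as the point estimate is an assumption):

```python
def evaluate_method(estimates, true_ages):
    """Accuracy (percentage of hits, i.e. chronological age falling
    inside the estimated interval) and mean absolute difference
    between the interval midpoint and the chronological age."""
    hits, abs_diffs = 0, []
    for (lo, hi), age in zip(estimates, true_ages):
        if lo <= age <= hi:
            hits += 1
        abs_diffs.append(abs((lo + hi) / 2 - age))
    pct_hits = 100 * hits / len(true_ages)
    mad = sum(abs_diffs) / len(abs_diffs)
    return pct_hits, mad

# Wider intervals catch more ages (high hit rate) while their
# midpoints drift further from them (high absolute difference):
pct, mad = evaluate_method([(20, 60), (30, 75), (45, 70)], [35, 40, 80])
```

This is the trade-off the abstract points to: broad Buckberry-Chamberlain ranges inflate the hit percentage without necessarily giving closer estimates.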

Relevance: 100.00%

Abstract:

Throughout history indigo was derived from various plants, for example Dyer's Woad (Isatis tinctoria L.) in Europe. Synthetic dyes were developed in the 19th century, and nowadays indigo is mainly synthesized from by-products of fossil fuels. Indigo is a so-called vat dye, which means that it must be reduced to its water-soluble leuco form before dyeing. Most industrial reduction is currently performed chemically with sodium dithionite. However, this is considered environmentally unfavourable because the degradation products contaminate waste waters, and there has therefore been interest in finding new ways to reduce indigo. Possible alternatives to dithionite as the reducing agent are biologically induced reduction and electrochemical reduction. Glucose and other reducing sugars have recently been suggested as environmentally friendly reducing agents for sulphur dyes, and there has also been interest in using glucose to reduce indigo. In spite of the development of several types of processes, very little is known about the mechanism and kinetics of indigo reduction. This study investigates the reduction and the electrochemical analysis methods of indigo and gives insight into the reduction mechanism. Anthraquinone as well as its derivative 1,8-dihydroxyanthraquinone were found to catalyse the glucose-induced reduction of indigo. Anthraquinone introduces a strong catalytic effect, which is explained by invoking a molecular "wedge effect" during co-intercalation of Na+ and anthraquinone into the layered indigo crystal. The study also includes research on the extraction of plant-derived indigo from woad and an examination of the effect of this method on the yield and purity of the indigo. Purity has conventionally been studied spectrophotometrically, and a new hydrodynamic electrode system is introduced in this study: a vibrating probe is used to follow electrochemically the formation of leuco-indigo with glucose as the reducing agent.

Relevance: 100.00%

Abstract:

The general striving to bring down the number of municipal landfills and to increase the reuse and recycling of waste-derived materials across the EU fuels the debate concerning the feasibility and rationality of waste management systems. A substantial decrease in the volume and mass of landfill-disposed waste flows can be achieved by directing suitable waste fractions to energy recovery. Global fossil energy supplies are becoming ever more valuable and expensive, and efforts are being made to save fossil fuels. Waste-derived fuels offer a potential partial solution to two different problems. First, waste that cannot feasibly be re-used or recycled is utilized in the energy conversion process according to the EU's Waste Hierarchy. Second, fossil fuels can be saved for purposes other than energy, mainly as transport fuels. This thesis presents the principles of assessing the most sustainable system solution for an integrated municipal waste management and energy system. The assessment process includes:
· formation of a SISMan (Simple Integrated System Management) model of an integrated system, including mass, energy and financial flows; and
· formation of a MEFLO (Mass, Energy, Financial, Legislational, Other decision-support data) decision matrix according to the selected decision criteria, including essential and optional decision criteria.
The methods are described and theoretical examples of their utilization are presented in the thesis. The assessment process involves the selection of different system alternatives (process alternatives for the treatment of different waste fractions) and comparison between the alternatives. The first of the two novel contributions of the presented methods is the perspective selected for the formation of the SISMan model. Normally, waste management and energy systems are operated separately according to the targets and principles set for each system. In this thesis the waste management and energy supply systems are considered as one larger integrated system with the primary target of serving the customers, i.e. citizens, as efficiently as possible in the spirit of sustainable development, including the following requirements:
· reasonable overall costs, including waste management costs and energy costs;
· minimum environmental burdens caused by the integrated waste management and energy system, taking into account the requirement above; and
· social acceptance of the selected waste treatment and energy production methods.
The integrated waste management and energy system is described by forming a SISMan model including the three different flows of the system: energy, mass and financial flows. By defining these three types of flows for an integrated system, the factor results needed in the decision-making process for selecting waste treatment processes for different waste fractions can be calculated. The model and its results form a transparent description of the integrated system under discussion. The MEFLO decision matrix is formed from the results of the SISMan model, combined with additional data including, e.g., environmental restrictions and regional aspects. System alternatives that do not meet the requirements set by legislation can be deleted from the comparisons before any closer numerical consideration. The second novel contribution of this thesis is the three-level ranking method for combining the factor results of the MEFLO decision matrix. As a result of the MEFLO decision matrix, a transparent ranking of the different system alternatives, including the selection of treatment processes for different waste fractions, is achieved. SISMan and MEFLO are meant to be utilized in municipal decision-making processes concerning waste management and energy supply as simple, transparent and easy-to-understand tools. The methods can be utilized in the assessment of existing systems, and particularly in the planning of future regional integrated systems. The principles of SISMan and MEFLO can also be utilized in other settings where synergies from integrating two or more systems can be obtained. The SISMan flow model and the MEFLO decision matrix can be formed with or without any applicable commercial or free-of-charge tool/software. SISMan and MEFLO are not bound to any libraries or databases containing process information, such as the emission data libraries utilized in life cycle assessments.
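A minimal sketch of the MEFLO selection logic described above: alternatives failing a legislative requirement are removed before any numerical comparison, and the remainder are ranked from their factor results. The weighted-sum scoring rule and the field names are illustrative assumptions, not the thesis's three-level ranking method:

```python
def meflo_rank(alternatives, weights):
    """Rank system alternatives: drop those failing legislation,
    then sort the rest by a weighted score over factor results
    (scoring rule and fields are illustrative)."""
    legal = [a for a in alternatives if a["legal"]]
    scored = [(sum(w * a[k] for k, w in weights.items()), a["name"])
              for a in legal]
    return [name for score, name in sorted(scored, reverse=True)]

# Hypothetical normalized factor results for three treatment options:
alternatives = [
    {"name": "incineration", "legal": True,  "energy": 0.8, "cost": 0.4},
    {"name": "landfill",     "legal": False, "energy": 0.1, "cost": 0.9},
    {"name": "gasification", "legal": True,  "energy": 0.7, "cost": 0.6},
]
ranking = meflo_rank(alternatives, {"energy": 0.6, "cost": 0.4})
```

The landfill option never enters the numerical comparison, mirroring the pre-screening against legislation described in the abstract.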

Relevance: 100.00%

Abstract:

Concentrated winding permanent magnet machines and their electromagnetic properties are studied in this doctoral thesis. The thesis addresses a number of main tasks related to the application of permanent magnets in concentrated winding open slot machines. Suitable analytical methods are required for the first design calculations of a new machine. Concentrated winding machines differ from conventional integral slot winding machines in such a way that adapted analytical calculation methods are needed; a simple analytical model for calculating concentrated winding axial flux machines is provided. Three further main design tasks are discussed in more detail in the thesis. The magnetic length of rotor surface magnet machines is studied, and it is shown that the traditional methods have to be modified in this respect as well. An important topic in this study has been to evaluate and minimize the rotor permanent magnet Joule losses by using segmented magnets in the calculations and experiments. The determination of the magnetizing and leakage inductances of a concentrated winding machine and the torque production capability of concentrated winding machines with different pole pair numbers are studied, and the results are compared with the corresponding properties of integral slot winding machines. The thesis introduces a new practical permanent magnet motor type for industrial use. The special features of the machine are based on the option of using concentrated winding open slot constructions of permanent magnet synchronous machines in the normal speed ranges of industrial motors, for instance up to 3000 min-1, without excessive rotor losses. By applying the analytical equations and methods introduced in the thesis, a 37 kW, 2400 min-1, 12-slot, 10-pole axial flux machine with rotor-surface-mounted magnets is designed. The performance of the designed motor is determined by experimental measurements and finite element calculations.
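For the machine designed in the thesis, the basic operating figures follow from standard machine relations (this is textbook arithmetic on the rated values quoted above, not the author's analytical model):

```python
import math

# Rated values of the 37 kW, 2400 min^-1, 10-pole machine from the abstract.
P = 37e3          # rated power [W]
n = 2400          # rated speed [min^-1]
poles = 10

omega = 2 * math.pi * n / 60      # mechanical angular speed [rad/s]
torque = P / omega                # rated torque [Nm]
f_el = poles / 2 * n / 60         # electrical supply frequency [Hz]

print(f"torque ~ {torque:.0f} Nm, electrical frequency {f_el:.0f} Hz")
```

The 200 Hz electrical frequency illustrates why rotor losses matter in this design: a high pole number pushes the fundamental frequency well above that of a conventional 2- or 4-pole industrial motor at the same speed.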

Relevance: 100.00%

Abstract:

BACKGROUND: The Cancer Fast-track Programme's aim was to reduce the time elapsing between well-founded suspicion of breast, colorectal and lung cancer and the start of initial treatment in Catalonia (Spain). We sought to analyse its implementation and overall effectiveness. METHODS: A quantitative analysis of the programme was performed using data generated by the hospitals on the basis of seven fast-track monitoring indicators for the period 2006-2009. In addition, we conducted a qualitative study, based on 83 semistructured interviews with primary and specialised health professionals and health administrators, to obtain their perception of the programme's implementation. RESULTS: About half of all new patients with breast, lung or colorectal cancer were diagnosed via the fast track, though the cancer detection rate declined across the period. Mean time from detection of suspected cancer in primary care to start of initial treatment was 32 days for breast, 30 for colorectal and 37 for lung cancer (2009). Professionals associated with the implementation of the programme reported that general practitioners faced with a suspicion of cancer had changed their conduct with the aim of preventing delays. Furthermore, hospitals were found to have pursued three specific implementation strategies (top-down, consensus-based and participatory), which ensured the cohesion and sustainability of the circuits. CONCLUSION: The programme has contributed to speeding up the diagnostic assessment and treatment of patients with a suspicion of cancer, and to clarifying the patient pathway between primary and specialised care.

Relevance: 100.00%

Abstract:

The objective of this study was to evaluate the relationships between spectra in the Vis-NIR range and the soil P concentrations obtained from the PM and Prem extraction methods, as well as the effects of these relationships on the construction of models predicting P concentration in Oxisols. Spectra of the soil samples and of their PM and Prem extraction solutions were acquired in the Vis-NIR region between 400 and 2500 nm. Mineralogy and/or organic matter content act as primary attributes allowing correlation of these soil phosphorus fractions with the spectra, mainly at wavelengths of 450-550 nm and 900-1100 nm, near 1400 nm, and at 2200-2300 nm. However, the regression models generated were not suitable for quantitative phosphate analysis. Solubilization of organic matter and reactions during the PM extraction process hindered correlations between the spectra and these P soil fractions. For Prem, the presence of Ca in the extractant and preferential adsorption by gibbsite and iron oxides, particularly goethite, obscured correlations with the spectra.
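A minimal sketch of how mean reflectance in the highlighted wavelength bands can be correlated with measured P concentration. The spectra and P values below are synthetic, generated so that a correlation exists; only the band limits come from the abstract:

```python
import numpy as np

def band_correlations(spectra, wavelengths, p_conc, bands):
    """Pearson correlation between the mean reflectance of each
    Vis-NIR band and the measured P concentration of each sample."""
    corrs = {}
    for lo, hi in bands:
        mask = (wavelengths >= lo) & (wavelengths <= hi)
        band_mean = spectra[:, mask].mean(axis=1)
        corrs[(lo, hi)] = float(np.corrcoef(band_mean, p_conc)[0, 1])
    return corrs

# Synthetic data: 30 samples, spectra 400-2500 nm, reflectance that
# depends weakly on a synthetic P concentration plus noise.
rng = np.random.default_rng(0)
wl = np.arange(400, 2501)
p = rng.uniform(5, 50, size=30)
spec = 0.3 + 0.001 * p[:, None] + 0.01 * rng.standard_normal((30, wl.size))
corrs = band_correlations(spec, wl, p, [(450, 550), (900, 1100), (2200, 2300)])
```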

Relevance: 100.00%

Abstract:

The human genome comprises roughly 20 000 protein-coding genes. Proteins are the building material of cells and tissues, and they are functional compounds with an important role in many cellular responses, such as cell signalling. In multicellular organisms such as humans, cells need to communicate with each other in order to maintain normal tissue function within the body. This complex signalling between and within cells is mediated by proteins and their post-translational modifications, one of the most important being phosphorylation. The work presented here concerns the development and use of tools for phosphorylation analysis. Mass spectrometers have become essential tools for studying proteins and proteomes. In mass spectrometry oriented proteomics, proteins can be identified and their post-translational modifications studied. The objectives of this Ph.D. thesis were to improve the robustness of sample handling methods for peptides and their phosphorylation status prior to mass spectrometry analysis. The focus was on developing strategies that enable the acquisition of more MS measurements per sample and higher-quality MS spectra, and that simplify and speed up enrichment procedures for phosphopeptides. A further objective was to apply these methods to characterize the phosphorylation sites of phosphopeptides. In these studies a new MALDI matrix was developed that gave more homogeneous, intense and durable signals than the traditional CHCA matrix. This new matrix, along with other matrices, was subsequently used to develop a method that combines, for identical peptides, multiple spectra acquired with different matrices. With this approach it was possible to identify more phosphopeptides than with conventional LC/ESI-MS/MS methods, while using five times less sample. In addition, a phosphopeptide-affinity MALDI target was prepared to capture and immobilise phosphopeptides from a standard peptide mixture while maintaining their spatial orientation.
Furthermore, a new protocol utilizing commercially available conductive glass slides was developed that enabled fast and sensitive phosphopeptide purification. This protocol was applied to characterize the in vivo phosphorylation of a signalling protein, NFATc1. Evidence for 12 phosphorylation sites was found, many of them in multiply phosphorylated peptides.
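The mass shift that MS-based phosphopeptide identification relies on is the addition of one HPO3 group (79.9663 Da, monoisotopic) per phosphorylation site. A minimal sketch of the corresponding m/z calculation (the 1500 Da peptide mass is hypothetical):

```python
PHOSPHO_SHIFT = 79.96633  # monoisotopic mass of one HPO3 group [Da]
PROTON = 1.00728          # proton mass [Da]

def phospho_mz(peptide_mass, n_phospho, charge):
    """m/z of a peptide ion carrying n_phospho phosphate groups:
    each phosphorylation adds one HPO3 group to the neutral mass,
    and the ion carries `charge` protons."""
    m = peptide_mass + n_phospho * PHOSPHO_SHIFT
    return (m + charge * PROTON) / charge

# A hypothetical 1500 Da peptide, doubly phosphorylated, as a 2+ ion:
mz = phospho_mz(1500.0, 2, 2)
```

Spotting these characteristic shifts between spectra of the unmodified and modified peptide is what allows phosphorylation sites, such as the 12 found on NFATc1, to be assigned.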

Relevance: 100.00%

Abstract:

The aim of this work was to develop and validate simple, accurate and precise spectroscopic methods (multicomponent, dual wavelength and simultaneous equations) for the simultaneous estimation and dissolution testing of ofloxacin and ornidazole tablet dosage forms. The dissolution medium was 900 ml of 0.01 N HCl, using a paddle apparatus at a stirring rate of 50 rpm. The drug release was evaluated by the developed and validated spectroscopic methods. Ofloxacin and ornidazole showed λmax values of 293.4 and 319.6 nm in 0.01 N HCl. The methods were validated to meet the requirements for a global regulatory filing; the validation included linearity, precision and accuracy. In addition, recovery studies and dissolution studies of three different tablets were compared, and the results obtained show no significant difference among the products.
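The simultaneous-equation (Vierordt) method named above reduces to solving a 2x2 linear system: the mixture's absorbance at each drug's λmax is a linear combination of the two concentrations. A minimal sketch, with hypothetical absorptivity values rather than those of the validated method:

```python
import numpy as np

def simultaneous_equations(absorbances, absorptivities):
    """Vierordt's method: absorbances of the two-drug mixture at the
    two lambda-max values (293.4 and 319.6 nm here) form the linear
    system A = E @ c, solved for the concentration vector c."""
    E = np.array(absorptivities)   # rows: wavelengths, cols: drugs
    A = np.array(absorbances)
    return np.linalg.solve(E, A)

# Hypothetical absorptivities for ofloxacin (col 1) / ornidazole (col 2):
E = [[0.90, 0.20],    # at 293.4 nm
     [0.15, 0.70]]    # at 319.6 nm
c = simultaneous_equations([0.55, 0.425], E)   # measured mixture absorbances
```

With these numbers both concentrations come out as 0.5 in the units of the absorptivity calibration; in practice E is determined from standards of each pure drug.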

Relevance: 100.00%

Abstract:

Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
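The MCMC machinery referred to above can be illustrated with the basic random-walk Metropolis sampler (a generic textbook sketch, not the methods developed in the thesis):

```python
import math, random

def metropolis(log_post, x0, n_steps, step=0.5, seed=1):
    """Random-walk Metropolis sampler, the basic MCMC building block
    for Bayesian parameter estimation: propose a perturbed parameter,
    accept it with probability min(1, posterior ratio), and the chain
    explores the whole posterior distribution rather than returning
    a single point estimate."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_steps):
        prop = x + rng.gauss(0, step)
        if math.log(rng.random()) < log_post(prop) - log_post(x):
            x = prop
        chain.append(x)
    return chain

# Toy posterior: a standard normal; started off-target at x0 = 3,
# the chain mean should settle near 0.
chain = metropolis(lambda t: -0.5 * t * t, x0=3.0, n_steps=20000)
mean = sum(chain) / len(chain)
```

For computationally intensive models, such as the climate models mentioned above, each `log_post` call is expensive, which is what motivates the thesis's adaptations of the basic scheme.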

A fast-changing environment puts pressure on firms to share large amounts of information with their customers and suppliers. The terms information integration and information sharing, used interchangeably in the research literature, are essential for facilitating a smooth flow of information throughout the supply chain. By integrating and sharing information, firms aim to improve their logistics performance. Firms share information with their suppliers and customers using traditional communication methods (telephone, fax, e-mail, written and face-to-face contacts) and advanced or modern communication methods such as electronic data interchange (EDI), enterprise resource planning (ERP), web-based procurement systems, electronic trading systems, and web portals. Adopting new ways of using IT is one important resource for staying competitive in a rapidly changing market (Saeed et al. 2005, 387), and an information system that provides people with the information they need to perform their work supports company performance (Boddy et al. 2005, 26). The purpose of this research was to test and understand the relationship between information integration with key suppliers and/or customers and a firm's logistics performance, especially when information technology (IT) and information systems (IS) are used for integrating information. Both quantitative and qualitative research methods were used. Special attention was paid to the scope, level, and direction of information integration (Van Donk & van der Vaart 2005a). In addition, the four elements of integration (Jahre & Fabbe-Costes 2008) are closely tied to the frame of reference: integration of flows, integration of processes and activities, integration of information technologies and systems, and integration of actors.
The study found that information integration has a weak positive relationship with operational performance and a moderate positive relationship with strategic performance. The potential performance improvements found in this study range from efficiency, delivery, and quality improvements (operational) to profit, profitability, and customer-satisfaction improvements (strategic). The results indicate that although information integration has an impact on a firm's logistics performance, not all potential performance improvements have been achieved. The study also found that the use of IT and IS has a moderate positive relationship with information integration. Almost all case companies agreed that the use of IT and IS could facilitate information integration and improve their logistics performance. The case companies felt that implementing a web portal or a data bank would benefit them by enhancing their performance and increasing information integration.

The objective of this thesis is to study the role of received advance payments in working capital management by creating a new measurement, and to study the relationship between advance payments and profitability. The study was conducted using a narrative literature review and quantitative research methods, analyzing 108 companies listed on the Helsinki Stock Exchange. The results indicate that 68% of the studied companies receive advance payments and that the average cycle time for received advance payments is 13 days. A new key figure was created to include received advance payments in the calculation of working capital; when they are included, received advance payments shorten the working capital cycle by those 13 days. The role of advance payments is not as significant as that of receivables and inventories, but it may exceed that of payables if the company receives substantial amounts of advance payments. Companies in three sectors receive more advance payments than the average company: project business, ICT, and publishing. Based on the results of this study, there is a negative correlation between advance payments and profitability.
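The calculation the abstract describes can be sketched as a standard working-capital cycle extended with a term for received advance payments. The function names, the 365-day convention, and the example balances are illustrative assumptions, not the thesis's exact key figure:

```python
def cycle_days(balance, annual_sales):
    """Convert a balance-sheet item to days outstanding (365-day year)."""
    return 365.0 * balance / annual_sales

def working_capital_cycle(inventories, receivables, payables,
                          advances_received, annual_sales):
    # Classic cycle in days: inventories + receivables - payables.
    # Received advance payments shorten the cycle further, like payables.
    return (cycle_days(inventories, annual_sales)
            + cycle_days(receivables, annual_sales)
            - cycle_days(payables, annual_sales)
            - cycle_days(advances_received, annual_sales))

# Example: advances worth 3.6% of annual sales shorten the cycle by ~13 days.
classic  = working_capital_cycle(120, 150, 90, 0, annual_sales=1000)
with_adv = working_capital_cycle(120, 150, 90, 36, annual_sales=1000)
# classic ≈ 65.7 days, with_adv ≈ 52.6 days
```

This makes concrete why the advance-payment term matters most for project-business, ICT, and publishing firms: only when `advances_received` is a noticeable fraction of sales does its day count rival the payables term.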