980 results for Source wavelet estimation
Abstract:
The aim of this master's thesis is to develop an algorithm for calculating the cable network of the CHGRES heat and power station. The algorithm takes into account the main aspects that influence cable network reliability. Using the developed algorithm, the optimal solution for modernizing the cable system from an economic and technical point of view was obtained. The condition of the existing cable lines shows that replacement is necessary; otherwise, a fault situation would occur, and the company would lose not only money but also its prestige. As a solution, XLPE single-core cables are more profitable than the other cable types considered in this work. Moreover, the dependence of the short-circuit current on the number of 10/110 kV transformers connected in parallel between the main grid and the considered 10 kV busbar is presented, together with how it affects the final decision. Furthermore, the company's losses in the power (capacity) market due to a fault situation are presented; these losses are comparable to the investment needed to replace the existing cable system.
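As a hedged illustration of the relationship mentioned above, and not the thesis's actual calculation, the sketch below shows how the three-phase short-circuit current at a 10 kV busbar grows as more identical 10/110 kV transformers are connected in parallel, since the equivalent source impedance drops; all ratings are assumed example values.

```python
# Hedged sketch: effect of parallel 10/110 kV transformers on the short-circuit
# current at a 10 kV busbar. All ratings below are illustrative assumptions,
# not values taken from the thesis.
import math

U_kV = 10.5          # assumed nominal busbar voltage, kV
S_tr_MVA = 40.0      # assumed rated power of one transformer, MVA
u_k = 0.105          # assumed short-circuit impedance of one transformer, p.u.

def short_circuit_current_kA(n_parallel: int) -> float:
    """Three-phase short-circuit current (kA) with n identical transformers
    in parallel, neglecting the upstream grid impedance."""
    z_one = u_k * U_kV**2 / S_tr_MVA      # impedance of one transformer, ohm
    z_eq = z_one / n_parallel             # parallel units share the current
    return U_kV / (math.sqrt(3) * z_eq)   # I_sc = U / (sqrt(3) * Z_eq)

for n in (1, 2, 3):
    print(f"{n} transformer(s): I_sc = {short_circuit_current_kA(n):.1f} kA")
```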
Abstract:
This master's thesis examines source-text and target-text orientation in drama translation. The objects of study were the translations' vocabulary, syntax, stagecraft, imagery, wordplay, metre, and style. The purpose of the study was to determine whether the theoretical shift of emphasis from source-text orientation to target-text orientation is visible in Finnish drama translation. The assumption was that the shift would be reflected in the translation strategies used. The theoretical part of the study first discusses source-text and target-text oriented translation theories. Two source-text oriented theories are presented first: Catford's (1965) formal correspondence and Nida's (1964) dynamic equivalence. Of the target-text oriented theories, the theoretical views of Toury (1980) and Newmark (1981) are discussed, as well as the skopos theory introduced by Reiss and Vermeer (1986). The principles of foreignization and domestication are presented briefly. The theoretical part also covers drama translation, William Shakespeare's language, and the translation problems associated with it. In addition, I briefly describe Shakespeare translation in Finland and the four translators of Julius Caesar. The material of the study consisted of four Finnish translations of Shakespeare's play Julius Caesar: Paavo Cajander's translation published in 1883, Eeva-Liisa Manner's in 1983, Lauri Sipari's in 2006, and Jarkko Laine's in 2007. In the analysis, the translations were compared with the source text and with each other, and the translators' translation choices were compared. The results were in line with the assumption: source-text oriented translation strategies were used less in the newer translations than in the older ones. The target-text oriented strategies differed considerably from one another, and the newest translation can be called an adaptation. In further research, the material should be expanded to cover other Finnish translations of Shakespeare's plays. Translations from different periods should be compared with one another in order to reliably describe the change in the use of source-text and target-text oriented translation strategies and to map the strategies typical of each period.
Abstract:
In this work, a new mathematical equation correction approach for overcoming spectral and transport interferences was proposed. The proposal was applied to eliminate the spectral interference caused by PO molecules at the 217.0005 nm Pb line and the transport interference caused by variations in phosphoric acid concentration. Correction may be necessary at 217.0005 nm to account for the contribution of PO, since A_total(217.0005 nm) = A_Pb(217.0005 nm) + A_PO(217.0005 nm). This may easily be done by measuring another PO wavelength (e.g. 217.0458 nm) and calculating the relative contribution of the PO absorbance (A_PO) to the total absorbance (A_total) at 217.0005 nm: A_Pb(217.0005 nm) = A_total(217.0005 nm) - A_PO(217.0005 nm) = A_total(217.0005 nm) - k·A_PO(217.0458 nm). The correction factor k is calculated from the slopes of calibration curves built for phosphorus (P) standard solutions measured at 217.0005 and 217.0458 nm, i.e. k = slope(217.0005 nm)/slope(217.0458 nm). For a wavelength-integrated absorbance of 3 pixels and a sample aspiration rate of 5.0 ml min-1, analytical curves in the 0.1 - 1.0 mg L-1 Pb range with linearity better than 0.9990 were consistently obtained. Calibration curves for P at 217.0005 and 217.0458 nm with linearity better than 0.998 were obtained. Relative standard deviations (RSD) of measurements (n = 12) in the ranges of 1.4 - 4.3% and 2.0 - 6.0% were obtained without and with the mathematical equation correction approach, respectively. The limit of detection calculated for the analytical line at 217.0005 nm was 10 µg L-1 Pb. Recoveries for Pb spikes were in the 97.5 - 100% and 105 - 230% intervals with and without the mathematical equation correction approach, respectively.
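A minimal numerical sketch of the correction described above, assuming illustrative absorbance and slope values rather than the paper's measured data:

```python
# Hedged sketch of the mathematical correction for PO overlap at the Pb line.
# Numbers are illustrative assumptions, not the paper's measured values.

# Slopes of P calibration curves at the two PO wavelengths
slope_217_0005 = 0.042   # assumed slope at 217.0005 nm (overlapping the Pb line)
slope_217_0458 = 0.060   # assumed slope at the reference 217.0458 nm PO line
k = slope_217_0005 / slope_217_0458   # k = slope(217.0005 nm) / slope(217.0458 nm)

# Measured absorbances for a sample (assumed values)
A_total_217_0005 = 0.250   # total absorbance at 217.0005 nm (Pb + PO)
A_PO_217_0458 = 0.120      # PO absorbance at 217.0458 nm (no Pb contribution)

# A_Pb(217.0005) = A_total(217.0005) - k * A_PO(217.0458)
A_Pb = A_total_217_0005 - k * A_PO_217_0458
print(f"k = {k:.3f}, corrected Pb absorbance = {A_Pb:.3f}")
```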
Abstract:
A quantitative analysis is made of the correlation of a thermodynamic property, i.e., the standard enthalpy of formation (ΔHf°), with Kier's molecular connectivity index (¹Xv), van der Waals volume (Vw), electrotopological state index (E) and refractotopological state index (R) for alkanes in the gaseous state. The regression analysis reveals a significant linear correlation of the standard enthalpy of formation (ΔHf°) with ¹Xv, Vw, E and R. The equations obtained by regression analysis may be used to estimate the standard enthalpy of formation (ΔHf°) of alkanes in the gaseous state.
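As a hedged sketch of the kind of multiple linear regression used here, the snippet below fits ΔHf° against the four descriptors by least squares; the descriptor values and enthalpies are placeholder numbers, not the study's data set.

```python
# Hedged sketch: multiple linear regression of standard enthalpy of formation
# on molecular descriptors (1Xv, Vw, E, R). All numbers are placeholders.
import numpy as np

# Rows: alkanes; columns: [1Xv, Vw, E, R] (illustrative values only)
X = np.array([
    [1.414, 45.5, 2.0, 10.1],
    [1.914, 55.8, 2.5, 12.3],
    [2.414, 66.2, 3.0, 14.6],
    [2.914, 76.5, 3.5, 16.8],
    [3.414, 86.9, 4.0, 19.0],
    [3.914, 97.2, 4.5, 21.3],
])
dHf = np.array([-104.7, -125.6, -146.8, -167.2, -187.8, -208.4])  # placeholder kJ/mol

# Add an intercept column and solve the least-squares problem
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, dHf, rcond=None)
print("intercept and coefficients:", coef)

# Predicted enthalpy for a new (hypothetical) descriptor vector
x_new = np.array([1.0, 4.414, 107.5, 5.0, 23.5])
print("predicted dHf:", x_new @ coef)
```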
Abstract:
The aim of this work was to develop and validate simple, accurate and precise spectroscopic methods (multicomponent, dual wavelength and simultaneous equations) for the simultaneous estimation and dissolution testing of ofloxacin and ornidazole in tablet dosage forms. The dissolution medium was 900 ml of 0.01 N HCl, using a paddle apparatus at a stirring rate of 50 rpm. Drug release was evaluated by the developed and validated spectroscopic methods. Ofloxacin and ornidazole showed λmax values of 293.4 and 319.6 nm, respectively, in 0.01 N HCl. The methods were validated to meet requirements for a global regulatory filing; validation included linearity, precision and accuracy. In addition, recovery studies and dissolution studies of three different tablets were compared, and the results obtained show no significant difference among products.
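A hedged sketch of the simultaneous-equation (Vierordt) calculation implied above: mixture absorbances measured at the two λmax values are combined with pure-component absorptivities at those wavelengths to solve a 2x2 linear system. The absorptivity and absorbance values are placeholders, not the validated method's constants.

```python
# Hedged sketch of the simultaneous-equation (Vierordt) method for a
# two-component mixture. Absorptivity values are illustrative placeholders.
import numpy as np

# A(1%, 1 cm)-style absorptivities of ofloxacin (OFX) and ornidazole (ORN)
# at 293.4 nm and 319.6 nm (placeholder numbers):
a = np.array([
    [980.0, 120.0],   # at 293.4 nm: [a_OFX, a_ORN]
    [150.0, 410.0],   # at 319.6 nm: [a_OFX, a_ORN]
])

# Mixture absorbances measured at the two wavelengths (placeholders)
A = np.array([0.652, 0.318])

# Solve a . c = A for the concentration vector c (g/100 ml in this scaling)
c_ofx, c_orn = np.linalg.solve(a, A)
print(f"ofloxacin = {c_ofx*1e4:.2f} µg/ml, ornidazole = {c_orn*1e4:.2f} µg/ml")
```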
Abstract:
This work describes a method to determine Cu over a wide concentration range in a single run, without the need for further dilutions, employing high-resolution continuum source flame atomic absorption spectrometry. Different atomic lines for Cu at 324.754 nm, 327.396 nm, 222.570 nm, 249.215 nm and 224.426 nm were evaluated and the main figures of merit established. Absorbance measurements at 324.754 nm, 249.215 nm and 224.426 nm allow the determination of Cu in the 0.07 - 5.0 mg L-1, 5.0 - 100 mg L-1 and 100 - 800 mg L-1 concentration intervals, respectively, with linear correlation coefficients better than 0.998. Limits of detection were 21 µg L-1, 310 µg L-1 and 1400 µg L-1 for 324.754 nm, 249.215 nm and 224.426 nm, respectively, and relative standard deviations (n = 12) were ≤ 2.7%. The proposed method was applied to water samples spiked with Cu, and the results were in agreement at the 95% confidence level (paired t-test) with those obtained by line-source flame atomic absorption spectrometry.
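A hedged sketch of the line-switching logic described above: each Cu line covers one concentration interval, so a reading is quantified with whichever calibration bracket it falls into. Calibration slopes and intercepts are placeholders, not the paper's figures of merit.

```python
# Hedged sketch: choosing among Cu lines of different sensitivity so that a
# single run covers 0.07-800 mg/L. Calibration parameters are placeholders.

# (line, slope, intercept, lower and upper limit of the linear range in mg/L)
CAL = [
    ("324.754 nm", 0.1800, 0.002, 0.07, 5.0),
    ("249.215 nm", 0.0095, 0.001, 5.0, 100.0),
    ("224.426 nm", 0.0011, 0.000, 100.0, 800.0),
]

def quantify(absorbances: dict) -> tuple:
    """Return (line, concentration) using the first line whose back-calculated
    concentration falls within its linear range."""
    for line, slope, intercept, lo, hi in CAL:
        conc = (absorbances[line] - intercept) / slope
        if lo <= conc <= hi:
            return line, conc
    raise ValueError("concentration outside all calibrated ranges")

# Example: absorbances for one sample measured at the three lines (assumed)
sample = {"324.754 nm": 1.95, "249.215 nm": 0.60, "224.426 nm": 0.07}
print(quantify(sample))
```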
Abstract:
A field experiment conducted with the irrigated rice cultivar BRS Formoso to assess the efficiency of calcinated serpentinite as a silicon source on grain yield was used to study its effect on leaf blast severity and tissue sugar levels. The treatments consisted of five rates of calcinated serpentinite (0, 2, 4, 6 and 8 Mg ha-1) incorporated into the soil prior to planting. Leaf blast severity was reduced at a rate of 2.96% per ton of calcinated serpentinite. The total tissue sugar content decreased significantly as the rate of serpentinite applied increased (R² = 0.83). The relationship between tissue sugar content and leaf blast severity was linear and positive (R² = 0.81). The decrease in leaf blast severity with increasing rates of calcinated serpentinite was also linear (R² = 0.96) and can be ascribed to the reduced sugar level.
Abstract:
Necrotrophic parasites of above-ground plant parts survive saprophytically between growing seasons in host crop residues. In an experiment conducted under field conditions, the time required, in months, for corn and soybean residues to be completely decomposed was quantified. Residues were laid on the soil surface to simulate no-till farming. Crop debris of the two plant species, collected on the harvesting day, was cut into 5.0 cm long pieces, and a 200 g mass was added to nylon mesh bags. At monthly intervals, bags were taken to the laboratory for weighing. Corn residues were decomposed within 37.0 months and those of soybean within 34.5 months. The main necrotrophic fungi diagnosed in the corn residues were Colletotrichum graminicola, Diplodia spp. and Gibberella zeae, and those in soybean residues were Cercospora kikuchii, Colletotrichum spp., Glomerella sp. and Phomopsis spp. Thus, these periods should be observed in crop rotation aimed at eliminating contaminated residues and, consequently, the inoculum from the cultivated area.
Abstract:
Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
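As a hedged, generic illustration of MCMC-based parameter estimation, and not the specific adaptive algorithms developed in the thesis, the sketch below runs a random-walk Metropolis sampler for the parameters of a simple model with Gaussian measurement noise; the model, synthetic data, and tuning constants are assumptions.

```python
# Hedged sketch: random-walk Metropolis sampling of model parameters.
# The model y = a*x + b, the synthetic data and the proposal width are
# illustrative assumptions, not the thesis's case studies.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a "true" parameter vector with Gaussian noise
x = np.linspace(0.0, 10.0, 50)
theta_true = np.array([2.0, -1.0])
sigma = 0.5
y = theta_true[0] * x + theta_true[1] + rng.normal(0.0, sigma, x.size)

def log_posterior(theta):
    """Gaussian likelihood with a flat prior (up to an additive constant)."""
    resid = y - (theta[0] * x + theta[1])
    return -0.5 * np.sum(resid**2) / sigma**2

theta = np.zeros(2)                  # starting point
logp = log_posterior(theta)
chain = []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, 0.05, size=2)   # random-walk proposal
    logp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < logp_prop - logp:       # Metropolis acceptance
        theta, logp = proposal, logp_prop
    chain.append(theta.copy())

chain = np.array(chain)[5000:]       # discard burn-in
print("posterior means:", chain.mean(axis=0))
print("posterior stds: ", chain.std(axis=0))
```

Once such a chain is available, the whole parameter distribution, rather than a point estimate, can be propagated through the model for tasks like experimental design or model-based optimization, as described in the abstract.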
Abstract:
The development of bioenergy on the basis of wood fuels has received considerable attention in recent decades. The combination of large forest resources and reliance on fossil fuels makes the use of wood chips in Russia a relevant topic for analysis. The main objective of this study is to describe the current state of and prospects for the production of wood chips and their use as a source of energy in the North-West of Russia. The study utilizes an integrated approach to explore the wood chips market on the basis of a comprehensive analysis of documentation and expert opinions. The analysis of the wood chips market was performed for eight regions of the North-West district of Russia along two major dimensions: its current state and its prospects over the next five years. The results of the study provide a comprehensive picture of the wood chips market, including the potential for wood chips production, the specific features of production and consumption, and the prospects for market development within the regions of the North-West district of Russia. The study demonstrated that the wood chips market is underdeveloped in the North-West of Russia. The findings of the work may be used by forest companies for strategic planning.
Abstract:
Machine learning provides tools for automated construction of predictive models in data intensive areas of engineering and science. The family of regularized kernel methods have in the recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is from a set of past observations to learn a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings, examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods, based on this approach, has in the past proven to be challenging. Moreover, it is not clear what techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, and how the techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank, that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts. Part I provides the background for the research work and summarizes the most central results, Part II consists of the five original research articles that are the main contribution of this thesis.
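A hedged illustration of the pairwise criterion discussed above: for bipartite ranking, the fraction of incorrectly ordered positive-negative pairs equals one minus the AUC. The scores and labels below are placeholders; this is not the RankRLS training algorithm itself, only the evaluation criterion it targets.

```python
# Hedged sketch: pairwise ranking error for the bipartite case, i.e. the
# probability that a randomly drawn positive-negative pair is misordered.
# For binary labels this equals 1 - AUC. Data below are placeholders.
import numpy as np

def pairwise_ranking_error(scores, labels):
    """Fraction of (positive, negative) pairs ranked incorrectly;
    ties count as half an error."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Compare every positive score with every negative score
    diff = pos[:, None] - neg[None, :]
    errors = (diff < 0).sum() + 0.5 * (diff == 0).sum()
    return errors / (pos.size * neg.size)

scores = np.array([0.9, 0.8, 0.35, 0.6, 0.2, 0.4])   # model outputs (assumed)
labels = np.array([1,   1,   1,    0,   0,   0  ])   # relevance labels (assumed)
err = pairwise_ranking_error(scores, labels)
print(f"pairwise error = {err:.3f}, AUC = {1 - err:.3f}")
```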
Abstract:
Inventory and prediction of cork harvest over time and space is important to forest managers who must plan and organize harvest logistics (transport, storage, etc.). Common field inventory methods, including the stem density, diameter and height structure, are costly and generally point (plot) based. Furthermore, the irregular horizontal structure of cork oak stands makes it difficult, if not impossible, to interpolate between points. We propose a new method to estimate cork production using digital multispectral aerial imagery. We study the spectral response of individual trees in the visible and near-infrared spectra and then correlate that response with cork production prior to harvest. We use ground measurements of individual tree production to evaluate the model's predictive capacity. We propose 14 candidate variables to predict cork production based on crown size in combination with different NDVI-derived indices. We use the Akaike Information Criterion to choose the best among them. The best model is composed of combinations of different NDVI derivatives that include the red, green, and blue channels. The proposed model is 15% more accurate than a model that includes only crown projection without any spectral information.
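A hedged sketch of the kind of model comparison described above: candidate linear models relating cork production to crown size and NDVI-based predictors are ranked by AIC computed from their residual sums of squares. Predictor values and productions are placeholder numbers, not the study's field data or its 14 candidate variables.

```python
# Hedged sketch: ranking candidate cork-production models by AIC.
# Crown areas, NDVI values and productions are placeholder numbers.
import numpy as np

n = 8
rng = np.random.default_rng(1)
crown_area = rng.uniform(10, 40, n)          # m^2, assumed
ndvi = rng.uniform(0.3, 0.8, n)              # assumed NDVI per tree crown
production = 1.5 * crown_area + 20 * ndvi + rng.normal(0, 2, n)   # kg, assumed

def aic_of_fit(X, y):
    """AIC of an ordinary least-squares fit with Gaussian errors."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    k = X1.shape[1] + 1                      # coefficients + error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

candidates = {
    "crown only":   crown_area[:, None],
    "NDVI only":    ndvi[:, None],
    "crown + NDVI": np.column_stack([crown_area, ndvi]),
}
for name in sorted(candidates, key=lambda c: aic_of_fit(candidates[c], production)):
    print(f"{name:15s} AIC = {aic_of_fit(candidates[name], production):.2f}")
```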
Abstract:
The numerous methods for calculating potential or reference evapotranspiration (ETo or ETP) almost always do so for a 24-hour period, including values of climatic parameters throughout the nocturnal period (daily averages). These nocturnal values have virtually no effect on transpiration, which constitutes the main evaporative demand process in cases of localized irrigation. The aim of the current manuscript was to propose a rather simplified model for the calculation of diurnal (daytime) ETo. It deals with an alternative approach, based on the theoretical background of the Penman method, that does not require values of aerodynamic conductance for the latent and sensible heat fluxes, nor data on wind speed and relative humidity of the air. The comparison between the diurnal values of ETo measured in high-precision weighing lysimeters and those estimated by either the Penman-Monteith method or the Simplified-Penman approach under study also points to fairly consistent agreement among the potential demand calculation criteria. The Simplified-Penman approach was a feasible alternative for estimating ETo under the local meteorological conditions of the two field trials. Given the availability of the required input data, the method could be employed in other climatic regions for irrigation scheduling.
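As a hedged sketch of a radiation-driven daytime ETo term of the kind the abstract describes (omitting the aerodynamic component; this is not necessarily the authors' exact Simplified-Penman formulation), the snippet below computes the equilibrium term Δ/(Δ+γ)·Rn/λ from assumed daytime net radiation and air temperature, using the standard FAO-56 expressions for Δ and γ.

```python
# Hedged sketch: radiation-only estimate of daytime reference ET, i.e. the
# equilibrium term delta/(delta+gamma) * Rn / lambda, which drops the
# aerodynamic part of Penman's equation. This illustrates the idea only,
# not necessarily the Simplified-Penman formulation of the manuscript.
import math

def daytime_eto_mm(rn_mj_m2: float, t_air_c: float, pressure_kpa: float = 101.3) -> float:
    """Daytime ETo (mm) from net radiation (MJ m-2 over the daytime period)
    and mean daytime air temperature (deg C)."""
    # Slope of the saturation vapour pressure curve (kPa per deg C), FAO-56 form
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2
    gamma = 0.000665 * pressure_kpa   # psychrometric constant, kPa per deg C
    lam = 2.45                        # latent heat of vaporization, MJ/kg
    return (delta / (delta + gamma)) * rn_mj_m2 / lam

# Example with assumed daytime values: Rn = 12 MJ m-2, Tair = 26 deg C
print(f"daytime ETo = {daytime_eto_mm(12.0, 26.0):.2f} mm")
```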