930 results for "Method of linear transformations"


Abstract:

Sodium and potassium are the common alkalis present in fly ash. Excessive amounts of fly ash alkalis can cause efflorescence problems in concrete products and raise concern about the effectiveness of the fly ash to mitigate alkali-silica reaction (ASR). The available alkali test, which is commonly used to measure fly ash alkali, takes approximately 35 days for execution and reporting. Hence, in many instances the fly ash has already been incorporated into concrete before the test results are available. This complicates the job of the fly ash marketing agencies and it leads to disputes with fly ash users who often are concerned with accepting projects that contain materials that fail to meet specification limits. The research project consisted of a lab study and a field study. The lab study focused on the available alkali test and how fly ash alkali content impacts common performance tests (mortar-bar expansion tests). Twenty-one fly ash samples were evaluated during the testing. The field study focused on the inspection and testing of selected, well documented pavement sites that contained moderately reactive fine aggregate and high-alkali fly ash. A total of nine pavement sites were evaluated. Two of the sites were control sites that did not contain fly ash. The results of the lab study indicated that the available alkali test is prone to experimental errors that cause poor agreement between testing labs. A strong (linear) relationship was observed between available alkali content and total alkali content of Class C fly ash. This relationship can be used to provide a quicker, more precise method of estimating the available alkali content. The results of the field study failed to link the use of high-alkali fly ash with the occurrence of ASR in the various concrete sites. Petrographic examination of the pavement cores indicated that Wayland sand is an ASR-sensitive aggregate. This was in good agreement with Iowa DOT field service records. It was recommended that preventative measures should be used when this source of sand is used in concrete mixtures.
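
As an illustration of how such a linear relationship can be turned into a quick estimator, the sketch below fits an ordinary least-squares line to paired alkali measurements and uses it to predict available alkali from total alkali. The data and the resulting coefficients are hypothetical placeholders, not the values derived from the 21 fly ash samples in the study.

```python
import numpy as np

# Hypothetical paired measurements for Class C fly ash (percent Na2O-equivalent).
# Real coefficients would come from the 21-sample data set described in the report.
total_alkali = np.array([5.8, 6.4, 7.1, 7.9, 8.6])      # total alkali, %
available_alkali = np.array([1.4, 1.6, 1.9, 2.2, 2.4])  # available alkali, %

# Ordinary least-squares fit: available = a * total + b
a, b = np.polyfit(total_alkali, available_alkali, deg=1)

def estimate_available_alkali(total):
    """Quick estimate of available alkali from a total-alkali measurement."""
    return a * total + b

print(f"slope={a:.3f}, intercept={b:.3f}")
print("estimated available alkali at 7.5% total:",
      round(estimate_available_alkali(7.5), 2), "%")
```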

Abstract:

1. Few examples of habitat-modelling studies of rare and endangered species exist in the literature, although from a conservation perspective predicting their distribution would prove particularly useful. Paucity of data and lack of valid absences are the probable reasons for this shortcoming. Analytic solutions to accommodate the lack of absences include the ecological niche factor analysis (ENFA) and the use of generalized linear models (GLM) with simulated pseudo-absences.
2. In this study we tested a new approach to generating pseudo-absences, based on a preliminary ENFA habitat suitability (HS) map, for the endangered species Eryngium alpinum. This method of generating pseudo-absences was compared with two others: (i) a GLM with pseudo-absences generated entirely at random, and (ii) an ENFA alone.
3. The influence of two different spatial resolutions (i.e. grain) was also assessed, to tackle the dilemma of quality (grain) vs. quantity (number of occurrences). Each combination of the three above-mentioned methods with the two grains generated a distinct HS map.
4. Four evaluation measures were used to compare these HS maps: total deviance explained, best kappa, Gini coefficient and minimal predicted area (MPA). The last is a new evaluation criterion proposed in this study.
5. Results showed that (i) GLM models using ENFA-weighted pseudo-absences provide better results, except for the MPA value, and that (ii) quality (spatial resolution and locational accuracy) of the data appears to be more important than quantity (number of occurrences). Furthermore, the proposed MPA value is suggested as a useful measure of model evaluation when used to complement classical statistical measures.
6. Synthesis and applications. We suggest that the use of ENFA-weighted pseudo-absences is a possible way to enhance the quality of GLM-based potential distribution maps and that data quality (i.e. spatial resolution) prevails over quantity (i.e. number of data). Increased accuracy of potential distribution maps could help to define better suitable areas for species protection and reintroduction.
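
A minimal sketch of the core idea behind ENFA-weighted pseudo-absences, assuming a precomputed habitat-suitability score per landscape cell: pseudo-absences are drawn preferentially from cells that the preliminary HS map rates as unsuitable, and a binomial GLM (here scikit-learn's LogisticRegression as a stand-in) is then fitted on presences versus pseudo-absences. All arrays (env, hs, presence_idx) are hypothetical; the study itself used ENFA output for Eryngium alpinum.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical inputs: one row per landscape cell.
# env: environmental predictors; hs: habitat suitability (0-1) from a preliminary ENFA run.
n_cells = 5000
env = rng.normal(size=(n_cells, 3))
hs = rng.uniform(size=n_cells)                               # stand-in for the ENFA HS map
presence_idx = rng.choice(n_cells, size=80, replace=False)   # known occurrences

# ENFA-weighted pseudo-absences: sample cells with probability proportional to (1 - HS),
# so cells rated as unsuitable are more likely to be treated as absences.
weights = 1.0 - hs
weights[presence_idx] = 0.0          # never draw a pseudo-absence at a presence cell
weights /= weights.sum()
pseudo_absence_idx = rng.choice(n_cells, size=400, replace=False, p=weights)

# Binomial GLM (logistic regression) on presences vs. weighted pseudo-absences.
X = np.vstack([env[presence_idx], env[pseudo_absence_idx]])
y = np.concatenate([np.ones(len(presence_idx)), np.zeros(len(pseudo_absence_idx))])
glm = LogisticRegression().fit(X, y)

# Predicted habitat suitability for every cell.
hs_map = glm.predict_proba(env)[:, 1]
```

The resulting hs_map could then be assessed with measures such as kappa or the minimal predicted area mentioned in the abstract.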

Abstract:

THESIS SUMMARY. During my thesis I investigated the physiological causes of ageing using ants as a model. The three ant castes (males, workers and queens) have very different lifespans while being genetically identical. This implies that the differences in lifespan are due to caste-specific differences in gene expression patterns. My work in ants consisted, on the one hand, of setting up the tools to identify such genes on a large scale and, on the other, of studying the role of genes and mechanisms that affect lifespan in other species. To identify new genes potentially involved in ageing, we developed DNA microarrays. This technique allows the expression level of thousands of genes to be compared between two samples. Applying this method to adult queens and workers has so far allowed us to identify nine genes that are overexpressed in queens. Three of them are potentially involved in somatic maintenance and repair, two processes thought to have a crucial impact on lifespan. Among the mechanisms implicated in ageing in other species, we focused mainly on telomeres, the ends of chromosomes. In vertebrates, telomeres shorten with each cell division, in part because DNA polymerase cannot fully replicate this part of the chromosomes. Short telomeres hinder cell proliferation and can even induce apoptosis, which could affect the capacity of organisms to regenerate tissues. I showed that in male ants (the shortest-lived caste) telomeres shorten much faster than in queens and workers. The most plausible explanation for this difference is that males, being adapted to a very short life, invest only a minimum of energy in the maintenance machinery that keeps cells functioning properly. These results are interesting because they make it possible, for the first time, to link the evolutionary theories of ageing with telomere biology.
THESIS ABSTRACT. During my thesis I used ants as a model to study the proximate (i.e., molecular) causes of ageing and lifespan determination. Ant queens, workers and males differ tremendously in lifespan, although all three castes are genetically identical. Importantly, this implies that genes and molecular pathways responsible for modulating lifespan are regulated in a caste-specific manner. To find new genes potentially involved in ageing, we first constructed 371-gene cDNA microarrays for the ant L. niger. This molecular tool can be used to survey the relative gene expression levels of two samples for thousands of genes simultaneously. By applying this method to adult queens and workers we identified nine genes that are overexpressed in queens. Three of them are putatively involved in somatic maintenance and repair, two processes that have previously been suggested as important for ageing and lifespan determination. We expect to identify many more candidate genes in the near future by using the 9000-gene fire ant microarrays we have recently developed. We also investigated whether factors linked to ageing in other organisms could affect lifespan determination in ants.
One project was on telomeres, the ends of linear chromosomes. For various reasons telomeres shorten with every cell division. Since short telomeres can lead to cellular defects such as impaired cell division, telomeres have been hypothesized to play a role in ageing. We tested whether telomere length in ant somatic tissues correlates with caste-specific lifespan in young adults. The short-lived L. niger males did indeed have significantly shorter telomeres than the longer-lived queens and workers, probably because telomere attrition is faster in males than in queens and workers. Queens did not, however, have longer telomeres than the shorter-lived workers. These findings are consistent with the idea that telomere length may play a role in ageing under some circumstances, but they also clearly demonstrate that other factors must be involved. We argue that sex-specific telomere length patterns in ants ultimately reflect adaptive differences in the level of somatic maintenance between males and females, and thus create a link between telomere biology and the evolutionary theory of ageing.

Abstract:

The accumulation of aqueous pollutants is becoming a global problem. The search for suitable methods, or combinations of water treatment processes, is a task that can slow down and eventually stop the progress of water pollution. In this work, wet oxidation was considered as an appropriate technique for eliminating the impurities present in paper mill process waters. It has been shown that, when combined with traditional wastewater treatment processes, wet oxidation offers many advantages. The combination of coagulation and wet oxidation offers a new opportunity for improving the quality of wastewater destined for discharge or recycling. First of all, the utilization of coagulated sludge via wet oxidation provides a conditioning process for the sludge, i.e. dewatering, which is rather difficult to carry out with untreated waste. Secondly, Fe2(SO4)3, which is employed as a coagulant in the earlier stage, transforms the conventional wet oxidation process into a catalytic one. The use of coagulation as a post-treatment for wet oxidation makes it possible to reduce the brown hue that usually accompanies partial oxidation. As a result, the supernatant is less colored and also contains a low enough amount of Fe ions to be considered for recycling inside the mill. The thickened fraction, which contains the metal ions, is then recycled back to the wet oxidation system. It was also observed that wet oxidation is favorable for the degradation of the pitch substances (LWEs) and lignin present in paper mill process waters. Rather low operating temperatures are sufficient for wet oxidation to destroy LWEs. Oxidation in alkaline media not only provides faster elimination of pitch and lignin but also significantly improves the biodegradability of wastewater containing lignin and pitch substances. In the course of the kinetic studies, a model that can predict the enhancement of the biodegradability of wastewater was elaborated. The model includes lumped concentrations such as the chemical oxygen demand (COD) and biochemical oxygen demand (BOD) and reflects a generalized reaction network of oxidative transformations. Later developments incorporated a new lump, the immediately available biochemical oxygen demand, which increased the fidelity of the model's predictions. Since changes in biodegradability occur simultaneously with the destruction of LWEs, an attempt was made to combine these two observations for modelling purposes.
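
The lumped kinetic idea can be illustrated with a toy reaction network, not the thesis model itself: a refractory lump is partially oxidised into a biodegradable lump, so COD falls while the BOD/COD ratio (a common biodegradability indicator) rises. The rate constants and yield below are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative lumped scheme (NOT the thesis model): refractory organics A are partially
# oxidised to biodegradable intermediates B, which are further oxidised to end products.
# COD ~ A + B and BOD ~ y_B * B, so partial oxidation raises the BOD/COD ratio.
k1, k2 = 0.05, 0.01   # hypothetical first-order rate constants, 1/min
y_B = 0.6             # hypothetical BOD yield of the intermediate lump

def rates(t, c):
    A, B = c
    return [-k1 * A, k1 * A - k2 * B]

sol = solve_ivp(rates, (0.0, 120.0), [1000.0, 100.0], dense_output=True)  # mg O2 / L
t = np.linspace(0.0, 120.0, 7)
A, B = sol.sol(t)
cod = A + B
bod = y_B * B
for ti, c, b in zip(t, cod, bod):
    print(f"t={ti:5.1f} min  COD={c:7.1f}  BOD={b:6.1f}  BOD/COD={b / c:.2f}")
```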

Abstract:

In this study, a model for the unsteady dynamic behaviour of a once-through counter-flow boiler that uses an organic working fluid is presented. The boiler is a compact waste-heat boiler without a furnace, and it has a preheater, a vaporiser and a superheater. The relative lengths of the boiler parts vary with the operating conditions, since they are all parts of a single tube. The present research is part of a study on the unsteady dynamics of an organic Rankine cycle power plant and will become part of a dynamic process model. The boiler model is presented using a selected example case that uses toluene as the process fluid and flue gas from natural gas combustion as the heat source. The dynamic behaviour of the boiler means a transition from the steady initial state towards another steady state that corresponds to the changed process conditions. The chosen solution method was to find, using the finite difference method, the pressure of the process fluid for which the mass of process fluid in the boiler equals the mass calculated from the mass flows into and out of the boiler during a time step. A special method for fast calculation of the thermal properties was used, because most of the calculation time is spent evaluating the fluid properties. The boiler was divided into elements, and the values of the thermodynamic properties and mass flows were calculated at the nodes that connect the elements. Dynamic behaviour was limited to the process fluid and the tube wall, and the heat source was regarded as steady. The elements that connect the preheater to the vaporiser and the vaporiser to the superheater were treated in a special way that allows a flexible change from one part to the other. The model consists of the calculation of the steady-state initial distribution of the variables in the nodes and the calculation of these nodal values in a dynamic state. The initial state of the boiler was obtained from a steady process model that is not part of the boiler model. The known boundary values that may vary during the dynamic calculation were the inlet temperatures and mass flow rates of both the heat source and the process fluid. A brief examination of the oscillation around a steady state, the so-called Ledinegg instability, was carried out. This examination showed that the pressure drop in the boiler is a third-degree polynomial of the mass flow rate, and the stability criterion is a second-degree polynomial of the enthalpy change in the preheater. The numerical examination showed that oscillations did not exist in the example case. The dynamic boiler model was analysed for linear and step changes of the entering fluid temperatures and flow rates. The problem in verifying the correctness of the results was that there was no possibility to compare them with measurements. The only way was therefore to determine whether the obtained results were intuitively reasonable and whether they changed logically when the boundary conditions were changed. The numerical stability was checked in a test run in which there was no change in the input values; the differences compared with the initial values were so small that the effects of numerical oscillations were negligible. The heat-source-side tests showed that the model gives results that are logical in the direction of the changes, and the order of magnitude of the timescale of the changes is also as expected.
The results of the tests on the process fluid side showed that the model gives reasonable results both for temperature changes that cause small alterations in the process state and for mass flow rate changes that cause very large alterations. The test runs showed that the dynamic model has no problems calculating cases in which the temperature of the entering heat source suddenly drops below that of the tube wall or the process fluid.
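
A minimal sketch of the solution method described above, assuming a placeholder mass-inventory function: at each time step a scalar root finder searches for the process-fluid pressure at which the mass held in the boiler equals the previous inventory plus the net inflow over the step. In the real model the inventory comes from the nodal property calculation along the tube; mass_in_boiler below is purely illustrative.

```python
from scipy.optimize import brentq

def mass_in_boiler(p):
    """Hypothetical fluid inventory [kg] as a function of pressure [Pa].
    In the real model this comes from summing element volumes times the fluid
    density evaluated at each node for the current enthalpy field."""
    return 40.0 + 2.0e-5 * (p - 1.0e6)

def pressure_step(p_prev, m_prev, m_dot_in, m_dot_out, dt):
    """Find the pressure at the end of a time step such that the stored mass
    equals the previous mass plus the net inflow over the step."""
    m_target = m_prev + (m_dot_in - m_dot_out) * dt
    return brentq(lambda p: mass_in_boiler(p) - m_target, 1.0e5, 5.0e6)

p = pressure_step(p_prev=1.0e6, m_prev=40.0, m_dot_in=2.05, m_dot_out=2.00, dt=1.0)
print(f"new pressure: {p / 1e5:.2f} bar")
```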

Abstract:

It is generally accepted that between 70 and 80% of manufacturing costs can be attributed to design. Nevertheless, it is difficult for the designer to estimate manufacturing costs accurately, especially when alternative constructions are compared at the conceptual design phase, because of the lack of cost information and appropriate tools. In general, previous reports concerning the optimisation of welded structures have used the mass of the product as the basis for cost comparison. However, a simple example easily shows that using product mass as the sole manufacturing cost estimator is unsatisfactory. This study describes a method of formulating welding time models for cost calculation and presents the results of the models for particular sections, based on typical costs in Finland. This was achieved by collecting information concerning welded products from different companies. The data included 71 different welded assemblies taken from the mechanical engineering and construction industries. The welded assemblies contained in total 1,589 welded parts, 4,257 separate welds, and a total welded length of 3,188 metres. The data were organised for statistical calculations, and models of welding time were derived using linear regression analysis. The models were tested using appropriate statistical methods and were found to be accurate. General welding time models have been developed that are valid for welding in Finland, as well as specific, more accurate models for particular companies. The models are presented in a form that can be used easily by a designer, enabling the cost calculation to be automated.
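
The regression step can be sketched as follows, with hypothetical assembly-level features (total weld length, number of welds, number of parts) and hypothetical measured times; the actual predictor set and coefficients are those derived in the study from the 71 assemblies.

```python
import numpy as np

# Hypothetical records: [total weld length (m), number of welds, number of parts] -> welding time (h)
X = np.array([
    [12.0,  40,  15],
    [45.0, 120,  60],
    [ 8.0,  25,  10],
    [70.0, 200, 110],
    [30.0,  90,  40],
], dtype=float)
t = np.array([6.5, 22.0, 4.1, 38.0, 15.0])   # measured welding times, h

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, t, rcond=None)

def welding_time(length_m, n_welds, n_parts):
    """Estimated welding time (h) for a planned assembly."""
    return coef @ np.array([1.0, length_m, n_welds, n_parts])

print("estimated time for a 25 m / 80 weld / 30 part assembly:",
      round(welding_time(25.0, 80, 30), 1), "h")
```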

Abstract:

Coherent anti-Stokes Raman scattering (CARS) is a powerful laser spectroscopy method in which significant successes have been achieved. However, the non-linear nature of CARS complicates the analysis of the measured spectra. The objective of this thesis is to develop a new phase retrieval algorithm for CARS. It utilizes the maximum entropy method and a new wavelet approach for spectroscopic background correction of the phase function. The method was designed to be easily automated and to be applicable to a large number of spectra of different substances. The algorithm was successfully tested on experimental data.
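
The wavelet background-correction step can be illustrated as follows, under the assumption that the slowly varying error background of the retrieved phase lives in the coarse wavelet scales: decompose the phase, zero the detail coefficients, reconstruct the coarse part and subtract it. This sketch (PyWavelets with an arbitrary sym8 wavelet and synthetic data) shows the general idea only, not the algorithm developed in the thesis.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)

# Hypothetical "retrieved phase": sharp Raman lines on top of a slowly varying background.
x = np.linspace(0, 1, 1024)
lines = 0.8 * np.exp(-((x - 0.3) / 0.004) ** 2) + 0.5 * np.exp(-((x - 0.6) / 0.006) ** 2)
background = 0.4 * np.sin(2 * np.pi * x) + 0.3 * x
phase = lines + background + 0.01 * rng.normal(size=x.size)

# Wavelet background estimate: decompose, zero the detail coefficients,
# and reconstruct only the coarse (low-frequency) part.
level = 6
coeffs = pywt.wavedec(phase, "sym8", level=level)
coeffs_bg = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
background_est = pywt.waverec(coeffs_bg, "sym8")[: phase.size]

corrected = phase - background_est   # background-corrected phase, line shapes preserved
```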

Abstract:

Building on the instrumental model of group conflict (IMGC), the present experiment investigates support for discriminatory and meritocratic selection methods at university in a sample of local and immigrant students. Results showed that local students supported a selection method that favors them over immigrants in a larger proportion than a method consisting of selecting the best applicants regardless of their origin. Supporting the assumptions of the IMGC, this effect was stronger for locals who perceived immigrants as competing for resources. Immigrant students supported the meritocratic selection method more strongly than the one that discriminated against them. However, contrasting with the assumptions of the IMGC, this effect was only present in students who perceived immigrants as weakly competing for locals' resources. Results demonstrate that selection methods used at university can be perceived differently depending on students' origin. Further, they suggest that the mechanisms underlying the perception of discriminatory and meritocratic selection methods differ between local and immigrant students. Hence, the present experiment makes a theoretical contribution to the IMGC by delimiting its assumptions to an ingroup facing a competitive situation with a relevant outgroup. Practical implications for university recruitment policies are discussed.

Abstract:

The most suitable method for the estimation of size diversity is investigated. Size diversity is computed on the basis of the Shannon diversity expression adapted for continuous variables, such as size. It takes the form of an integral involving the probability density function (pdf) of the size of the individuals. Different approaches for the estimation of the pdf are compared: parametric methods, which assume that the data come from a particular family of pdfs, and nonparametric methods, where the pdf is estimated using some kind of local evaluation. Exponential, generalized Pareto, normal, and log-normal distributions were used to generate simulated samples using parameters estimated from real samples. Nonparametric methods include discrete computation of data histograms based on size intervals and continuous kernel estimation of the pdf. The kernel approach gives accurate estimates of size diversity, whilst parametric methods are only useful when the reference distribution has a shape similar to the real one. Special attention is given to data standardization. Division of the data by the sample geometric mean is proposed as the most suitable standardization method, which shows additional advantages: the same size diversity value is obtained when using original sizes or log-transformed data, and size measurements with different dimensionality (lengths, areas, volumes or biomasses) may be compared directly by the simple addition of ln k, where k is the dimensionality (1, 2, or 3, respectively). Thus, kernel estimation, after data standardization by division by the sample geometric mean, emerges as the most reliable and generalizable method of size diversity evaluation.
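
A minimal sketch of the recommended procedure, with synthetic data: sizes are divided by the sample geometric mean, the pdf is estimated with a Gaussian kernel, and the Shannon size diversity H = -∫ p(x) ln p(x) dx is evaluated numerically. The bandwidth, integration grid and data below are illustrative choices.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import gaussian_kde

def size_diversity(sizes):
    """Shannon size diversity H = -integral of p(x) ln p(x) dx, computed after
    standardisation by the sample geometric mean, with p estimated by a Gaussian kernel."""
    sizes = np.asarray(sizes, dtype=float)
    std = sizes / np.exp(np.mean(np.log(sizes)))   # divide by the geometric mean
    kde = gaussian_kde(std)
    x = np.linspace(std.min() - 3 * std.std(), std.max() + 3 * std.std(), 2000)
    p = kde(x)
    mask = p > 0
    return -trapezoid(p[mask] * np.log(p[mask]), x[mask])

rng = np.random.default_rng(2)
sizes = rng.lognormal(mean=1.0, sigma=0.8, size=300)   # hypothetical individual sizes
print("size diversity:", round(size_diversity(sizes), 3))
```

Because of the geometric-mean standardization, the abstract notes, the same diversity value is obtained from original or log-transformed sizes, so a routine of this kind can be applied to either form of the data.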

Abstract:

A liquid chromatography-tandem mass spectrometry method with atmospheric pressure chemical ionization (LC-APCI/MS/MS) was validated for the determination of etoricoxib in human plasma using antipyrin as internal standard, followed by on-line solid-phase extraction. The method was performed on a Luna C18 column and the mobile phase consisted of acetonitrile:water (95:5, v/v)/ammonium acetate (pH 4.0; 10 mM), run at a flow rate of 0.6 mL/min. The method was linear in the range of 1-5000 ng/mL (r²>0.99). The lower limit of quantitation was 1 ng/mL. The recoveries were within 93.72-96.18%. Moreover, method validation demonstrated acceptable results for the precision, accuracy and stability studies.

Abstract:

A rapid and sensitive high performance liquid chromatography method has been developed and validated for the simultaneous determination of non-steroidal anti-inflammatory drugs (NSAIDs) in pharmaceutical formulations and human serum. Six NSAIDs (naproxen sodium, diclofenac sodium, meloxicam, flurbiprofen, tiaprofenic acid and mefenamic acid) were analyzed simultaneously in the presence of ibuprofen as internal standard on a Mediterranea C18 (5 µm, 250 x 0.46 mm) column. The mobile phase comprised methanol:acetonitrile:water (60:20:20, v/v; pH 3.35) and was pumped at a flow rate of 1 mL min-1, with UV detection at 265 nm. The method was linear over a concentration range of 0.25-50 µg mL-1 (r² = 0.9999).

Abstract:

In this dissertation, active galactic nuclei (AGN) are discussed as they are seen with the high-resolution radio-astronomical technique called Very Long Baseline Interferometry (VLBI). This observational technique provides very high angular resolution (about 10^-3 arcseconds, i.e. 1 milliarcsecond). VLBI observations performed at different radio frequencies (multi-frequency VLBI) make it possible to penetrate deep into the core of an AGN to reveal the otherwise obscured inner part of the jet and the vicinity of the AGN's central engine. Multi-frequency VLBI data are used to scrutinize the structure and evolution of the jet, as well as the distribution of the polarized emission. These data can help to derive the properties of the plasma and the magnetic field, and to provide constraints on the jet composition and the parameters of the emission mechanisms. VLBI data can also be used to test possible physical processes in the jet by comparing observational results with the results of numerical simulations. The work presented in this thesis contributes to different aspects of AGN physics, as well as to the methodology of VLBI data reduction. In particular, Paper I reports evidence that the optical and radio emission of AGN comes from the same region in the inner jet. This result was obtained via simultaneous observations of linear polarization in the optical and, using the VLBI technique, in the radio for a sample of AGN. Papers II and III describe in detail the jet kinematics of the blazar 0716+714, based on multi-frequency data, and reveal a peculiar kinematic pattern: plasma in the inner jet appears to move substantially faster than that in the large-scale jet. This peculiarity is explained by jet bending in Paper III. Paper III also presents a test of a new imaging technique for VLBI data, the Generalized Maximum Entropy Method (GMEM), with observed (not simulated) data, and compares its results with conventional imaging. Papers IV and V report the results of observations of circularly polarized (CP) emission in AGN at small spatial scales. In particular, Paper IV presents values of the core CP for 41 AGN at 15, 22 and 43 GHz, obtained with the help of the standard gain transfer (GT) method, which was previously developed by D. Homan and J. Wardle for the calibration of multi-source VLBI observations. This method was developed for long multi-source observations, in which many AGN are observed in a single VLBI run. In contrast, in Paper V an attempt is made to apply the GT method to single-source VLBI observations. In such observations the object list includes only a few sources (a target source and two or three calibrators) and the run lasts much less time than a multi-source experiment. For the CP calibration of a single-source observation, it is necessary to have a source with zero or known CP as one of the calibrators. If archival observations included such a source in the list of calibrators, the GT method could also be used for archival data, increasing the list of AGN with known CP at small spatial scales. Paper V also contains a calculation of the contributions of different sources of error to the uncertainty of the final result, and presents the first results for the blazar 0716+714.

Abstract:

The objective of this work was to develop and validate a mathematical model to estimate the duration of the cotton (Gossypium hirsutum L. r. latifolium Hutch.) cycle in the State of Goiás, Brazil, by applying the growing degree-days (GD) method while simultaneously considering its variation in time and space. The model was developed as a linear combination of elevation, latitude, longitude, and a Fourier series describing the time variation. The model parameters were adjusted by multiple linear regression to the observed GD accumulated with air temperature in the range of 15°C to 40°C. The minimum and maximum temperature records used to calculate the GD were obtained from 21 meteorological stations, with data spanning from 8 to 20 years of observation. The coefficient of determination resulting from the comparison between the estimated and calculated GD along the year was 0.84. Model validation was done by comparing estimated and measured crop cycles in the period from cotton germination to the stage when 90 percent of bolls were open in commercial crop fields. Comparative results showed that the model performed very well, as indicated by a Pearson correlation coefficient of 0.90 and a Willmott agreement index of 0.94, resulting in a performance index of 0.85.
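
The growing degree-day accumulation with the 15-40 °C bounds, and the general shape of a linear-plus-Fourier predictor of the kind described, can be sketched as below. The coefficients and weather records are placeholders; the fitted values belong to the paper.

```python
import numpy as np

T_BASE, T_UPPER = 15.0, 40.0   # temperature bounds used in the study (°C)

def daily_gd(t_min, t_max):
    """Growing degree-days for one day from min/max air temperature,
    with the daily mean clipped to the 15-40 °C range."""
    t_mean = np.clip((t_min + t_max) / 2.0, T_BASE, T_UPPER)
    return t_mean - T_BASE

def gd_model(elev_m, lat, lon, doy, coef):
    """Hypothetical form of the fitted model: a linear term in elevation,
    latitude and longitude plus a first-order Fourier term in day of year."""
    w = 2.0 * np.pi * doy / 365.0
    x = np.array([1.0, elev_m, lat, lon, np.cos(w), np.sin(w)])
    return x @ coef

# Accumulate observed GD over a hypothetical 10-day record.
t_min = np.array([16, 17, 15, 18, 19, 17, 16, 18, 20, 19], dtype=float)
t_max = np.array([29, 31, 28, 33, 34, 30, 29, 32, 35, 33], dtype=float)
print("accumulated GD:", daily_gd(t_min, t_max).sum())

coef = np.array([8.0, -0.002, -0.15, -0.05, 1.5, 0.8])   # hypothetical coefficients
print("modelled daily GD at 750 m, lat -16.7, lon -49.3, day 20:",
      round(gd_model(750.0, -16.7, -49.3, 20, coef), 1))
```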

Abstract:

Most studies on measuring plant transpiration, especially in woody fruit species, rely on methods that supply heat to the trunk. This study aimed to calibrate the Thermal Dissipation Probe (TDP) method to estimate transpiration, to study the effects of natural thermal gradients, and to determine the relation between outside diameter and xylem area in young 'Valencia' orange plants. TDPs were installed in 40 fifteen-month-old orange plants grown in 500 L boxes in a greenhouse. The correction of natural thermal differences (DTN) was tested using an estimate based on two unheated probes. The area of the conductive section was related to the outside diameter of the stem by means of polynomial regression. The equation for estimating sap flow was calibrated against lysimeter measurements of a representative plant. The angular coefficient of the sap flow equation was adjusted by minimizing the absolute deviation between the estimated sap flow and the daily transpiration measured by the lysimeter. Based on these results, it was concluded that the TDP method, with the original calibration adjusted and the DTN corrected, was effective for assessing transpiration.
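
A sketch of the calibration idea, assuming the widely used Granier-type power-law form for the probe signal (the study adjusted the original calibration, whose exact adjusted form is not reproduced here): the natural thermal difference is subtracted from the probe signal, a flow index K is computed, and the angular coefficient is found by minimising the absolute deviation from the lysimeter transpiration. All data, the sapwood area and the exponent below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical half-hourly data for one plant.
dT_heated = np.array([9.8, 8.9, 7.6, 7.1, 7.4, 8.3, 9.2])   # heated/unheated probe signal (°C)
dT_natural = np.array([0.4, 0.3, 0.2, 0.2, 0.3, 0.3, 0.4])  # natural thermal difference, DTN (°C)
sap_flow_lysimeter = np.array([0.02, 0.08, 0.19, 0.24, 0.21, 0.12, 0.05])  # L per 30 min

dT = dT_heated - dT_natural          # DTN-corrected temperature difference
dT_max = dT.max()                    # zero-flow reference
K = (dT_max - dT) / dT               # flow index

SAPWOOD_AREA = 4.0e-4                # xylem area from the diameter-area relation (m², hypothetical)
SECONDS = 1800.0                     # seconds in a 30-min interval

def predicted_flow(alpha, beta=1.231):
    """Granier-type sap flow (L per 30 min): flux density alpha * K**beta times sapwood area."""
    return alpha * K**beta * SAPWOOD_AREA * SECONDS * 1000.0   # m3/s -> L per interval

# Adjust the angular coefficient by minimising the absolute deviation from the lysimeter.
res = minimize_scalar(lambda a: np.abs(predicted_flow(a) - sap_flow_lysimeter).sum(),
                      bounds=(1e-6, 1e-2), method="bounded")
print("fitted coefficient:", res.x)
```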

Abstract:

Evapotranspiration is the process of water loss from vegetated soil due to evaporation and transpiration, and it may be estimated by various empirical methods. The objective of this study was to evaluate the performance of the following methods: Blaney-Criddle, Jensen-Haise, Linacre, Solar Radiation, Hargreaves-Samani, Makkink, Thornthwaite, Camargo, Priestley-Taylor and Original Penman in estimating potential evapotranspiration, compared with the Penman-Monteith standard method (FAO56), for the climatic conditions of Uberaba, state of Minas Gerais, Brazil. A set of 21 years of monthly data (1990 to 2010) was used, covering the climatic elements temperature, relative humidity, wind speed and insolation. The empirical methods for estimating reference evapotranspiration were compared with the standard method using linear regression, simple statistical analysis, the Willmott agreement index (d) and the performance index (c). The Makkink and Camargo methods showed the best performance, with "c" values of 0.75 and 0.66, respectively. The Hargreaves-Samani method presented the best linear relation with the standard method, with a correlation coefficient (r) of 0.88.
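
The comparison statistics named above can be computed as in the sketch below: Pearson's r, Willmott's agreement index d, and the performance index c = r*d, here applied to hypothetical monthly ET0 series for an empirical method and the FAO56 Penman-Monteith reference.

```python
import numpy as np

def willmott_d(pred, obs):
    """Willmott agreement index d."""
    obs_mean = obs.mean()
    return 1.0 - np.sum((pred - obs) ** 2) / np.sum(
        (np.abs(pred - obs_mean) + np.abs(obs - obs_mean)) ** 2)

def performance_c(pred, obs):
    """Performance index c = r * d."""
    r = np.corrcoef(pred, obs)[0, 1]
    return r * willmott_d(pred, obs)

# Hypothetical monthly ET0 (mm/day): a candidate empirical method vs. Penman-Monteith FAO56.
et0_pm = np.array([4.9, 4.6, 4.3, 3.9, 3.2, 2.9, 3.1, 3.8, 4.4, 4.8, 4.7, 4.9])
et0_empirical = np.array([4.6, 4.4, 4.2, 3.6, 3.0, 2.6, 2.9, 3.5, 4.1, 4.6, 4.5, 4.8])

print("r =", round(np.corrcoef(et0_empirical, et0_pm)[0, 1], 2))
print("d =", round(willmott_d(et0_empirical, et0_pm), 2))
print("c =", round(performance_c(et0_empirical, et0_pm), 2))
```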