882 results for the least squares distance method
Abstract:
The determination of zirconium-hafnium mixtures is one of the most critical problems in analytical chemistry, on account of the close similarity of the two elements' chemical properties. The spectrophotometric determination proposed by Yagodin et al. has seen few practical applications due to the significant spectral interference in the 200-220 nm region. In this work we propose the use of a multivariate calibration method, partial least squares (PLS), for the colorimetric determination of these mixtures. Using PLS and 16 calibration mixtures, we obtained a model that permits the determination of zirconium and hafnium with relative errors of about 1-2% and 10-20%, respectively. With conventional univariate calibration, the error is about 10-25% for zirconium and above 57% for hafnium.
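As a rough illustration of the multivariate calibration idea described above (not the authors' actual code), the sketch below builds a PLS model on synthetic two-component UV spectra; the pure-component spectra, noise level and concentrations are invented stand-ins, assuming scikit-learn is available:

```python
# Minimal PLS calibration sketch for a two-component mixture. The
# spectra are synthetic stand-ins for Zr/Hf absorbances in the
# heavily overlapping 200-220 nm region.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wavelengths = np.linspace(200, 220, 50)

# Hypothetical, strongly overlapping pure-component spectra (the
# interference that defeats univariate calibration).
pure_zr = np.exp(-((wavelengths - 208) / 4.0) ** 2)
pure_hf = np.exp(-((wavelengths - 210) / 4.0) ** 2)

# 16 calibration mixtures, as in the abstract.
C_cal = rng.uniform(0.1, 1.0, size=(16, 2))        # [Zr, Hf] concentrations
X_cal = C_cal @ np.vstack([pure_zr, pure_hf])      # Beer-Lambert mixing
X_cal += rng.normal(0, 0.005, X_cal.shape)         # measurement noise

pls = PLSRegression(n_components=2)
pls.fit(X_cal, C_cal)

# Predict an unknown mixture from its spectrum.
c_true = np.array([[0.6, 0.3]])
x_new = c_true @ np.vstack([pure_zr, pure_hf])
print(pls.predict(x_new))   # estimated [Zr, Hf] concentrations
```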
Abstract:
Knowledge of the optics of fogbows is scarce, and to our knowledge their polarization characteristics have never been measured. To fill this gap we measured the polarization features of 16 fogbows during the Beringia 2005 Arctic polar research expedition by imaging polarimetry in the red, green and blue spectral ranges. We present here the first polarization patterns of the fogbow. In the patterns of the degree of linear polarization p, fogbows and their supernumerary bows are best visible in the red spectral range, owing to the least dilution of fogbow light by light scattered in air. In the patterns of the angle of polarization α, fogbows are practically not discernible because their α-pattern is the same as that of the sky: the direction of polarization is perpendicular to the plane of scattering and parallel to the arc of the bow, independently of the wavelength. Fogbows and their supernumeraries were best seen in the patterns of the polarized radiance. In these patterns the angular distance δ between the peaks of the primary and the first supernumerary and the angular width σ of the primary bow were determined along different radii from the center of the bow: δ ranged between 6.08° and 13.41°, while σ changed from 5.25° to 19.47°. Certain fogbows were relatively homogeneous, with small variations of δ and σ along their bows. Other fogbows were heterogeneous, possessing quite variable δ- and σ-values along their bows. This variability could be a consequence of the conditions of the high Arctic, where open waters within the ice shield cause spatiotemporal changes in droplet size within the fog.
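For concreteness, the quantities p and α above are conventionally derived from the Stokes parameters I, Q, U measured by imaging polarimetry. The sketch below uses the standard textbook definitions; it is not the expedition's own processing pipeline:

```python
# Standard derivation of the degree of linear polarization p and the
# angle of polarization alpha from Stokes parameters I, Q, U
# (textbook definitions, not the authors' pipeline).
import numpy as np

def linear_polarization(I, Q, U):
    p = np.sqrt(Q**2 + U**2) / I                  # degree of linear polarization
    alpha = 0.5 * np.degrees(np.arctan2(U, Q))    # angle of polarization, degrees
    return p, alpha

# Example for a single pixel with I = 1.0, Q = 0.1, U = 0.05:
print(linear_polarization(1.0, 0.1, 0.05))
```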
Abstract:
Background Little is known about the types of ‘sit less, move more’ strategies that appeal to office employees, or what factors influence their use. This study assessed the uptake of strategies by Spanish university office employees engaged in an intervention, and the factors that enabled or limited strategy uptake. Methods The study used a mixed method design. Semi-structured interviews were conducted with academics and administrators (n = 12; age 44 ± 12 years, mean ± SD; 6 women) at three points across the five-month intervention, and the data were used to identify factors that influenced the uptake of strategies. Employees who finished the intervention (n = 88; age 42 ± 8 years, mean ± SD; 51 women) then completed a survey rating the extent to which strategies were used [never (1) to usually (4)]; additional survey items (generated from interviewee data) rated the impact of factors that enabled or limited strategy uptake [no influence (1) to very strong influence (4)]. Survey score distributions and averages were calculated and the findings triangulated with interview data. Results Relative to baseline, 67% of the sample increased step counts post intervention (n = 59); 60% decreased occupational sitting (n = 53). ‘Active work tasks’ and ‘increases in walking intensity’ were the strategies most frequently used by employees (89% and 94% sometimes or usually utilised these strategies); ‘walk-talk meetings’ and ‘lunchtime walking groups’ were the least used (80% and 96% hardly ever or never utilised these strategies). ‘Sitting time and step count logging’ was the most important enabler of behaviour change (mean survey score of 3.1 ± 0.8); interviewees highlighted the motivational value of being able to view logged data through visual graphics on a dedicated website, and of gaining feedback on progress against set goals. ‘Screen based work’ (mean survey score of 3.2 ± 0.8) was the most significant barrier limiting the uptake of strategies. Inherent time pressures and cultural norms that dictated sedentary work practices limited the adoption of ‘walk-talk meetings’ and ‘lunchtime walking groups’. Conclusions The findings provide practical insights into which strategies and influences practitioners need to target to maximise the impact of ‘sit less, move more’ occupational intervention strategies.
Resumo:
During the last few years, the discussion on the marginal social costs of transportation has been active. Applying the externalities as a tool to control transport would fulfil the polluter pays principle and simultaneously create a fair control method between the transport modes. This report presents the results of two calculation algorithms developed to estimate the marginal social costs based on the externalities of air pollution. The first algorithm calculates the future scenarios of sea transport traffic externalities until 2015 in the Gulf of Finland. The second algorithm calculates the externalities of Russian passenger car transit traffic via Finland by taking into account both sea and road transport. The algorithm estimates the ship-originated emissions of carbon dioxide (CO2), nitrogen oxides (NOx), sulphur oxides (SOx), particulates (PM) and the externalities for each year from 2007 to 2015. The total NOx emissions in the Gulf of Finland from the six ship types were almost 75.7 kilotons (Table 5.2) in 2007. The ship types are: passenger (including cruisers and ROPAX vessels), tanker, general cargo, Ro-Ro, container and bulk vessels. Due to the increase of traffic, the estimation for NOx emissions for 2015 is 112 kilotons. The NOx emission estimation for the whole Baltic Sea shipping is 370 kilotons in 2006 (Stipa & al, 2007). The total marginal social costs due to ship-originated CO2, NOx, SOx and PM emissions in the GOF were calculated to almost 175 million Euros in 2007. The costs will increase to nearly 214 million Euros in 2015 due to the traffic growth. The major part of the externalities is due to CO2 emissions. If we neglect the CO2 emissions by extracting the CO2 externalities from the results, we get the total externalities of 57 million Euros in 2007. After eight years (2015), the externalities would be 28 % lower, 41 million Euros (Table 8.1). This is the result of the sulphur emissions reducing regulation of marine fuels. The majority of the new car transit goes through Finland to Russia due to the lack of port capacity in Russia. The amount of cars was 339 620 vehicles (Statistics of Finnish Customs 2008) in 2005. The externalities are calculated for the transportation of passenger vehicles as follows: by ship to a Finnish port and, after that, by trucks to the Russian border checkpoint. The externalities are between 2 – 3 million Euros (year 2000 cost level) for each route. The ports included in the calculations are Hamina, Hanko, Kotka and Turku. With the Euro-3 standard trucks, the port of Hanko would be the best choice to transport the vehicles. This is because of lower emissions by new trucks and the saved transport distance of a ship. If the trucks are more polluting Euro 1 level trucks, the port of Kotka would be the best choice. This indicates that the truck emissions have a considerable effect on the externalities and that the transportation of light cargo, such as passenger cars by ship, produces considerably high emission externalities. The emission externalities approach offers a new insight for valuing the multiple traffic modes. However, the calculation of the marginal social costs based on the air emission externalities should not be regarded as a ready-made calculation system. The system is clearly in the need of some improvement but it can already be considered as a potential tool for political decision making.
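At its core, the externality calculation described above is emissions per pollutant multiplied by unit costs and summed. A toy sketch follows; the unit costs and most tonnages are invented placeholders, not the report's values:

```python
# Toy marginal-social-cost arithmetic: emissions (kilotons) times unit
# externality cost (EUR per ton), summed over pollutants. Only the
# 2007 NOx figure comes from the abstract; everything else is a
# placeholder, not the report's data.
emissions_kt = {"CO2": 3000.0, "NOx": 75.7, "SOx": 25.0, "PM": 3.0}
unit_cost_eur_per_t = {"CO2": 30.0, "NOx": 1000.0, "SOx": 2000.0, "PM": 10000.0}

total = sum(emissions_kt[p] * 1000 * unit_cost_eur_per_t[p] for p in emissions_kt)
print(f"total externalities: {total / 1e6:.1f} million EUR")

# Excluding CO2, as in the report's sensitivity check:
no_co2 = total - emissions_kt["CO2"] * 1000 * unit_cost_eur_per_t["CO2"]
print(f"without CO2: {no_co2 / 1e6:.1f} million EUR")
```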
Abstract:
Research on language equations has been active during the last few decades. Compared to equations on words, equations on languages are much more difficult to solve: even very simple equations that are easy to solve for words can be very hard for languages. In this thesis we study two such equations, namely the commutation and conjugacy equations. We study these equations in certain restricted special cases and compare some of the results to the solutions of the corresponding equations on words. For both equations we study the maximal solutions, the centralizer and the conjugator. We present a fixed point method that can be used to search for these maximal solutions, and we analyze the reasons why this method is not successful for all languages. We also give several examples to illustrate the behaviour of this method.
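To make the commutation equation concrete: languages X and L commute when XL = LX, where concatenation of sets is elementwise. A minimal check for finite languages is sketched below; actual centralizer computation is far harder, as the thesis discusses, and this sketch is illustration only:

```python
# Toy illustration of the commutation equation XL = LX for finite
# languages; set concatenation is elementwise word concatenation.
def concat(A: set[str], B: set[str]) -> set[str]:
    return {a + b for a in A for b in B}

def commutes(X: set[str], L: set[str]) -> bool:
    return concat(X, L) == concat(L, X)

L = {"ab", "abab"}
print(commutes({"ab"}, L))  # True: both sides equal {"abab", "ababab"}
print(commutes({"ba"}, L))  # False: "baab" vs "abba", etc.
```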
QSPR study of partition coefficients: quantum-mechanical descriptors and multivariate analysis
Abstract:
Quantum chemistry and multivariate analysis were used to estimate the partition coefficients between n-octanol and water for a series of 188 compounds, with q² values of up to 0.86 in cross-validation tests. The quantum-mechanical descriptors were obtained from ab initio calculations, with solvation effects treated by the Polarizable Continuum Model. Two different Hartree-Fock basis sets were used, along with two different ways of simulating solvent cavity formation. The results for each case were analysed, and each proposed methodology is indicated for a particular type of application.
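The cross-validated q² quoted above is a standard leave-one-out statistic. A sketch of how it is typically computed follows, with a generic linear model and random data standing in for the actual descriptor regression:

```python
# Leave-one-out cross-validated q^2: here X would hold the
# quantum-mechanical descriptors and y the experimental log P values;
# random stand-ins are used instead.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(188, 5))               # 188 compounds, 5 descriptors
y = X @ np.array([0.8, -0.5, 0.3, 0.1, 0.6]) + rng.normal(0, 0.2, 188)

y_pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
press = np.sum((y - y_pred) ** 2)           # predictive residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)
q2 = 1 - press / ss_tot
print(f"q^2 = {q2:.3f}")
```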
Abstract:
This thesis deals with distance transforms, a fundamental tool in image processing and computer vision. Two new distance transforms for gray level images are presented, and as a new application they are applied to gray level image compression. Both new distance transforms extend the well known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (the DTOCS) and a real-valued distance transform (the EDTOCS) on gray level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner; the DTOCS does not weight them that way. The DTOCS gives a weighted version of the chessboard distance map, in which the weights are not constant but are the gray value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning, but their use in image compression is very rare. This thesis introduces a new application area for distance transforms: three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group of methods compares the DTOCS distance to the binary image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally, and it is shown to be independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are shown; the best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
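The two-pass idea behind the DTOCS can be sketched briefly. In the sketch below, the local step cost between 8-neighbours is taken as the absolute gray-value difference plus one, which is one reading of the description above; it is an illustration of the two-pass raster-scan structure, not the published algorithm verbatim:

```python
# Two-pass, chessboard-style gray-level distance transform in the
# spirit of the DTOCS: step cost between 8-neighbours is the absolute
# gray-value difference plus one (an illustrative reading of the
# abstract, not the published algorithm verbatim).
import numpy as np

def gray_distance_transform(gray: np.ndarray, seeds: np.ndarray) -> np.ndarray:
    """gray: 2-D gray-level image; seeds: boolean mask of zero-distance pixels."""
    h, w = gray.shape
    d = np.where(seeds, 0.0, np.inf)

    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # neighbours already visited
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]       # mirror set for reverse pass

    def sweep(rows, cols, offsets):
        for y in rows:
            for x in cols:
                for dy, dx in offsets:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        step = abs(float(gray[y, x]) - float(gray[ny, nx])) + 1.0
                        if d[ny, nx] + step < d[y, x]:
                            d[y, x] = d[ny, nx] + step

    # Two passes per round; complicated images may need several rounds
    # (typically 3-10, per the abstract).
    for _ in range(3):
        sweep(range(h), range(w), fwd)                          # forward raster scan
        sweep(range(h - 1, -1, -1), range(w - 1, -1, -1), bwd)  # backward scan
    return d

img = np.array([[0, 0, 5], [0, 9, 5], [0, 0, 0]], dtype=float)
seeds = np.zeros_like(img, dtype=bool)
seeds[0, 0] = True
print(gray_distance_transform(img, seeds))
```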
Abstract:
A model based on chemical structure was developed for the accurate prediction of the octanol/water partition coefficient (KOW) of polychlorinated biphenyls (PCBs), molecules of environmental interest. Partial least squares (PLS) regression was used to build the model, with topological indices as molecular descriptors. Variable selection was performed by hierarchical cluster analysis (HCA). In the modeling process, the experimental KOW values measured for 30 PCBs by thin-layer chromatography retention time (TLC-RT) were used. The developed model (Q² = 0.990 and r² = 0.994) was then used to estimate log KOW values for the 179 PCB congeners whose KOW has not yet been measured by the TLC-RT method. The results show that topological indices can be very useful for predicting the KOW.
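The HCA variable-selection step mentioned above can be illustrated as clustering correlated descriptors and keeping one representative per cluster before the PLS fit. In this sketch the descriptor matrix is random and the clustering threshold is an arbitrary choice, not the paper's:

```python
# HCA-style variable selection sketch: cluster correlated topological
# descriptors by correlation distance and keep one representative per
# cluster. Descriptor values are random stand-ins.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 12))            # 30 PCBs, 12 topological indices

corr = np.corrcoef(X, rowvar=False)      # descriptor-descriptor correlations
dist = 1 - np.abs(corr)                  # correlation distance
condensed = dist[np.triu_indices_from(dist, k=1)]
Z = linkage(condensed, method="average")
labels = fcluster(Z, t=0.5, criterion="distance")

# Keep the first descriptor of each cluster as its representative.
keep = [int(np.where(labels == c)[0][0]) for c in np.unique(labels)]
print("selected descriptor columns:", keep)
```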
Abstract:
Pretreatment of lignocellulosic materials is essential for bioconversion because various physical and chemical barriers greatly inhibit their susceptibility to bioprocesses such as hydrolysis and fermentation. The aim of this article is to review some of the most important pretreatment methods developed to date to enhance the conversion of lignocellulosics. Steam explosion, which involves the treatment of biomass with high-pressure steam under optimal conditions, is presented as the pretreatment method of choice, and its mode of action on lignocellulosics is discussed. The optimal pretreatment conditions for a given plant biomass are defined as those in which the best substrate for hydrolysis is obtained with the least amount of soluble sugars lost to side reactions such as dehydration. Pretreatment optimization is therefore a compromise between two opposing trends: hemicellulose recovery in acid hydrolysates can only be maximized at lower pretreatment severities, whereas the development of substrate accessibility requires more drastic pretreatment conditions, in which sugar losses are inevitable. To account for this trade-off, the importance of several process-oriented parameters is discussed in detail, such as the pretreatment temperature, the residence time in the steam reactor, the use of an acid catalyst, the susceptibility of the pretreated biomass to bioconversion, and the process design.
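The temperature/time trade-off discussed above is often collapsed into a single number, the Overend-Chornet severity factor log R0. That formula is a field-standard convention rather than something stated in this abstract; a quick sketch:

```python
# Overend-Chornet severity factor, a standard way to combine
# pretreatment temperature and residence time into one severity
# number (a field convention, not stated in the abstract itself).
import math

def log_severity(t_min: float, temp_c: float) -> float:
    """log10 R0 = log10( t * exp((T - 100) / 14.75) ); t in minutes, T in deg C."""
    return math.log10(t_min * math.exp((temp_c - 100.0) / 14.75))

# Roughly the same severity via a short hot treatment or a longer milder one:
print(f"{log_severity(5, 210):.2f}")    # ~3.94
print(f"{log_severity(20, 190):.2f}")   # ~3.95
```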
Abstract:
A simple method was proposed for the determination of paracetamol and ibuprofen in tablets, based on UV measurements and partial least squares. The procedure was performed at pH 10.5, in the concentration ranges 3.00-15.00 µg ml⁻¹ (paracetamol) and 2.40-12.00 µg ml⁻¹ (ibuprofen). The model was able to predict paracetamol and ibuprofen in synthetic mixtures with root mean square errors of prediction of 0.12 and 0.17 µg ml⁻¹, respectively. Figures of merit (sensitivity, limit of detection and precision) were also estimated. The results for the determination of these drugs in pharmaceutical formulations were in agreement with label claims and were verified by HPLC.
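The root mean square error of prediction (RMSEP) quoted above is computed as shown in this minimal sketch; the concentration values are placeholders, not the paper's validation data:

```python
# RMSEP sketch: square root of the mean squared difference between
# reference and predicted concentrations. Values are placeholders.
import numpy as np

def rmsep(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

y_true = np.array([3.0, 6.0, 9.0, 12.0, 15.0])    # µg/ml, placeholder
y_pred = np.array([3.1, 5.9, 9.2, 11.8, 15.1])
print(f"RMSEP = {rmsep(y_true, y_pred):.2f} µg/ml")
```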
Abstract:
Least-squares support vector machines (LS-SVM) were used as an alternative multivariate calibration method for the simultaneous quantification of some common adulterants found in powdered milk samples, using near-infrared spectroscopy. Excellent models were built using LS-SVM, as judged by their R², RMSECV and RMSEP values. The LS-SVM models show superior performance to PLSR for quantifying starch, whey and sucrose in powdered milk samples. This study shows that it is possible to precisely determine the amounts of one or two common adulterants simultaneously in powdered milk samples using LS-SVM and NIR spectra.
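A distinguishing feature of LS-SVM is that, unlike standard SVMs, training reduces to solving one linear system (the Suykens formulation). The sketch below implements that from scratch with an RBF kernel; the data are synthetic stand-ins for NIR spectra, and the hyperparameters are arbitrary:

```python
# Minimal LS-SVM regression: solve the KKT linear system
# [0 1^T; 1 K + I/gamma][b; alpha] = [0; y], then predict with
# f(x) = sum_i alpha_i K(x, x_i) + b. Synthetic data throughout.
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]            # bias b, dual coefficients alpha

def lssvm_predict(X_new, X, b, alpha, sigma=1.0):
    return rbf(X_new, X, sigma) @ alpha + b

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(40, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 40)
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(np.array([[0.5]]), X, b, alpha))   # ~ sin(0.5) ~= 0.48
```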
Abstract:
EPR users often face the problem of extracting information from complex and frequently low-resolution EPR spectra. Simulation programs that provide a series of parameters characteristic of the investigated system have been used to achieve this goal. This work describes the general aspects of one of these programs, the NLSL program, which fits EPR spectra using a nonlinear least-squares method. Several motion regimes of the probes are included in this computational tool, covering a broad range of spectral changes. The meanings of the different parameters and rotational diffusion models are discussed. The anisotropic case is also treated, by including an orienting potential and order parameters. Some examples are presented to show its applicability to different systems.
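The nonlinear least-squares idea behind such fitting can be shown with a generic line-shape fit; this sketch fits a simple Lorentzian with SciPy and is not the NLSL slow-motional machinery itself:

```python
# Generic nonlinear least-squares spectral fit in the spirit of NLSL:
# adjust line-shape parameters to minimize the residual between a
# model and a noisy "measured" spectrum (simple Lorentzian stand-in).
import numpy as np
from scipy.optimize import least_squares

def lorentzian(B, B0, width, amp):
    return amp * width**2 / ((B - B0) ** 2 + width**2)

B = np.linspace(330, 350, 400)                 # field axis, mT
rng = np.random.default_rng(4)
spec = lorentzian(B, 340.0, 1.5, 1.0) + rng.normal(0, 0.02, B.size)

def residuals(p):
    return lorentzian(B, *p) - spec

fit = least_squares(residuals, x0=[338.0, 1.0, 0.8])
print(fit.x)    # recovered [B0, width, amp] ~ [340, 1.5, 1.0]
```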
Abstract:
A martensitic single-crystal Cu-23.95Zn-3.62(wt.%)Al alloy was obtained by melting pure Cu, Zn and Al using the Bridgman method. The martensitic phase (monoclinic) can present up to 24 variants, which makes orienting the surface according to a given plane a very hard task. The single crystal was therefore subjected to a tensile load of 8 tons along the longitudinal direction to reduce the number of variants and facilitate orientation of the surface according to the desired plane. The crystal was oriented using the Laue back-reflection method to give surfaces with the following crystallographic planes: (010), (120) and (130). It was observed that the tensile stress was applied along the [010] direction.
Abstract:
Recent years have seen great advances in instrumentation technology. The amount of available data has been increasing due to the simplicity, speed and accuracy of current spectroscopic instruments. Most of these data are, however, meaningless without proper analysis, which has been one of the reasons for the growing success of multivariate handling of such data. Industrial data are commonly not designed data; in other words, there is no exact experimental design, but rather the data have been collected as a routine procedure during an industrial process. This places certain demands on the multivariate modeling, as the selection of samples and variables can have an enormous effect. Common approaches to the modeling of industrial data are PCA (principal component analysis) and PLS (projection to latent structures, or partial least squares), but there are also other methods that should be considered, among them multi-block modeling and nonlinear modeling. This thesis shows that the results of data analysis vary according to the modeling approach used, making the selection of the approach dependent on the purpose of the model: if the model is intended to provide accurate predictions, the approach should differ from the case where the purpose is mainly to obtain information about the variables and the process. For industrial applicability it is essential that the methods are robust and sufficiently simple to apply; in this way the methods and results can be compared and an approach selected that suits the intended purpose. In this thesis, differences between data analysis methods are compared using data from different fields of industry. In the first two papers, the multi-block method is considered for data originating from the oil and fertilizer industries, and the results are compared to those from PLS and priority PLS. The third paper considers the applicability of multivariate models to process control for a reactive crystallization process. In the fourth paper, nonlinear modeling is examined with a data set from the oil industry in which the response has a nonlinear relation to the descriptor matrix; the results are compared between linear modeling, polynomial PLS, and nonlinear modeling using nonlinear score vectors.
Abstract:
A new analytical method was developed to non-destructively determine the pH and degree of polymerisation (DP) of cellulose in fibres in 19th-20th century painting canvases, and to identify the fibre type: cotton, linen, hemp, ramie or jute. The method is based on NIR spectroscopy and multivariate data analysis; for calibration and validation, a reference collection of 199 historical canvas samples was used. The reference collection was analysed destructively using microscopy and chemical analytical methods. Partial least squares regression was used to build quantitative methods to determine pH and DP, and linear discriminant analysis was used to determine the fibre type. To interpret the chemical information obtained, an expert assessment panel developed a categorisation system to discriminate canvases that may not be fit to withstand excessive mechanical stress, e.g. transportation. The limiting DP for this category was found to be 600. With the new method and categorisation system, the canvases of 12 Dalí paintings from the Fundació Gala-Salvador Dalí (Figueres, Spain) were non-destructively analysed for pH, DP and fibre type, and their fitness determined, informing conservation recommendations. The study demonstrates that collection-wide canvas condition surveys can be performed efficiently and non-destructively, which could significantly improve collection management.
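The fibre-type classification step described above can be sketched with linear discriminant analysis on spectral features. Here the "spectra", class separation and feature count are synthetic stand-ins, not the study's NIR data:

```python
# LDA classification sketch in the spirit of the fibre-type step:
# assign one of five fibre classes from NIR-derived features.
# All data are synthetic stand-ins.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
fibres = ["cotton", "linen", "hemp", "ramie", "jute"]
# 40 synthetic "spectra" per class, shifted in feature space per fibre.
X = np.vstack([rng.normal(loc=i, scale=0.8, size=(40, 6)) for i in range(5)])
y = np.repeat(fibres, 40)

lda = LinearDiscriminantAnalysis().fit(X, y)
unknown = rng.normal(loc=2, scale=0.8, size=(1, 6))   # resembles "hemp"
print(lda.predict(unknown))
```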