952 results for Robust epipolar-geometry estimation
Abstract:
This work aimed to evaluate the physicochemical, physical, chromatic, microbiological, and sensory stability of a non-dairy dessert made with soy, guava juice, and oligofructose over 60 days of refrigerated storage, as well as to estimate its shelf life. Titratable acidity, pH, instrumental color, water activity, ascorbic acid, and physical stability were measured. Panelists (n = 50) from the campus community used a hedonic scale to assess the acceptance, purchase intent, creaminess, flavor, taste, acidity, color, and overall appearance of the dessert over the 60 days. The data showed that the parameters differed significantly (p < 0.05) from the initial time and could be fitted to mathematical equations with coefficients of determination above 71%, making them suitable for prediction purposes. Creaminess and acceptance did not differ statistically over the 60-day period; taste, flavor, and acidity maintained suitable hedonic scores during storage. Moreover, the sample showed good physical stability against gravity and provided more than 15% of the Brazilian Daily Recommended Value of copper, iron, and ascorbic acid. The estimated shelf life of the product was 79 days, considering overall acceptance, acceptance index, and purchase intent.
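As a rough illustration of how storage data of this kind can be extrapolated to a shelf-life estimate, the sketch below fits a simple linear model of hedonic score versus storage time and solves for the time at which the score reaches a chosen cutoff. The data, the linear form, and the cutoff are hypothetical; the abstract reports only that the fitted equations had coefficients of determination above 71% and that the resulting estimate was 79 days.

```python
# Minimal sketch of shelf-life estimation by regression, NOT the authors' fitted
# equations. The data values and the cutoff below are hypothetical placeholders.
import numpy as np

days = np.array([0, 15, 30, 45, 60], dtype=float)   # storage time (days)
acceptance = np.array([8.0, 7.7, 7.4, 7.1, 6.8])    # hypothetical hedonic scores

# Fit a first-order (linear) model: score = a * t + b
a, b = np.polyfit(days, acceptance, 1)

# Shelf life = time at which the predicted score falls to a chosen cutoff
cutoff = 6.5                                         # hypothetical acceptability limit
shelf_life = (cutoff - b) / a
print(f"slope = {a:.4f}/day, intercept = {b:.2f}, estimated shelf life = {shelf_life:.0f} days")
```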
Abstract:
Acetic acid at 0.2, 0.4, 0.6, 0.8, 1.0, and 1.5% w/w was incorporated into a fish paste made with cooked Brazilian flathead (Percophis brasiliensis), glycerol (17%), sodium chloride (1.5%), and potassium sorbate (0.1%) to determine the relationship between added acetic acid and the sensorially perceived intensity, as well as the effects of combining sweet and acid tastes. Paired-comparison and ranking tests, structured verbal scales for the sweet and acid attributes, and a psychophysical test were carried out. A perceptible difference among samples was found for differences of 0.4 units of acid concentration. Samples indicated as sweeter by 89.47% of the judges were those containing the lower acid concentration. A reduction in glycerol sweetness with increasing acid levels was observed. Acetic acid reduced the sweetness of glycerol and, conversely, glycerol reduced the acidity of acetic acid. The data obtained with the magnitude estimation test agree with Stevens' law.
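Stevens' law states that perceived intensity grows as a power of the physical stimulus, S = k·C^n, and magnitude-estimation data are typically fitted by log-log regression. The sketch below shows such a fit; the concentrations follow the acid levels listed above, while the perceived-intensity values, and hence the fitted exponent, are hypothetical placeholders rather than the paper's data.

```python
# Minimal sketch of fitting Stevens' power law S = k * C**n to magnitude-estimation
# data by log-log regression. Perceived-intensity values are hypothetical.
import numpy as np

acid = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.5])          # acetic acid, % w/w
perceived = np.array([3.1, 5.8, 8.2, 10.1, 12.5, 17.0])  # hypothetical magnitude estimates

n, log_k = np.polyfit(np.log(acid), np.log(perceived), 1)
k = np.exp(log_k)
print(f"Stevens exponent n = {n:.2f}, constant k = {k:.2f}")
```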
Abstract:
Simultaneous Distillation-Extraction (SDE) and headspace solid-phase microextraction (HS-SPME) combined with GC-FID and GC-MS were used to analyze volatile compounds from plum (Prunus domestica L. cv. Horvin) and to estimate the most odor-active compounds by application of Odor Activity Values (OAV). The analyses led to the identification of 148 components, including 58 esters, 23 terpenoids, 14 aldehydes, 11 alcohols, 10 ketones, 9 alkanes, 7 acids, 4 lactones, 3 phenols, and 9 other compounds of different structures. According to the results of SDE-GC-MS, SPME-GC-MS, and OAV, ethyl 2-methylbutanoate, hexyl acetate, (E)-2-nonenal, ethyl butanoate, (E)-2-decenal, ethyl hexanoate, nonanal, decanal, (E)-β-ionone, γ-dodecalactone, (Z)-3-hexenyl acetate, pentyl acetate, linalool, γ-decalactone, butyl acetate, limonene, propyl acetate, δ-decalactone, diethyl sulfide, (E)-2-hexenyl acetate, ethyl heptanoate, (Z)-3-hexenol, (Z)-3-hexenyl hexanoate, eugenol, (E)-2-hexenal, ethyl pentanoate, hexyl 2-methylbutanoate, isopentyl hexanoate, 1-hexanol, γ-nonalactone, myrcene, octyl acetate, phenylacetaldehyde, 1-butanol, isobutyl acetate, (E)-2-heptenal, octadecanal, and nerol are characteristic odor-active compounds in fresh plums, since they showed concentrations far above their odor thresholds.
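The OAV ranking mentioned above is simply the ratio of a compound's concentration to its odor threshold, with OAV > 1 suggesting a contribution to the aroma. A minimal sketch of that calculation, using hypothetical concentrations and thresholds rather than the paper's measurements:

```python
# Odor Activity Value: OAV = concentration / odor threshold.
# All numbers below are hypothetical placeholders, not the paper's data.
thresholds_ug_per_kg = {"ethyl 2-methylbutanoate": 0.1, "hexyl acetate": 2.0, "linalool": 6.0}
concentrations_ug_per_kg = {"ethyl 2-methylbutanoate": 25.0, "hexyl acetate": 180.0, "linalool": 30.0}

oav = {c: concentrations_ug_per_kg[c] / thresholds_ug_per_kg[c] for c in thresholds_ug_per_kg}
for compound, value in sorted(oav.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{compound}: OAV = {value:.0f}")   # OAV > 1 suggests a contribution to aroma
```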
Abstract:
This thesis addresses the coolability of porous debris beds in the context of severe accident management of nuclear power reactors. In a hypothetical severe accident at a Nordic-type boiling water reactor, the lower drywell of the containment is flooded in order to cool, in a water pool, the core melt discharged from the reactor pressure vessel. The melt is fragmented and solidified in the pool, ultimately forming a porous debris bed that generates decay heat. The properties of the bed determine the limiting value of the heat flux that can be removed from the debris to the surrounding water without the risk of re-melting. The coolability of porous debris beds has been investigated experimentally by measuring the dryout power in electrically heated test beds of different geometries. The geometries represent the debris bed shapes that may form in an accident scenario. The focus is especially on heap-like, realistic geometries which facilitate the multi-dimensional infiltration (flooding) of coolant into the bed. Spherical and irregular particles have been used to simulate the debris. The experiments have been modeled using 2D and 3D simulation codes applicable to fluid flow and heat transfer in porous media. Based on the experimental and simulation results, an interpretation of the dryout behavior in complex debris bed geometries is presented, and the validity of the codes and models for dryout predictions is evaluated. According to the experimental and simulation results, the coolability of the debris bed depends on both the flooding mode and the height of the bed. In the experiments, it was found that multi-dimensional flooding increases the dryout heat flux and coolability of a heap-shaped debris bed by 47–58% compared to the dryout heat flux of a classical, top-flooded bed of the same height. However, heap-like beds are higher than flat, top-flooded beds, which results in a larger steam flux at the top of the bed. This counteracts the effect of the multi-dimensional flooding. Based on the measured dryout heat fluxes, the maximum height of a heap-like bed can be only about 1.5 times the height of a top-flooded, cylindrical bed in order to preserve the direct benefit of the multi-dimensional flooding. In addition, studies were conducted to evaluate the hydrodynamically representative effective particle diameter, which is applied in simulation models to describe debris beds that consist of irregular particles with considerable size variation. The results suggest that the effective diameter is small, closest to the mean diameter based on the number or length of the particles.
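A back-of-the-envelope way to see the stated height limit: for a uniformly heated bed, the steam flux leaving the top scales with the power density times the bed height, so the removable power density scales as the dryout heat flux divided by the height. The sketch below, with normalized and hypothetical numbers (not the thesis' models), shows that a roughly 50% gain in dryout heat flux is used up once the heap is about 1.5 times as tall as the reference bed.

```python
# Scaling sketch only: removable power density ~ DHF / H for a uniformly heated bed.
dhf_top = 1.0      # dryout heat flux of a top-flooded reference bed (normalized)
dhf_heap = 1.5     # ~47-58% higher with multi-dimensional flooding (from the abstract)

h_top = 1.0        # reference bed height (normalized)
for h_heap in (1.2, 1.5, 2.0):
    gain = (dhf_heap / h_heap) / (dhf_top / h_top)
    print(f"heap height = {h_heap:.1f} x reference -> removable power density gain = {gain:.2f}")
# The gain drops to 1.0 when the heap is ~1.5x as tall, matching the stated limit.
```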
Abstract:
Most applications of airborne laser scanner data in forestry require that the point cloud be normalized, i.e., that each point represent height above the ground instead of elevation. To normalize the point cloud, a digital terrain model (DTM), derived from the ground returns in the point cloud, is employed. Unfortunately, extracting accurate DTMs from airborne laser scanner data is a challenging task, especially in tropical forests where the canopy is normally very thick (partially closed), so that only a limited number of laser pulses reach the ground. Therefore, robust algorithms for extracting accurate DTMs in low-ground-point-density situations are needed in order to realize the full potential of airborne laser scanner data for forestry. The objective of this thesis is to develop algorithms for processing airborne laser scanner data in order to: (1) extract DTMs in demanding forest conditions (complex terrain and a low number of ground points) for applications in forestry; (2) estimate canopy base height (CBH) for forest fire behavior modeling; and (3) assess the robustness of LiDAR-based high-resolution biomass estimation models against different field plot designs. Here, the aim is to find out whether field plot data gathered by professional foresters can be combined with field plot data gathered by professionally trained community foresters and used in LiDAR-based high-resolution biomass estimation modeling without affecting prediction performance. The question of interest in this case is whether or not local forest communities can achieve the level of technical proficiency required for accurate forest monitoring. The algorithms for extracting DTMs from LiDAR point clouds presented in this thesis address the challenges of extracting DTMs in low-ground-point situations and in complex terrain, while the algorithm for CBH estimation addresses the challenge of variations in the distribution of points in the LiDAR point cloud caused by factors such as variations in tree species and season of data acquisition. These algorithms are adaptive (with respect to point cloud characteristics) and exhibit a high degree of tolerance to variations in the density and distribution of points in the LiDAR point cloud. Comparison with existing DTM extraction algorithms showed that the DTM extraction algorithms proposed in this thesis performed better with respect to the accuracy of estimating tree heights from airborne laser scanner data. On the other hand, the proposed DTM extraction algorithms, being mostly based on trend surface interpolation, cannot retain small terrain features (e.g., bumps, small hills, and depressions). Therefore, the DTMs generated by these algorithms are only suitable for forestry applications where the primary objective is to estimate tree heights from normalized airborne laser scanner data. The algorithm for estimating CBH proposed in this thesis, in turn, is based on the idea of a moving voxel, in which gaps (openings in the canopy) that act as fuel breaks are located and their height is estimated. Test results showed a slight improvement in CBH estimation accuracy over existing CBH estimation methods based on height percentiles of the airborne laser scanner data. However, being based on the idea of a moving voxel, this algorithm has one main advantage over existing CBH estimation methods in the context of forest fire modeling: it has great potential for providing information about vertical fuel continuity. This information can be used to create vertical fuel continuity maps, which can provide more realistic information on the risk of crown fires than CBH alone.
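For context, point-cloud normalization itself is a simple operation once a DTM is available: each point's height above ground is its elevation minus the DTM elevation interpolated at its planimetric position. The sketch below illustrates this with generic linear interpolation over a handful of hypothetical ground returns; it is not one of the DTM-extraction algorithms developed in the thesis.

```python
# Minimal sketch of point-cloud normalization against a DTM built from ground returns.
# All coordinates are hypothetical placeholders.
import numpy as np
from scipy.interpolate import griddata

# Hypothetical ground returns (x, y, z) used to build the DTM
ground = np.array([[0, 0, 100.0], [10, 0, 101.0], [0, 10, 100.5], [10, 10, 102.0]])
# Hypothetical vegetation returns to normalize
points = np.array([[5, 5, 118.3], [2, 8, 110.9]])

dtm_z = griddata(ground[:, :2], ground[:, 2], points[:, :2], method="linear")
heights = points[:, 2] - dtm_z     # height above ground
print(heights)                      # e.g. canopy heights in metres
```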
Abstract:
This paper presents a methodology for calculating the industrial equilibrium exchange rate, defined as the rate that enables exporters of state-of-the-art manufactured goods to be competitive abroad. The first section highlights the causes and problems of overvalued exchange rates, particularly the Dutch disease, which is neutralized when the exchange rate reaches the industrial equilibrium level. This level is defined by the ratio between the unit labor cost in the country under consideration and that in competing countries. Finally, the evolution of this exchange rate in the Brazilian economy is estimated.
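A minimal numeric sketch of the definition above, using hypothetical unit labor costs (the paper itself estimates these from data):

```python
# Industrial equilibrium exchange rate as the ratio of unit labor costs (hypothetical values).
ulc_domestic = 12.0   # unit labor cost in the country under consideration (local currency per unit of output)
ulc_foreign = 5.0     # unit labor cost in competing countries (foreign currency per unit of output)

industrial_equilibrium_rate = ulc_domestic / ulc_foreign   # local currency per unit of foreign currency
print(f"industrial equilibrium exchange rate = {industrial_equilibrium_rate:.2f}")
```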
Abstract:
The aim of this paper is to discuss the overvaluation trend of the Brazilian currency in the 2000s, presenting an econometric model to estimate the real exchange rate (RER) and to determine the reference level of the RER that should guide long-term economic policy. In the econometric model, we consider long-term structural and short-term components, both of which may be responsible for explaining the overvaluation trend of the Brazilian currency. Our econometric exercise confirms that the Brazilian currency was persistently overvalued throughout almost all of the period under analysis, and we suggest that the long-term reference level of the real exchange rate was reached in 2004. In July 2014, the average nominal exchange rate should have been around 2.90 Brazilian reais per dollar (against an observed nominal rate of 2.22 Brazilian reais per dollar) to achieve the 2004 real reference level (annual average). That is, according to our estimates, in July 2014 the Brazilian real was overvalued by 30.6 per cent in real terms relative to the reference level. Based on these findings, we conclude the paper by suggesting a mix of policy instruments that should have been used to reverse the overvaluation trend of the Brazilian real exchange rate, including a target for the real exchange rate in the medium and long run that would favor resource allocation toward more technology-intensive sectors.
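The 30.6 per cent figure follows directly from the two nominal rates quoted above, as the sketch below verifies:

```python
# Overvaluation = gap between the reference nominal rate and the observed nominal rate,
# relative to the observed rate (figures taken from the abstract).
reference_rate = 2.90   # BRL per USD needed to restore the 2004 real reference level
observed_rate = 2.22    # average observed nominal rate, July 2014

overvaluation = (reference_rate - observed_rate) / observed_rate
print(f"overvaluation = {overvaluation:.1%}")   # = 30.6%
```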
Abstract:
Since its discovery, chaos has been a very interesting and challenging topic of research, and many great minds have spent their careers trying to formulate rules for it. Nowadays, thanks to the research of the last century and the advent of computers, it is possible to predict chaotic natural phenomena for a limited amount of time. The aim of this study is to present a recently developed method for parameter estimation of chaotic dynamical system models via the correlation integral likelihood, to give some hints for its more effective use, and to outline a possible industrial application. The main part of our study concerned two chaotic attractors with different general behaviour, in order to capture possible differences in the results. In the various simulations we performed, the initial conditions were varied in a quite exhaustive way. The results obtained show that, under certain conditions, the method works very well in all cases. In particular, it turned out that the most important aspect is to be very careful when creating the training set and the empirical likelihood, since a lack of information in this part of the procedure leads to low-quality results.
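The statistic underlying the correlation integral likelihood is the correlation sum C(r), the fraction of pairs of trajectory points closer than a radius r (Grassberger-Procaccia). The sketch below computes only this feature vector for a stand-in data set; building the training set of C(r) curves and the empirical likelihood, which the study highlights as the critical step, is not shown.

```python
# Correlation sum C(r): fraction of point pairs closer than each radius r.
# The trajectory here is a random stand-in, not an actual chaotic attractor sample.
import numpy as np

def correlation_sum(points, radii):
    """Return C(r) for each r in radii."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pairs = d[np.triu_indices(n, k=1)]            # pairwise distances, i < j
    return np.array([(pairs < r).mean() for r in radii])

rng = np.random.default_rng(0)
traj = rng.normal(size=(500, 3))                  # hypothetical state-space sample
radii = np.logspace(-1, 1, 10)
print(correlation_sum(traj, radii))
```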
Abstract:
This thesis investigates the pricing of liquidity risks on the London Stock Exchange. The Liquidity-Adjusted Capital Asset Pricing Model (LCAPM) developed by Acharya and Pedersen (2005) is applied to test the influence of various liquidity risks on stock returns on the London Stock Exchange. The LCAPM provides a unified framework for testing liquidity risks. All common stocks listed and delisted over the period 2000 to 2014 are included in the data sample. The study incorporates three different measures of liquidity: percent quoted spread, Amihud (2002), and turnover. Three different liquidity measures are used because liquidity is multi-dimensional in nature. A firm fixed-effects panel regression is applied to estimate the LCAPM, and the results are robust to estimation with Fama-MacBeth regressions. The results indicate that liquidity risks in the form of (i) the level of liquidity, (ii) commonality in liquidity, (iii) flight to liquidity, and (iv) the depressed wealth effect, as well as the market return and aggregate liquidity risk, are priced on the London Stock Exchange. However, the results are sensitive to the choice of liquidity measure.
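Of the three liquidity measures, the Amihud (2002) proxy is the simplest to sketch: the average of absolute daily return divided by daily traded value. The example below uses hypothetical prices and volumes, not the London Stock Exchange sample.

```python
# Amihud (2002) illiquidity: mean of |daily return| / daily traded value.
# Price and volume series are hypothetical placeholders.
import numpy as np

prices = np.array([100.0, 101.0, 100.5, 102.0, 101.2])
traded_value = np.array([1.2e6, 0.9e6, 1.5e6, 1.1e6, 0.8e6])   # daily traded value (currency units)

returns = np.diff(prices) / prices[:-1]
amihud = np.mean(np.abs(returns) / traded_value[1:])
print(f"Amihud illiquidity = {amihud:.3e}")   # higher = less liquid
```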
Abstract:
Translation by Wylie, written down by Li Shan lan; Chinese prefaces by the two translators (1859); English preface, written at Shang hai by A. Wylie (July 1859). List of technical terms in English and Chinese. Engraved at the Mo hai house (1859). 18 books.
Abstract:
Original.
Abstract:
Optimization of wave functions in quantum Monte Carlo is a difficult task because the statistical uncertainty inherent to the technique makes the absolute determination of the global minimum difficult. To optimize these wave functions, we generate a large number of possible minima using many independently generated Monte Carlo ensembles and perform a conjugate gradient optimization. We then construct histograms of the resulting nominally optimal parameter sets and "filter" them to identify which parameter sets "go together" to generate a local minimum. We follow with correlated-sampling verification runs to find the global minimum. We illustrate this technique for variance and variational energy optimization for a variety of wave functions for small systems. For such optimized wave functions we calculate the variational energy and variance, as well as various non-differential properties. The optimizations are either on par with or superior to determinations in the literature. Furthermore, we show that this technique is sufficiently robust that, for molecules, one may determine the optimal geometry at the same time as one optimizes the variational energy.
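A minimal sketch of the histogram-filtering step described above: many independently obtained "optimal" parameter sets are histogrammed component by component, and only the sets falling in the most populated bins are kept for correlated-sampling verification. The "optimization outcomes" here are random stand-ins; no actual QMC or conjugate-gradient code is included.

```python
# Histogram filtering of nominally optimal parameter sets (random stand-ins below).
import numpy as np

rng = np.random.default_rng(1)
# 200 hypothetical optimization outcomes of a 2-parameter wave function:
# most cluster near one minimum, a few are outliers.
params = np.vstack([rng.normal([0.5, 1.2], 0.02, size=(180, 2)),
                    rng.normal([0.9, 0.4], 0.10, size=(20, 2))])

keep = np.ones(len(params), dtype=bool)
for j in range(params.shape[1]):
    counts, edges = np.histogram(params[:, j], bins=20)
    best_bin = np.argmax(counts)
    in_bin = (params[:, j] >= edges[best_bin]) & (params[:, j] <= edges[best_bin + 1])
    keep &= in_bin    # retain parameter sets whose components "go together"

print(f"{keep.sum()} of {len(params)} parameter sets selected for correlated-sampling verification")
```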