975 results for Log-log Method


Relevance:

30.00%

Publisher:

Abstract:

Pasminco Century Mine has developed a geophysical logging system to provide new data for ore mining/grade control and the generation of Short Term Models for mine planning. Previous work indicated the applicability of petrophysical logging for lithology prediction; however, the automation of the method was not considered reliable enough for the development of a mining model. A test survey was undertaken using two diamond-drilled control holes and eight percussion holes. All holes were logged with natural gamma, magnetic susceptibility and density. Calibration of the LogTrans auto-interpretation software using only natural gamma and magnetic susceptibility indicated that both lithology and stratigraphy could be predicted. The development of a capability to enforce stratigraphic order within LogTrans increased the reliability and accuracy of interpretations. After the completion of a feasibility program, Century Mine has invested in a dedicated logging vehicle to log blast holes as well as for use in in-fill drilling programs. Future refinement of the system may lead to the development of GPS-controlled excavators for mining ore.


Dispersal, or the amount of dispersion between an individual's birthplace and that of its offspring, is of great importance in population biology, behavioural ecology and conservation; however, obtaining direct estimates from field data on natural populations can be problematic. The prickly forest skink, Gnypetoscincus queenslandiae, is a rainforest-endemic skink from the wet tropics of Australia. Because of its log-dwelling habits and lack of definite nesting sites, a demographic estimate of dispersal distance is difficult to obtain. Neighbourhood size, defined as 4πDσ² (where D is the population density and σ² the mean axial squared parent-offspring dispersal rate), dispersal and density were estimated directly and indirectly for this species using mark-recapture and microsatellite data, respectively, on lizards captured at a local geographical scale of 3 ha. Mark-recapture data gave a dispersal rate of 843 m²/generation (assuming a generation time of 6.5 years), a time-scaled density of 13,635 individuals·generation/km² and, hence, a neighbourhood size of 144 individuals. A genetic method based on the multilocus (10 loci) microsatellite genotypes of individuals and their geographical locations indicated that there is a significant isolation-by-distance pattern, and gave a neighbourhood size of 69 individuals, with a 95% confidence interval between 48 and 184. This translates into a dispersal rate of 404 m²/generation when using the mark-recapture density estimate, or an estimate of time-scaled population density of 6,520 individuals·generation/km² when using the mark-recapture dispersal rate estimate. The relationship between the two categories of neighbourhood size, dispersal and density estimates, and reasons for any disparities, are discussed.
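The neighbourhood-size arithmetic above can be checked directly from the definition Nb = 4πDσ²; a minimal sketch using the mark-recapture numbers reported in the abstract (only the unit conversion from km² to m² is added):

```python
import math

def neighbourhood_size(density_per_km2, dispersal_m2_per_gen):
    """Wright's neighbourhood size Nb = 4*pi*D*sigma^2.

    density_per_km2: time-scaled density, individuals*generation/km^2
    dispersal_m2_per_gen: axial dispersal variance sigma^2, m^2/generation
    """
    d_per_m2 = density_per_km2 / 1e6  # convert individuals/km^2 -> individuals/m^2
    return 4 * math.pi * d_per_m2 * dispersal_m2_per_gen

# Mark-recapture estimates from the abstract
nb = neighbourhood_size(13635, 843)
print(round(nb))  # 144, matching the reported neighbourhood size
```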


The self-diffusion coefficients of water in a series of copolymers of 2-hydroxyethyl methacrylate (HEMA) and tetrahydrofurfuryl methacrylate (THFMA), swollen with water to their equilibrium states, have been studied at 310 K using PFG-NMR. The self-diffusion coefficients calculated from the Stejskal-Tanner equation, D_obs, for all of the hydrated polymers were found to depend on the NMR storage time, as a result of spin exchange between the proton reservoirs of the water and the polymers, reaching an equilibrium plateau value at long storage times. The true values of the diffusion coefficients were calculated from the values of D_obs in the plateau regions by applying a correction for the fraction of water protons present, obtained from the equilibrium water contents of the gels. The true self-diffusion coefficient of water in polyHEMA obtained at 310 K by this method was 5.5 × 10⁻¹⁰ m² s⁻¹. For the copolymers containing 20% HEMA or more, a single value of the self-diffusion coefficient was found, which was somewhat larger than the corresponding values obtained for the macroscopic diffusion coefficient from sorption measurements. For polyTHFMA and copolymers containing less than 20% HEMA, the PFG-NMR stimulated-echo attenuation decay curves and the log-attenuation plots were characteristic of the presence of two diffusing water species. The self-diffusion coefficients of water in the equilibrium-hydrated copolymers were found to depend on the copolymer composition, decreasing with increasing THFMA content.
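The Stejskal-Tanner analysis mentioned above amounts to reading a diffusion coefficient off the slope of a log-attenuation plot, ln(S/S0) = -bD. A minimal sketch on synthetic, noiseless data (the b-values here are illustrative assumptions, not measurements from the study):

```python
import numpy as np

# Stejskal-Tanner: ln(S/S0) = -b * D, with b = (gamma*g*delta)^2 * (DELTA - delta/3)
D_true = 5.5e-10            # m^2/s, the value reported for water in polyHEMA
b = np.linspace(0, 5e9, 8)  # s/m^2, hypothetical gradient settings
S = np.exp(-b * D_true)     # noiseless synthetic echo attenuation

# A linear fit of the log-attenuation plot recovers D from the slope
slope, intercept = np.polyfit(b, np.log(S), 1)
D_fit = -slope
print(f"{D_fit:.2e}")  # 5.50e-10
```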


Two methods were compared for determining the concentration of penetrative biomass during growth of Rhizopus oligosporus on an artificial solid substrate consisting of an inert gel and starch as the sole source of carbon and energy. The first method was based on the use of a hand microtome to cut sections of approximately 0.2- to 0.4-mm thickness parallel to the substrate surface and the determination of the glucosamine content of each slice. Using glucosamine measurements to estimate biomass concentrations proved problematic due to the large variation in glucosamine content with mycelial age. The second was a novel method based on confocal scanning laser microscopy, used to estimate the fractional volume occupied by the biomass. Although it is not simple to translate fractional volumes into dry weights of hyphae, owing to the lack of experimentally determined conversion factors, measurement of the fractional volumes is in itself useful for characterizing fungal penetration into the substrate. Growth of penetrative biomass in the artificial model substrate took two forms: an indistinct mass in the region close to the substrate surface, and a few hyphae penetrating perpendicular to the surface in regions further away from it. The biomass profiles against depth obtained from confocal microscopy showed two linear regions on log-linear plots, possibly related to different oxygen availability at different depths within the substrate. Confocal microscopy has the potential to be a powerful tool in the investigation of fungal growth mechanisms in solid-state fermentation. (C) 2003 Wiley Periodicals, Inc.
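Two linear regions on a log-linear plot correspond to piecewise-exponential decay of biomass with depth, and fitting each region separately recovers its decay constant. A sketch on a hypothetical profile (the decay constants and breakpoint are assumptions, not values from the study):

```python
import numpy as np

# Synthetic biomass-vs-depth profile: fast exponential decay near the surface,
# slower decay deeper in, giving two linear regions on a log-linear plot.
depth = np.arange(0.0, 2.0, 0.2)   # mm
k1, k2, split = 3.0, 0.8, 1.0      # assumed decay constants and breakpoint
biomass = np.where(depth < split,
                   np.exp(-k1 * depth),
                   np.exp(-k1 * split - k2 * (depth - split)))

# Fitting each region separately on the log scale recovers the two slopes
near = depth < split
s1, _ = np.polyfit(depth[near], np.log(biomass[near]), 1)
s2, _ = np.polyfit(depth[~near], np.log(biomass[~near]), 1)
print(round(-s1, 2), round(-s2, 2))  # 3.0 0.8
```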


Fruit juice consumption has been increasing in Brazil. Between 2002 and 2009, the consumption of juices, whether concentrates, powders, juices or nectars, increased by 21%. Owing to its pleasant, sweet flavour and its nutritional value, orange juice is the most common juice produced by the beverage-processing industry. Several factors can affect orange juice quality. The typical microbiota present in orange juice can originate at various stages of its production. Among enzymes, pectin methylesterase (PME) is the main cause of changes in orange juice. Pasteurization and commercial sterilization are the most common preservation methods used to inactivate enzymes and microorganisms, but they can adversely affect the sensory characteristics (colour, taste, aroma, and others) of the products. Ultrasound technology has recently been studied as a way of preserving foods without the undesirable effects caused by thermal treatments. The aim of this work was to evaluate ultrasound, alone and combined with mild temperatures, as a means of preserving orange juice. To this end, total mesophile counts, yeast and mould counts, pectin methylesterase activity, vitamin C content, colour, pH, soluble solids content and turbidity stability were analysed. The sensory acceptance of orange juice subjected to thermosonication was also evaluated. The results were compared with those obtained for fresh juice and pasteurized juice. Ultrasound at 40 kHz was used, combined with temperatures of 25 °C, 30 °C, 40 °C, 50 °C and 60 °C for 10 minutes. The treatments using ultrasound at 50 °C and 60 °C reduced yeast and mould counts and total mesophile counts by 3 log cycles. A similar result was found for the thermal treatment at 90 °C for 30 seconds.
Thermosonication also allowed a significant reduction in PME activity and a smaller loss of vitamin C. The treatment giving the greatest reduction in PME activity was 40 kHz ultrasound at 60 °C. Regarding ascorbic acid, the lower the temperature combined with sonication, the smaller the loss of this compound. The soluble solids content, pH and colour of the juice did not change during processing. In the acceptability evaluation, colour was not influenced by any treatment. For aroma, flavour and overall acceptance, thermosonicated juice obtained higher sensory acceptance than pasteurized juice. It was therefore concluded that thermosonication is a viable preservation method for orange juice.
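The logarithmic-cycle reductions reported above follow the usual decimal-reduction arithmetic; a minimal sketch with hypothetical counts for illustration:

```python
import math

def log_reduction(n0, n):
    """Decimal (log10) reduction of a microbial count."""
    return math.log10(n0 / n)

# A 3-log-cycle reduction, as reported for ultrasound at 50-60 C,
# means the surviving count is 1/1000 of the initial one.
# Hypothetical counts for illustration:
print(log_reduction(1e6, 1e3))  # 3.0
```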


Survival analysis is applied when the time until the occurrence of an event is of interest. Such data are routinely collected in plant disease studies, although applications of the method are uncommon. The objective of this study was to use two studies on post-harvest diseases of peaches, considering the two harvests together and a random effect shared by fruits of the same tree, in order to describe the main techniques of survival analysis. The nonparametric Kaplan-Meier method, the log-rank test and the semi-parametric Cox proportional hazards model were used to estimate the effect of cultivars and of the number of days after full bloom on survival to the brown rot symptom and on the instantaneous risk of expressing it in two consecutive harvests. The joint analysis with a baseline effect varying between harvests, together with confirmation of the tree effect as a grouping factor with a random effect, was appropriate for interpreting the phenomenon (disease) evaluated, and these techniques can be important tools to replace or complement conventional analysis, respecting the nature of the variable and of the phenomenon.
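For illustration, the Kaplan-Meier estimator named above can be written in a few lines; the time-to-symptom data here are hypothetical, not from the study:

```python
# Minimal Kaplan-Meier estimator on hypothetical time-to-symptom data.
def kaplan_meier(times, events):
    """times: observation times; events: 1 = symptom observed, 0 = censored.
    Returns (event times, survival probabilities at those times)."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv, out_t, out_s = 1.0, [], []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = at = 0
        while i < len(pairs) and pairs[i][0] == t:   # group ties at time t
            d += pairs[i][1]
            at += 1
            i += 1
        if d:                                        # only event times enter the product
            surv *= 1 - d / n_at_risk
            out_t.append(t)
            out_s.append(surv)
        n_at_risk -= at
    return out_t, out_s

# Hypothetical data: days to brown-rot symptom, 0 marks a censored fruit
t, s = kaplan_meier([2, 3, 3, 5, 7, 8], [1, 1, 0, 1, 0, 1])
print(t, [round(x, 3) for x in s])  # [2, 3, 5, 8] [0.833, 0.667, 0.444, 0.0]
```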


Enrofloxacin (ENR) is an antimicrobial used both in humans and in food-producing species. Its control is required in farmed species and their surroundings in order to reduce the prevalence of antibiotic-resistant bacteria. Thus, a new biomimetic sensor for enrofloxacin is presented. An artificial host was imprinted in specific polymers. These were dispersed in 2-nitrophenyloctyl ether and entrapped in a poly(vinyl chloride) matrix. The potentiometric sensors exhibited a near-Nernstian response. Slopes expressed as mV/Δlog([ENR]/M) varied within 48–63. The detection limits ranged from 0.28 to 1.01 µg mL⁻¹. Sensors were independent of the pH of test solutions within 4–7. Good selectivity was observed toward potassium, calcium, barium, magnesium, glycine, ascorbic acid, creatinine, norfloxacin, ciprofloxacin, and tetracycline. In flowing media, the biomimetic sensors presented good reproducibility (RSD of ±0.7%), fast response, good sensitivity (47 mV/Δlog([ENR]/M)), wide linear range (1.0 × 10⁻⁵–1.0 × 10⁻³ M), low detection limit (0.9 µg mL⁻¹), and a stable baseline for a 5 × 10⁻² M acetate buffer (pH 4.7) carrier. The sensors were used to analyze fish samples. The method offered the advantages of simplicity, accuracy, and automation feasibility. The sensing membrane may contribute to the development of small devices allowing in vivo measurements of enrofloxacin or parent drugs.
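A near-Nernstian slope of this kind is read off a calibration line of electrode potential against log10 concentration, E = E0 + S·log10([ENR]). A minimal sketch with assumed readings chosen to fall inside the reported 48–63 mV/Δlog range:

```python
import numpy as np

# Potentiometric response is linear in log10 of concentration:
# E = E0 + S * log10([ENR]). Hypothetical calibration readings:
conc = np.array([1e-5, 1e-4, 1e-3])    # M
emf = np.array([120.0, 175.0, 230.0])  # mV, assumed readings

S, E0 = np.polyfit(np.log10(conc), emf, 1)
print(round(S, 1))  # 55.0 mV per decade
```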


The consumption of natural products has become a public health issue, since these medicinal teas are prepared from natural plants without effective hygienic and sanitary control. The aim of this study was to assess the effects of gamma radiation on the microbial burden of two medicinal plants: Melissa officinalis and Lippia citriodora. Dried samples of the two plants were irradiated in a Co-60 experimental facility. The applied gamma radiation doses were 1, 3, and 5 kGy at a dose rate of 1.34 kGy/h. Non-irradiated samples accompanied all the experiments as controls. Bacterial and fungal counts were assessed before and after irradiation by the membrane filtration method. Challenge tests with Escherichia coli were performed in order to evaluate the disinfection efficiency of the gamma radiation treatment. Characterization of the M. officinalis and L. citriodora microbiota indicated an average bioburden of 10² CFU/g. The inactivation studies of the bacterial mesophilic population of both dried plants pointed to a one-log reduction of microbial load after irradiation at 5 kGy. Regarding the fungal population, the initial load of 30 CFU/g was reduced by only 0.5 log at an irradiation dose of 5 kGy. The dynamics of the plants' microbial population phenotypes with radiation dose indicated the prevalence of gram-positive rods for M. officinalis before and after irradiation, and an increase in the frequency of gram-negative rods with irradiation for L. citriodora. Among the fungal populations of both plants, Mucor, Neoscytalidium, Aspergillus and Alternaria were the most isolated genera. The challenge tests with E. coli on the plants pointed to inactivation efficiencies of 99.5% and 99.9% at a dose of 2 kGy for M. officinalis and L. citriodora, respectively. Gamma radiation treatment can be a significant tool for microbial control in medicinal plants.
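The one-log reduction at 5 kGy corresponds to a decimal-reduction dose (D10), the dose giving a tenfold drop in viable count; a minimal sketch of that arithmetic:

```python
def d10_dose(dose_kgy, log_reduction):
    """Decimal-reduction dose: the dose giving a one-log (90%) kill."""
    return dose_kgy / log_reduction

# From the abstract: mesophilic bacteria dropped ~1 log at 5 kGy,
# while fungi dropped only ~0.5 log at the same dose.
print(d10_dose(5, 1.0), d10_dose(5, 0.5))  # 5.0 10.0
```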


In this article, we calibrate the Vasicek interest rate model under the risk-neutral measure by learning the model parameters using Gaussian process regression. The calibration is done by maximizing the likelihood of zero-coupon bond log prices, using mean and covariance functions computed analytically, as well as likelihood derivatives with respect to the parameters. The maximization method used is conjugate gradients. The only prices needed for calibration are zero-coupon bond prices, and the parameters are obtained directly in the arbitrage-free risk-neutral measure.
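For context, the zero-coupon bond prices being calibrated have a closed form under the Vasicek model dr = a(b − r)dt + σ dW; a minimal sketch with illustrative (not fitted) parameter values:

```python
import math

# Closed-form zero-coupon bond price under the Vasicek model with
# risk-neutral parameters a (mean-reversion speed), b (long-run level)
# and sigma (volatility). Parameter values below are illustrative only.
def vasicek_zcb_price(r0, tau, a, b, sigma):
    B = (1 - math.exp(-a * tau)) / a
    A = math.exp((b - sigma**2 / (2 * a**2)) * (B - tau)
                 - sigma**2 * B**2 / (4 * a))
    return A * math.exp(-B * r0)

p = vasicek_zcb_price(r0=0.03, tau=5.0, a=0.5, b=0.04, sigma=0.01)
print(round(p, 4))  # ~0.8343 under these assumed parameters
```

Log prices of such bonds, log P(τ) = log A(τ) − B(τ)r₀, are the quantities whose likelihood is maximized in the calibration described above.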


Dissertation presented to obtain the degree of Doctor in Electrical and Computer Engineering – Digital and Perceptional Systems – at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia


Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
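The iterative projection step described above can be sketched in a few lines of NumPy. This is a simplified illustration of the idea, not the published VCA implementation: random projection directions are used, and toy data containing pure pixels is assumed (as VCA itself requires):

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_endmembers(X, p):
    """X: (bands, pixels) data matrix; p: number of endmembers to find.
    At each step, project onto a direction orthogonal to the span of the
    endmembers found so far; the extreme of the projection is the next one."""
    E = np.zeros((X.shape[0], 0))
    idx = []
    for _ in range(p):
        w = rng.standard_normal(X.shape[0])
        if E.shape[1]:
            # Remove the components of w lying in the span of found endmembers
            w -= E @ np.linalg.lstsq(E, w, rcond=None)[0]
        k = int(np.argmax(np.abs(w @ X)))   # extreme of the projection
        idx.append(k)
        E = np.hstack([E, X[:, [k]]])
    return idx

# Three pure spectra plus mixed pixels with abundances summing to one
M = np.eye(3)                              # toy endmember signatures
A = rng.dirichlet(np.ones(3), size=50).T   # random abundance fractions
X = np.hstack([M, M @ A])                  # pure pixels at columns 0..2
print(sorted(extract_endmembers(X, 3)))    # [0, 1, 2]
```

Because every mixed pixel is a convex combination of the pure ones, the extreme of any linear projection is attained at a vertex of the simplex, which is why the pure columns are recovered.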



Dissertation presented to obtain the degree of Master in Industrial Engineering and Management


Dissertation presented to obtain the degree of Doctor in Statistics and Risk Management


Human activity recognition systems require objective and reliable methods that can be used in the daily routine and must offer consistent results according to the performed activities. These systems are under development and offer objective and personalized support for several applications, such as in the healthcare area. This thesis aims to create a framework for human activity recognition based on accelerometry signals. Some new features and techniques inspired by audio recognition methodology are introduced in this work, namely the Log Scale Power Bandwidth and the application of Markov models. Forward Feature Selection was adopted as the feature selection algorithm in order to improve the clustering performance and limit the computational demands. This method selects the most suitable set of features for activity recognition in accelerometry from a 423-dimensional feature vector. Several machine learning algorithms were applied to the accelerometry databases used (the FCHA and PAMAP databases) and showed promising results in activity recognition. The developed set of algorithms constitutes a significant contribution to the development of reliable evaluation methods of movement disorders for diagnosis and treatment applications.
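The Log Scale Power Bandwidth feature is not defined in detail here; the sketch below shows one plausible reading borrowed from audio processing, namely log power in logarithmically spaced frequency bands. The band edges, band count and test signal are all assumptions, not the thesis's exact definition:

```python
import numpy as np

def log_power_bands(x, fs, n_bands=6, f_min=0.5):
    """Log10 power of an accelerometer signal in log-spaced frequency bands."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    edges = np.geomspace(f_min, fs / 2, n_bands + 1)  # log-spaced band edges
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spec[(freqs >= lo) & (freqs < hi)]
        feats.append(np.log10(band.sum() + 1e-12))    # epsilon avoids log(0)
    return np.array(feats)

# A 2 Hz "walking" tone sampled at 50 Hz for 4 s
t = np.arange(0, 4, 1 / 50)
x = np.sin(2 * np.pi * 2 * t)
f = log_power_bands(x, fs=50)
print(int(np.argmax(f)))  # the band containing 2 Hz has the most power
```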