910 results for Fractional dinars


Abstract:

The chemical composition and fractional distribution of protein isolates prepared from species of Mucuna bean were studied. Using six different extraction media, the yield of protein based on the Kjeldahl procedure varied from 8% to 34%, and the protein content varied from 75% to 95%. When the yields were high, the colour of the isolates generally tended to be dark and unsatisfactory; hence, chemical treatments and high-pressure processing were explored. The solubility maxima for the protein isolates in water occurred at pH 2.0 and 11.0, while minimum solubility (i.e. the isoelectric region) occurred at pH 4.0–5.0. The total essential amino acid content of the isolates ranged from 495 to 557 mg g⁻¹ protein, which compares favourably with the recommended level for pre-school and school children. Methionine and cysteine were the limiting amino acids. A key nutritional attribute of the protein isolates was their high lysine content; the isolates can therefore complement cereal-based foods, which are deficient in lysine. The proteins consisted mainly of albumins, glutelins and globulins, with prolamins present only in trace concentrations (<0.3%). Gel filtration chromatograms of the isolates indicated the presence of major protein fractions with molecular weights of 40 and 15 kDa, while gel electrophoresis (SDS-PAGE) indicated a major broad zone with molecular weights of 36 ± 7 and 17.3 ± 13 kDa.

Abstract:

This paper identifies the major challenges in the area of pattern formation. The work is also motivated by the need for a single framework to surmount these challenges. A framework based on the control of macroscopic parameters is proposed. The issue of transformation of patterns is specifically considered. A definition of transformation is provided, together with four special cases, namely elementary and geometrical transformations obtained by repositioning all or some of the robots in the pattern. Two feasible tools for pattern transformation are introduced: a macroscopic-parameter method and a mathematical tool, the Möbius transformation, also known as the linear fractional transformation. The realization of the unifying framework, considering planning and communication, is reported.
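As a minimal illustration of the second tool, the sketch below treats planar robot positions as complex numbers and applies a Möbius (linear fractional) transformation w = (az + b)/(cz + d); the coefficient values are arbitrary assumptions chosen only to show the mechanics, not parameters from the paper.

```python
import numpy as np

def moebius(z, a, b, c, d):
    """Apply the linear fractional (Moebius) map w = (a*z + b) / (c*z + d).

    Requires a*d - b*c != 0, otherwise the map is degenerate.
    """
    if np.isclose(a * d - b * c, 0):
        raise ValueError("Degenerate Moebius transformation (ad - bc = 0)")
    return (a * z + b) / (c * z + d)

# Robots arranged on a unit circle, represented as complex numbers x + iy.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
pattern = np.exp(1j * angles)

# Hypothetical coefficients: with c = 0 the map is affine (rotation, scaling,
# translation); with c != 0 it is genuinely fractional and maps the circle
# onto another circle or line.
affine = moebius(pattern, a=0.5 * np.exp(1j * np.pi / 4), b=1 + 1j, c=0, d=1)
fractional = moebius(pattern, a=1, b=0, c=0.3, d=1)

for name, pts in [("affine", affine), ("fractional", fractional)]:
    print(name, np.round(pts, 3))
```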

Abstract:

Objective: This paper presents a detailed study of fractal-based methods for texture characterization of mammographic mass lesions and architectural distortion. The purpose of this study is to explore the use of fractal and lacunarity analysis for the characterization and classification of both tumor lesions and normal breast parenchyma in mammography. Materials and methods: We conducted comparative evaluations of five popular fractal dimension estimation methods for the characterization of the texture of mass lesions and architectural distortion. We applied the concept of lacunarity to the description of the spatial distribution of pixel intensities in mammographic images. These methods were tested with a set of 57 breast masses and 60 normal breast parenchyma regions (dataset1), and with another set of 19 architectural distortions and 41 normal breast parenchyma regions (dataset2). Support vector machines (SVM) were used as the pattern classification method for tumor classification. Results: Experimental results showed that the fractal dimension of regions of interest (ROIs) depicting mass lesions and architectural distortion was statistically significantly lower than that of normal breast parenchyma for all five methods. Receiver operating characteristic (ROC) analysis showed that the fractional Brownian motion (FBM) method generated the highest area under the ROC curve (A_z = 0.839 for dataset1 and 0.828 for dataset2) among the five methods for both datasets. Lacunarity analysis showed that ROIs depicting mass lesions and architectural distortion had higher lacunarity than ROIs depicting normal breast parenchyma. The combination of the FBM fractal dimension and lacunarity yielded higher A_z values (0.903 and 0.875, respectively) than either feature alone for both datasets. The application of the SVM improved the performance of the fractal-based features in differentiating tumor lesions from normal breast parenchyma, yielding higher A_z values. Conclusion: The FBM texture model is the most appropriate model for characterizing mammographic images because its self-affinity assumption provides a better approximation of such textures. Lacunarity is an effective counterpart measure to the fractal dimension for texture feature extraction in mammographic images. The classification results obtained in this work suggest that the SVM is an effective method with great potential for classification in mammographic image analysis.
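For reference, a generic gliding-box lacunarity computation of the kind referred to above is sketched below, using Λ(r) = ⟨M²⟩/⟨M⟩² with M the summed intensity in each r × r box; the synthetic test images and box sizes are illustrative assumptions, not the mammographic datasets used in the study.

```python
import numpy as np

def gliding_box_lacunarity(image, box_size):
    """Gliding-box lacunarity: Lambda(r) = <M^2> / <M>^2, where M is the sum
    of intensities in each r x r box slid over the image with unit stride."""
    h, w = image.shape
    masses = []
    for i in range(h - box_size + 1):
        for j in range(w - box_size + 1):
            masses.append(image[i:i + box_size, j:j + box_size].sum())
    masses = np.asarray(masses, dtype=float)
    mean = masses.mean()
    # <M^2>/<M>^2 equals var/mean^2 + 1 for the population of box masses.
    return masses.var() / mean**2 + 1.0

rng = np.random.default_rng(0)
smooth = rng.normal(100, 5, size=(64, 64))   # homogeneous texture
clumpy = rng.normal(100, 5, size=(64, 64))
clumpy[16:32, 16:32] += 80                   # add a bright, lesion-like clump

for r in (2, 4, 8):
    print(r, gliding_box_lacunarity(smooth, r), gliding_box_lacunarity(clumpy, r))
```

The clumpier texture gives the larger lacunarity at each box size, which is the qualitative behaviour the abstract reports for lesions versus normal parenchyma.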

Abstract:

The current study aims to assess the applicability of direct or indirect normalization for the analysis of fractional anisotropy (FA) maps in the context of diffusion-weighted images (DWIs) contaminated by ghosting artifacts. We found that FA maps obtained by direct normalization showed generally higher anisotropy than those obtained by indirect normalization, and that the disparities were aggravated by the presence of ghosting artifacts in the DWIs. Voxel-wise statistical comparisons demonstrated that indirect normalization reduced the influence of the artifacts and enhanced the sensitivity for detecting anisotropy differences between groups. This suggests that images contaminated with ghosting artifacts can be sensibly analyzed using indirect normalization.
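For reference, fractional anisotropy itself is the standard scalar derived from the three eigenvalues of the diffusion tensor, FA = sqrt(3/2) · ||λ − mean(λ)|| / ||λ||; a minimal computation is sketched below with illustrative eigenvalues, not values from the study.

```python
import numpy as np

def fractional_anisotropy(eigvals):
    """FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||, computed from
    the three eigenvalues of the diffusion tensor."""
    lam = np.asarray(eigvals, dtype=float)
    dev = lam - lam.mean()
    return np.sqrt(1.5) * np.sqrt((dev ** 2).sum()) / np.sqrt((lam ** 2).sum())

# Illustrative eigenvalues (in units of 1e-3 mm^2/s); FA is scale-invariant.
print(fractional_anisotropy([1.7, 0.3, 0.3]))    # strongly anisotropic voxel
print(fractional_anisotropy([0.9, 0.85, 0.8]))   # nearly isotropic voxel
```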

Abstract:

A poor representation of cloud structure in a general circulation model (GCM) is widely recognised as a potential source of error in the radiation budget. Here, we develop a new way of representing both horizontal and vertical cloud structure in a radiation scheme. This combines the ‘Tripleclouds’ parametrization, which introduces inhomogeneity by using two cloudy regions in each layer as opposed to one, each with a different water content value, with ‘exponential-random’ overlap, in which clouds in adjacent layers are overlapped not maximally but according to a vertical decorrelation scale. This paper, Part I of two, aims to parametrize the two effects such that they can be used in a GCM. To achieve this, we first review a number of studies to obtain a globally applicable value of the fractional standard deviation of water content for use in Tripleclouds. We obtain a value of 0.75 ± 0.18 from a variety of different types of observations, with no apparent dependence on cloud type or gridbox size. Then, through a second short review, we create a parametrization of the decorrelation scale for use in exponential-random overlap, which varies the scale linearly with latitude from 2.9 km at the Equator to 0.4 km at the poles. When applied to radar data, both components are found to have radiative impacts capable of offsetting biases caused by cloud misrepresentation. Part II of this paper implements Tripleclouds and exponential-random overlap in a radiation code and examines their individual and combined impacts on the global radiation budget using re-analysis data.
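Taking the quoted decorrelation-scale parametrization at face value (linear in latitude between 2.9 km at the Equator and 0.4 km at the poles), it can be written as the small function below; the linear-in-|latitude| reading and the function name are our assumptions based only on the abstract.

```python
def decorrelation_scale_km(latitude_deg):
    """Cloud overlap decorrelation scale varying linearly with |latitude|
    from 2.9 km at the Equator to 0.4 km at the poles (values taken from the
    abstract; the exact functional form used in the paper may differ)."""
    frac = abs(latitude_deg) / 90.0
    return 2.9 + (0.4 - 2.9) * frac

for lat in (0, 30, 60, 90):
    print(lat, "deg ->", round(decorrelation_scale_km(lat), 2), "km")
```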

Abstract:

Two different ways of performing low-energy electron diffraction (LEED) structure determinations for the p(2 × 2) structure of oxygen on Ni{111} are compared: a conventional LEED-IV structure analysis using integer- and fractional-order IV-curves collected at normal incidence, and an analysis using only integer-order IV-curves collected at three different angles of incidence. A clear discrimination between different adsorption sites can be achieved by the latter approach as well as by the former, and the best-fit structures of the two analyses agree within each other's error bars (all less than 0.1 Å). The conventional analysis is more sensitive to the adsorbate coordinates and the lateral parameters of the substrate atoms, whereas the integer-order-based analysis is more sensitive to the vertical coordinates of the substrate atoms. Adsorbate-related contributions to the intensities of integer-order diffraction spots are independent of the state of long-range order in the adsorbate layer. These results therefore show that, for lattice-gas disordered adsorbate layers, for which only integer-order spots are observed, similar accuracy and reliability can be achieved as for ordered adsorbate layers, provided the data set is large enough.
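The agreement between experimental and calculated IV-curves in such analyses is quantified by a reliability factor; the sketch below uses a simple normalized mean-square difference between two synthetic IV-curves as a toy stand-in for the Pendry-type R-factor normally used, so neither the curves nor the metric reproduce the actual analysis.

```python
import numpy as np

def simple_r_factor(iv_exp, iv_calc):
    """Normalized mean-square difference between two IV-curves after scaling
    the calculated curve to the experimental intensity level; a toy stand-in
    for the Pendry R-factor used in real LEED structure determinations."""
    iv_exp = np.asarray(iv_exp, dtype=float)
    iv_calc = np.asarray(iv_calc, dtype=float)
    scale = iv_exp.sum() / iv_calc.sum()
    diff = iv_exp - scale * iv_calc
    return (diff ** 2).sum() / (iv_exp ** 2).sum()

energies = np.linspace(50, 300, 251)                   # eV, illustrative grid
exp_curve = 1 + np.sin(energies / 20.0) ** 2
good_model = 1 + np.sin((energies + 2) / 20.0) ** 2    # small energy shift
bad_model = 1 + np.cos(energies / 15.0) ** 2

print(simple_r_factor(exp_curve, good_model))   # small value: good agreement
print(simple_r_factor(exp_curve, bad_model))    # larger value: poor agreement
```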

Abstract:

We present an application of cavity-enhanced absorption spectroscopy with an off-axis alignment of the cavity formed by two spherical mirrors and with time integration of the cavity-output intensity for detection of nitrogen dioxide (NO2) and iodine monoxide (IO) radicals using a violet laser diode at λ = 404.278 nm. A noise-equivalent (1σ = root-mean-square variation of the signal) fractional absorption for one optical pass of 4.5 × 10⁻⁸ was demonstrated with a mirror reflectivity of ~0.99925, a cavity length of 0.22 m and a lock-in-amplifier time constant of 3 s. Noise-equivalent detection sensitivities towards nitrogen dioxide of 1.8 × 10¹⁰ molecule cm⁻³ and towards the IO radical of 3.3 × 10⁹ molecule cm⁻³ were achieved in flow tubes with an inner diameter of 4 cm for a lock-in-amplifier time constant of 3 s. Alkyl peroxy radicals were detected using chemical titration with excess nitric oxide (RO2 + NO → RO + NO2). Measurement of oxygen-atom concentrations was accomplished by determining the depletion of NO2 in the reaction NO2 + O → NO + O2. Noise-equivalent concentrations of alkyl peroxy radicals and oxygen atoms were 3 × 10¹⁰ molecule cm⁻³ in the discharge-flow-tube experiments.
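A back-of-the-envelope reading of the figures quoted above, using the common cavity-enhanced-absorption approximations that the effective number of passes is roughly 1/(1 − R) and the effective path length roughly L/(1 − R), is sketched below; these approximations are our own simplification, not the calibration used in the experiment.

```python
# Rough cavity-enhancement numbers from the values quoted in the abstract.
R = 0.99925          # mirror reflectivity (~ value quoted)
L = 0.22             # cavity length, m
A_min = 4.5e-8       # noise-equivalent fractional absorption per optical pass

n_eff = 1.0 / (1.0 - R)        # effective number of passes, ~1/(1 - R)
L_eff = L * n_eff              # effective absorption path length, ~L/(1 - R)
alpha_min = A_min / L          # minimum detectable absorption coefficient, m^-1

print(f"effective passes       ~ {n_eff:.0f}")
print(f"effective path length  ~ {L_eff:.0f} m")
print(f"min. absorption coeff. ~ {alpha_min:.1e} m^-1 ({alpha_min / 100:.1e} cm^-1)")
```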

Abstract:

The extraction of design data for a lowpass dielectric multilayer with Tschebysheff (Chebyshev) performance is described. The extraction proceeds initially by analogy with electric-circuit design and can then be given numerical refinement, which is also described. Agreement with the Tschebysheff desideratum is satisfactory. The multilayers extracted by this procedure are of fractional thickness and are symmetric about their central layers.
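Such multilayer designs are commonly evaluated with the characteristic-matrix (transfer-matrix) method, in which each layer at normal incidence contributes a 2 × 2 matrix set by its refractive index and optical thickness; the sketch below uses an arbitrary three-layer symmetric stack purely to illustrate the calculation, not the Tschebysheff design values derived in the paper.

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic 2x2 matrix of a non-absorbing dielectric layer at normal
    incidence, with phase thickness delta = 2*pi*n*d/lambda."""
    delta = 2 * np.pi * n * d / wavelength
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance(layers, wavelength, n_in=1.0, n_sub=1.52):
    """Reflectance of a stack of (index, thickness) layers between an incident
    medium and a substrate, via the product of the layer matrices."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, wavelength)
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# Arbitrary symmetric high/low/high stack with quarter-wave thicknesses at
# 550 nm (illustrative assumptions, not the paper's design).
lam0 = 550e-9
stack = [(2.3, lam0 / (4 * 2.3)), (1.38, lam0 / (4 * 1.38)), (2.3, lam0 / (4 * 2.3))]

for lam in (450e-9, 550e-9, 650e-9):
    print(int(lam * 1e9), "nm ->", round(reflectance(stack, lam), 3))
```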

Abstract:

Temperature is one of the most prominent environmental factors that determine plant growth, development, and yield. Cool and moist conditions are most favorable for wheat, and wheat is likely to be highly vulnerable to further warming because current temperatures are already close to or above the optimum. In this study, the impacts of warming and extreme high-temperature stress on wheat yield over China were investigated using the General Large Area Model (GLAM) for annual crops. The results showed that each 1°C rise in daily mean temperature would reduce the average wheat yield in China by about 4.6%–5.7%, mainly due to the shorter growth duration, except for a small increase in yield at some grid cells. When the maximum temperature exceeded 30.5°C, the simulated grain-set fraction declined from 1 at 30.5°C to close to 0 at about 36°C. When the total grain-set was lower than the critical fractional grain-set (0.575–0.6), the harvest index and potential grain yield were reduced. In order to reduce the negative impacts of warming, it is crucial to take serious action to adapt to climate change, for example by shifting sowing dates, adjusting crop distribution and structure, breeding heat-resistant varieties, and improving the monitoring, forecasting, and early warning of extreme climate events.
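A literal encoding of the temperature response described above (grain-set fraction falling roughly linearly from 1 at 30.5°C to about 0 at 36°C, with yield penalized once total grain-set drops below the critical fraction of about 0.575–0.6) is sketched below; the linear interpolation, the simple yield multiplier, and the function names are our assumptions, not GLAM's actual parametrization.

```python
def grain_set_fraction(t_max_c, t_lower=30.5, t_upper=36.0):
    """Fraction of grains that set at a given daily maximum temperature:
    1 below t_lower, 0 above t_upper, linear in between (a simple reading of
    the abstract, not the exact GLAM formulation)."""
    if t_max_c <= t_lower:
        return 1.0
    if t_max_c >= t_upper:
        return 0.0
    return (t_upper - t_max_c) / (t_upper - t_lower)

def yield_penalty(total_grain_set, critical=0.6):
    """One simple way to represent the reduction of harvest index / potential
    yield when total grain-set falls below the critical fractional grain-set."""
    return min(1.0, total_grain_set / critical)

for t in (29.0, 31.0, 33.0, 35.0, 37.0):
    f = grain_set_fraction(t)
    print(f"{t:.1f} C -> grain set {f:.2f}, yield factor {yield_penalty(f):.2f}")
```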

Abstract:

Searching for the optimum tap-length that best balances the complexity and steady-state performance of an adaptive filter has attracted attention recently. Among the existing algorithms in the literature, two, namely the segmented filter (SF) and gradient descent (GD) algorithms, are of particular interest as they can search for the optimum tap-length quickly. In this paper, we first carefully compare the SF and GD algorithms and show that the two algorithms are equivalent in performance under some constraints, but that each has advantages and disadvantages relative to the other. We then propose an improved variable tap-length algorithm using the concept of the pseudo-fractional tap-length (FT). By updating the tap-length with instantaneous errors in a style similar to that used in the stochastic gradient [or least mean squares (LMS)] algorithm, the proposed FT algorithm not only retains the advantages of both the SF and GD algorithms but also has significantly lower complexity than existing algorithms. Both performance analysis and numerical simulations are given to verify the proposed algorithm.
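A simplified sketch of a pseudo-fractional tap-length LMS filter in the spirit described above is given below: the fractional length leaks downwards each step and grows when dropping the last few taps noticeably increases the instantaneous squared error, while the integer tap-length tracks its rounded value. The step sizes, leakage, and thresholds are illustrative assumptions rather than the paper's tuned parameters.

```python
import numpy as np

def ft_lms(x, d, mu=0.01, alpha=0.01, gamma=1.0, delta_taps=2,
           l_init=4, l_max=64):
    """Simplified variable tap-length LMS using a pseudo-fractional tap-length.

    The fractional length l_f decreases by the leakage alpha each step and
    increases when dropping the last `delta_taps` taps raises the squared
    error; the integer tap-length L follows the rounded value of l_f.
    """
    L = l_init
    l_f = float(L)
    w = np.zeros(l_max)
    lengths = []
    for n in range(l_max - 1, len(x)):
        u = x[n - l_max + 1:n + 1][::-1]          # regressor, newest sample first
        e_full = d[n] - w[:L] @ u[:L]             # error using all L taps
        e_short = d[n] - w[:L - delta_taps] @ u[:L - delta_taps]
        w[:L] += mu * e_full * u[:L]              # standard LMS weight update
        l_f = (l_f - alpha) - gamma * (e_full ** 2 - e_short ** 2)
        l_f = min(max(l_f, delta_taps + 1), float(l_max))
        if abs(L - l_f) >= 1.0:                   # re-quantize the tap-length
            L = int(round(l_f))
        lengths.append(L)
    return w[:L], lengths

# Toy system identification: an unknown FIR channel with 16 taps.
rng = np.random.default_rng(1)
h = rng.normal(size=16)
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))

w_est, lengths = ft_lms(x, d)
print("final tap-length:", lengths[-1])
```

The balance between the leakage term (alpha) and the error-difference term (gamma) controls how far the steady-state tap-length sits above the shortest length that captures the unknown response.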

Abstract:

The tap-length, or number of taps, is an important structural parameter of the linear MMSE adaptive filter. Although the optimum tap-length that balances performance and complexity varies with the scenario, most current adaptive filters fix the tap-length at some compromise value, making them inefficient to implement, especially in time-varying scenarios. A novel gradient-search-based variable tap-length algorithm is proposed, using the concept of the pseudo-fractional tap-length, and it is shown that the new algorithm converges to the optimum tap-length in the mean. Results of computer simulations are also provided to verify the analysis.

Abstract:

The intensity and distribution of daily precipitation is predicted to change under scenarios of increased greenhouse gases (GHGs). In this paper, we analyse the ability of HadCM2, a general circulation model (GCM), and a high-resolution regional climate model (RCM), both developed at the Met Office's Hadley Centre, to simulate extreme daily precipitation by reference to observations. A detailed analysis of daily precipitation is made at two UK grid boxes, where probabilities of reaching daily thresholds in the GCM and RCM are compared with observations. We find that the RCM generally overpredicts probabilities of extreme daily precipitation but that, when the GCM and RCM simulated values are scaled to have the same mean as the observations, the RCM captures the upper-tail distribution more realistically. To compare regional changes in daily precipitation in the GHG-forced period 2080–2100 in the GCM and the RCM, we develop two methods. The first considers the fractional changes in probability of local daily precipitation reaching or exceeding a fixed 15 mm threshold in the anomaly climate compared with the control. The second method uses the upper one-percentile of the control at each point as the threshold. Agreement between the models is better in both seasons with the latter method, which we suggest may be more useful when considering larger scale spatial changes. On average, the probability of precipitation exceeding the 1% threshold increases by a factor of 2.5 (GCM and RCM) in winter and by 1.7 (GCM) or 1.3 (RCM) in summer.
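The two comparison methods described above reduce to comparing exceedance probabilities of a daily threshold between the control and perturbed periods; a generic sketch with synthetic daily precipitation is given below, where the gamma-distributed toy series and parameter values are our assumptions, not HadCM2 or RCM output.

```python
import numpy as np

def exceedance_probability(daily_precip_mm, threshold_mm):
    """Fraction of days with precipitation at or above the threshold."""
    return float(np.mean(np.asarray(daily_precip_mm) >= threshold_mm))

rng = np.random.default_rng(0)
# Toy daily precipitation (mm/day) for a control and a GHG-forced "anomaly"
# climate; the gamma distributions are stand-ins for model output.
control = rng.gamma(shape=0.7, scale=4.0, size=20 * 90)   # 20 seasons x 90 days
anomaly = rng.gamma(shape=0.7, scale=5.0, size=20 * 90)

# Method 1: fractional change in the probability of reaching or exceeding a
# fixed absolute threshold (15 mm/day in the paper).
p_ctl = exceedance_probability(control, 15.0)
p_anom = exceedance_probability(anomaly, 15.0)
print("fixed 15 mm threshold: change by a factor of", round(p_anom / p_ctl, 2))

# Method 2: threshold taken as the upper one-percentile of the control.
thr = float(np.percentile(control, 99.0))
p_ctl_pc = exceedance_probability(control, thr)     # ~0.01 by construction
p_anom_pc = exceedance_probability(anomaly, thr)
print("control 99th-percentile threshold:", round(thr, 1), "mm, factor",
      round(p_anom_pc / p_ctl_pc, 2))
```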

Abstract:

The vertical structure of the relationship between water vapor and precipitation is analyzed in 5 yr of radiosonde and precipitation gauge data from the Nauru Atmospheric Radiation Measurement (ARM) site. The first vertical principal component of specific humidity is very highly correlated with column water vapor (CWV) and has a maximum of both total and fractional variance captured in the lower free troposphere (around 800 hPa). Moisture profiles conditionally averaged on precipitation show a strong association between rainfall and moisture variability in the free troposphere and little boundary layer variability. A sharp pickup in precipitation occurs near a critical value of CWV, confirming satellite-based studies. A lag–lead analysis suggests it is unlikely that the increase in water vapor is just a result of the falling precipitation. To investigate mechanisms for the CWV–precipitation relationship, entraining plume buoyancy is examined in sonde data and simplified cases. For several different mixing schemes, higher CWV results in progressively greater plume buoyancies, particularly in the upper troposphere, indicating conditions favorable for deep convection. All other things being equal, higher values of lower-tropospheric humidity, via entrainment, play a major role in this buoyancy increase. A small but significant increase in subcloud layer moisture with increasing CWV also contributes to buoyancy. Entrainment coefficients inversely proportional to distance from the surface, associated with mass flux increase through a deep lower-tropospheric layer, appear promising. These yield a relatively even weighting through the lower troposphere for the contribution of environmental water vapor to midtropospheric buoyancy, explaining the association of CWV and buoyancy available for deep convection.
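Column water vapor, the central quantity above, is the mass-weighted vertical integral of specific humidity, CWV = (1/g) ∫ q dp; a minimal computation from a synthetic sounding is sketched below (the profile values are illustrative, not Nauru ARM data).

```python
import numpy as np

def column_water_vapor_mm(pressure_hpa, specific_humidity, g=9.81):
    """Column water vapor from a sounding: CWV = (1/g) * integral of q dp.

    Pressure in hPa (surface first, decreasing upwards), specific humidity in
    kg/kg; the result in kg m^-2 is numerically equal to mm of liquid water.
    """
    p_pa = np.asarray(pressure_hpa, dtype=float) * 100.0
    q = np.asarray(specific_humidity, dtype=float)
    dp = -np.diff(p_pa)                 # positive layer thicknesses, Pa
    q_mid = 0.5 * (q[:-1] + q[1:])      # layer-mean specific humidity
    return float(np.sum(q_mid * dp) / g)

# Synthetic moist tropical sounding (illustrative values only).
p = [1000, 925, 850, 700, 600, 500, 400, 300, 200, 100]                # hPa
q = [v * 1e-3 for v in [18, 16, 13, 8, 5, 3, 1.5, 0.5, 0.1, 0.01]]     # kg/kg

print(round(column_water_vapor_mm(p, q), 1), "mm")
```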

Abstract:

We investigate a simplified form of variational data assimilation in a fully nonlinear framework, with the aim of extracting dynamical development information from a sequence of observations over time. Information on the vertical wind profile, w(z), and on profiles of temperature, T(z, t), and total water content, qt(z, t), as functions of height, z, and time, t, is converted to brightness temperatures at a single horizontal location by defining a two-dimensional (vertical and time) variational assimilation testbed. The profiles of T and qt are updated using a vertical advection scheme. A basic cloud scheme is used to obtain the fractional cloud amount and, when combined with the temperature field, this information is converted into a brightness temperature using a simple radiative transfer scheme. It is shown that our model exhibits realistic behaviour with regard to the prediction of cloud, but that the effects of nonlinearity become non-negligible in the variational data assimilation algorithm. A careful analysis of the application of the data assimilation scheme to this nonlinear problem is presented, the salient difficulties are highlighted, and suggestions for further developments are discussed.
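The variational step referred to above minimizes a cost function measuring misfit to both the background state and the observations; a minimal, generic sketch is given below, with a linear observation operator standing in for the advection, cloud and radiative-transfer chain, and with all dimensions and error variances chosen arbitrarily for illustration.

```python
import numpy as np

def var_cost_and_grad(x, xb, B_inv, y, H, R_inv):
    """Quadratic variational cost
        J(x) = 1/2 (x - xb)^T B^-1 (x - xb) + 1/2 (y - Hx)^T R^-1 (y - Hx)
    and its gradient, for a linear observation operator H. In the testbed
    above, H would be the (nonlinear) mapping from the state to brightness
    temperatures via the advection, cloud and radiative-transfer schemes."""
    dxb = x - xb
    dy = y - H @ x
    J = 0.5 * dxb @ B_inv @ dxb + 0.5 * dy @ R_inv @ dy
    grad = B_inv @ dxb - H.T @ R_inv @ dy
    return J, grad

def minimise(xb, B_inv, y, H, R_inv, step=0.1, iters=300):
    """Crude fixed-step gradient descent on J; operational systems use
    conjugate-gradient or quasi-Newton minimisers instead."""
    x = xb.copy()
    for _ in range(iters):
        _, g = var_cost_and_grad(x, xb, B_inv, y, H, R_inv)
        x = x - step * g
    return x

n = 5
xb = np.zeros(n)                           # background (prior) state
truth = np.linspace(0.5, 1.5, n)
H = np.eye(n)[::2]                         # observe every other state element
rng = np.random.default_rng(0)
y = H @ truth + 0.1 * rng.normal(size=H.shape[0])
B_inv = np.eye(n) / 1.0                    # background error variance 1.0
R_inv = np.eye(H.shape[0]) / 0.25          # observation error variance 0.25

print(np.round(minimise(xb, B_inv, y, H, R_inv), 2))
```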

Abstract:

Cloud imagery is not currently used in numerical weather prediction (NWP) to extract the type of dynamical information that experienced forecasters have extracted subjectively for many years. For example, rapidly developing mid-latitude cyclones have characteristic signatures in the cloud imagery that are most fully appreciated from a sequence of images rather than from a single image. The Met Office is currently developing a technique to extract dynamical development information from satellite imagery using their full incremental 4D-Var (four-dimensional variational data assimilation) system. We investigate a simplified form of this technique in a fully nonlinear framework. We convert information on the vertical wind field, w(z), and on profiles of temperature, T(z, t), and total water content, qt(z, t), as functions of height, z, and time, t, to a single brightness temperature by defining a 2D (vertical and time) variational assimilation testbed. The profiles of w, T and qt are updated using a simple vertical advection scheme. We define a basic cloud scheme to obtain the fractional cloud amount and, when combined with the temperature field, convert this information into a brightness temperature using a simple radiative transfer scheme that we have developed. With the exception of some matrix inversion routines, all our code is developed from scratch. Throughout the development process we test all aspects of our 2D assimilation system, and we then run identical twin experiments to try to recover information on the vertical velocity from a sequence of observations of brightness temperature. This thesis contains a comprehensive description of our nonlinear models and assimilation system, and the first experimental results.