17 results for Methods of Compression
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The main goal of this special issue was to gather contributions dealing with the latest breakthrough methods for obtaining valuable compounds and energy/fuel from waste valorization. Valorization is a relatively new approach in the area of industrial waste management, a key issue in promoting sustainable development. In this field, the recovery of value-added substances, such as antioxidants, proteins, and vitamins, from the processing of agroindustrial byproducts is worth mentioning. Another important valorization approach is the use of biogas from waste treatment plants for the production of energy. Several approaches involving physical, chemical, thermal, and biological processes that ensure reduced emissions and energy consumption were taken into account. The papers selected for this topical issue represent some of the most actively researched methods that currently promote the valorization of wastes to energy and useful materials ...
Abstract:
Two methods of trapping Argentine ants in natural habitats are compared. Both methods are used on the boundaries of an invaded area with the goal of assessing the spread of the invasion front. Pitfall surveys take longer to obtain results than bait surveys, but bait surveys are only a "snapshot" of the moment, with less chance of detecting Argentine ant workers. Significant differences are found between the methods in terms of the number of traps occupied by Argentine ants, native ants, or a combination of both. Differences in the richness of native ant species are found as well, showing that pitfall surveys are necessary to assess such richness. Despite this, no differences in the assessment of spread are found between the methods. Bait surveys are an easier and faster method to assess the spread of Argentine ants, spread being one of the most important characteristics of biological invasions.
Abstract:
Plants constitute an excellent ecosystem for microorganisms. The environmental conditions offered differ considerably between the highly variable aerial plant parts and the more stable root system. Microbes interact with plant tissues and cells with different degrees of dependence. The most interesting interactions from the microbial ecology point of view, however, are the specific ones developed by plant-beneficial (either non-symbiotic or symbiotic) and pathogenic microorganisms. Plants, like humans and other animals, also become sick, but they have evolved a sophisticated defense response against microbes, based on a combination of constitutive and inducible responses which can be localized or spread throughout plant organs and tissues. The response is mediated by several messenger molecules that activate pathogen-responsive genes coding for enzymes or antimicrobial compounds, and it produces compounds that are less sophisticated and specific than the immunoglobulins of animals. However, the response can specifically detect, intracellularly, a type of pathogen protein through a gene-for-gene recognition system, triggering a biochemical attack and programmed cell death. Several implications for the management of plant diseases derive from knowledge of the basis of the specificity of plant-bacteria interactions. New biotechnological products are currently being developed based on stimulation of the plant defense response, and on the use of plant-beneficial bacteria for the biological control of plant diseases (biopesticides) and for plant growth promotion (biofertilizers).
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
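A minimal sketch of the kind of "movie" described here (an assumed illustration, not the presenter's own code): a Box-Cox power parameter alpha morphs a map of power-transformed profiles (alpha = 1) smoothly toward a log-ratio map (alpha -> 0), with the ordination recomputed frame by frame.

```python
# Sketch: linking ordination methods through a power-transformation
# parameter. (x**alpha - 1)/alpha tends to log(x) as alpha -> 0, so the
# sequence of maps moves smoothly toward log-ratio analysis.
import numpy as np

def frame(X, alpha):
    """One 2-D map for a given value of the linking parameter."""
    P = X / X.sum(axis=1, keepdims=True)           # row profiles (compositions)
    Z = np.log(P) if alpha == 0 else (P**alpha - 1.0) / alpha
    Z = Z - Z.mean(axis=1, keepdims=True)          # center within rows
    Z = Z - Z.mean(axis=0, keepdims=True)          # center within columns
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :2] * s[:2]                        # row coordinates, 2 axes

rng = np.random.default_rng(0)
X = rng.gamma(2.0, size=(30, 5))                   # toy compositional table
movie = [frame(X, a) for a in np.linspace(1.0, 0.0, 21)]  # "frame by frame"
```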
Abstract:
Two common methods of accounting for electric-field-induced perturbations to molecular vibration are analyzed and compared. The first method is based on a perturbation-theoretic treatment and the second on a finite-field treatment. The relationship between the two, which is not immediately apparent, is made clear by developing an algebraic formalism for the latter. Some of the higher-order terms in this development are documented here for the first time. As well as considering vibrational dipole polarizabilities and hyperpolarizabilities, we also make mention of the vibrational Stark effect.
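For orientation, the standard finite-field relations that underlie this kind of treatment (textbook material; the paper's higher-order algebraic terms are not reproduced here): the energy is expanded in the applied field F and the response properties are recovered by numerical differentiation.

```latex
% Generic finite-field illustration, not the paper's formalism:
\begin{align}
  E(F) &= E(0) - \mu F - \tfrac{1}{2}\alpha F^{2}
         - \tfrac{1}{6}\beta F^{3} - \tfrac{1}{24}\gamma F^{4} - \cdots,\\
  \mu &\approx -\frac{E(F) - E(-F)}{2F}, \qquad
  \alpha \approx -\frac{E(F) - 2E(0) + E(-F)}{F^{2}}.
\end{align}
```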
Abstract:
In this paper, some steganalytic techniques designed to detect the existence of messages hidden with histogram shifting methods are presented. First, techniques to identify specific histogram shifting methods, based on visible marks on the histogram or abnormal statistical distributions, are suggested. Then, we present a general technique capable of detecting all the histogram shifting techniques analyzed. This technique is based on the effect of histogram shifting methods on the "volatility" of the histogram of differences and on the study of its reduction whenever new data are hidden.
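A minimal sketch of the kind of statistic involved: the histogram of differences of adjacent pixels and a "volatility" measure on it. The specific measure below (summed absolute bin-to-bin change) is an assumption for illustration; the paper's exact definition may differ.

```python
# Sketch: histogram of pixel differences and an assumed volatility measure.
import numpy as np

def diff_histogram(img):
    """Histogram of horizontal differences of neighbouring pixels."""
    d = img[:, 1:].astype(int) - img[:, :-1].astype(int)
    hist, _ = np.histogram(d, bins=np.arange(-255, 257))
    return hist

def volatility(hist):
    """Assumed roughness measure: summed absolute bin-to-bin change."""
    return int(np.abs(np.diff(hist)).sum())

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
# Histogram shifting empties some bins and piles others up, which shows up
# as a change in the volatility of the difference histogram after embedding.
print(volatility(diff_histogram(cover)))
```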
Abstract:
This article investigates the history of land and water transformations in Matadepera, a wealthy suburb of metropolitan Barcelona. The analysis is informed by theories of political ecology and methods of environmental history; although very relevant, these have received relatively little attention within ecological economics. Empirical material includes communications from the City Archives of Matadepera (1919-1979), 17 interviews with locals born between 1913 and 1958, and an exhaustive review of grey historical literature. Existing water histories of Barcelona and its outskirts portray a battle against natural water scarcity, hard won by heroic engineers and politicians acting for the good of the community. Our research in Matadepera tells a very different story. We reveal the production of a highly uneven landscape and waterscape through fierce political and power struggles. The evolution of Matadepera from a small rural village to an elite suburb was anything but spontaneous or peaceful. It was a socio-environmental project deliberately pursued by landowning elites and strongly resisted by others. The struggle for the control of water went hand in hand with the land and political struggles that culminated, and were violently resolved, in the Spanish Civil War. The displacement of the economic and environmental costs of water use from the few to the many continues to this day and is constitutive of Matadepera's uneven and unsustainable landscape. By unravelling the relations of power that are inscribed in the urbanization of nature (Swyngedouw, 2004), we question the received wisdom of contemporary water policy debates, particularly the notion of a natural scarcity that merits a technical or economic response. We argue that the water question is fundamentally a political question of environmental justice; it is about negotiating alternative visions of the future and deciding whose visions will be produced.
Abstract:
In this study I try to explain the systemic problem of the low economic competitiveness of nuclear energy for the production of electricity by carrying out a biophysical analysis of its production process. Given that neither econometric approaches nor one-dimensional methods of energy analysis are effective, I introduce the concept of biophysical explanation as a quantitative analysis capable of handling the inherent ambiguity associated with the concept of energy. In particular, the quantities of energy considered relevant for the assessment can only be measured and aggregated after agreeing on a pre-analytical definition of a grammar characterizing a given set of finite transformations. Using this grammar, it becomes possible to provide a biophysical explanation for the low economic competitiveness of nuclear energy in the production of electricity. When comparing the various unit operations of the process of producing electricity from nuclear energy with the analogous unit operations of the process of producing fossil energy, we see that the various phases of the process are the same. The only difference relates to the characteristics of the process associated with the generation of heat, which are completely different in the two systems. Since the cost of production of fossil energy provides the baseline of economic competitiveness of electricity, the (lack of) economic competitiveness of producing electricity from nuclear energy can be studied by comparing the biophysical costs associated with the different unit operations taking place in nuclear and fossil power plants when generating process heat or net electricity. In particular, the analysis focuses on fossil-fuel requirements and labor requirements for those phases that nuclear plants and fossil energy plants have in common: (i) mining; (ii) refining/enriching; (iii) generating heat/electricity; (iv) handling the pollution/radioactive wastes. By adopting this approach, it becomes possible to explain the systemically low economic competitiveness of nuclear energy in the production of electricity, because of: (i) its dependence on oil, limiting its possible role as a carbon-free alternative; (ii) the choices made in relation to its fuel cycle, especially whether it includes reprocessing operations or not; (iii) the unavoidable uncertainty in the definition of the characteristics of its process; (iv) its large inertia (lack of flexibility) due to issues of time scale; and (v) its low power level.
Abstract:
Tropical cyclones are affected by a large number of climatic factors, which translates into complex patterns of occurrence. The variability of annual metrics of tropical-cyclone activity has been intensively studied, in particular since the sudden activation of the North Atlantic in the mid-1990s. We first provide a brief overview of previous work by diverse authors on these annual metrics for the North-Atlantic basin, where the natural variability of the phenomenon, the existence of trends, the drawbacks of the records, and the influence of global warming have been the subject of interesting debates. Next, we present an alternative approach that does not focus on seasonal features but on the characteristics of single events [Corral et al., Nature Phys. 6, 693 (2010)]. It is argued that the individual-storm power dissipation index (PDI) constitutes a natural way to describe each event and, further, that the PDI statistics yield a robust law for the occurrence of tropical cyclones in the form of a power law. In this context, methods of fitting these distributions are discussed. As an important extension to this work we introduce a distribution function that models the whole range of the PDI density (excluding incompleteness effects at the smallest values): the gamma distribution, consisting of a power law with an exponential decay at the tail. The characteristic scale of this decay, represented by the cutoff parameter, provides very valuable information on the finite size of the basin, via the largest values of the PDIs that the basin can sustain. We use the gamma fit to evaluate the influence of sea surface temperature (SST) on the occurrence of extreme PDI values, for which we find an increase of around 50% in the values of these basin-wide events for a 0.49 °C average SST difference. Similar findings are observed for the effects of the positive phase of the Atlantic multidecadal oscillation and of the number of hurricanes in a season on the PDI distribution. In the case of the El Niño Southern Oscillation (ENSO), positive and negative values of the multivariate ENSO index do not have a significant effect on the PDI distribution; however, when only extreme values of the index are used, it is found that the presence of El Niño decreases the PDI of the most extreme hurricanes.
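A minimal sketch of the fitting step, assuming scipy's gamma distribution as the power-law-with-exponential-cutoff model and synthetic PDI values in place of the real best-track records:

```python
# Sketch: fit a gamma distribution (power law x**(shape-1) with exponential
# cutoff at `scale`) to toy PDI values. The PDI of a single storm is the
# time integral of its maximum sustained wind speed cubed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pdi = stats.gamma.rvs(a=0.6, scale=5e10, size=500, random_state=rng)  # toy data

shape, loc, scale = stats.gamma.fit(pdi, floc=0)   # fix location at zero
print(f"power-law exponent ~ {1 - shape:.2f}, cutoff ~ {scale:.2e}")
# Comparing the fitted cutoff between high- and low-SST years is the kind
# of contrast used to assess the influence of SST on the largest events.
```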
Abstract:
Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined, the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into the real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
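A minimal sketch, under simplifying assumptions, of the compositional pipeline just described: centred-logratio transform of the death densities, a Lee-Carter-style rank-1 SVD fit in real space, and back-transformation with closure so the unit-sum constraint is honoured. Toy data stand in for the Japanese records.

```python
# Sketch: compositional (CoDA) analogue of Lee-Carter forecasting.
import numpy as np

def clr(D):
    L = np.log(D)
    return L - L.mean(axis=1, keepdims=True)       # centred logratio over ages

def clr_inv(Z):
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)        # closure back to unit sum

rng = np.random.default_rng(3)
raw = rng.gamma(5.0, size=(40, 20))                # toy: 40 years x 20 ages
D = raw / raw.sum(axis=1, keepdims=True)           # death densities, rows sum to 1

Z = clr(D)
mean = Z.mean(axis=0)
U, s, Vt = np.linalg.svd(Z - mean, full_matrices=False)
k = U[:, 0] * s[0]                                 # period index (cf. Lee-Carter k_t)
b = Vt[0]                                          # age profile   (cf. b_x)
drift = (k[-1] - k[0]) / (len(k) - 1)              # random walk with drift
D_next = clr_inv((mean + (k[-1] + drift) * b)[None, :])  # forecast, sums to 1
```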
Abstract:
The literature related to skew-normal distributions has grown rapidly in recent years, but at the moment few applications concern the description of natural phenomena with this type of probability model, or the interpretation of their parameters. The skew-normal distribution family represents an extension of the normal family to which a parameter (λ) has been added to regulate the skewness. The development of this theoretical field has followed the general tendency in Statistics towards more flexible methods to represent features of the data as adequately as possible and to reduce unrealistic assumptions such as the normality that underlies most methods of univariate and multivariate analysis. In this paper an investigation of the shape of the frequency distribution of the logratio ln(Cl−/Na+), whose components are related to the composition of waters from 26 wells, has been performed. Samples have been collected around the active center of Vulcano island (Aeolian archipelago, southern Italy) from 1977 up to now at time intervals of about six months. Data of the logratio have been tentatively modeled by evaluating the performance of the skew-normal model for each well. Values of the λ parameter have been compared by considering the temperature and spatial position of the sampling points. Preliminary results indicate that changes in λ values can be related to the nature of environmental processes affecting the data.
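A minimal sketch of the per-well model evaluation, using scipy's skew-normal implementation on synthetic data (scipy's shape parameter `a` plays the role of the λ parameter in the abstract; the ion values below are invented stand-ins for the real well measurements):

```python
# Sketch: fit a skew-normal distribution to the logratio ln(Cl-/Na+).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
cl = rng.lognormal(1.0, 0.3, 80)                   # toy Cl- concentrations
na = rng.lognormal(0.8, 0.3, 80)                   # toy Na+ concentrations
logratio = np.log(cl / na)

a, loc, scale = stats.skewnorm.fit(logratio)       # a ~ skewness parameter
print(f"estimated skewness parameter lambda ~ {a:.2f}")
# Comparing the fitted parameter across wells, temperatures and positions
# is the kind of contrast the study uses to interpret shape changes.
```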
Abstract:
A study was conducted on methods of basis set superposition error (BSSE)-free geometry optimization and frequency calculations in clusters larger than a dimer. In particular, three different counterpoise schemes were critically examined. It was shown that the counterpoise-corrected supermolecule energy can be easily obtained in all cases by using the many-body partitioning of the energy.
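For reference, the standard counterpoise idea in the simplest case of a dimer AB (a generic illustration; the paper's three schemes generalize this to larger clusters via the many-body partitioning of the energy):

```latex
% Boys-Bernardi counterpoise correction for a dimer AB:
\begin{equation}
  \Delta E^{\mathrm{CP}}_{\mathrm{int}}
    = E_{AB}(\mathrm{AB\ basis})
    - E_{A}(\mathrm{AB\ basis})
    - E_{B}(\mathrm{AB\ basis}),
\end{equation}
% i.e. each fragment is recomputed in the full dimer basis so that the
% basis-set superposition error cancels in the difference.
```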
Abstract:
Conventional methods of gene prediction rely on the recognition of DNA-sequence signals, the coding potential or the comparison of a genomic sequence with a cDNA, EST, or protein database. Reasons for limited accuracy in many circumstances are species-specific training and the incompleteness of reference databases. Lately, comparative genome analysis has attracted increasing attention. Several analysis tools that are based on human/mouse comparisons are already available. Here, we present a program for the prediction of protein-coding genes, termed SGP-1 (Syntenic Gene Prediction), which is based on the similarity of homologous genomic sequences. In contrast to most existing tools, the accuracy of SGP-1 depends little on species-specific properties such as codon usage or the nucleotide distribution. SGP-1 may therefore be applied to nonstandard model organisms in vertebrates as well as in plants, without the need for extensive parameter training. In addition to predicting genes in large-scale genomic sequences, the program may be useful to validate gene structure annotations from databases. To this end, SGP-1 output also contains comparisons between predicted and annotated gene structures in HTML format. The program can be accessed via a Web server at http://soft.ice.mpg.de/sgp-1. The source code, written in ANSI C, is available on request from the authors.
Abstract:
In this paper we argue that inventory models are probably not useful models of household money demand because the majority of households do not hold any interest-bearing assets. The relevant decision for most people is not the fraction of assets to be held in interest-bearing form, but whether to hold any such assets at all. The implications of this realization are interesting and important. We find that (a) the elasticity of money demand is very small when the interest rate is small, (b) the probability that a household holds any amount of interest-bearing assets is positively related to the level of financial assets, and (c) the cost of adopting financial technologies is positively related to age and negatively related to the level of education. Unlike traditional methods of money demand estimation, our methodology allows for the estimation of the interest elasticity at low values of the nominal interest rate. The finding that the elasticity is very small for interest rates below 5 percent suggests that the welfare costs of inflation are small. At interest rates of 6 percent, the elasticity is close to 0.5. We find that roughly one half of this elasticity can be attributed to the Baumol-Tobin or intensive margin and half of it can be attributed to the new adopters or extensive margin. The intensive margin is less important at lower interest rates and more important at higher interest rates.
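For context, the textbook Baumol-Tobin benchmark behind the "intensive margin" mentioned above (a generic illustration, not the paper's structural model):

```latex
% Baumol-Tobin optimal average money holdings and interest elasticity:
\begin{equation}
  M^{*} \;=\; \sqrt{\frac{c\,Y}{2r}}
  \qquad\Longrightarrow\qquad
  -\frac{\partial \ln M^{*}}{\partial \ln r} \;=\; \tfrac{1}{2},
\end{equation}
% where c is the fixed cost per asset conversion, Y is expenditure and r the
% nominal interest rate. The paper's point is that this margin is only part
% of the story once the extensive margin (whether to adopt at all) is added.
```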