938 results for Size-scale effects
Abstract:
Regional soil erosion survey mapping and dynamic analysis provide the data foundation for national and provincial macro-scale soil erosion planning, and constitute a major frontier research topic. Based on a review of the current state of research, in China and abroad, on regional soil erosion survey mapping, regional soil erosion factors, soil erosion scale effects and soil erosion models, the following recommendations are made for the forthcoming national soil erosion census: the census should make full use of the latest results of China's soil erosion model research and adopt simulation-based methods to quantitatively estimate soil erosion intensity; the survey content should cover regional soil erosion factors, soil erosion types and intensities, soil and water conservation measures, and sampling surveys of typical regions; and key technical problems should be tackled, including soil erosion scale effects, demonstration applications of regional soil erosion models, and methods for building regional soil erosion databases.
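The abstract does not name a particular erosion model; purely as a point of reference for what simulation-based quantitative estimation of erosion intensity typically involves, empirical models of the USLE/RUSLE family compute mean annual soil loss as a product of factors:

\[ A = R \cdot K \cdot L \cdot S \cdot C \cdot P \]

where A is the mean annual soil loss, R the rainfall erosivity, K the soil erodibility, L and S the slope length and steepness factors, C the cover-management factor, and P the support-practice factor. Whether the national census would use this or another model is not stated in the abstract.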
Abstract:
Nanocrystalline Gd2O3:Eu3+ powders with cubic phase were prepared by a combustion method in the presence of urea and glycol. The effects of the annealing temperature on the crystallization and luminescence properties were studied. The XRD results show that a pure phase can be obtained; the average crystallite sizes were calculated as 7, 8, 45, and 23 nm for the precursor and for the samples annealed at 600, 700 and 800 degrees C, respectively, which coincided with the results from TEM images. The emission intensity, host absorption and charge-transfer band intensity increased with increasing annealing temperature. A slightly broadened emission peak at 610 nm can be observed for the smaller particles. The ratio of host absorption to the O2−-Eu3+ charge-transfer band is much stronger for the smaller nanoparticles than for the larger ones; furthermore, the luminescence lifetimes of the nanoparticles increased with increasing particle size. The effects of the Eu3+ doping concentration on luminescence lifetimes and intensities are also discussed. The samples exhibited a higher quenching concentration of Eu3+, and the luminescence lifetimes of the nanoparticles are related to the annealing temperature and to the Eu3+ doping concentration.
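The abstract does not state how the crystallite sizes were obtained from the XRD line widths; as a hedged point of reference, such estimates are commonly made with the Scherrer relation:

\[ D = \frac{K\,\lambda}{\beta \cos\theta} \]

where D is the mean crystallite size, K (approximately 0.9) a shape factor, λ the X-ray wavelength, β the instrument-corrected full width at half maximum of the diffraction peak in radians, and θ the Bragg angle.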
Abstract:
The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system that is based upon oriented filters, such as the role of cliff filters with and without normalization, the double-peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation than across size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models in which serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
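The paper defines the Where filter through specific competitive and interpolative interactions; the short Python sketch below illustrates only the generic front end it starts from, a multiscale array of oriented detectors, by building a small Gabor-like filter bank and recording, at each position, the size and orientation of the strongest response. Kernel shapes, parameters and function names are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, wavelength, theta):
    """Odd-symmetric Gabor-like kernel at one orientation and one size (illustrative)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * (0.4 * size) ** 2))
    return envelope * np.sin(2 * np.pi * xr / wavelength)

def where_map(image, sizes=(7, 15, 31), n_orientations=8):
    """For a 2-D float image, return per-pixel indices of the strongest (size, orientation)."""
    responses = np.stack([
        np.abs(fftconvolve(image, gabor_kernel(s, s / 2.0, o * np.pi / n_orientations), mode="same"))
        for s in sizes for o in range(n_orientations)
    ])
    best = responses.argmax(axis=0)          # flat index: size index * n_orientations + orientation index
    return best // n_orientations, best % n_orientations

The competitive and interpolative stages described in the abstract, and the invariant What representation, are not reproduced here.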
Abstract:
We study the structural effects produced by the quantization of vibrational degrees of freedom in periodic crystals at zero temperature. To this end we introduce a methodology based on mapping a suitable subspace of the vibrational manifold and solving the Schrödinger equation in it. A number of increasingly accurate approximations ranging from the quasiharmonic approximation (QHA) to the vibrational self-consistent field (VSCF) method and the exact solution are described. A thorough analysis of the approximations is presented for model monatomic and hydrogen-bonded chains, and results are presented for a linear H-F chain where the potential-energy surface is obtained via first-principles electronic structure calculations. We focus on quantum nuclear effects on the lattice constant and show that the VSCF is an excellent approximation, meaning that correlation between modes is not extremely important. The QHA is excellent for covalently bonded mildly anharmonic systems, but it fails for hydrogen-bonded ones. In the latter, the zero-point energy exhibits a nonanalytic behavior at the lattice constant where the H atoms center, which leads to a spurious secondary minimum in the quantum-corrected energy curve. An inexpensive anharmonic approximation of noninteracting modes appears to produce rather good results for hydrogen-bonded chains for small system sizes. However, it converges to the incorrect QHA results for increasing size. Isotope effects are studied for the first-principles H-F chain. We show how the lattice constant and the H-F distance increase with decreasing mass and how the QHA proves to be insufficient to reproduce this behavior.
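For reference, in its usual zero-temperature form the quasiharmonic construction referred to in the abstract adds the zero-point energy of the lattice-constant-dependent harmonic phonons to the static lattice energy:

\[ E_{\mathrm{QHA}}(a) \;=\; E_{\mathrm{static}}(a) \;+\; \tfrac{1}{2}\sum_{i}\hbar\,\omega_i(a), \]

and the equilibrium lattice constant minimizes E_QHA(a). The nonanalytic behavior of the zero-point term at the lattice constant where the H atoms center is what produces the spurious secondary minimum in the quantum-corrected energy curve mentioned above.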
Abstract:
The spatial distribution of a species can be characterized at many different spatial scales, from fine-scale measures of local population density to coarse-scale geographical-range structure. Previous studies have shown a degree of correlation in species' distribution patterns across narrow ranges of scales, making it possible to predict fine-scale properties from coarser-scale distributions. To test the limits of such extrapolation, we have compiled distributional information on 16 species of British plants, at scales ranging across six orders of magnitude in linear resolution (1 in to 100 km). As expected, the correlation between patterns at different spatial scales tends to degrade as the scales become more widely separated. There is, however, an abrupt breakdown in cross-scale correlations across intermediate (ca. 0.5 km) scales, suggesting that local and regional patterns are influenced by essentially non-overlapping sets of processes. The scaling discontinuity may also reflect characteristic scales of human land use in Britain, suggesting a novel method for analysing the 'footprint' of humanity on a landscape.
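As a minimal sketch of the kind of cross-scale comparison described, and not the authors' actual analysis, the following Python fragment aggregates presence/absence rasters onto coarser grids and correlates, across species, the occupancy measured at two resolutions. Grid data, cell factors and function names are illustrative assumptions.

import numpy as np

def occupancy_at_scale(presence, factor):
    """Fraction of coarse cells (factor x factor blocks) containing at least one presence."""
    h, w = presence.shape
    h2, w2 = h - h % factor, w - w % factor            # trim so blocks tile evenly
    blocks = presence[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.any(axis=(1, 3)).mean()

def cross_scale_correlation(species_grids, fine=1, coarse=32):
    """Correlation, across species, of occupancy measured at a fine and a coarse resolution."""
    fine_occ = [occupancy_at_scale(g, fine) for g in species_grids]
    coarse_occ = [occupancy_at_scale(g, coarse) for g in species_grids]
    return np.corrcoef(fine_occ, coarse_occ)[0, 1]

Repeating this for many pairs of resolutions gives the kind of correlation-versus-scale-separation profile in which the abstract reports an abrupt breakdown near 0.5 km.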
Abstract:
Master's dissertation, Marine Biology, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015
Abstract:
The article engages with theory about the processes of spatialization of fear in contemporary Western urban space (fortification, privatization, exclusion/seclusion, fragmentation, polarization) and their relation to fear of crime and violence. A threefold taxonomy is outlined (Enclosure, Post-Public Space, Barrier), and “spaces of fear” in the city of Palermo are mapped with the aim of exploring the cumulative large-scale effects of the spatialization of fear on a concrete urban territory. Building on empirical evidence, the author suggests that mainstream theories be reframed as part of a less hegemonic and more discursive approach and that theories mainly based on the analyses of global cities be deprovincialized. The author argues for the deconstruction of the concept of “spaces of fear” in favor of the more discursive concept of “fearscapes” to describe the growing landscapes of fear in contemporary Western cities.
Abstract:
We report the characterisation of 27 cardiovascular-related traits in 23 inbred mouse strains. Mice were phenotyped either in response to chronic administration of a single dose of the beta-adrenergic receptor blocker atenolol or under a low and a high dose of the beta-agonist isoproterenol and compared to baseline condition. The robustness of our data is supported by high trait heritabilities (typically H(2)>0.7) and significant correlations of trait values measured in baseline condition with independent multistrain datasets of the Mouse Phenome Database. We then focused on the drug-, dose-, and strain-specific responses to beta-stimulation and beta-blockade of a selection of traits including heart rate, systolic blood pressure, cardiac weight indices, ECG parameters and body weight. Because of the wealth of data accumulated, we applied integrative analyses such as comprehensive bi-clustering to investigate the structure of the response across the different phenotypes, strains and experimental conditions. Information extracted from these analyses is discussed in terms of novelty and biological implications. For example, we observe that traits related to ventricular weight in most strains respond only to the high dose of isoproterenol, while heart rate and atrial weight are already affected by the low dose. Finally, we observe little concordance between strain similarity based on the phenotypes and genotypic relatedness computed from genomic SNP profiles. This indicates that cardiovascular phenotypes are unlikely to segregate according to global phylogeny, but rather be governed by smaller, local differences in the genetic architecture of the various strains.
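The abstract mentions comprehensive bi-clustering of the response data; the authors' exact procedure is not given here, so the following Python sketch only illustrates the general idea using scikit-learn's SpectralBiclustering on a placeholder strains-by-traits matrix (23 strains, 27 traits, synthetic nonnegative values).

import numpy as np
from sklearn.cluster import SpectralBiclustering

# Placeholder data: rows = mouse strains, columns = traits under a given drug/dose condition.
rng = np.random.default_rng(0)
response_matrix = rng.random((23, 27))

model = SpectralBiclustering(n_clusters=(4, 3), random_state=0)
model.fit(response_matrix)

strain_groups = model.row_labels_      # cluster assignment per strain
trait_groups = model.column_labels_    # cluster assignment per trait

Such joint row/column clustering groups strains that respond similarly together with the traits that drive that similarity, which is the structure the abstract sets out to investigate.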
Abstract:
Nutrigenetics and personalised nutrition are components of the concept that in the future genotyping will be used as a means of defining dietary recommendations to suit the individual. Over the last two decades there has been an explosion of research in this area, with often conflicting findings reported in the literature. Reviews of the literature in the area of apoE genotype and cardiovascular health, apoA5 genotype and postprandial lipaemia and perilipin and adiposity are used to demonstrate the complexities of genotype-phenotype associations and the aetiology of apparent between-study inconsistencies in the significance and size of effects. Furthermore, genetic research currently often takes a very reductionist approach, examining the interactions between individual genotypes and individual disease biomarkers and how they are modified by isolated dietary components or foods. Each individual possesses potentially hundreds of 'at-risk' gene variants and consumes a highly-complex diet. In order for nutrigenetics to become a useful public health tool, there is a great need to use mathematical and bioinformatic tools to develop strategies to examine the combined impact of multiple gene variants on a range of health outcomes and establish how these associations can be modified using combined dietary strategies.
Abstract:
Relating the measurable, large-scale effects of anaesthetic agents to their molecular and cellular targets of action is necessary to better understand the principles by which they affect behavior, as well as to enable the design and evaluation of more effective agents and better clinical monitoring of existing and future drugs. Volatile and intravenous general anaesthetic agents (GAs) are now known to exert their effects on a variety of protein targets, the most important of which seem to be the neuronal ion channels. It is hence unlikely that anaesthetic effect is the result of a unitary mechanism at the single cell level. However, by altering the behavior of ion channels GAs are believed to change the overall dynamics of distributed networks of neurons. This disruption of regular network activity can be hypothesized to cause the hypnotic and analgesic effects of GAs and may well present more stereotypical characteristics than its underlying microscopic causes. Nevertheless, there have been surprisingly few theories that have attempted to integrate, in a quantitative manner, the empirically well documented alterations in neuronal ion channel behavior with the corresponding macroscopic effects. Here we outline one such approach, and show that a range of well documented effects of anaesthetics on the electroencephalogram (EEG) may be putatively accounted for. In particular we parameterize, on the basis of detailed empirical data, the effects of halogenated volatile ethers (a clinically widely used class of general anaesthetic agent). The resulting model is able to provisionally account for a range of anaesthetically induced EEG phenomena that include EEG slowing, biphasic changes in EEG power, and the dose-dependent appearance of anomalous ictal activity, as well as providing a basis for novel approaches to monitoring brain function in both health and disease.
Abstract:
This paper tackles the problem of aggregate TFP measurement using stochastic frontier analysis (SFA). Data from Penn World Table 6.1 are used to estimate a world production frontier for a sample of 75 countries over a long period (1950-2000), taking advantage of the model offered by Battese and Coelli (1992). We also apply the decomposition of TFP suggested by Bauer (1990) and Kumbhakar (2000) to a smaller sample of 36 countries over the period 1970-2000 in order to evaluate the effects of changes in efficiency (technical and allocative), scale effects and technical change. This allows us to analyze the role of productivity and its components in the economic growth of developed and developing nations, in addition to the importance of factor accumulation. Although not much explored in the study of economic growth, frontier techniques seem to be of particular interest for that purpose, since the separation of efficiency effects and technical change has a direct interpretation in terms of the catch-up debate. The estimated technical efficiency scores reveal the efficiency of nations in the production of non-tradable goods, since the GDP series used is PPP-adjusted. We also provide a second set of efficiency scores, corrected in order to reveal efficiency in the production of tradable goods, and rank them. When compared to the rankings of productivity indexes offered by the non-frontier studies of Hall and Jones (1996) and Islam (1995), our ranking shows a somewhat more intuitive order of countries. Rankings of the technical change and scale effects components of TFP change are also very intuitive. We also show that productivity is responsible for virtually all the differences in performance between developed and developing countries in terms of rates of growth of income per worker. More importantly, we find that changes in allocative efficiency play a crucial role in explaining differences in the productivity of developed and developing nations, even larger than the one played by the technology gap.
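As a schematic point of reference for the decomposition attributed to Bauer (1990) and Kumbhakar (2000), and using generic notation rather than the paper's own, TFP growth under SFA is typically split as:

\[ \dot{\mathrm{TFP}} \;=\; \underbrace{TC}_{\text{technical change}} \;+\; \underbrace{\Delta TE}_{\text{technical efficiency change}} \;+\; \underbrace{(RTS-1)\sum_j \lambda_j \dot{x}_j}_{\text{scale effect}} \;+\; \underbrace{\sum_j(\lambda_j - s_j)\dot{x}_j}_{\text{allocative effect}}, \]

where RTS is the sum of the output elasticities of the inputs (returns to scale), λ_j the elasticity of input j normalized by RTS, s_j its cost share, and ẋ_j its growth rate. This is the separation of technical change, efficiency change, scale effects and allocative effects referred to in the abstract.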
The evolution of total factor productivity in the Brazilian economy: an analysis of the post-Real period
Abstract:
This study applies a stochastic production frontier model to the manufacturing and construction industries, as well as to trade and services in Brazil, in order to identify the sources of growth of the main sectors of activity of the Brazilian economy, namely: physical capital accumulation, labour employment, and total factor productivity (TFP). Following Kumbhakar (2000), the evolution of TFP is decomposed into technical progress, changes in technical efficiency, changes in allocative efficiency, and scale effects. The study draws on data from 1996 to 2000 from the main annual firm-level surveys conducted by IBGE: PAIC, PIA, PAC and PAS.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Electrospinning has become a widely implemented technique for the generation of nonwoven mats that are useful in tissue engineering and filter applications. The overriding factor that has contributed to the popularity of this method is the ease with which fibers with submicron diameters can be produced. Fibers on that size scale are comparable to the protein filaments that are observed in the extracellular matrix. The apparatus and procedures for conducting electrospinning experiments are ostensibly simple. While it is rarely reported in the literature on this topic, any experience with this method of fiber spinning reveals substantial ambiguities in how the process can be controlled to generate reproducible results. The simplicity of the procedure belies the complexity of the physical processes that determine the electrospinning process dynamics. In this article, three process domains, together with the physical domain of charge interaction, are identified as important in electrospinning: (a) creation of charge carriers, (b) charge transport, and (c) residual charge. The initial event that enables electrospinning is the generation of a region of excess charge in the fluid that is to be electrospun. The electrostatic forces that develop on this region of charged fluid in the presence of a high potential result in the ejection of a fluid jet that solidifies into the resulting fiber. The transport of charge from the charged solution to the grounded collection device produces some of the current that is observed; that transport can occur through the fluid jet and through the atmosphere surrounding the electrospinning apparatus. Charges created in the fluid that are not dissipated remain in the solidified fiber as residual charges. The physics of each of these domains in the electrospinning process is summarized in terms of the current understanding, and possible sources of ambiguity in the implementation of this technique are indicated. Directions for future research to further articulate the behavior of the electrospinning process are suggested. (C) 2012 American Institute of Physics. [doi: 10.1063/1.3682464]
Abstract:
This thesis is based on three main studies, all dealing with structure-property investigation of semicrystalline polyolefin-based composites. Low-density poly(ethylene) (LDPE) and isotactic poly(propylene) (iPP) were chosen as components of the composite materials and were investigated either separately (as homopolymers), in blend systems with the composition LDPE/iPP 80/20, or as a matrix filled with layered silicate (montmorillonite). The beneficial influence of adding an ethylene-co-propylene polymer of amorphous nature to the low-density poly(ethylene)/isotactic poly(propylene) (80/20) blend is demonstrated. This effect is expressed by the major improvement of the mechanical properties of the ternary blends, as examined at a macroscopic size scale by means of tensile measurements. The structural investigation also reveals a clear dependence of the morphology on the addition of the ethylene-co-propylene polymer. Both the nature and the content of the ethylene-co-propylene polymer affect structure and properties. It is further demonstrated that the extent of improvement in mechanical properties is related to the molecular details of the compatibilizer: a combination of high molecular weight and high ethylene content is appropriate for the studied system, in which poly(ethylene) plays the role of the matrix. A new way to characterize semicrystalline systems by means of Brillouin spectroscopy is presented in this study. By this method, based on inelastic light scattering, we were able to measure the high-frequency elastic constant (c11) of the two microphases in the case where the spherulite size is larger than the wavelength of the probing phonon. In this case, the sample film is inhomogeneous over the relevant length scales and there is access to the transverse phonon in the crystalline phase, yielding the elastic constant c44 as well. Isotactic poly(propylene) is well suited for this type of investigation since its morphology can be tailored through different thermal treatments from the melt. Two distinctly different types of films were used: quenched (low crystallinity) and annealed (high crystallinity). The Brillouin scattering data are discussed with respect to the spherulite size, lamellar thickness, long period and degree of crystallinity, and are well documented by AFM images. The structure and the properties of an isotactic poly(propylene) matrix modified by an inorganic layered silicate, montmorillonite, are discussed with respect to the clay content. Isotactic poly(propylene)-graft-maleic anhydride was used as compatibilizer. It is clearly demonstrated that the property enhancement is largely due to the ability of the layered silicate to exfoliate. The intimate dispersion of the nanometer-thick silicate layers results from a delicate balance of the content ratio between the isotactic poly(propylene)-graft-maleic anhydride compatibilizer and the inorganic clay.
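For orientation only (standard Brillouin light-scattering relations, assumed here rather than quoted from the thesis), the sound velocity follows from the measured frequency shift and yields the elastic constants through the mass density:

\[ f_B = \frac{2 n v}{\lambda_0}\,\sin\!\frac{\theta}{2}, \qquad c_{11} = \rho\, v_L^{2}, \qquad c_{44} = \rho\, v_T^{2}, \]

where f_B is the Brillouin frequency shift, n the refractive index, λ0 the laser wavelength in vacuum, θ the scattering angle, ρ the density, and v_L, v_T the longitudinal and transverse sound velocities, the latter being accessible in the crystalline phase as described above.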