882 results for analysis to synthesis
Abstract:
A number of urban land-surface models have been developed in recent years to satisfy the growing requirements for modelling urban weather and climate interactions and for prediction. These models vary considerably in their complexity and the processes that they represent. Although the models have been evaluated, the observational datasets have typically been of short duration and so are not suitable for assessing performance over the seasonal cycle. The First International Urban Land-Surface Model Comparison used an observational dataset that spanned a period greater than a year, which enables an analysis over the seasonal cycle, whilst the variety of models that took part in the comparison allows the analysis to cover the full range of model complexity. The results show that, in general, urban models do capture the seasonal cycle for each of the surface fluxes, but have larger errors in the summer months than in the winter. The net all-wave radiation has the smallest errors at all times of the year, but with a negative bias. The latent heat flux and the net storage heat flux are also underestimated, whereas the sensible heat flux generally has a positive bias throughout the seasonal cycle. A representation of vegetation is a necessary, but not sufficient, condition for modelling the latent heat flux and associated sensible heat flux at all times of the year. Models that include a temporal variation in anthropogenic heat flux show some increased skill in the sensible heat flux at night during the winter, although their daytime values are consistently overestimated at all times of the year. Models that use the net all-wave radiation to determine the net storage heat flux have the best agreement with observed values of this flux during the daytime in summer, but perform worse during the winter months. The latter could result from a bias towards summer periods in the observational datasets used to derive the relations with net all-wave radiation. Apart from these models, all of the other model categories considered in the analysis result in a mean net storage heat flux that is close to zero throughout the seasonal cycle, which is not seen in the observations. Models with a simple treatment of the physical processes generally perform at least as well as models with greater complexity.
Abstract:
This chapter applies rigorous statistical analysis to existing datasets of medieval exchange rates quoted in merchants’ letters sent from Barcelona, Bruges and Venice between 1380 and 1410, which survive in the archive of Francesco di Marco Datini of Prato. First, it tests the exchange-rate series for stationarity. Second, it uses regression analysis to examine the seasonality of exchange rates at the three financial centres and compares the results against contemporary descriptions by the merchant Giovanni di Antonio da Uzzano. Third, it tests for structural breaks in the exchange-rate series.
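The three steps lend themselves to a compact illustration. Below is a minimal sketch on synthetic monthly data (the Datini series itself is not reproduced here): an augmented Dickey-Fuller stationarity test, a monthly-dummy regression for seasonality, and a simple Chow-type test at an arbitrary candidate break point.

```python
# Minimal sketch of the three steps, on synthetic data (not the Datini series).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
months = np.tile(np.arange(1, 13), 10)                 # 120 monthly observations
rates = 21.0 + 0.2 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.3, 120)

# Step 1: stationarity (augmented Dickey-Fuller test).
adf_stat, pvalue = adfuller(rates)[:2]
print(f"ADF statistic {adf_stat:.2f}, p-value {pvalue:.3f}")

# Step 2: seasonality via regression on monthly dummies.
dummies = pd.get_dummies(months, drop_first=True).astype(float)
seasonal = sm.OLS(rates, sm.add_constant(dummies)).fit()
print(seasonal.params.head())

# Step 3: structural break (Chow test at an arbitrary candidate point b).
b, n = 60, len(rates)
rss = lambda y: sm.OLS(y, np.ones(len(y))).fit().ssr   # mean-only model
f_chow = (rss(rates) - rss(rates[:b]) - rss(rates[b:])) / (
    (rss(rates[:b]) + rss(rates[b:])) / (n - 2))
print(f"Chow F-statistic {f_chow:.2f}")
```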
Abstract:
Global syntheses of palaeoenvironmental data are required to test climate models under conditions different from the present. Data sets for this purpose contain data from spatially extensive networks of sites. The data are either directly comparable to model output or readily interpretable in terms of modelled climate variables. Data sets must contain sufficient documentation to distinguish between raw (primary) and interpreted (secondary, tertiary) data, to evaluate the assumptions involved in interpretation of the data, to exercise quality control, and to select data appropriate for specific goals. Four data bases for the Late Quaternary, documenting changes in lake levels since 30 kyr BP (the Global Lake Status Data Base), vegetation distribution at 18 kyr and 6 kyr BP (BIOME 6000), aeolian accumulation rates during the last glacial-interglacial cycle (DIRTMAP), and tropical terrestrial climates at the Last Glacial Maximum (the LGM Tropical Terrestrial Data Synthesis) are summarised. Each has been used to evaluate simulations of Last Glacial Maximum (LGM: 21 calendar kyr BP) and/or mid-Holocene (6 cal. kyr BP) environments. Comparisons have demonstrated that changes in radiative forcing and orography due to orbital and ice-sheet variations explain the first-order, broad-scale (in space and time) features of global climate change since the LGM. However, atmospheric models forced by 6 cal. kyr BP orbital changes with unchanged surface conditions fail to capture quantitative aspects of the observed climate, including the greatly increased magnitude and northward shift of the African monsoon during the early to mid-Holocene. Similarly, comparisons with palaeoenvironmental datasets show that atmospheric models have underestimated the magnitude of cooling and drying of much of the land surface at the LGM. The inclusion of feedbacks due to changes in ocean- and land-surface conditions at both times, and atmospheric dust loading at the LGM, appears to be required in order to produce a better simulation of these past climates. The development of Earth system models incorporating the dynamic interactions among ocean, atmosphere, and vegetation is therefore mandated by Quaternary science results as well as climatological principles. For greatest scientific benefit, this development must be paralleled by continued advances in palaeodata analysis and synthesis, which in turn will help to define questions that call for new focused data collection efforts.
Abstract:
Various studies have indicated a relationship between enteric methane (CH4) production and milk fatty acid (FA) profiles of dairy cattle. However, the number of studies investigating such a relationship is limited and the direct relationships reported are mainly obtained by variation in CH4 production and milk FA concentration induced by dietary lipid supplements. The aim of this study was to perform a meta-analysis to quantify relationships between CH4 yield (per unit of feed and unit of milk) and milk FA profile in dairy cattle and to develop equations to predict CH4 yield based on milk FA profile of cows fed a wide variety of diets. Data from 8 experiments encompassing 30 different dietary treatments and 146 observations were included. Yield of CH4 measured in these experiments was 21.5 ± 2.46 g/kg of dry matter intake (DMI) and 13.9 ± 2.30 g/kg of fat- and protein-corrected milk (FPCM). Correlation coefficients were chosen as effect size of the relationship between CH4 yield and individual milk FA concentration (g/100 g of FA). Average true correlation coefficients were estimated by a random-effects model. Milk FA concentrations of C6:0, C8:0, C10:0, C16:0, and C16:0-iso were significantly or tended to be positively related to CH4 yield per unit of feed. Concentrations of trans-6+7+8+9 C18:1, trans-10+11 C18:1, cis-11 C18:1, cis-12 C18:1, cis-13 C18:1, trans-16+cis-14 C18:1, and cis-9,12 C18:2 in milk fat were significantly or tended to be negatively related to CH4 yield per unit of feed. Milk FA concentrations of C10:0, C12:0, C14:0-iso, C14:0, cis-9 C14:1, C15:0, and C16:0 were significantly or tended to be positively related to CH4 yield per unit of milk. Concentrations of C4:0, C18:0, trans-10+11 C18:1, cis-9 C18:1, cis-11 C18:1, and cis-9,12 C18:2 in milk fat were significantly or tended to be negatively related to CH4 yield per unit of milk. Mixed model multiple regression and a stepwise selection procedure of milk FA based on the Bayesian information criterion to predict CH4 yield with milk FA as input (g/100 g of FA) resulted in the following prediction equations: CH4 (g/kg of DMI) = 23.39 + 9.74 × C16:0-iso – 1.06 × trans-10+11 C18:1 – 1.75 × cis-9,12 C18:2 (R² = 0.54), and CH4 (g/kg of FPCM) = 21.13 – 1.38 × C4:0 + 8.53 × C16:0-iso – 0.22 × cis-9 C18:1 – 0.59 × trans-10+11 C18:1 (R² = 0.47). This indicated that milk FA profile has a moderate potential for predicting CH4 yield per unit of feed and a slightly lower potential for predicting CH4 yield per unit of milk. Key words: methane, milk fatty acid profile, meta-analysis, dairy cattle
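Since the abstract states the two fitted equations in full, they can be transcribed directly as functions; the coefficients below are the reported ones, while the example inputs are invented for illustration only.

```python
# The two reported prediction equations, transcribed as functions.
# Inputs are milk fatty acid concentrations in g/100 g of total FA.

def ch4_per_kg_dmi(c16_0_iso, trans10_11_c18_1, cis9_12_c18_2):
    """CH4 yield in g/kg of DMI (reported R^2 = 0.54)."""
    return 23.39 + 9.74 * c16_0_iso - 1.06 * trans10_11_c18_1 - 1.75 * cis9_12_c18_2

def ch4_per_kg_fpcm(c4_0, c16_0_iso, cis9_c18_1, trans10_11_c18_1):
    """CH4 yield in g/kg of FPCM (reported R^2 = 0.47)."""
    return (21.13 - 1.38 * c4_0 + 8.53 * c16_0_iso
            - 0.22 * cis9_c18_1 - 0.59 * trans10_11_c18_1)

# Illustrative (made-up) FA values:
print(ch4_per_kg_dmi(0.25, 1.2, 1.1))          # ~22.6 g CH4 / kg DMI
print(ch4_per_kg_fpcm(3.5, 0.25, 20.0, 1.2))   # ~13.3 g CH4 / kg FPCM
```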
Abstract:
Replacement and upgrading of assets in the electricity network requires financial investment by the distribution and transmission utilities. It also has an emissions impact, due to the carbon embodied in the materials used to manufacture network assets. This paper uses investment and asset data for the GB system for 2015-2023 to assess the suitability of proxies based on peak demand and network investment data for calculating the carbon impacts of network investments. The proxies are calculated on a regional basis and applied to estimate the embodied carbon associated with current network assets by DNO (Distribution Network Operator) region. They are also applied to peak demand data across the 2015-2023 period to estimate the levels of embodied carbon expected to be associated with network investment during this period. The suitability of these proxies in different contexts is then discussed, along with an initial scenario analysis to calculate the impact of avoiding or deferring network investments through distributed generation projects. The proxies were found to be effective in estimating the total embodied carbon of electricity system investment, allowing investment strategies in different regions of the GB network to be compared.
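The proxy idea reduces to a simple regional ratio. The sketch below uses placeholder figures (not GB data) to show the shape of the calculation: a factor in tonnes of embodied CO2e per MW of peak demand, applied to expected demand growth.

```python
# Sketch of the proxy approach, with placeholder figures (not GB data):
# a regional factor in tCO2e per MW of peak demand, applied to demand
# growth to estimate the embodied carbon of the implied investment.
asset_embodied_tco2e = {"DNO_A": 1.2e6, "DNO_B": 0.8e6}   # embodied carbon of current assets
peak_demand_mw = {"DNO_A": 6000.0, "DNO_B": 4500.0}       # regional peak demand

proxy = {region: asset_embodied_tco2e[region] / peak_demand_mw[region]
         for region in asset_embodied_tco2e}               # tCO2e per MW

extra_peak_mw = 150.0    # e.g. expected peak-demand growth in DNO_A
estimate = proxy["DNO_A"] * extra_peak_mw
print(f"{estimate:.0f} tCO2e embodied in the implied investment (placeholder)")
```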
Abstract:
Obesity prevalence is increasing. Managing this condition requires a detailed analysis of global risk factors in order to develop personalised advice. This study aims to identify current dietary patterns and habits in a Spanish population interested in personalised nutrition and to investigate associations with weight status. Self-reported dietary and anthropometric data from the Spanish participants in the Food4Me study were used in a multidimensional exploratory analysis to define specific dietary profiles. Two opposing factors were obtained according to food-group intake: Factor 1, characterised by more frequent consumption of foods traditionally considered unhealthy, and Factor 2, in which consumption of “Mediterranean diet” foods was prevalent. Factor 1 showed a direct relationship with BMI (β = 0.226; r² = 0.259; p < 0.001), while the association with Factor 2 was inverse (β = −0.037; r² = 0.230; p = 0.348). Four categories were defined (Prudent, Healthy, Western, and Compensatory) by classifying the sample into higher or lower adherence to each factor and combining the possibilities. The Western and Compensatory dietary patterns, which were characterised by consumption of energy-dense foods, showed positive associations with overweight prevalence. Further analysis showed that prevention of overweight must focus on limiting the intake of known deleterious foods rather than exclusively promoting healthy products.
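A rough sketch of the pipeline described, assuming synthetic intake-frequency data in place of the Food4Me questionnaire: extract two latent factors from food-group frequencies, then regress BMI on each factor score.

```python
# Hedged sketch: factor extraction + factor-score vs BMI regression,
# on synthetic data standing in for the Food4Me questionnaire.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.poisson(3.0, size=(500, 12)).astype(float)   # weekly intake frequencies (synthetic)
bmi = 25 + 0.3 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 2, 500)

scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(X)
for i in range(2):
    beta = LinearRegression().fit(scores[:, [i]], bmi).coef_[0]
    print(f"Factor {i + 1}: beta against BMI = {beta:+.3f}")
```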
Abstract:
The South American (SA) rainy season is studied through the application of a multivariate Empirical Orthogonal Function (EOF) analysis to an SA gridded precipitation analysis and to the components of the Lorenz Energy Cycle (LEC) derived from the National Centers for Environmental Prediction (NCEP) reanalysis. The EOF analysis identifies patterns of the rainy season and the associated mechanisms in terms of their energetics. The first combined EOF represents the northwest-southeast dipole of precipitation between South and Central America, the South American Monsoon System (SAMS). The second combined EOF represents a synoptic pattern associated with the South Atlantic convergence zone (SACZ), and the third EOF is in spatial quadrature with the second. The phase relationship of the EOFs, as computed from the principal components (PCs), suggests a nonlinear transition from the SACZ to the fully developed SAMS mode by November, and between the two components describing the SACZ by September-October (the rainy season onset). According to the LEC, the first mode is dominated by the eddy generation term at its maximum, the second by both baroclinic and eddy generation terms, and the third by barotropic instability prior to its connection to the second mode by September-October. The predominance of the different LEC components at each phase of the SAMS can be used as an indicator of the onset of the rainy season in terms of physical processes, while the outstanding spectral peaks in the time dependence of the EOFs at the intraseasonal time scale could be used for monitoring purposes.
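For readers unfamiliar with the method: a combined EOF analysis amounts to concatenating the normalised variables (here precipitation plus the LEC terms) along the spatial dimension and taking an SVD of the resulting anomaly matrix. The sketch below shows the core computation on a synthetic single-variable field.

```python
# Minimal EOF sketch, assuming a (time x grid) anomaly matrix (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
field = rng.normal(size=(360, 500))          # e.g. monthly precipitation on 500 grid cells
anom = field - field.mean(axis=0)            # remove the time mean

# SVD: rows of vt are the EOF spatial patterns, u * s the principal components.
u, s, vt = np.linalg.svd(anom, full_matrices=False)
pcs = u * s                                  # PC time series for each mode
explained = s**2 / np.sum(s**2)              # fraction of variance per mode
print(explained[:3])
```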
Abstract:
Universal properties of the Coulomb interaction energy apply to all many-electron systems. Bounds on the exchange-correlation energy, in particular, are important for the construction of improved density functionals. Here we investigate one such universal property, the Lieb-Oxford lower bound, for ionic and molecular systems. In recent work [J Chem Phys 127, 054106 (2007)], we observed that for atoms and electron liquids this bound may be substantially tightened. Calculations for a few ions and molecules suggested the same tendency, but were not conclusive due to the small number of systems considered. Here we extend that analysis to many different families of ions and molecules, and find that for these, too, the bound can be empirically tightened by a similar margin as for atoms and electron liquids. Tightening the Lieb-Oxford bound will have consequences for the performance of various approximate exchange-correlation functionals.
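For reference, the bound in question can be written as follows; the constant 2.273 is the rigorous Lieb-Oxford value, and the empirical question studied here is how far it can be lowered for real systems.

```latex
% Lieb-Oxford lower bound on the exchange-correlation energy,
% with n(r) the electron density.
\begin{equation}
  E_{xc}[n] \;\geq\; -\,C \int n^{4/3}(\mathbf{r}) \,\mathrm{d}^3 r,
  \qquad C \leq 2.273 .
\end{equation}
```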
Abstract:
This paper describes the development and evaluation of a sequential injection method to automate the determination of methyl parathion by square-wave adsorptive cathodic stripping voltammetry, exploiting the concept of monosegmented flow analysis to perform in-line sample conditioning and standard addition. The accumulation and stripping steps are carried out in the sample medium conditioned with 40 mmol L⁻¹ Britton-Robinson buffer (pH 10) in 0.25 mol L⁻¹ NaNO3. The homogenised mixture is injected at a flow rate of 10 µL s⁻¹ toward the flow cell, which is adapted to the capillary of a hanging drop mercury electrode. After a suitable deposition time, the flow is stopped and the potential is scanned from -0.3 to -1.0 V versus Ag/AgCl at a frequency of 250 Hz and a pulse height of 25 mV. The linear dynamic range is observed for methyl parathion concentrations between 0.010 and 0.50 mg L⁻¹, with detection and quantification limits of 2 and 7 µg L⁻¹, respectively. The sampling throughput is 25 h⁻¹ if the in-line standard addition and sample conditioning protocols are followed, but this frequency can be increased up to 61 h⁻¹ if the sample is conditioned off-line and quantified using an external calibration curve. The method was applied to the determination of methyl parathion in spiked water samples, and the accuracy was evaluated either by comparison with high-performance liquid chromatography with UV detection or by recovery percentages. Although no evidence of statistically significant differences was observed between the expected and obtained concentrations, because the method is susceptible to interference by other pesticides (e.g., parathion, dichlorvos) and natural organic matter (e.g., fulvic and humic acids), isolation of the analyte may be required for more complex sample matrices.
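The external-calibration route mentioned at the end can be illustrated in a few lines. The standards, currents, and blank noise below are invented, and the 3σ/slope and 10σ/slope conventions for the detection and quantification limits are the usual IUPAC-style estimates, not necessarily the paper's exact procedure.

```python
# Hedged sketch of an external calibration curve with LOD/LOQ estimates.
import numpy as np

std_conc = np.array([0.01, 0.05, 0.10, 0.25, 0.50])   # standards, mg/L (invented)
peak_i = np.array([0.8, 3.9, 8.1, 19.7, 40.2])        # stripping peak currents, nA (invented)

slope, intercept = np.polyfit(std_conc, peak_i, 1)
sigma_blank = 0.05                                     # SD of blank signal, nA (invented)
lod = 3 * sigma_blank / slope                          # 3-sigma/slope convention
loq = 10 * sigma_blank / slope                         # 10-sigma/slope convention

sample_signal = 12.3                                   # nA, hypothetical sample
sample_conc = (sample_signal - intercept) / slope      # invert the calibration line
print(f"LOD {lod*1000:.1f} ug/L, LOQ {loq*1000:.1f} ug/L, sample {sample_conc:.3f} mg/L")
```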
Abstract:
Citrus sudden death (CSD) is a disease of unknown etiology that greatly affects sweet oranges grafted on Rangpur lime rootstock, the most important rootstock in Brazilian citriculture. We performed a proteomic analysis to generate information related to this plant-pathogen interaction. Protein profiles from healthy, CSD-affected and CSD-tolerant stem bark were generated using two-dimensional gel electrophoresis. The protein spots were well distributed over a pI range of 3.26 to 9.97 and a molecular weight (MW) range from 7.1 to 120 kDa. The patterns of expressed proteins on 2-DE gels made it possible to distinguish healthy bark from CSD-affected bark. Protein spots with MW around 30 kDa and pI values ranging from 4.5 to 5.2 were down-regulated in the CSD-affected rootstock bark; this set of protein spots was identified as chitinases. Another set of proteins, ranging in pI from 6.1 to 9.6 with an MW of about 20 kDa, was also suppressed in CSD-affected rootstock bark; these were identified as miraculin-like proteins, potential trypsin inhibitors. Down-regulation of chitinases and proteinase inhibitors in CSD-affected plants is relevant since chitinases are well-known pathogenesis-related proteins whose activity against plant pathogens is widely accepted.
Abstract:
The development of large discount retailers, or big-boxes as they are sometimes called, is often subject to heated debate, and their entry into a market is greeted with either great enthusiasm or dread. For instance, the world’s largest retailer, Wal-Mart (Forbes 2014), has a number of anti- and pro-groups dedicated to it, and a Wal-Mart entry tends to be met with protests and campaigns (Decamme 2013) but also welcomed by, for instance, consumers (Davis & DeBonis 2013). In Sweden too, the entry of a big-box is a hot topic, and before IKEA’s opening in Borlänge in 2013, the first in Sweden in more than five years, great expectations were mixed with worry (Västerbottens-Kuriren 2011).

The presence of large-scale discount retailers is not, however, a novel phenomenon but part of a long-term change in retailing that has taken place globally over the past couple of decades (Taylor & Smalling 2005). As noted by Dawson (2006), the trend in Europe has over the past few decades gone towards an increasing concentration of large firms along with a decrease in smaller firms. This trend is also detectable in the Swedish retail industry. Over the past decade, the retailing industry in Sweden has grown by around 190 billion SEK, its share of GDP has risen from 2.7% to 2.9%, and the number of employees has increased from 200,000 to 250,000 (HUI 2013). This growth, however, has not been distributed evenly; rather, it has been oriented mainly towards out-of-town retail clusters. Parallel to this development, the number of large retailers has risen at the expense of the market shares of smaller independent firms (Rämme et al 2010). The presence of large-scale retailers is thus simply part of a changing retail landscape.

The effects of this development, in which large-scale retailing agents relocate shopping to out-of-town shopping areas, have been heavily debated. On the one hand, big-boxes are accused of displacing independent small retail businesses in city centers and residential areas, resulting, to some extent, in reduced employment opportunities and less availability for consumers, especially the elderly (Ljungberg et al 2006). In addition, as access to shopping now tends to require some sort of motorized vehicle, environmental aspects have entered the discussion. Ultimately these concerns have resulted in calls for regulations against this development (Olsson 2010). On the other hand, proponents of the new shopping landscape argue that this evolution implies productivity gains, the benefits of lower prices, and an increased variety of products (Maican & Orth 2012). Moreover, it is argued that it leads to, for instance, better services (such as longer opening hours) and a creative-destruction transformation pressure on retailers, which brings about a renewal of city-center retail and services, increasing their attractiveness (Bergström 2010). The belief in the benefits of a big-box entry can be exemplified by the attractiveness of IKEA and by the fact that municipalities are prepared to commit to expenses amounting to hundreds of millions in order to attract this big-box. Borlänge municipality, for instance, agreed to expenses of about 350 million SEK in order to secure the entry of IKEA, which opened in 2013 (Blomgren 2009).

Against this backdrop, the overall effects of large discount retailers become important: are the economic benefits enough to warrant subsidies, or are there, on the contrary, compelling grounds for regulations against these types of establishments? In other words, how is overall retail affected in a region that a store like IKEA enters? And how are local retail firms affected? In order to answer these questions, the purpose of this thesis is to study how the entry of a big-box retailer affects the entry region. The object of this study is IKEA, one of the world’s largest retailers, with 345 stores, active in over 40 countries and with profits of about 3.3 billion (IKEA 2013; IKEA 2014). By studying the effects of IKEA entry, both at an aggregated level and at the firm level, this thesis intends to find indications of how large discount retail establishments in general can be expected to affect economic development both in a region overall and at the local firm level, something which is of interest to policymakers as well as the retailing industry in general.

The first paper examines the effects of IKEA on retail revenues and employment in the municipalities that IKEA chose to enter between 2000 and 2011: Gothenburg, Haparanda, Kalmar and Karlstad. By means of a matching method (a schematic version is sketched below) we first identify non-entry municipalities that have a similar probability of IKEA entry as the true entry municipalities. Then, using these non-entry municipalities as a control group, the causal effects of IKEA entry can be estimated using a treatment-control approach. We also extend the analysis to examine the spatial impact of IKEA by estimating the effects on retail in neighboring municipalities. It is found that a new IKEA store increases revenues in durable goods trade by 20% in the entry municipality and the number of employees by 17%. Only small, and in most cases statistically insignificant, negative effects were found in neighboring municipalities. There thus appears to be a positive net effect on durables retail sales and employment in the entry municipality. However, the analysis is based on data at an aggregated municipality level, and it therefore remains unclear if and how the effects vary within the entry municipalities. In addition, the data used in the first study include the sales and employment of IKEA itself, which could account for the majority of the increases in employment and retail, so the potential spillover effects on incumbent retailers in the entry municipalities cannot be discerned in the first study.

To examine the effects of IKEA entry on incumbent retail firms, the second paper analyses how IKEA entry affects the revenues and employment of local retail firms in three municipalities, Haparanda, Kalmar and Karlstad, which experienced entry by IKEA between 2000 and 2010. In this second study we exclude Gothenburg, because big-box entry appears to have weaker effects in metropolitan areas (as indicated by Artz & Stone 2006); excluding it reduces the geographical heterogeneity in our study. We obtain control municipalities that are as similar as possible to the three entry municipalities using the same method as in the previous study, but with a slightly different set of variables in the selection equation. Using similar retail firms in the control municipalities as our comparison group, we estimate the impact of IKEA entry on revenues and employment for retail firms located at varying distances from the IKEA entry site.

The results imply that entry by IKEA increases revenues in incumbent retail firms by, on average, 11% in the entry municipalities. We do not find any significant impact on retail revenues in the city centers of the entry municipalities. However, retail firms within 1 km of the IKEA experience increases in revenues of about 26%, which indicates large spillover effects in the area near the entry site. As expected, this impact decreases as the buffer zone is expanded: firms located within 0-2 km experience a 14% increase and firms within 2-5 km an increase of 10%. We do not find any significant impacts on retail employment.
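The schematic version of the matching step referenced above, on placeholder data: estimate a propensity score for entry, match each entry municipality to its nearest non-entry neighbour on that score, and average the outcome gaps. This illustrates the technique only; it is not the thesis's specification or data.

```python
# Hedged sketch: propensity-score matching + treatment-control comparison.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 290                                    # roughly the number of Swedish municipalities
X = rng.normal(size=(n, 4))                # population, income, retail stock, ... (synthetic)
entry = np.zeros(n, dtype=bool)
entry[rng.choice(n, size=4, replace=False)] = True          # four entry municipalities
growth = 0.05 + 0.20 * entry + rng.normal(0, 0.05, n)       # retail revenue growth

# Propensity score: probability of entry given observables.
pscore = LogisticRegression().fit(X, entry).predict_proba(X)[:, 1]

# Nearest-neighbour matching on the propensity score, then average the gaps.
controls = np.flatnonzero(~entry)
effects = []
for i in np.flatnonzero(entry):
    j = controls[np.argmin(np.abs(pscore[controls] - pscore[i]))]
    effects.append(growth[i] - growth[j])
print(f"Estimated entry effect on growth: {np.mean(effects):.3f}")
```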
Abstract:
In all applications of clone detection it is important to have precise and efficient clone identification algorithms. This paper proposes and outlines a new algorithm, KClone, for clone detection that incorporates a novel combination of lexical and local dependence analysis to achieve precision while retaining speed. The paper also reports the initial results of a case study using an implementation of KClone with which we have been experimenting. The results indicate the ability of KClone to find type-1, type-2, and type-3 clones compared to token-based and PDG-based techniques. The paper also reports results of an initial empirical study of the performance of KClone compared to CCFinderX.
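KClone's lexical-plus-dependence combination is not spelled out in the abstract, so no attempt is made to reproduce it here. For context, the sketch below shows the kind of token-based baseline (in the spirit of CCFinderX) that it is compared against: identifiers are normalised so that renamed (type-2) clones still hash to the same k-token window.

```python
# Generic token-based clone detection baseline (not KClone itself).
import re
from collections import defaultdict

def token_clones(source_lines, k=8):
    """Report groups of line numbers that start identical k-token windows."""
    tokens, lines = [], []
    for lineno, line in enumerate(source_lines, 1):
        for tok in re.findall(r"[A-Za-z_]\w*|\S", line):
            # Normalise identifiers so type-2 (renamed) clones still match.
            tokens.append("ID" if re.match(r"[A-Za-z_]\w*", tok) else tok)
            lines.append(lineno)
    seen = defaultdict(list)
    for i in range(len(tokens) - k + 1):
        seen[tuple(tokens[i:i + k])].append(lines[i])
    return [locs for locs in seen.values() if len(locs) > 1]

code = ["x = a + b * c", "y = d + e * f", "print(x)"]
print(token_clones(code, k=6))   # the two arithmetic lines match after normalisation
```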
Abstract:
Recent advances in CMOS technology have allowed for the fabrication of transistors with submicron dimensions, making possible the integration of tens of millions of devices in a single chip that can be used to build very complex electronic systems. This increase in design complexity has created a need for more efficient verification tools that can incorporate more appropriate physical and computational models. Timing verification aims at determining whether the timing constraints imposed on the design can be satisfied or not. It can be performed by circuit simulation or by timing analysis. Although simulation tends to furnish the most accurate estimates, it has the drawback of being stimuli-dependent. Hence, in order to ensure that the critical situation is taken into account, one must exercise all possible input patterns. Obviously, this is not feasible given the high complexity of current designs. To circumvent this problem, designers must rely on timing analysis. Timing analysis is an input-independent verification approach that models each combinational block of a circuit as a directed acyclic graph, which is used to estimate the critical delay. The first timing analysis tools used only circuit topology information to estimate circuit delay, and are thus referred to as topological timing analyzers. However, this method may produce overly pessimistic delay estimates, since the longest paths in the graph may not be able to propagate a transition, that is, they may be false paths. Functional timing analysis, in turn, considers not only circuit topology but also the temporal and functional relations between circuit elements. Functional timing analysis tools may differ in three aspects: the set of sensitization conditions necessary to declare a path sensitizable (the so-called path sensitization criterion), the number of paths handled simultaneously, and the method used to determine whether the sensitization conditions are satisfiable. Currently, the two most efficient approaches test the sensitizability of entire sets of paths at a time: one is based on automatic test pattern generation (ATPG) techniques and the other translates the timing analysis problem into a satisfiability (SAT) problem. Although timing analysis has been studied exhaustively over the last fifteen years, some specific topics have not yet received the required attention. One such topic is the applicability of functional timing analysis to circuits containing complex gates. This is the central concern of this thesis. In addition, as a necessary step to set the context, a detailed and systematic study of functional timing analysis is also presented.
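The topological (input-independent) estimate described above is just a longest-path computation over the DAG; a minimal sketch on a toy gate network (the delays and netlist are invented):

```python
# Topological timing analysis: critical delay = longest path in the DAG.
# Note this path may be false, which motivates functional timing analysis.
from graphlib import TopologicalSorter

delay = {"a": 0, "b": 0, "g1": 2, "g2": 3, "g3": 1, "out": 0}   # gate delays
fanin = {"a": [], "b": [], "g1": ["a", "b"], "g2": ["a"],
         "g3": ["g1", "g2"], "out": ["g3"]}                     # predecessors

arrival = {}
for node in TopologicalSorter(fanin).static_order():
    arrival[node] = delay[node] + max((arrival[p] for p in fanin[node]), default=0)

print(arrival["out"])   # topological critical delay: 4 (via a -> g2 -> g3)
```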
Abstract:
We investigate whether there was a stable money demand function for Japan in the 1990s using both aggregate and disaggregate time series data. The aggregate data appear to support the contention that there was no stable money demand function, whereas the disaggregate data show that there was. Neither was there any indication of the presence of a liquidity trap. Possible sources of this discrepancy are explored, and the diametrically opposite results between the aggregate and disaggregate analysis are attributed to neglected heterogeneity among micro units. We also conduct a simulation analysis to show that when heterogeneity among micro units is present, the prediction of aggregate outcomes using aggregate data is less accurate than prediction based on micro equations. Moreover, policy evaluation based on aggregate data can be grossly misleading.
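The aggregation argument can be illustrated with a toy simulation, assuming heterogeneous coefficients across micro units (all data synthetic): forecasting the aggregate from a single aggregate equation versus summing unit-level forecasts.

```python
# Toy illustration of aggregation bias under micro-unit heterogeneity.
import numpy as np

rng = np.random.default_rng(4)
n_units, t = 50, 80
beta = rng.uniform(0.2, 1.8, n_units)              # heterogeneous coefficients
x = rng.normal(size=(t, n_units)).cumsum(axis=0)   # unit-level regressors
y = x * beta + rng.normal(0, 0.1, (t, n_units))    # unit-level relationships

X_agg, Y_agg = x.sum(axis=1), y.sum(axis=1)

# One aggregate equation, fit on the first 60 periods.
c_agg = np.polyfit(X_agg[:60], Y_agg[:60], 1)
pred_agg = np.polyval(c_agg, X_agg[60:])

# One equation per micro unit, summed for the aggregate forecast.
pred_micro = sum(
    np.polyval(np.polyfit(x[:60, i], y[:60, i], 1), x[60:, i])
    for i in range(n_units))

rmse = lambda p: np.sqrt(np.mean((Y_agg[60:] - p) ** 2))
print(f"RMSE, aggregate equation: {rmse(pred_agg):.2f}; micro equations: {rmse(pred_micro):.2f}")
```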
Abstract:
This paper considers the issue of relative efficiency measurement in the context of the public sector. In particular, we consider the efficiency measurement approach provided by Data Envelopment Analysis (DEA). The application covers the main Brazilian federal universities for the year 1994. Given the large number of inputs and outputs, this paper advances the idea of using factor analysis to explore common dimensions in the data set. This procedure made possible a meaningful application of DEA, which finally provided a set of efficiency scores for the universities considered.
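The DEA step can be made concrete with the standard input-oriented CCR envelopment program, one linear program per decision-making unit; the abstract does not name the exact DEA variant used, and the toy inputs and outputs below stand in for the factor-reduced university data.

```python
# Input-oriented CCR DEA sketch: one LP per decision-making unit (DMU).
import numpy as np
from scipy.optimize import linprog

X = np.array([[20, 30], [40, 20], [30, 50], [50, 40]], float).T  # m inputs x n DMUs
Y = np.array([[10], [12], [15], [14]], float).T                  # s outputs x n DMUs
m, n = X.shape
s = Y.shape[0]

for o in range(n):
    # Variables: theta, lambda_1..lambda_n. Minimise theta subject to
    # X @ lam <= theta * x_o  and  Y @ lam >= y_o,  lam >= 0.
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.block([[-X[:, [o]], X],
                     [np.zeros((s, 1)), -Y]])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    print(f"DMU {o + 1}: efficiency = {res.x[0]:.3f}")
```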