926 results for Total Analysis Systems
Abstract:
Acacia senegal, the gum arabic producing tree, is the most important component of traditional dryland agroforestry systems in the Blue Nile region, Sudan. The aim of the present study was to provide new knowledge on the potential use of A. senegal in dryland agroforestry systems on clay soils, as well as information on tree/crop interaction and on silvicultural and management tools, with consideration of system productivity, nutrient cycling and sustainability. A further aim was to clarify the intra-specific variation in the performance of A. senegal and, specifically, the adaptation of trees of different origin to the clay soils of the Blue Nile region. In agroforestry systems established at the beginning of the study, tree and crop growth, water use, gum and crop yields, nutrient cycling and system performance were investigated over a period of four years (1999 to 2002). Trees were grown at 5 x 5 m and 10 x 10 m spacing, alone or in mixture with sorghum or sesame; crops were also grown in sole culture. Symbiotic biological N2 fixation by A. senegal was estimated using the 15N natural abundance (δ15N) procedure in eight provenances collected from different environments and soil types of the gum arabic belt and grown in clay soil in the Blue Nile region. Balanites aegyptiaca (a non-legume) was used as a non-N-fixing reference tree species, so as to allow 15N-based estimates of the proportion of tree nitrogen derived from the atmosphere. In the planted acacia trees, measurements were made of shoot growth, water-use efficiency (as assessed by the δ13C method) and (starting from the third year) gum production. Carbon isotope ratios were obtained from leaf and branch wood samples. The agroforestry system design caused no statistically significant variation in water use, but the variation between years was highly significant, and the highest water use occurred in years with high rainfall. No statistically significant differences were found in sorghum or sesame yields when intercropping and sole crop systems were compared (yield averages were 1.54 and 1.54 t ha-1 for sorghum and 0.36 and 0.42 t ha-1 for sesame in the intercropped and mono-crop plots, respectively). Thus, at an early stage of agroforestry system management, A. senegal had no detrimental effect on crop yield, but the pattern of resource capture by trees and crops may change as the system matures. Intercropping resulted in taller trees and larger basal and crown diameters compared with sole-grown trees. It also resulted in a higher land equivalent ratio. When gum yields were analysed, a significant positive relationship was found between the second gum picking and the total gum yield. The second gum picking thus seems to be a decisive factor in gum production and could be used as an indicator of the total gum yield in a particular year. In trees, the concentrations of N and P were higher in leaves and roots, whereas the levels of K were higher in stems, branches and roots. Soil organic matter, N, P and K contents were highest in the upper soil stratum. There was some indication that the P content slightly increased in the topsoil as the agroforestry plantations aged. At a stocking of 400 trees ha-1 (5 x 5 m spacing), A. senegal accumulated in its biomass a total of 18, 1.21, 7.8 and 972 kg ha-1 of N, P, K and organic carbon (OC), respectively. Trees contributed ca. 217 and 1500 kg ha-1 of K and OC, respectively, to the top 25 cm of soil over the first four years of intercropping.
Acacia provenances of clay plain origin showed considerable variation in seed weight. They also had the lowest average seed weight compared with the sandy soil (western) provenances. At the experimental site in the clay soil region, the clay provenances were distinctly superior to the sand provenances in all traits studied, but especially in basal diameter and crown width, thus reflecting their adaptation to the environment. Values of δ13C, indicating water-use efficiency, were higher in the sand soil group than in the clay one, both in leaves and in branch wood. This suggests that the sand provenances (with an average value of -28.07‰) displayed conservative water use and high drought tolerance. Of the clay provenances, the local one (Bout) displayed a highly negative value (-29.31‰), which indicates less conservative water use that resulted in high productivity at this particular clay-soil site. Water use thus appeared to correspond to the environmental conditions prevailing at the original locations of these provenances. The results suggest that A. senegal provenances from the clay part of the gum belt are adapted for a faster growth rate and higher biomass and gum productivity than provenances from sand regions. A strong negative relationship was found between the per-tree gum yield and water-use efficiency, as indicated by δ13C. The differences in water use and gum production were greater among provenance groups than within them, suggesting that selection among rather than within provenances would result in distinct genetic gain in gum yield. The relative δ15N values (‰) were higher in B. aegyptiaca than in the N2-fixing acacia provenances. The amount of N derived from the atmosphere (Ndfa) increased significantly with age in all provenances, indicating that A. senegal is a potentially efficient nitrogen fixer and has an important role in agroforestry development. The total above-ground contribution of fixed N to foliage growth in 4-year-old A. senegal trees was highest in the Rahad sand-soil provenance (46.7 kg N ha-1) and lowest in the Mazmoom clay-soil provenance (28.7 kg N ha-1). This study represents the first use of the δ15N method for estimating the N input by A. senegal in the gum belt of Sudan. Key words: Acacia senegal, agroforestry, clay plain, δ13C, δ15N, gum arabic, nutrient cycling, Ndfa, Sorghum bicolor, Sesamum indicum
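For readers unfamiliar with the δ15N natural abundance procedure used above, a minimal sketch of the standard %Ndfa calculation follows; the B value and the example δ15N inputs are illustrative assumptions, not values reported in the study.

    # Sketch of the 15N natural abundance (δ15N) method for %Ndfa.
    # All numeric values below are invented for demonstration.
    def percent_ndfa(d15n_ref: float, d15n_fix: float, b_value: float = 0.0) -> float:
        """Proportion of plant N derived from the atmosphere (%Ndfa).

        d15n_ref: δ15N (‰) of the non-fixing reference (here Balanites aegyptiaca)
        d15n_fix: δ15N (‰) of the N2-fixing species (here Acacia senegal)
        b_value:  assumed δ15N (‰) of the fixer grown on atmospheric N2 alone
        """
        return 100.0 * (d15n_ref - d15n_fix) / (d15n_ref - b_value)

    # Invented example: reference at +8.2‰, acacia at +3.1‰
    print(f"%Ndfa = {percent_ndfa(8.2, 3.1):.1f}")  # -> %Ndfa = 62.2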
Abstract:
The financial health of beef cattle enterprises in northern Australia has declined markedly over the last decade due to an escalation in production and marketing costs and a real decline in beef prices. Historically, gains in animal productivity have offset the effect of declining terms of trade on farm incomes. This raises the question of whether future productivity improvements can remain a key path for lifting enterprise profitability sufficiently to ensure that the industry remains economically viable over the longer term. The key objective of this study was to assess the production and financial implications for north Australian beef enterprises of a range of technology interventions (development scenarios), including genetic gain in cattle, nutrient supplementation, and alteration of the feed base through introduced pastures and forage crops, across a variety of natural environments. To achieve this objective, a beef systems model was developed that is capable of simulating livestock production at the enterprise level, including reproduction, growth and mortality, based on energy and protein supply from natural C4 pastures that are subject to high inter-annual climate variability. Comparisons between simulation outputs and enterprise performance data in three case study regions suggested that the simulation model (the Northern Australia Beef Systems Analyser) can adequately represent the performance of beef cattle enterprises in northern Australia. Testing of a range of development scenarios suggested that the application of individual technologies can substantially lift productivity and profitability, especially where the entire feed base was altered through legume augmentation. The simultaneous implementation of multiple technologies that benefit different aspects of animal productivity resulted in the greatest increases in cattle productivity and enterprise profitability, with projected weaning rates increasing by 25%, liveweight gain by 40% and net profit by 150% above current baseline levels, although gains of this magnitude might not necessarily be realised in practice. While there were slight increases in total methane output in these development scenarios, methane emissions per kg of beef produced were reduced by 20% in the scenarios with higher productivity gain. Combinations of technologies or innovative practices applied in a systematic and integrated fashion thus offer scope for providing the productivity and profitability gains necessary to maintain viable beef enterprises in northern Australia into the future.
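As a side note on the methane result, a short arithmetic sketch shows how total methane output can rise slightly while emissions per kg of beef produced still fall; all numbers below are invented, and only the direction of change mirrors the abstract.

    # Invented figures: +5% total methane but +40% beef output
    baseline = {"methane_t": 100.0, "beef_t": 500.0}
    scenario = {"methane_t": 105.0, "beef_t": 700.0}

    def intensity(d):
        # emissions intensity: tonnes CH4 per tonne of beef produced
        return d["methane_t"] / d["beef_t"]

    change = (intensity(scenario) - intensity(baseline)) / intensity(baseline)
    print(f"intensity change: {change:+.0%}")  # -> intensity change: -25%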
Abstract:
Spectral efficiency is a key characteristic of cellular communications systems, as it quantifies how well the scarce spectrum resource is utilized. It is influenced by the scheduling algorithm as well as by the signal and interference statistics, which, in turn, depend on the propagation characteristics. In this paper we derive analytical expressions for the short-term and long-term channel-averaged spectral efficiencies of the round robin, greedy Max-SINR, and proportional fair schedulers, which are popular and cover a wide range of system performance and fairness trade-offs. A unified spectral efficiency analysis is developed to highlight the differences among these schedulers. The analysis differs from previous work in the literature in the following aspects: (i) it does not assume the co-channel interferers to be identically distributed, consistent with realistic cellular layouts; (ii) it avoids the loose spectral efficiency bounds used in the literature, which considered only the worst-case and best-case locations of identical co-channel interferers; (iii) it explicitly includes the effect of multi-tier interferers in the cellular layout and uses a more accurate model for handling the total co-channel interference; and (iv) it captures the impact of using small modulation constellation sizes, which are typical of cellular standards. The analytical results are verified using extensive Monte Carlo simulations.
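To make the three scheduling rules concrete, here is a toy Python sketch; the rate model (exponentially distributed rates with user-dependent means), the averaging constant and every other parameter are illustrative assumptions, not the paper's system model.

    import random

    def round_robin(t, rates, avg):
        return t % len(rates)

    def max_sinr(t, rates, avg):
        # greedy: always serve the user with the best instantaneous rate
        return max(range(len(rates)), key=lambda u: rates[u])

    def proportional_fair(t, rates, avg):
        # PF metric: instantaneous rate over exponentially averaged throughput
        return max(range(len(rates)), key=lambda u: rates[u] / max(avg[u], 1e-9))

    def simulate(scheduler, n_users=4, slots=10000, beta=0.01, seed=1):
        rng = random.Random(seed)
        avg = [1e-9] * n_users
        for t in range(slots):
            rates = [rng.expovariate(1.0 / (u + 1)) for u in range(n_users)]
            chosen = scheduler(t, rates, avg)
            for u in range(n_users):
                served = rates[u] if u == chosen else 0.0
                avg[u] = (1 - beta) * avg[u] + beta * served
        return avg  # long-term throughput per user

    print(simulate(proportional_fair))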
Abstract:
We investigated differences in the delta N-15 of seston and icefishes from seven freshwater ecosystems with different trophic states in China. An increase in seston delta N-15 values was accompanied by an increase in total nitrogen and phosphorus concentrations. Significant positive correlations were observed between the delta N-15 of icefishes and the delta N-15 of seston, total nitrogen and phosphorus concentrations. This study demonstrates that icefishes could serve as preferred indicators of anthropogenic contamination in such systems because they integrate waste inputs over long time periods and reflect the movement of waste through the pelagic food chain.
Abstract:
The nonlinearity of high-power amplifiers (HPAs) has a crucial effect on the performance of multiple-input multiple-output (MIMO) systems. In this paper, we investigate the performance of MIMO orthogonal space-time block coding (OSTBC) systems in the presence of nonlinear HPAs. Specifically, we propose a constellation-based compensation method for HPA nonlinearity for the case where the HPA parameters are known at the transmitter and receiver, in which the constellation and decision regions of the distorted transmitted signal are derived in advance. Furthermore, for the scenario without knowledge of the HPA parameters, a sequential Monte Carlo (SMC)-based compensation method for the HPA nonlinearity is proposed, which first estimates the channel-gain matrix by means of the SMC method and then uses the SMC-based algorithm to detect the desired signal. The performance of the MIMO-OSTBC system under study is evaluated in terms of average symbol error probability (SEP), total degradation (TD) and system capacity, in uncorrelated Nakagami-m fading channels. Numerical and simulation results show the effects of several system parameters on performance, such as the parameters of the HPA model, the output back-off (OBO) of the nonlinear HPA, the numbers of transmit and receive antennas, the modulation order of the quadrature amplitude modulation (QAM), and the number of SMC samples. In particular, it is shown that the constellation-based compensation method can efficiently mitigate the effect of HPA nonlinearity with low complexity and that the SMC-based detection scheme efficiently compensates for HPA nonlinearity in the case without knowledge of the HPA parameters.
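The constellation-based compensation idea relies on knowing the distorted constellation in advance. As a hedged illustration, the sketch below distorts a 16-QAM constellation with a memoryless Saleh HPA model; the abstract does not name a specific HPA model, so this model choice and its classic parameter values are assumptions.

    import cmath

    def saleh_hpa(x: complex, aa=2.1587, ba=1.1517, ap=4.0033, bp=9.1040) -> complex:
        # Memoryless Saleh model (assumed here, not taken from the paper)
        r = abs(x)
        gain = aa / (1.0 + ba * r * r)            # AM/AM conversion
        phase = ap * r * r / (1.0 + bp * r * r)   # AM/PM conversion (radians)
        return gain * r * cmath.exp(1j * (cmath.phase(x) + phase))

    # Distort a 16-QAM constellation scaled by an illustrative back-off factor;
    # the distorted points are what a constellation-based receiver would
    # precompute to redraw its decision regions.
    qam16 = [complex(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)]
    backoff = 0.2
    distorted = [saleh_hpa(backoff * s) for s in qam16]
    print(distorted[0])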
Abstract:
We have measured the elastic scattering cross sections for the (8)Li + (9)Be and (8)Li + (51)V systems at 19.6 MeV and 18.5 MeV, respectively. We have also extracted total reaction cross sections from the elastic scattering analysis for several light weakly bound systems using the optical model with Woods-Saxon and double-folding-type potentials. Different reduction methods for the total reaction cross sections have been applied to analyze and compare all the systems simultaneously.
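The abstract does not specify which reduction methods were used; one common geometric reduction, sketched below as an assumed example, scales the cross section by the squared sum of the cube roots of the projectile and target mass numbers.

    # Assumed example of a geometric reduction for total reaction cross sections.
    def reduce_cross_section(sigma_mb, a_proj, a_targ):
        # sigma_red = sigma / (A_p^(1/3) + A_t^(1/3))^2
        return sigma_mb / (a_proj ** (1 / 3) + a_targ ** (1 / 3)) ** 2

    def reduce_energy(e_cm_mev, a_proj, a_targ, z_proj, z_targ):
        # E_red = E_cm * (A_p^(1/3) + A_t^(1/3)) / (Z_p * Z_t)
        return e_cm_mev * (a_proj ** (1 / 3) + a_targ ** (1 / 3)) / (z_proj * z_targ)

    # 8Li (Z=3) on 9Be (Z=4); the cross-section value is invented
    print(reduce_cross_section(1500.0, 8, 9), reduce_energy(10.0, 8, 9, 3, 4))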
Abstract:
This paper analyzes dual-purpose systems with a focus on total cost optimization; a superstructure is proposed to represent the alternative cogeneration systems and desalination technologies for the synthesis process. The superstructure consists of mutually exclusive components, gas turbines or conventional steam generators, with mutually exclusive fuel supply alternatives for each combustion system. Backpressure or condensing/extraction steam turbines can also be selected for supplying process steam. Finally, one desalination unit should be included, chosen from among electrically driven or steam-driven reverse osmosis, multi-effect distillation and multistage flash. The analysis performed herein is based on energy and mass conservation equations, as well as on the technological limiting equations of the equipment. The results for ten different commercial gas turbines revealed that electrically driven reverse osmosis was always chosen, together with both natural gas and gasified biomass gas turbines. (C) 2009 Elsevier B.V. All rights reserved.
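A minimal sketch of the superstructure selection idea, with invented costs: exactly one option per subsystem (prime mover, process-steam turbine, desalination unit) is chosen to minimize total cost. A real synthesis would add the energy and mass balance constraints; here brute-force enumeration stands in for the optimization.

    from itertools import product

    # Invented cost figures for mutually exclusive component options
    options = {
        "prime_mover": {"gas_turbine": 9.0, "steam_generator": 7.5},
        "steam_turbine": {"backpressure": 2.0, "condensing_extraction": 3.5},
        "desalination": {"RO_electric": 4.0, "RO_steam": 4.8,
                         "multi_effect": 5.5, "multistage_flash": 6.2},
    }

    # Enumerate every combination (one option per subsystem), keep the cheapest
    best = min(
        product(*(opts.items() for opts in options.values())),
        key=lambda combo: sum(cost for _, cost in combo),
    )
    print(best, sum(cost for _, cost in best))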
Abstract:
The use of acid etchants to produce surface demineralization and collagen network exposure, allowing adhesive monomer interdiffusion and consequently the formation of a hybrid layer, has been considered the most efficient mechanism of dentin bonding. The aim of this study was to compare the tensile bond strength (TBS) to dentin of three adhesive systems, two self-etching (Clearfil SE Bond - CSEB and One Up Bond F - OUBF) and one total-etching (Single Bond - SB), under three dentinal substrate conditions (wet, dry and re-wet). Ninety freshly extracted human third molars were sectioned at the occlusal surface to remove enamel and form a flat dentin wall. The specimens were restored with composite resin (Filtek Z250) and submitted to tensile bond strength testing in an MTS 810 machine. The data were submitted to two-way ANOVA and Tukey's test (p = 0.05). Wet dentin presented the highest TBS values for SB and CSEB. Dry dentin and re-wet produced significantly lower TBS values when using SB. OUBF was not affected by the different conditions of the dentin substrate, producing similar TBS values regardless of the surface pretreatments.
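For readers who want to reproduce the statistical workflow, a hedged Python sketch of a two-way ANOVA followed by Tukey's test is given below (requires pandas and statsmodels); the data are randomly generated and the column names are assumptions, not the study's measurements.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Synthetic stand-in data: 3 adhesives x 3 conditions x 10 teeth = 90 specimens
    rng = np.random.default_rng(0)
    rows = [{"adhesive": a, "condition": c, "tbs": rng.normal(20, 4)}
            for a in ("SB", "CSEB", "OUBF")
            for c in ("wet", "dry", "rewet")
            for _ in range(10)]
    df = pd.DataFrame(rows)

    # Two-way ANOVA with interaction, then Tukey's HSD on the group means
    model = ols("tbs ~ C(adhesive) * C(condition)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))
    print(pairwise_tukeyhsd(df["tbs"], df["adhesive"] + "/" + df["condition"]))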
Abstract:
This study investigates the growth and metabolite production of microorganisms causing spoilage of Atlantic cod (Gadus morhua) fillets packaged under air and modified atmosphere (60% CO2, 40% O2). Samples were provided by two different retailers (A and B). Packaged fillets were stored at 4 °C and 8 °C. The microbiological quality and metabolite production of cod fillets stored in MAP at 4 °C, in MAP at 8 °C and in air were monitored during 13 days, 7 days and 3 days of storage, respectively. Concentrations of volatile compounds in the headspace were quantified by selected ion flow tube mass spectrometry, and their correlation with microbiological spoilage was studied. The onset of volatile compound detection was observed mostly at around 7 log cfu/g of total psychrotrophic count. Trimethylamine and dimethyl sulfide were the dominant volatiles in all tested storage conditions; nevertheless, there was no close correlation between the concentration of each main VOC and the percentage of rejection based on sensory evaluation. It was therefore concluded that these compounds cannot be considered sole indicators of the quality of cod fillets stored under modified atmosphere and air.
Abstract:
Three sediment cores from the Bragança Peninsula, located in the coastal region of north-eastern Pará State, have been studied by pollen analysis to reconstruct Holocene environmental changes and the dynamics of the mangrove ecosystem. The cores were taken from an Avicennia forest (Bosque de Avicennia (BDA)), a salt marsh area (Campo Salgado (CS)) and a Rhizophora-dominated area (Furo do Chato (FDC)). Pollen traps were installed in five different areas of the peninsula to study modern pollen deposition. Nine accelerator mass spectrometry radiocarbon dates provide time control and show that the sediment deposits accumulated relatively undisturbed. Mangrove vegetation started to develop at different times at the three sites: at 5120 14C yr BP at the CS site, at 2170 14C yr BP at the BDA site and at 1440 14C yr BP at the FDC site. Since mid-Holocene times, the mangroves covered even the most elevated area on the peninsula, which is today a salt marsh, suggesting somewhat higher relative sea levels. The pollen concentration in relatively undisturbed deposits seems to be an indicator of the frequency of inundation. The tidal inundation frequency decreased, probably related to lower sea levels, during the late Holocene: around 1770 14C yr BP at BDA, around 910 14C yr BP at FDC and around 750 14C yr BP at CS. The change from a mangrove ecosystem to a salt marsh at the higher elevation, around 420 14C yr BP, is probably natural and not due to anthropogenic impact. Modern pollen rain from different mangrove types shows different ratios between Rhizophora and Avicennia pollen, which can be used to reconstruct the past composition of the mangrove. In spite of bioturbation and especially tidal inundation, which change the local pollen deposition within the mangrove zone, past mangrove dynamics can be reconstructed. The pollen record for BDA indicates a mixed Rhizophora/Avicennia mangrove vegetation between 2170 and 1770 14C yr BP. Later, Rhizophora trees became more frequent, and since ca. 200 14C yr BP Avicennia has dominated the forest.
Abstract:
Service-Oriented Computing (SOC) is a widely accepted paradigm for the development of flexible, distributed and adaptable software systems, in which service compositions perform more complex, higher-level, often cross-organizational tasks using atomic services or other service compositions.
In such systems, Quality of Service (QoS) properties, such as performance, cost, availability or security, are critical for the usability of services and their compositions in concrete applications. Analysis of these properties can become more precise and richer in information if it employs program analysis techniques, such as complexity and sharing analyses, which are able to simultaneously take into account the control and data structures, dependencies, and operations in a composition. Computation cost analysis for service compositions can support predictive monitoring and proactive adaptation by automatically inferring computation cost as upper- and lower-bound functions of the value or size of input messages. These cost functions can be used for adaptation by selecting service candidates that minimize the total cost of the composition, based on the actual data passed to them. The cost functions can also be combined with empirically collected infrastructural parameters to produce QoS bound functions of input data that can be used to predict, at the moment of invocation, potential or imminent Service Level Agreement (SLA) violations. In mission-critical compositions, effective and accurate continuous QoS prediction can be achieved by constraint modeling of composition QoS based on the composition structure, data known at runtime, and (when available) the results of complexity analysis. This approach can be applied to service orchestrations with centralized flow control, as well as to choreographies with multiple participants engaging in complex stateful interactions. Sharing analysis can support adaptation actions, such as parallelization, fragmentation, and component selection, which are based on the functional dependencies and information content of the composition messages, internal data, and activities, in the presence of complex control constructs, such as loops, branches, and sub-workflows. Both the functional dependencies and the information content (described through user-defined attributes) can be expressed using a first-order logic (Horn clause) representation, and the analysis results can be interpreted as lattice-based conceptual models.
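As a toy illustration of the Horn-clause dependency representation mentioned above, the sketch below computes the transitive closure of invented depends(child, parent) facts with a naive fixpoint; the activity names are hypothetical.

    # Facts: depends(child, parent) edges between composition activities
    facts = {("invoice", "order"), ("order", "cart"), ("ship", "invoice")}

    def depends_closure(edges):
        # Naive fixpoint iteration; adequate for small workflow graphs
        closure = set(edges)
        changed = True
        while changed:
            changed = False
            for a, b in list(closure):
                for c, d in list(closure):
                    if b == c and (a, d) not in closure:
                        closure.add((a, d))
                        changed = True
        return closure

    # "ship" transitively shares data with "cart"
    print(("ship", "cart") in depends_closure(facts))  # -> True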
Abstract:
Optical filters are crucial elements in optical communication networks, and their influence on the optical signal can seriously affect communication quality. In this paper we study and simulate the optical signal impairment and crosstalk penalty caused by different kinds of filters: Butterworth, Bessel, fiber Bragg grating (FBG) and Fabry-Perot (F-P). Signal impairment from the filter concatenation effect, and crosstalk penalties from out-of-band and in-band crosstalk, are analyzed in terms of Q-penalty, eye opening penalty (EOP) and the optical spectrum. The simulation results show that the signal impairment and crosstalk penalty induced by the Butterworth filter are the lowest among these four types of filters. Analysis of the filter concatenation effect shows that when the center frequency of all filters is aligned perfectly with the laser frequency, 12 50-GHz Butterworth filters can be cascaded with 1-dB EOP; this number is reduced to 9 when the center frequency is misaligned by 5 GHz. In 50-GHz channel spacing DWDM networks, the total Q-penalty induced by a pair of Butterworth-filter-based demultiplexer and multiplexer is lower than 0.5 dB when the filter bandwidth is in the range of 42-46 GHz.
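The passband-narrowing mechanism behind the concatenation results can be sketched with scipy; the filter order, normalized cutoff and cascade sizes below are assumptions for illustration, not the paper's simulation parameters.

    import numpy as np
    from scipy import signal

    # Single 3rd-order analog Butterworth low-pass, cutoff normalized to 1.0
    b, a = signal.butter(3, 1.0, analog=True)
    w = np.linspace(0.01, 1.5, 3000)
    _, h = signal.freqs(b, a, worN=w)
    single_db = 20 * np.log10(np.abs(h))

    for n in (1, 9, 12):
        cascade_db = n * single_db  # identical filters in series multiply
        f3db = w[np.argmax(cascade_db < -3.0)]
        print(f"{n:2d} filter(s): -3 dB bandwidth = {f3db:.2f} x single-filter cutoff")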
Abstract:
The maintenance and evolution of software systems has become a highly critical task over recent years due to the diversity and high demand of functionalities, devices and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite to avoid the deterioration of their quality during their evolution. This thesis proposes an automated approach for analyzing variation in the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources - commits and issues - of performance variation in scenarios during the evolution of software systems. The approach defines four phases: (i) preparation - choosing the scenarios and preparing the target releases; (ii) dynamic analysis - determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis - processing and comparing the dynamic analysis results for different releases; and (iv) repository mining - identifying issues and commits associated with the detected performance variation. Empirical studies were performed to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains to automatically identify source code elements with performance variation and the changes that affected those elements during an evolution. This study analyzed three systems: (i) SIGAA - a web system for academic management; (ii) ArgoUML - a UML modeling tool; and (iii) Netty - a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. In this study, 21 releases were analyzed (seven from each system), totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online questionnaire. Finally, in the last study, a performance regression model was developed to indicate the properties of commits that are most likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were retrieved from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week proved to be the most relevant variables of performance-degrading commits in our model. The area under the Receiver Operating Characteristic (ROC) curve of the regression model is 60%, which means that using the model to decide whether a commit will cause degradation is 10% better than a random decision.
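A hedged sketch of the kind of commit-level model described above (requires scikit-learn): a logistic regression over invented commit features, scored by the area under the ROC curve. With random labels the AUC hovers near 0.5, the random-decision baseline the abstract compares against.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 997  # same order as the number of mined commits
    X = np.column_stack([
        rng.integers(0, 60, n),  # days before the release (variable named in the study)
        rng.integers(0, 7, n),   # day of the week (variable named in the study)
        rng.integers(1, 40, n),  # files touched (invented extra feature)
    ])
    y = (rng.random(n) < 0.12).astype(int)  # invented labels: 1 = degraded performance

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"ROC AUC = {auc:.2f}  (0.5 = random decision)")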