107 results for Raritan Bay (N.J. and N.Y.)--Maps.


Relevance:

100.00%

Publisher:

Abstract:

Barium ferrites substituted by Mn–Sn, Co–Sn, and Mn–Co–Sn, with general formulae BaFe12−2xMnxSnxO19 (x=0.2–1.0), BaFe12−2xCoxSnxO19 (x=0.2–0.8), and BaFe12−2xCox/2Mnx/2SnxO19 (x=0.1–0.6), respectively, have been prepared by a previously reported co-precipitation method. The efficiency of the method was improved by lowering the reaction temperature and shortening the required reaction time, which also improved crystallinity and increased the saturation magnetization. Low coercivity temperature coefficients, adjustable by doping, were achieved with Mn–Sn and Mn–Co–Sn doping. Synthesis efficiency and the effect of doping are discussed in light of accumulated data on the synthesis and crystal structure of ferrites.


Our current molecular understanding of immune priming holds that dendritic cell (DC) activation is pivotal for the expansion and differentiation of naïve T lymphocytes; it follows that understanding DC activation is essential for understanding and designing vaccine adjuvants. This chapter describes how dendritic cells can be used as a core tool to provide detailed quantitative and predictive immunomics information about how adjuvants function. The role of distinct antigen, costimulation, and differentiation signals from activated DCs in priming is explained. Four categories of input signals that control DC activation – direct pathogen detection, sensing of injury or cell death, indirect activation via endogenous proinflammatory mediators, and feedback from activated T cells – are compared and contrasted. Practical methods for studying adjuvants using DCs are summarised, and the importance of DC subset choice, simulating T cell feedback, and the use of knockout cells is highlighted. Finally, five case studies are examined that illustrate the benefit of DC activation analysis for understanding vaccine adjuvant function.


A prediction mechanism is necessary in human visuomotor control to compensate for delays in the sensory-motor system. A previous study discussed "proactive control" as one example of human predictive function, in which hand motion preceded a virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently visible tracking experiment in which a circular orbit was segmented into target-visible and target-invisible regions. The main results were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm acquired in the brain from environmental stimuli was shortened by more than 10%. This shortening accelerates the hand motion as soon as the visual information is cut off, causing the hand motion to precede the target motion. Although the hand's lead in the blind region is reset by environmental information when the target re-enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.


In this paper, we examine the temporal stability of the evidence for two commodity futures pricing theories. We investigate whether the forecast power of commodity futures can be attributed to the extent to which they exhibit seasonality and we also consider whether there are time varying parameters or structural breaks in these pricing relationships. Compared to previous studies, we find stronger evidence of seasonality in the basis, which supports the theory of storage. The power of the basis to forecast subsequent price changes is also strengthened, while results on the presence of a risk premium are inconclusive. In addition, we show that the forecasting power of commodity futures cannot be attributed to the extent to which they exhibit seasonality. We find that in most cases where structural breaks occur, only changes in the intercepts and not the slopes are detected, illustrating that the forecast power of the basis is stable over different economic environments.
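The regressions described above can be illustrated with a minimal sketch: under the theory of storage, the basis (the percentage spread between futures and spot prices) should forecast subsequent price changes, and an intercept-only structural break can be tested by adding a shift dummy to the regression. All data and parameter values below are synthetic placeholders, not the paper's dataset or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly data (illustrative only; the paper uses real commodity
# futures series, which are not reproduced here).
n = 240
season = 0.05 * np.sin(2 * np.pi * np.arange(n) / 12)   # seasonal component of the basis
basis = season + 0.02 * rng.standard_normal(n)           # basis = (F - S) / S
# Under the theory of storage, the basis forecasts subsequent price changes.
price_change = 0.5 * basis + 0.03 * rng.standard_normal(n)

# OLS of subsequent price changes on the basis, with a shift dummy that lets
# the intercept (but not the slope) change after a candidate break date --
# mirroring the finding that breaks show up in intercepts, not slopes.
break_at = n // 2
dummy = (np.arange(n) >= break_at).astype(float)
X = np.column_stack([np.ones(n), basis, dummy])
coef, *_ = np.linalg.lstsq(X, price_change, rcond=None)
intercept, slope, intercept_shift = coef
print(f"slope on basis: {slope:.3f}, intercept shift at break: {intercept_shift:.4f}")
```

A stable, significantly positive slope alongside a negligible intercept shift would correspond to the paper's conclusion that the basis's forecast power is stable across economic environments.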


Monthly averaged surface erythemal solar irradiance (UV-Ery) at local noon from 1960 to 2100 has been derived using radiative transfer calculations and projections of ozone, temperature, and cloud change from 14 chemistry-climate models (CCMs), as part of the CCMVal-2 activity of SPARC. Our calculations show the influence of ozone depletion and recovery on erythemal irradiance. In addition, we investigate UV-Ery changes caused by climate change due to increasing greenhouse gas concentrations; these include the effects of both stratospheric ozone and cloud changes. The derived estimates provide a global picture of the likely changes in erythemal irradiance during the 21st century. Uncertainties arise from the assumed scenarios, different parameterizations – particularly of cloud effects on UV-Ery – and the spread in the CCM projections. The calculations suggest that, relative to 1980, annual-mean UV-Ery in the 2090s will on average be 12% lower at high latitudes in both hemispheres, 3% lower at mid-latitudes, and marginally higher (1%) in the tropics. The largest reduction (16%) is projected for Antarctica in October. Cloud effects are responsible for 2–3% of the reduction in UV-Ery at high latitudes, but they slightly moderate it at mid-latitudes (1%). The year of return of erythemal irradiance to the values of certain milestones (1965 and 1980) depends largely on the return of column ozone to the corresponding levels and is associated with large uncertainties, mainly due to the spread of the model projections. The inclusion of cloud effects in the calculations has only a small effect on the return years. At mid and high latitudes, changes in clouds and in stratospheric ozone transport driven by greenhouse-gas-induced circulation changes will sustain erythemal irradiance at levels below those of 1965, despite the removal of ozone-depleting substances.


The goal of the Chemistry-Climate Model Validation (CCMVal) activity is to improve understanding of chemistry-climate models (CCMs) through process-oriented evaluation and to provide reliable projections of stratospheric ozone and its impact on climate. An appreciation of the details of model formulations is essential for understanding how models respond to the changing external forcings of greenhouse gases and ozone-depleting substances, and hence for understanding the ozone and climate forecasts produced by the models participating in this activity. Here we introduce and review the models used for the second round (CCMVal-2) of this intercomparison, regarding the implementation of chemical, transport, radiative, and dynamical processes in these models. In particular, we review the advantages and problems associated with approaches used to model processes of relevance to stratospheric dynamics and chemistry. Furthermore, we state the definitions of the reference simulations performed and describe the forcing data used in these simulations. We identify some developments in chemistry-climate modeling that make models more physically based or more comprehensive, including the introduction of an interactive ocean, online photolysis, troposphere-stratosphere chemistry, and non-orographic gravity-wave deposition linked to tropospheric convection. These relatively new developments indicate that stratospheric CCM modeling is becoming more consistent with our physically based understanding of the atmosphere.


Simulations of 15 coupled chemistry-climate models, for the period 1960–2100, are presented. The models include a detailed stratosphere as well as a realistic representation of the tropospheric climate. The simulations assume a consistent set of changing greenhouse gas concentrations, as well as temporally varying chlorofluorocarbon concentrations in accordance with observations for the past and expectations for the future. The ozone results are analyzed using a nonparametric additive statistical model. Comparisons are made with observations for the recent past, and the recovery of ozone, indicated by a return to 1960 and 1980 values, is investigated as a function of latitude. Although chlorine amounts are simulated to return to 1980 values by about 2050, with only weak latitudinal variations, column ozone amounts recover at different rates due to the influence of greenhouse gas changes. In the tropics, simulated peak ozone amounts occur by about 2050, and thereafter the total ozone column declines. Consequently, simulated ozone does not recover to the values that existed prior to the early 1980s. The results also show a distinct hemispheric asymmetry, with recovery to 1980 values in the Northern Hemisphere extratropics ahead of the chlorine return by about 20 years. In the Southern Hemisphere midlatitudes, ozone is simulated to return to 1980 levels only 10 years ahead of chlorine. In the Antarctic, annually averaged ozone recovers at about the same rate as chlorine in high latitudes and hence does not return to 1960s values until the last decade of the simulations.


The concepts of on-line transactional processing (OLTP) and on-line analytical processing (OLAP) are often confused with the technologies or models that are used to design transactional and analytics based information systems. This in some way has contributed to existence of gaps between the semantics in information captured during transactional processing and information stored for analytical use. In this paper, we propose the use of a unified semantics design model, as a solution to help bridge the semantic gaps between data captured by OLTP systems and the information provided by OLAP systems. The central focus of this design approach is on enabling business intelligence using not just data, but data with context.
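The idea of one semantic model serving both the transactional write path and the analytical read path can be sketched minimally as follows. This is an illustrative interpretation, not the paper's actual design model; every name here (Sale, category, region, revenue_by) is hypothetical.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical sketch: a single record type carries its business context at
# transaction time, so the OLAP side can aggregate over the very same
# semantics with no lossy translation between separate OLTP/OLAP schemas.

@dataclass(frozen=True)
class Sale:
    order_id: int
    amount: float
    # Context captured with the transaction, reused directly for analysis:
    category: str
    region: str

ledger: list[Sale] = []          # OLTP side: append-only transaction log

def record_sale(sale: Sale) -> None:
    ledger.append(sale)

def revenue_by(dimension: str) -> dict[str, float]:
    # OLAP side: aggregate over the same records, using the same semantics.
    totals: dict[str, float] = defaultdict(float)
    for sale in ledger:
        totals[getattr(sale, dimension)] += sale.amount
    return dict(totals)

record_sale(Sale(1, 120.0, "hardware", "EU"))
record_sale(Sale(2, 80.0, "software", "US"))
record_sale(Sale(3, 50.0, "hardware", "US"))
print(revenue_by("category"))  # {'hardware': 170.0, 'software': 80.0}
print(revenue_by("region"))    # {'EU': 120.0, 'US': 130.0}
```

Because the analytical dimensions are fields of the transactional record itself, "data with context" is available to business intelligence without a separate mapping layer.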


The response of nitrification potentials, denitrification potentials, and N removal efficiency to the introduction of earthworms and wetland plants in a vertical-flow constructed wetland system was investigated. Addition of earthworms increased the nitrification and denitrification potentials of the substrate in the non-vegetated constructed wetland by 236% and 8%, respectively; it increased the nitrification and denitrification potentials in the rhizosphere of the vegetated constructed wetlands (Phragmites australis, Typha angustifolia, and Canna indica) by 105% and 5%, 187% and 12%, and 268% and 15%, respectively. Denitrification potentials in the rhizospheres of the three wetland plants were not significantly different, but when earthworms were added, nitrification potentials in the rhizosphere followed the order C. indica > T. angustifolia > P. australis. Addition of earthworms to the vegetated constructed wetlands significantly increased the total numbers of bacteria and fungi in the substrates (P < 0.05). The total number of bacteria was significantly correlated with nitrification potentials (r = 0.913, P < 0.01) and with denitrification potentials (r = 0.840, P < 0.01). The N concentration in the stems and leaves of C. indica was significantly higher in the constructed wetland with earthworms (P < 0.05). Earthworms had a greater impact on nitrification potentials than on denitrification potentials. The removal efficiency of N was improved via nitrification potentials stimulated by earthworms and higher N uptake by wetland plants.
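The reported associations between bacterial counts and nitrification/denitrification potentials are Pearson correlation coefficients. A minimal sketch of how such an r is computed from paired measurements (the numbers below are synthetic illustrations, not the study's data):

```python
import math

# Synthetic paired measurements (illustrative only, not the study's data):
# total bacterial counts and the corresponding nitrification potentials.
bacteria = [2.1, 3.4, 4.0, 5.2, 6.8, 7.5]
nitrification = [0.8, 1.1, 1.5, 1.9, 2.6, 2.9]

def pearson_r(x: list[float], y: list[float]) -> float:
    # Pearson r = covariance(x, y) / (std(x) * std(y)); always in [-1, 1],
    # which is why a value like "r = 913" can only be a dropped "0.".
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(bacteria, nitrification)
print(f"r = {r:.3f}")
```

With strongly monotone paired data like the placeholders above, r approaches 1, analogous to the strong positive correlations the study reports.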


Simulations of polar ozone losses were performed using the three-dimensional high-resolution (1° × 1°) chemical transport model MIMOSA-CHIM. Three Arctic winters (1999–2000, 2001–2002, 2002–2003) and three Antarctic winters (2001, 2002, 2003) were considered for the study. The cumulative ozone loss in the Arctic winter 2002–2003 reached around 35% at 475 K inside the vortex, compared to more than 60% in 1999–2000. During 1999–2000, denitrification induced a maximum of about 23% extra ozone loss at 475 K, compared to 17% in 2002–2003. Unlike these two colder Arctic winters, the 2001–2002 Arctic winter was warmer and did not experience much ozone loss. Sensitivity tests showed that the chosen resolution of 1° × 1° provides a better evaluation of ozone loss at the edge of the polar vortex in high solar zenith angle conditions. The simulation results for ozone, ClO, HNO3, N2O, and NOy for the winters 1999–2000 and 2002–2003 were compared with measurements on board the ER-2 and Geophysica aircraft, respectively. Sensitivity tests showed that increasing the heating rates calculated by the model by 50% and doubling the PSC (polar stratospheric cloud) particle density (from 5 × 10−3 to 10−2 cm−3) refines the agreement with in situ ozone, N2O, and NOy levels. In this configuration, simulated ClO levels are increased and are in better agreement with observations in January, but are overestimated by about 20% in March. The use of the Burkholder et al. (1990) Cl2O2 absorption cross-sections further increases ClO levels slightly, especially in high solar zenith angle conditions. Comparisons of the modelled ozone values with ozonesonde measurements in the Antarctic winter 2003, and with Polar Ozone and Aerosol Measurement III (POAM III) measurements in the Antarctic winters 2001 and 2002, show that the simulations underestimate the ozone loss rate at the end of the ozone destruction period. A slightly better agreement is obtained with the use of the Burkholder et al. (1990) Cl2O2 absorption cross-sections.


We present a simple theoretical land-surface classification that can be used to determine the location and temporal behavior of preferential sources of terrestrial dust emissions. The classification also provides information about the likely nature of the sediments, their erodibility and the likelihood that they will generate emissions under given conditions. The scheme is based on the dual notions of geomorphic type and connectivity between geomorphic units. We demonstrate that the scheme can be used to map potential modern-day dust sources in the Chihuahuan Desert, the Lake Eyre Basin and the Taklamakan. Through comparison with observed dust emissions, we show that the scheme provides a reasonable prediction of areas of emission in the Chihuahuan Desert and in the Lake Eyre Basin. The classification is also applied to point source data from the Western Sahara to enable comparison of the relative importance of different land surfaces for dust emissions. We indicate how the scheme could be used to provide an improved characterization of preferential dust sources in global dust-cycle models.


In 2007, the world reached the unprecedented milestone of half of its people living in cities, and that proportion is projected to reach 60% by 2030. The combined effect of global climate change and rapid urban growth, accompanied by economic and industrial development, will likely make city residents more vulnerable to a number of urban environmental problems, including extreme weather and climate conditions, sea-level rise, poor public health and air quality, atmospheric transport of accidental or intentional releases of toxic material, and limited water resources. One fundamental aspect of predicting future risks and defining mitigation strategies is to understand the weather and regional climate as affected by cities. For this reason, dozens of researchers from many disciplines and nations attended the Urban Weather and Climate Workshop. Twenty-five students from Chinese universities and institutes also took part. The presentations by the workshop's participants spanned a wide range of topics, from the interaction between the urban climate and energy consumption in climate-change environments to the impact of urban areas on storms and local circulations, and from the impact of urbanization on the hydrological cycle to air quality and weather prediction.


Aerosol sources, transport, and sinks are simulated, and aerosol direct radiative effects are assessed over the Indian Ocean for the Indian Ocean Experiment (INDOEX) Intensive Field Phase during January to March 1999 using the Laboratoire de Météorologie Dynamique (LMDZT) general circulation model. The model reproduces the latitudinal gradient in aerosol mass concentration and optical depth (AOD). The model-predicted aerosol concentrations and AODs agree reasonably well with measurements but are systematically underestimated during high-pollution episodes, especially in the month of March. The largest aerosol loads are found over southwestern China, the Bay of Bengal, and the Indian subcontinent. Aerosol emissions from the Indian subcontinent are transported into the Indian Ocean through either the west coast or the east coast of India. Over the INDOEX region, carbonaceous aerosols are the largest contributor to the estimated AOD, followed by sulfate, dust, sea salt, and fly ash. During the northeast winter monsoon, natural and anthropogenic aerosols reduce the solar flux reaching the surface by 25 W m−2, leading to 10–15% less insolation at the surface. A doubling of black carbon (BC) emissions from Asia results in an aerosol single-scattering albedo that is much smaller than in situ measurements, reflecting the fact that BC emissions are not underestimated in proportion to other (mostly scattering) aerosol types. South Asia is the dominant contributor to sulfate aerosols over the INDOEX region and accounts for 60–70% of the AOD by sulfate. It is also an important but not the dominant contributor to carbonaceous aerosols over the INDOEX region, with a contribution of less than 40% to the AOD by this aerosol species. The presence of elevated plumes brings significant quantities of aerosols to the Indian Ocean that are generated over Africa and Southeast and east Asia.


Cloud computing is usually regarded as energy efficient and thus as emitting fewer greenhouse gases (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud-based Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The model developed in this research took into consideration the energy consumed at the three main stages of data transmission: data center, network, and end-user device. Comparable products from each suite were selected, and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the network and user-device stages were measured directly. A new measurement and software-apportionment approach was defined and utilized, allowing the power consumption of cloud services to be measured directly at the user-device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts: the power consumption of the cloud-based Outlook and Excel was 8% and 17% lower, respectively, than their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent. A third, mixed access method was also measured for Word, which emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG. Direct conversion from the standalone package to the cloud platform can now consider energy and GHG emissions at the software development and cloud service design stages using the methods described in this research.
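The three-stage accounting described above can be sketched in a few lines: the total energy of an activity is the sum of its data-center, network, and end-user-device shares, and the cloud/standalone comparison is the relative difference of those totals. The figures below are illustrative placeholders, not the study's confidential measurements.

```python
# Minimal sketch of a three-stage energy model (illustrative numbers only).

def total_energy_wh(data_center: float, network: float, device: float) -> float:
    # Total energy per activity is simply the sum across the three stages.
    return data_center + network + device

# Hypothetical per-activity figures in watt-hours: a cloud service shifts
# work to the data center and network but lightens the local device load;
# the standalone package runs entirely on the user's device.
cloud = total_energy_wh(data_center=1.2, network=0.8, device=6.0)
standalone = total_energy_wh(data_center=0.0, network=0.0, device=9.0)

change = (cloud - standalone) / standalone * 100
print(f"cloud vs standalone: {change:+.1f}%")   # negative = cloud uses less
```

Whether the cloud version comes out ahead depends entirely on whether the data-center and network overhead outweighs the device-side savings, which is why the study finds cloud wins for some products (Outlook, Excel) and loses for others (Word).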