978 results for Factorial experiment designs
Abstract:
The self-consistent field theory (SCFT) prediction for the compression force between two semi-dilute polymer brushes is compared to the benchmark experiments of Taunton et al. [Nature, 1988, 332, 712]. The comparison is done with previously established parameters, and without any fitting parameters whatsoever. The SCFT provides a significant improvement over the classical strong-stretching theory (SST), yielding excellent quantitative agreement with the experiment. Contrary to earlier suggestions, chain fluctuations cannot be ignored under normal experimental conditions. Although the analytical expressions of SST are invaluable aids to understanding the qualitative behavior of polymer brushes, the numerical SCFT is necessary for quantitatively accurate predictions.
Abstract:
This paper is concerned with the uniformization of systems of affine recurrence equations. This transformation is used in the design (or compilation) of highly parallel embedded systems (VLSI systolic arrays, signal-processing filters, etc.). We present and implement an automatic system for uniformizing systems of affine recurrence equations. We unify the results of many earlier papers, develop some theoretical extensions, and then propose effective uniformization algorithms. Our results can be used in any high-level synthesis tool based on a polyhedral representation of nested-loop computations.
Abstract:
We present an extensive thermodynamic analysis of a hysteresis experiment performed on a simplified yet Earth-like climate model. We slowly vary the solar constant by 20% around its present value and find that, for a large range of values of the solar constant, whether snowball or regular climate conditions are realized depends on the history of the system. Using recent results on global climate thermodynamics, we show that the two regimes feature radically different properties. The efficiency of the climate machine increases monotonically with decreasing solar constant under present climate conditions, whereas the opposite holds in snowball conditions. By contrast, entropy production increases monotonically with the solar constant in both branches of climate conditions, and its value is about four times larger in the warm branch than in the corresponding cold state. Finally, the degree of irreversibility of the system, measured as the fraction of excess entropy production due to irreversible heat transport processes, is much higher in the warm climate conditions, with explosive growth in the upper range of the considered values of the solar constant. Whereas in the cold climate regime a dominant role is played by changes in the meridional albedo contrast, in the warm climate regime changes in the intensity of latent heat fluxes are crucial for determining the observed properties. This substantiates the importance of correctly addressing the variations of the hydrological cycle in a changing climate. An interpretation of the climate transitions at the tipping points, based upon macro-scale thermodynamic properties, is also proposed. Our results support the adoption of a new generation of diagnostic tools based on the second law of thermodynamics for auditing climate models, and outline a set of parametrizations to be used in conceptual and intermediate-complexity models or for the reconstruction of past climate conditions.
Copyright © 2010 Royal Meteorological Society
Abstract:
The objectives of the present study were 1) to evaluate the effects of supplemental fat and ME intake on plasma concentrations of glucagon-like peptide-1 (GLP-1), cholecystokinin (CCK), glucose-dependent insulinotropic polypeptide, ghrelin, and oxyntomodulin; and 2) to determine the association of these peptides with DMI and the hypothalamic concentration of mRNA for the following neuropeptides: neuropeptide Y (NPY), agouti-related peptide (AgRP), and proopiomelanocortin (POMC). In a completely randomized block design with a 2 x 2 factorial arrangement of treatments, 32 pens with 2 wethers each were restricted-fed (2.45 Mcal/lamb per day) or offered diets ad libitum (n = 16) with or without 6% supplemental fat (n = 16) for a period of 30 d. Dry matter intake was measured daily. On d 8, 15, 22, and 29, BW was measured before feeding, and 6 h after feeding, blood samples were collected for plasma measurement of insulin, GLP-1, CCK, ghrelin, glucose-dependent insulinotropic polypeptide, oxyntomodulin, glucose, and NEFA concentrations. On d 29, blood was collected 30 min before feeding for the same hormone and metabolite analyses. At the end of the experiment, wethers were slaughtered and the hypothalami were collected to measure concentrations of NPY, AgRP, and POMC mRNA. Offering feed ad libitum (resulting in greater ME intake) increased plasma insulin and NEFA concentrations (P = 0.02 and 0.02, respectively) and decreased hypothalamic mRNA expression of NPY and AgRP (P = 0.07 and 0.02, respectively) compared with the restricted-fed wethers. There was a trend for the addition of dietary fat to decrease DMI (P = 0.12). Addition of dietary fat decreased insulin and glucose concentrations (P < 0.05 and 0.01, respectively) and tended to increase hypothalamic mRNA concentrations for NPY and AgRP (P = 0.07 and 0.11, respectively). 
Plasma GLP-1 and CCK concentrations increased in wethers offered feed ad libitum compared with restricted-fed wethers, but the response was greater when wethers were offered feed ad libitum and had supplemental fat in the diet (fat x intake interaction, P = 0.04). The prefeeding plasma ghrelin concentration was greater in restricted-fed wethers compared with those offered feed ad libitum, but the concentrations were similar 6 h after feeding (intake x time interaction, P < 0.01). Supplemental dietary fat did not affect (P = 0.22) plasma ghrelin concentration. We conclude that insulin, ghrelin, CCK, and GLP-1 may regulate DMI in sheep by regulating the hypothalamic gene expression of NPY, AgRP, and POMC.
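The 2 x 2 factorial arrangement used in the study above (intake level crossed with supplemental fat) can be sketched numerically. The cell means, group labels, and simulated response below are hypothetical placeholders, not the study's data; the sketch only illustrates how main effects and the fat x intake interaction are estimated from cell means:

```python
import random

random.seed(0)

# Hypothetical 2 x 2 factorial: intake (restricted vs ad libitum) x fat (0% vs 6%),
# n observations per cell of a DMI-like response. True cell means are assumed.
n = 16
true_means = {
    ("restricted", "no_fat"): 10.0,
    ("restricted", "fat"): 9.5,
    ("ad_lib", "no_fat"): 13.0,
    ("ad_lib", "fat"): 12.0,
}
data = {cell: [mu + random.gauss(0.0, 0.5) for _ in range(n)]
        for cell, mu in true_means.items()}

def mean(xs):
    return sum(xs) / len(xs)

cell_means = {cell: mean(obs) for cell, obs in data.items()}

# Main effect of intake: ad libitum minus restricted, averaged over fat levels.
intake_effect = (mean([cell_means[("ad_lib", f)] for f in ("no_fat", "fat")])
                 - mean([cell_means[("restricted", f)] for f in ("no_fat", "fat")]))

# Main effect of fat: fat minus no-fat, averaged over intake levels.
fat_effect = (mean([cell_means[(i, "fat")] for i in ("restricted", "ad_lib")])
              - mean([cell_means[(i, "no_fat")] for i in ("restricted", "ad_lib")]))

# Interaction: does the fat effect differ between the two intake levels?
interaction = ((cell_means[("ad_lib", "fat")] - cell_means[("ad_lib", "no_fat")])
               - (cell_means[("restricted", "fat")] - cell_means[("restricted", "no_fat")]))
```

In a full analysis these contrasts would be tested with a two-way ANOVA respecting the randomized-block structure; the sketch stops at the point estimates.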
Abstract:
The North Atlantic Marine Boundary Layer Experiment (NAMBLEX), involving over 50 scientists from 12 institutions, took place at Mace Head, Ireland (53.32° N, 9.90° W), between 23 July and 4 September 2002. A wide range of state-of-the-art instrumentation enabled detailed measurements of the boundary layer structure and atmospheric composition in the gas and aerosol phase to be made, providing one of the most comprehensive in situ studies of the marine boundary layer to date. This overview paper describes the aims of the NAMBLEX project in the context of previous field campaigns in the Marine Boundary Layer (MBL), the overall layout of the site, a summary of the instrumentation deployed, the temporal coverage of the measurement data, and the numerical models used to interpret the field data. Measurements of some trace species were made for the first time during the campaign, which was characterised by predominantly clean air of marine origin, but more polluted air with higher levels of NOx originating from continental regions was also experienced. This paper provides a summary of the meteorological measurements and Planetary Boundary Layer (PBL) structure measurements, presents time series of some of the longer-lived trace species (O3, CO, H2, DMS, CH4, NMHC, NOx, NOy, PAN) and summarises measurements of other species that are described in more detail in other papers within this special issue, namely oxygenated VOCs, HCHO, peroxides, organo-halogenated species, a range of shorter lived halogen species (I2, OIO, IO, BrO), NO3 radicals, photolysis frequencies, the free radicals OH, HO2 and (HO2+Σ RO2), as well as a summary of the aerosol measurements. NAMBLEX was supported by measurements made in the vicinity of Mace Head using the NERC Dornier-228 aircraft. Using ECMWF wind-fields, calculations were made of the air-mass trajectories arriving at Mace Head during NAMBLEX, and were analysed together with both meteorological and trace-gas measurements. 
In this paper a chemical climatology for the duration of the campaign is presented to interpret the distribution of air-mass origins and emission sources, and to provide a convenient framework of air-mass classification that is used by other papers in this issue for the interpretation of observed variability in levels of trace gases and aerosols.
Abstract:
The Cambridge Tropospheric Trajectory model of Chemistry and Transport (CiTTyCAT), a Lagrangian chemistry model, has been evaluated using atmospheric chemical measurements collected during the East Atlantic Summer Experiment 1996 (EASE '96). This field campaign was part of the UK Natural Environment Research Council's (NERC) Atmospheric Chemistry Studies in the Oceanic Environment (ACSOE) programme, conducted at Mace Head, Republic of Ireland, during July and August 1996. The model includes a description of gas-phase tropospheric chemistry, and simple parameterisations for surface deposition, mixing from the free troposphere and emissions. The model generally compares well with the measurements and is used to study the production and loss of O3 under a variety of conditions. The mean difference between the hourly O3 concentrations calculated by the model and those measured is 0.6 ppbv with a standard deviation of 8.7 ppbv. Three specific air-flow regimes were identified during the campaign – westerly, anticyclonic (easterly) and south westerly. The westerly flow is typical of background conditions for Mace Head. However, on some occasions there was evidence of long-range transport of pollutants from North America. In periods of anticyclonic flow, air parcels had collected emissions of NOx and VOCs immediately before arriving at Mace Head, leading to O3 production. The level of calculated O3 depends critically on the precise details of the trajectory, and hence on the emissions into the air parcel. In several periods of south westerly flow, low concentrations of O3 were measured which were consistent with deposition and photochemical destruction inside the tropical marine boundary layer.
Abstract:
Infrared optical-multilayer filters and materials were exposed to the space environment of low Earth orbit on LDEF. This paper summarizes the effects of that environment on the physical and optical properties of the filters and materials flown.
Abstract:
Improving methodology for Phase I dose-finding studies is currently of great interest in pharmaceutical and medical research. This article discusses the current atmosphere and attitude towards adaptive designs and focuses on the influence of Bayesian approaches.
Abstract:
Experimentally and theoretically determined infrared spectra are reported for a series of straight-chain perfluorocarbons: C2F6, C3F8, C4F10, C5F12, C6F14, and C8F18. Theoretical spectra were determined using both density functional (DFT) and ab initio methods. Radiative efficiencies (REs) were determined using the method of Pinnock et al. (1995) and combined with atmospheric lifetimes from the literature to determine global warming potentials (GWPs). Theoretically determined absorption cross sections were within 10% of experimentally determined values. Despite being much less computationally expensive, DFT calculations were generally found to perform better than ab initio methods. There is a strong wavenumber dependence of radiative forcing in the region of the fundamental C-F vibration, and small differences in wavelength between band positions determined by theory and experiment have a significant impact on the REs. We apply an empirical correction to the theoretical spectra and then test this correction on a number of branched chain and cyclic perfluoroalkanes. We then compute absorption cross sections, REs, and GWPs for an additional set of perfluoroalkenes.
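The pipeline described above (absorption cross sections to radiative efficiencies to GWPs) can be sketched schematically. The discrete wavenumber integral and the single-exponential pulse-decay AGWP are standard textbook forms; the function names and all numbers in the usage example are illustrative assumptions, not the paper's data:

```python
import math

def radiative_efficiency(cross_sections, forcing_per_sigma, dnu):
    """Pinnock-style RE estimate: a discrete integral over wavenumber bins of
    width dnu, summing cross section times the pre-computed radiative forcing
    per unit cross section in each bin."""
    assert len(cross_sections) == len(forcing_per_sigma)
    return sum(s * f for s, f in zip(cross_sections, forcing_per_sigma)) * dnu

def gwp(re, lifetime_yr, horizon_yr, agwp_co2):
    """Simplified GWP: the gas's absolute GWP over the horizon, assuming a
    single-exponential decay of a pulse emission, divided by the CO2 AGWP."""
    agwp_gas = re * lifetime_yr * (1.0 - math.exp(-horizon_yr / lifetime_yr))
    return agwp_gas / agwp_co2

# Illustrative call with made-up bin values (2 bins, 0.5 cm^-1 wide):
re = radiative_efficiency([1.0, 2.0], [3.0, 4.0], 0.5)
```

The long atmospheric lifetimes of perfluorocarbons make the exponential factor close to its linear limit at typical horizons, which is why small errors in RE propagate almost directly into the GWP.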
Abstract:
This paper presents an evaluation of the power consumption of a clocking technique for pipelined designs. The technique yields a dynamic power saving of around 30% over a conventional global clocking mechanism. The results were obtained from a series of experiments on a systolic circuit implemented in Virtex-II devices. Converting a globally clocked pipelined design to the proposed technique is straightforward and preserves the original datapath design. The savings can be taken immediately either as a power reduction or as an increase in the operating frequency of a design at the same power consumption.
Abstract:
This paper presents a simple clocking technique for migrating classical synchronous pipelined designs to a functionally equivalent synchronous alternative in the context of FPGAs. When the new pipelined design runs at the same throughput as the original, a roughly 30% better mW/MHz ratio was observed in Virtex-based FPGA circuits. The evaluation uses a simple but representative and practical systolic design as an example. The technique is essentially a replacement of the clocking mechanism for the pipe-storage elements, and no extra design effort is needed. The results show that the proposed technique yields immediate power and area-time savings for existing designs, rather than requiring a redesign of the logic under the classic pipeline clocking mechanism to realize such benefits.
Abstract:
A time-dependent climate-change experiment with a coupled ocean–atmosphere general circulation model has been used to study changes in the occurrence of drought in summer in southern Europe and central North America. In both regions, precipitation and soil moisture are reduced in a climate of greater atmospheric carbon dioxide. A detailed investigation of the hydrology of the model shows that the drying of the soil comes about through an increase in evaporation in winter and spring, caused by higher temperatures and reduced snow cover, and a decrease in the net input of water in summer. Evaporation is reduced in summer because of the drier soil, but the reduction in precipitation is larger. Three extreme statistics are used to define drought, namely the frequency of low summer precipitation, the occurrence of long dry spells, and the probability of dry soil. The last of these is arguably of the greatest practical importance, but since it is based on soil moisture, of which there are very few observations, the authors’ simulation of it has the least confidence. Furthermore, long time series for daily observed precipitation are not readily available from a sufficient number of stations to enable a thorough evaluation of the model simulation, especially for the frequency of long dry spells, and this increases the systematic uncertainty of the model predictions. All three drought statistics show marked increases owing to the sensitivity of extreme statistics to changes in their distributions. However, the greater likelihood of long dry spells is caused by a tendency in the character of daily rainfall toward fewer events, rather than by the reduction in mean precipitation. 
The results should not be taken as firm predictions because extreme statistics for small regions cannot be calculated reliably from the output of the current generation of GCMs, but they point to the possibility of large increases in the severity of drought conditions as a consequence of climate change caused by increased CO2.
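Two of the three drought statistics defined above are directly computable from a daily precipitation series. A minimal sketch, with assumed thresholds (the 0.1 mm/day dry-day cutoff and the 50 mm seasonal cutoff are illustrative, not values from the study):

```python
def longest_dry_spell(daily_precip_mm, threshold=0.1):
    """Length of the longest run of consecutive days with precipitation
    below the dry-day threshold (mm/day)."""
    longest = current = 0
    for p in daily_precip_mm:
        current = current + 1 if p < threshold else 0
        longest = max(longest, current)
    return longest

def freq_low_seasons(seasonal_totals_mm, low=50.0):
    """Fraction of seasons whose total precipitation falls below `low` (mm),
    i.e. the frequency of low summer precipitation."""
    return sum(1 for t in seasonal_totals_mm if t < low) / len(seasonal_totals_mm)
```

The abstract's point about sensitivity follows from such definitions: both statistics depend on the tail and the day-to-day character of the rainfall distribution, so a shift toward fewer rain events can lengthen dry spells even where the change in the seasonal mean is modest.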
Abstract:
Diabetes, like many diseases and biological processes, is not mono-causal. On the one hand, its comprehensive analysis requires multifactorial studies with complex experimental designs. On the other hand, the data from these studies often include a substantial amount of redundancy, such as proteins that are typically represented by a multitude of peptides. Coping with both complexities (experimental and technological) simultaneously makes data analysis a challenge for bioinformatics.
Abstract:
A study or experiment can be described as sequential if its design includes one or more interim analyses at which it is possible to stop the study, having reached a definitive conclusion concerning the primary question of interest. The potential of the sequential study to terminate earlier than the equivalent fixed sample size study means that, typically, there are ethical and economic advantages to be gained from using a sequential design. These advantages have secured a place for the methodology in the conduct of many clinical trials of novel therapies. Recently, there has been increasing interest in pharmacogenetics: the study of how DNA variation in the human genome affects the safety and efficacy of drugs. The potential for using sequential methodology in pharmacogenetic studies is considered and the conduct of candidate gene association studies, family-based designs and genome-wide association studies within the sequential setting is explored. The objective is to provide a unified framework for the conduct of these types of studies as sequential designs and hence allow experimenters to consider using sequential methodology in their future pharmacogenetic studies.
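The economy of a sequential design with interim analyses can be illustrated with a minimal Monte Carlo sketch. It assumes normal observations with unit variance and a Pocock-style constant stopping boundary (2.36 is roughly the two-sided 5% Pocock critical value for 4 looks); all design numbers are illustrative, not taken from any particular trial:

```python
import math
import random

random.seed(1)

def sequential_trial(effect, n_per_stage=50, n_stages=4, z_crit=2.36):
    """Run one group-sequential trial on N(effect, 1) observations, testing
    |z| >= z_crit at each of n_stages interim looks.
    Returns (rejected_null, sample_size_used)."""
    total, n = 0.0, 0
    for _ in range(n_stages):
        total += sum(random.gauss(effect, 1.0) for _ in range(n_per_stage))
        n += n_per_stage
        z = total / math.sqrt(n)  # standardized test statistic at this look
        if abs(z) >= z_crit:
            return True, n  # stop early: definitive conclusion reached
    return False, n  # continue to the maximum sample size

# Under a true effect, the sequential design tends to stop well before the
# equivalent fixed-sample design of 200 observations, while retaining power.
results = [sequential_trial(effect=0.3) for _ in range(500)]
avg_n = sum(n for _, n in results) / len(results)
power = sum(1 for rejected, _ in results if rejected) / len(results)
```

The ethical and economic advantage described in the abstract is visible in `avg_n`: on average, far fewer observations are spent than in the fixed design, which is what motivates carrying the same machinery over to pharmacogenetic association studies.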