Abstract:
Predictions of twenty-first century sea level change show strong regional variation. Regional sea level change observed by satellite altimetry since 1993 is also not spatially homogeneous. By comparison with historical and pre-industrial control simulations using the atmosphere–ocean general circulation models (AOGCMs) of the CMIP5 project, we conclude that the observed pattern is generally dominated by unforced (internally generated) variability, although some regions, especially in the Southern Ocean, may already show an externally forced response. Simulated unforced variability cannot explain the observed trends in the tropical Pacific, but we suggest that this is due to inadequate simulation of variability by CMIP5 AOGCMs, rather than evidence of anthropogenic change. We apply the method of pattern scaling to projections of sea level change and show that it gives accurate estimates of future local sea level change in response to anthropogenic forcing as simulated by the AOGCMs under RCP scenarios, implying that the pattern will remain stable in future decades. We note, however, that use of a single integration to evaluate the performance of the pattern-scaling method tends to exaggerate its accuracy. We find that ocean volume mean temperature is generally a better predictor than global mean surface temperature of the magnitude of sea level change, and that the pattern is very similar under the different RCPs for a given model. We determine that the forced signal will be detectable above the noise of unforced internal variability within the next decade globally and may already be detectable in the tropical Atlantic.
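Pattern scaling, as described above, approximates local sea level change as a fixed spatial pattern multiplied by a single global scalar predictor (here, ocean volume mean temperature). A minimal sketch of the idea on synthetic data, assuming a simple per-location least-squares estimate of the pattern (all names, sizes and noise levels below are illustrative, not from the study):

```python
import numpy as np

# Synthetic "truth": local sea-level change = fixed spatial pattern * global predictor
rng = np.random.default_rng(0)
n_years, n_points = 50, 100
pattern_true = rng.normal(1.0, 0.3, n_points)       # local response per unit predictor
predictor = np.linspace(0.0, 2.0, n_years)          # e.g. ocean volume mean temperature anomaly (K)
noise = rng.normal(0.0, 0.05, (n_years, n_points))  # unforced internal variability
local_slc = np.outer(predictor, pattern_true) + noise

# Estimate the pattern by regressing each location on the predictor (OLS slope)
p = predictor - predictor.mean()
pattern_est = (p @ (local_slc - local_slc.mean(axis=0))) / (p @ p)

# Scaled projection: apply the estimated pattern to a future predictor value
future_predictor = 3.0
projection = pattern_est * future_predictor
```

The stability claim in the abstract corresponds to `pattern_est` remaining a good approximation when the predictor is taken well outside the fitting period.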
Abstract:
We study the scaling properties and Kraichnan–Leith–Batchelor (KLB) theory of forced inverse cascades in generalized two-dimensional (2D) fluids (α-turbulence models) simulated at resolution 8192x8192. We consider α=1 (surface quasigeostrophic flow), α=2 (2D Euler flow) and α=3. The forcing scale is well resolved, a direct cascade is present and there is no large-scale dissipation. Coherent vortices spanning a range of sizes, most larger than the forcing scale, are present for both α=1 and α=2. The active scalar field for α=3 contains comparatively few and small vortices. The energy spectral slopes in the inverse cascade are steeper than the KLB prediction −(7−α)/3 in all three systems. Since we stop the simulations well before the cascades have reached the domain scale, vortex formation and spectral steepening are not due to condensation effects; nor are they caused by large-scale dissipation, which is absent. One- and two-point p.d.f.s, hyperflatness factors and structure functions indicate that the inverse cascades are intermittent and non-Gaussian over much of the inertial range for α=1 and α=2, while the α=3 inverse cascade is much closer to Gaussian and non-intermittent. For α=3 the steep spectrum is close to that associated with enstrophy equipartition. Continuous wavelet analysis shows approximate KLB scaling ℰ(k)∝k−2 (α=1) and ℰ(k)∝k−5/3 (α=2) in the interstitial regions between the coherent vortices. Our results demonstrate that coherent vortex formation (α=1 and α=2) and non-realizability (α=3) cause 2D inverse cascades to deviate from the KLB predictions, but that the flow between the vortices exhibits KLB scaling and non-intermittent statistics for α=1 and α=2.
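The KLB slope quoted above, −(7 − α)/3, can be evaluated exactly for the three systems studied; a trivial arithmetic check, independent of the simulations:

```python
from fractions import Fraction

# KLB prediction for the inverse-cascade energy spectrum in alpha-turbulence:
# E(k) ~ k**slope with slope = -(7 - alpha)/3
def klb_slope(alpha):
    return -Fraction(7 - alpha, 3)

# alpha = 1 (SQG): -2; alpha = 2 (2D Euler): -5/3; alpha = 3: -4/3
slopes = {alpha: klb_slope(alpha) for alpha in (1, 2, 3)}
```

The simulated inverse-cascade spectra were steeper than these values, except in the interstitial regions between coherent vortices, where the wavelet analysis recovered the KLB slopes for α = 1 and α = 2.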
Abstract:
This paper presents and discusses the results of an empirical study on the perception of quality in interpretation, carried out on a sample of 286 interpreters across five continents. Since the 1980s the field of Interpreting Studies has witnessed ever-growing interest in the issue of quality in interpretation, both in academia and in professional circles, but research undertaken so far is surprisingly lacking in methodological rigour. This survey attempts to revise previous studies of interpreters' perception of quality through the use of new information technology, which allowed us to administer a traditional research tool, the questionnaire, in a highly innovative way: through the World Wide Web. Using multidimensional scaling, we devised a perceptual map from the way interpreters ranked a list of linguistic and non-linguistic criteria according to their perceived importance in the interpretative process.
Abstract:
Research evaluating perceptual responses to music has identified many structural features as correlates that might be incorporated in computer music systems for affectively charged algorithmic composition and/or expressive music performance. In order to investigate the possible integration of isolated musical features into such a system, a discrete feature known to correlate with emotional responses – rhythmic density – was selected from a literature review and incorporated into a prototype system. This system produces variation in rhythmic density via a transformative process. A stimulus set created using this system was then subjected to perceptual evaluation. Pairwise comparisons were used to scale differences between 48 stimuli. Listener responses were analysed with multidimensional scaling (MDS). The two-dimensional solution was then rotated to place the stimuli with the largest range of variation across the horizontal plane. Stimuli with variation in rhythmic density were placed further from the source material than stimuli generated by random permutation. This, combined with the striking similarity between the MDS solution and the two-dimensional emotional model used by some affective algorithmic composition systems, suggests that isolated musical feature manipulation can now be used to parametrically control affectively charged automated composition in a larger system.
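The abstract does not specify which MDS variant was used; a minimal sketch of classical (Torgerson) MDS, the standard embedding technique behind such perceptual maps, on a toy distance matrix (the toy points below are illustrative, not the study's 48 stimuli):

```python
import numpy as np

def classical_mds(d, k=2):
    """Embed points in k dimensions from a pairwise-distance matrix via double centering."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    b = -0.5 * j @ (d ** 2) @ j             # double-centered (Gram) matrix
    evals, evecs = np.linalg.eigh(b)        # eigendecomposition, ascending order
    order = np.argsort(evals)[::-1][:k]     # keep the k largest eigenvalues
    return evecs[:, order] * np.sqrt(np.maximum(evals[order], 0))

# Toy check: collinear points should be recovered up to rotation/reflection
pts = np.array([[0.0], [1.0], [3.0]])
d = np.abs(pts - pts.T)
coords = classical_mds(d, k=1)
```

For exact Euclidean distances the embedding reproduces the original pairwise distances; with perceptual dissimilarity judgments the solution is approximate, which is why stress-based (non-metric) MDS is often preferred.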
Jersey milk suitability for Cheddar cheese production: process, yield, quality and financial impacts
Abstract:
The aim of this study was first to evaluate the benefits of including Jersey milk in Holstein-Friesian milk for the Cheddar cheese-making process and second, using the data gathered, to identify the effects and relative importance of a wide range of milk components on milk coagulation properties and the cheese-making process. Blending Jersey and Holstein-Friesian milk led to quadratic trends in casein micelle size, fat globule size and coagulation properties. However, this was not found to affect the cheese-making process. Including Jersey milk was found, on a pilot scale, to increase cheese yield (up to +35 %) without affecting cheese quality, defined as compliance with the legal requirements for cheese composition together with cheese texture, colour and grading scores. Profitability increased linearly with the inclusion of Jersey milk (up to 11.18 pence per litre of milk). The commercial trials supported the pilot plant findings, demonstrating that including Jersey milk increased cheese yield without a negative impact on cheese quality, despite the inherent challenges of scaling up such a process commercially. The successful use of a large array of milk components to model the cheese-making process challenged the commonly accepted view that fat, protein and casein content and the protein-to-fat ratio are the main contributors to the cheese-making process: other components, such as casein micelle size and fat globule size, were also found to play a key role, with small casein micelles and large fat globules reducing coagulation time, improving curd firmness and fat recovery, and influencing cheese moisture and fat content. The findings of this thesis indicate that milk suitability for Cheddar making could be improved by the inclusion of Jersey milk and that more compositional factors need to be taken into account when judging milk suitability.
Abstract:
Dynamic global vegetation models (DGVMs) typically rely on plant functional types (PFTs), which are assigned distinct environmental tolerances and replace one another progressively along environmental gradients. Fixed values of traits are assigned to each PFT; modelled trait variation along gradients is thus driven by PFT replacement. But empirical studies have revealed "universal" scaling relationships (quantitative trait variations with climate that are similar within and between species, PFTs and communities); and continuous, adaptive trait variation has been proposed to replace PFTs as the basis for next-generation DGVMs. Here we analyse quantitative leaf-trait variation on long temperature and moisture gradients in China with a view to understanding the relative importance of PFT replacement vs. continuous adaptive variation within PFTs. Leaf area (LA), specific leaf area (SLA), leaf dry matter content (LDMC) and nitrogen content of dry matter were measured on all species at 80 sites ranging from temperate to tropical climates and from dense forests to deserts. Chlorophyll fluorescence traits and carbon, phosphorus and potassium contents were measured at 47 sites. Generalized linear models were used to relate log-transformed trait values to growing-season temperature and moisture indices, with or without PFT identity as a predictor, and to test for differences in trait responses among PFTs. Continuous trait variation was found to be ubiquitous. Responses to moisture availability were generally similar within and between PFTs, but biophysical traits (LA, SLA and LDMC) of forbs and grasses responded differently from woody plants. SLA and LDMC responses to temperature were dominated by the prevalence of evergreen PFTs with thick, dense leaves at the warm end of the gradient. Nutrient (N, P and K) responses to climate gradients were generally similar within all PFTs. 
Area-based nutrients generally declined with moisture; area-based N and K declined with temperature, but area-based P increased with temperature. Although the adaptive nature of many of these trait–climate relationships is understood qualitatively, a key challenge for modelling is to predict them quantitatively. Models must take into account that community-level responses to climatic gradients can be influenced by shifts in PFT composition, such as the replacement of deciduous by evergreen trees, which may run either parallel or counter to trait variation within PFTs. The importance of PFT shifts varies among traits, being important for biophysical traits but less so for physiological and chemical traits. Finally, models should take account of the diversity of trait values found in all sites and PFTs, representing the "pool" of variation that is locally available for the natural adaptation of ecosystem function to environmental change.
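The models described above relate log-transformed trait values to growing-season temperature and moisture. A minimal sketch of that kind of fit on synthetic data (the coefficients, predictor ranges and noise level below are invented for illustration, not the study's estimates):

```python
import numpy as np

# Hypothetical example: log(SLA) modelled as a linear function of
# growing-season temperature and a moisture index
rng = np.random.default_rng(1)
n_sites = 80
temp = rng.uniform(0.0, 25.0, n_sites)      # growing-season temperature (degrees C)
moisture = rng.uniform(0.2, 1.2, n_sites)   # moisture index (dimensionless)
log_sla = 2.0 + 0.01 * temp - 0.5 * moisture + rng.normal(0.0, 0.05, n_sites)

# Ordinary least squares on the log-transformed trait
X = np.column_stack([np.ones(n_sites), temp, moisture])
coef, *_ = np.linalg.lstsq(X, log_sla, rcond=None)
intercept, temp_slope, moisture_slope = coef
```

Adding PFT identity as a categorical predictor, as in the study, amounts to extending `X` with PFT indicator columns, which is what allows within-PFT and between-PFT responses to be compared.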
Abstract:
The fundamental features of growth may be universal, because growth trajectories of most animals are very similar, but a unified mechanistic theory of growth remains elusive. Still needed is a synthetic explanation for how and why growth rates vary as body size changes, both within individuals over their ontogeny and between populations and species over their evolution. Here we use von Bertalanffy growth equations to characterize growth of ray-finned fishes in terms of two parameters: the growth rate coefficient, K, and final body mass, m∞. We derive two alternative empirically testable hypotheses and test them by analyzing data from FishBase. Across 576 species, which vary in size at maturity by almost nine orders of magnitude, K scaled as m∞^(−0.23). This supports our first hypothesis that growth rate scales as m∞^(−0.25), as predicted by metabolic scaling theory; it implies that species which grow to larger mature sizes grow faster as juveniles. Within fish species, however, K scaled as m∞^(−0.35). This supports our second hypothesis, which predicts that growth rate scales as m∞^(−0.33) when all juveniles grow at the same rate. The unexpected disparity between across- and within-species scaling challenges existing theoretical interpretations. We suggest that the similar ontogenetic programs of closely related populations constrain growth to m∞^(−0.33) scaling, but as species diverge over evolutionary time they evolve the near-optimal m∞^(−0.25) scaling predicted by metabolic scaling theory. Our findings have important practical implications because fish supply essential protein in human diets, and sustainable yields from wild harvests and aquaculture depend on growth rates.
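The two quantities above can be written down directly. A sketch assuming the standard mass form of the von Bertalanffy curve and a pure power-law for K (the reference constant `k_ref` is a hypothetical normalization, not a fitted value from the study):

```python
import numpy as np

# Standard von Bertalanffy growth curve in mass form:
#   m(t) = m_inf * (1 - exp(-K * (t - t0)))**3
def vb_mass(t, m_inf, K, t0=0.0):
    return m_inf * (1.0 - np.exp(-K * (t - t0))) ** 3

# Growth coefficient under a pure power-law scaling K = k_ref * m_inf**exponent.
# exponent = -0.25 is the metabolic-theory prediction; -1/3 is the
# equal-juvenile-growth prediction; the fitted values reported above are
# -0.23 (across species) and -0.35 (within species).
def k_from_minf(m_inf, exponent=-0.25, k_ref=1.0):
    return k_ref * m_inf ** exponent
```

With exponent −0.25, a species maturing at 16 times the mass of another has half its growth coefficient, which is the sense in which larger-maturing species "grow faster as juveniles" despite smaller K.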
Abstract:
The purpose of this paper is to investigate several analytical methods of solving the first-passage (FP) problem for the Rouse model, the simplest model of a polymer chain. We show that this problem has to be treated as a multi-dimensional Kramers' problem, which presents rich and unexpected behavior. We first perform direct and forward-flux sampling (FFS) simulations, and measure the mean first-passage time $\tau(z)$ for the free end to reach a certain distance $z$ away from the origin. The results show that the mean FP time decreases as the Rouse chain is represented by more beads. Two scaling regimes of $\tau(z)$ are observed, with the transition between them varying as a function of chain length. We use these simulation results to test two theoretical approaches. One is a well-known asymptotic theory valid in the limit of zero temperature. We show that this limit corresponds to a fully extended chain in which each chain segment is stretched, which is not particularly realistic. A new theory based on the well-known Freidlin–Wentzell theory is proposed, in which the dynamics is projected onto the minimal-action path. The new theory predicts both scaling regimes correctly, but fails to obtain the correct numerical prefactor in the first regime. Combining our theory with the FFS simulations leads us to a simple analytical expression valid for all extensions and chain lengths. One application of the polymer FP problem occurs in the context of branched polymer rheology. In this paper, we consider the arm-retraction mechanism in the tube model, which maps exactly onto the model we have solved. The results are compared to the Milner–McLeish theory without constraint release, which is found to overestimate the FP time by a factor of 10 or more.
Abstract:
Structural differences among models account for much of the uncertainty in projected climate changes, at least until the mid-twenty-first century. Recent observations encompass too limited a range of climate variability to provide a robust test of the ability to simulate climate changes. Past climate changes provide a unique opportunity for out-of-sample evaluation of model performance. Palaeo-evaluation has shown that the large-scale changes seen in twenty-first-century projections, including enhanced land–sea temperature contrast, latitudinal amplification, changes in temperature seasonality and scaling of precipitation with temperature, are likely to be realistic. Although models generally simulate changes in large-scale circulation sufficiently well to shift regional climates in the right direction, they often do not predict the correct magnitude of these changes. Differences in performance are only weakly related to modern-day biases or climate sensitivity, and more sophisticated models are not better at simulating climate changes. Although models correctly capture the broad patterns of climate change, improvements are required to produce reliable regional projections.
Abstract:
The strong trend toward nanosatellites creates new challenges in terms of thermal balance control. The thermal balance of a satellite is determined by the heat dissipation in its subsystems and by the thermal connections between them. As satellites become smaller, heat dissipation in their subsystems tends to decrease and thermal connectivity scales down with dimension. However, these two terms do not necessarily scale in the same way, and so the thermal balance may alter and the temperature of subsystems may reach undesired levels. This paper focuses on low-Earth-orbit satellites. We constructed a generalized lumped thermal model that combines a generalized low-Earth-orbit satellite configuration with scaling trends in subsystem heat dissipation and thermal connectivity. Using satellite mass as a scaling parameter, we show that subsystems do not become thermally critical by scaling mass alone.
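The scaling argument above can be made concrete with a steady-state lumped estimate: a subsystem's temperature rise over its mounting node is ΔT = Q/G, so if dissipation scales as Q ∝ m^a and thermal conductance as G ∝ m^b, then ΔT ∝ m^(a−b). A sketch with hypothetical exponents (the paper derives its trends from satellite data; the exponents and prefactors below are illustrative only):

```python
# Steady-state lumped-node estimate: temperature rise = dissipation / conductance.
# The exponents a and b are hypothetical placeholders for the scaling trends in
# subsystem heat dissipation and thermal connectivity.
def delta_t(mass, q0=1.0, g0=1.0, a=1.0, b=2.0 / 3.0):
    q = q0 * mass ** a    # heat dissipation, Q ~ m**a (W)
    g = g0 * mass ** b    # thermal conductance, G ~ m**b (W/K)
    return q / g          # net effect: delta T ~ m**(a - b)
```

With a > b, as in this hypothetical choice, shrinking the satellite reduces the subsystem temperature rise, consistent with the conclusion that subsystems do not become thermally critical from mass scaling alone; the interesting cases in the paper are where the two exponents differ the other way.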
Abstract:
TIGGE was a major component of THORPEX (The Observing System Research and Predictability Experiment), a research program whose aim is to accelerate improvements in forecasting high-impact weather. By providing ensemble prediction data from leading operational forecast centers, TIGGE has enhanced collaboration between the research and operational meteorological communities and enabled research studies on a wide range of topics. The paper covers the objective evaluation of the TIGGE data. For a range of forecast parameters, it is shown to be beneficial to combine ensembles from several data providers in a Multi-model Grand Ensemble. Alternative methods to correct systematic errors, including the use of reforecast data, are also discussed. TIGGE data have been used for a range of research studies on predictability and dynamical processes. Tropical cyclones are the most destructive weather systems in the world and are a focus of multi-model ensemble research. Their extra-tropical transition also has a major impact on the skill of mid-latitude forecasts. We also review how TIGGE has added to our understanding of the dynamics of extra-tropical cyclones and storm tracks. Although TIGGE is a research project, it has proved invaluable for the development of products for future operational forecasting. Examples include the forecasting of tropical cyclone tracks, heavy rainfall and strong winds, and flood prediction through the coupling of hydrological models to ensembles. Finally, the paper considers the legacy of TIGGE. We discuss the priorities and key issues in predictability and ensemble forecasting, including the new opportunities of convective-scale ensembles, links with ensemble data assimilation methods, and extension of the range of useful forecast skill.
Abstract:
This thesis considers Participatory Crop Improvement (PCI) methodologies and examines the reasons behind their continued contestation and limited mainstreaming in conventional modes of crop improvement research within National Agricultural Research Systems (NARS). In particular, it traces the experiences of a long-established research network with over 20 years of experience in developing and implementing PCI methods across South Asia, and specifically considers its engagement with the Indian NARS and associated state-level agricultural research systems. In order to address the issues surrounding PCI institutionalisation processes, a novel conceptual framework was derived from a synthesis of the literatures on Strategic Niche Management (SNM) and Learning-based Development Approaches (LBDA) to analyse the socio-technical processes and structures which constitute the PCI ‘niche’ and NARS ‘regime’. In examining the niche and regime according to their socio-technical characteristics, the framework provides explanatory power for understanding the nature of their interactions and the opportunities and barriers that exist with respect to the translation of lessons and ideas between niche and regime organisations. The research shows that in trying to institutionalise PCI methods and principles within NARS in the Indian context, PCI proponents have encountered a number of constraints related to the rigid and hierarchical structure of the regime organisations; the contractual mode of most conventional research, which inhibits collaboration with a wider group of stakeholders; and the time-limited nature of PCI projects themselves, which limits investment and hinders scaling up of the innovations. 
It also reveals that while the niche projects may be able to induce a ‘weak’ form of PCI institutionalisation within the Indian NARS by helping to alter their institutional culture to be more supportive of participatory plant breeding approaches and future collaboration with PCI researchers, a ‘strong’ form of PCI institutionalisation, in which NARS organisations adopt participatory methodologies to address all their crop improvement agenda, is likely to remain outside of the capacity of PCI development projects to deliver.
Abstract:
This study uses large-eddy simulation to investigate the structure of the ocean surface boundary layer (OSBL) in the presence of Langmuir turbulence and stabilizing surface heat fluxes. The OSBL consists of a weakly stratified layer, despite a surface heat flux, above a stratified thermocline. The weakly stratified (mixed) layer is maintained by a combination of a turbulent heat flux produced by the wave-driven Stokes drift and downgradient turbulent diffusion. The scaling of turbulence statistics, such as dissipation and vertical velocity variance, is only affected by the surface heat flux through changes in the mixed layer depth. Diagnostic models are proposed for the equilibrium boundary layer and mixed layer depths in the presence of surface heating. The models are a function of the initial mixed layer depth before heating is imposed and the Langmuir stability length. In the presence of radiative heating, the models are extended to account for the depth profile of the heating.
Abstract:
Observers generally fail to recover three-dimensional shape accurately from binocular disparity. Typically, depth is overestimated at near distances and underestimated at far distances [Johnston, E. B. (1991). Systematic distortions of shape from stereopsis. Vision Research, 31, 1351–1360]. A simple prediction from this is that disparity-defined objects should appear to expand in depth when moving towards the observer, and compress in depth when moving away. However, when an object moves, additional information becomes available from which 3D Euclidean shape can be recovered, whether through the addition of structure-from-motion information [Richards, W. (1985). Structure from stereo and motion. Journal of the Optical Society of America A, 2, 343–349], or through the use of non-generic strategies [Todd, J. T., & Norman, J. F. (2003). The visual perception of 3-D shape from multiple cues: Are observers capable of perceiving metric structure? Perception and Psychophysics, 65, 31–47]. Here, we investigated shape constancy for objects moving in depth. We found that to be perceived as constant in shape, objects needed to contract in depth when moving toward the observer, and expand in depth when moving away, countering the effects of incorrect distance scaling (Johnston, 1991). This is a striking example of the failure of shape constancy, but one that is predicted if observers neither accurately estimate object distance in order to recover Euclidean shape, nor are able to base their responses on a simpler processing strategy.
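The predicted failure of shape constancy follows from the small-angle disparity geometry: relative disparity δ ≈ i·Δ/D² for interocular separation i, depth Δ and viewing distance D, so depth recovered with a misestimated scaling distance D′ is Δ′ = δ·D′²/i = Δ·(D′/D)². A sketch of that arithmetic (the numbers are textbook illustrations, not the study's stimuli):

```python
# Small-angle disparity geometry: disparity ~ iod * depth / distance**2.
# If the visual system scales disparity with a wrong distance estimate,
# perceived depth = true depth * (scaling_dist / true_dist)**2.
def perceived_depth(true_depth, true_dist, scaling_dist, iod=0.065):
    disparity = iod * true_depth / true_dist ** 2   # relative disparity (rad)
    return disparity * scaling_dist ** 2 / iod      # inverted with the wrong distance

# A 10 cm deep object at 2 m, scaled as if at 1 m, looks compressed in depth;
# the same object at 0.5 m, scaled as if at 1 m, looks stretched.
```

The interocular separation cancels, so only the ratio of assumed to true distance matters, which is why an approaching object must physically contract in depth to keep its perceived shape constant.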
Abstract:
Precipitation is expected to respond differently to various drivers of anthropogenic climate change. We present the first results from the Precipitation Driver and Response Model Intercomparison Project (PDRMIP), where nine global climate models have perturbed CO2, CH4, black carbon, sulfate, and solar insolation. We divide the resulting changes to global mean and regional precipitation into fast responses that scale with changes in atmospheric absorption and slow responses scaling with surface temperature change. While the overall features are broadly similar between models, we find significant regional intermodel variability, especially over land. Black carbon stands out as a component that may cause significant model diversity in predicted precipitation change. Processes linked to atmospheric absorption are less consistently modeled than those linked to top-of-atmosphere radiative forcing. We identify a number of land regions where the model ensemble consistently predicts that fast precipitation responses to climate perturbations dominate over the slow, temperature-driven responses.
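The decomposition described above treats each precipitation response as a fast component scaling with atmospheric absorption plus a slow component scaling with surface temperature change. A sketch of recovering the two coefficients by least squares from a set of perturbation experiments (all diagnostics and coefficients below are invented for illustration):

```python
import numpy as np

# Hypothetical per-experiment diagnostics for five idealized perturbations:
# change in atmospheric absorption (W/m^2) and surface temperature change (K)
d_absorption = np.array([0.5, 0.1, 2.0, -0.3, 0.8])
d_temperature = np.array([1.2, 0.4, 0.6, -0.5, 1.0])

# Synthetic "true" precipitation response: fast term + slow term
c_fast, c_slow = -1.0, 2.5   # invented coefficients (% per W/m^2, % per K)
d_precip = c_fast * d_absorption + c_slow * d_temperature

# Recover the coefficients from the ensemble of experiments
A = np.column_stack([d_absorption, d_temperature])
coef, *_ = np.linalg.lstsq(A, d_precip, rcond=None)
```

Separating the two terms this way is what allows the regional maps to show where the fast, absorption-driven response dominates the slow, temperature-driven one.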