894 results for test case optimization


Relevance: 80.00%

Abstract:

Investigating the interplay between continental weathering and erosion, climate, and atmospheric CO2 concentrations is significant for understanding the mechanisms that forced Cenozoic global cooling and for predicting the future climatic and environmental response to increasing temperature and CO2 levels. The Miocene represents an ideal test case as it encompasses two distinct extreme climate periods: the Miocene Climatic Optimum (MCO), the warmest interval of the past 35 Ma of Earth's history, and the transition to the Late Cenozoic icehouse mode with the establishment of the east Antarctic ice sheet. However, the precise role of continental weathering during this period of major climate change is poorly understood. Here we show changes in the rates of Miocene continental chemical weathering and physical erosion, which we tracked using the chemical index of alteration (CIA) and mass accumulation rate (MAR), respectively, from Ocean Drilling Program (ODP) Sites 1146 and 1148 in the South China Sea. Significantly increased CIA values and terrigenous MARs during the MCO (ca. 17-15 Ma), compared with earlier and later periods, suggest extreme continental weathering and erosion at that time. Similarly high rates have been reported from the early-middle Miocene of Asia, the European Alps, and offshore Angola. This suggests that rapid sedimentation during the MCO was a global erosion event triggered by climate rather than regional tectonic activity. The close coherence of our records with high temperature, strong precipitation, increased burial of organic carbon and elevated atmospheric CO2 concentration during the MCO argues for long-term, close coupling between continental silicate weathering, erosion, climate and atmospheric CO2 during the Miocene. Citation: Wan, S., W. M. Kurschner, P. D. Clift, A. Li, and T. Li (2009), Extreme weathering/erosion during the Miocene Climatic Optimum: Evidence from sediment record in the South China Sea, Geophys. Res. Lett., 36, L19706, doi:10.1029/2009GL040279.
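Both proxies named in the abstract are straightforward to compute. A minimal sketch, using the standard CIA definition (Nesbitt & Young, 1982) and the usual MAR definition; the input values below are illustrative, not the paper's data:

```python
import math

# CIA = 100 * Al2O3 / (Al2O3 + CaO* + Na2O + K2O), in molar proportions,
# where CaO* is the CaO of the silicate fraction only.
# MAR = linear sedimentation rate * dry bulk density.

# Molar masses (g/mol) of the oxides involved
MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def cia(wt_percent):
    """CIA from oxide weight percents (CaO assumed silicate-corrected)."""
    mol = {ox: wt_percent[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    return 100.0 * mol["Al2O3"] / (mol["Al2O3"] + mol["CaO"] + mol["Na2O"] + mol["K2O"])

def mar(sed_rate_cm_per_kyr, dry_bulk_density_g_cm3):
    """Terrigenous mass accumulation rate in g/cm^2/kyr."""
    return sed_rate_cm_per_kyr * dry_bulk_density_g_cm3

# Illustrative sediment composition: strongly weathered material gives high CIA
cia_value = cia({"Al2O3": 20.0, "CaO": 1.0, "Na2O": 1.0, "K2O": 3.0})
mar_value = mar(5.0, 1.2)
```

Fresh, unweathered rock gives CIA near 50, while intense chemical weathering drives it toward 100, which is why elevated CIA during the MCO reads as extreme weathering.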

Relevance: 80.00%

Abstract:

Comfort is, in essence, satisfaction with the environment; with respect to the indoor environment, it is primarily satisfaction with the thermal conditions and air quality. Improving comfort has social, health and economic benefits, and is more financially significant than any other building cost. Despite this, comfort is not strictly managed throughout the building life-cycle, mainly because there is no appropriate system for managing comfort knowledge through the construction process into operation. Previous proposals to improve knowledge management have not been successfully adopted by the construction industry. To address this, the BabySteps approach was devised. BabySteps is an approach, proposed by this research, which states that for an innovation to be adopted by the industry it must be implementable through a number of small changes. This research proposes that improving the management of comfort knowledge will improve comfort. ComMet is a new methodology, proposed by this research, that manages comfort knowledge. It enables comfort knowledge to be captured, stored and accessed throughout the building life-cycle, allowing it to be re-used in later stages of the building project and in future projects. It does this using the following components: Comfort Performances – simplified numerical representations of the comfort of the indoor environment, which quantify comfort at each stage of the building life-cycle using standard comfort metrics. Comfort Ratings – a means of classifying the comfort conditions of the indoor environment according to an appropriate standard. Comfort Ratings are generated by comparing different Comfort Performances, and provide additional information about the comfort conditions of the indoor environment that is not readily determined from the individual Comfort Performances. Comfort History – a continuous descriptive record of comfort throughout the project, focused on documenting the items and activities, proposed and implemented, that could potentially affect comfort. Each aspect of the Comfort History is linked to the relevant comfort entity it references. Together, these three components create a comprehensive record of comfort throughout the building life-cycle. They are stored and made available in a common format in a central location, which allows them to be re-used indefinitely. The LCMS system was developed to implement the ComMet methodology. It uses current and emerging technologies to capture, store and allow easy access to comfort knowledge as specified by ComMet. LCMS is an IT system combining six components: Building Standards; Modelling & Simulation; Physical Measurement, through the specially developed Egg-Whisk (Wireless Sensor) Network; Data Manipulation; Information Recording; and Knowledge Storage and Access. Results from a test case application of the LCMS system – an existing office room at a research facility – highlighted that while some aspects of comfort were being maintained, the building's environment did not comply with the acceptable levels stipulated by the relevant building standards. The implementation of ComMet through LCMS demonstrates how comfort, typically considered only during early design, can be measured and managed throughout the life-cycle by systematic application of the methodology, as a means of ensuring a healthy internal environment in the building.
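The relationship between a Comfort Performance (a stage-stamped metric value) and a Comfort Rating (its classification against a standard's acceptable band) can be sketched as follows. This is an illustrative sketch only: the class name, fields and thresholds are assumptions, not the ComMet/LCMS implementation.

```python
from dataclasses import dataclass

@dataclass
class ComfortPerformance:
    stage: str        # e.g. "design" or "operation"
    metric: str       # e.g. "operative temperature (degC)"
    value: float

def comfort_rating(perf, lower, upper):
    """Classify a Comfort Performance against a standard's acceptable band."""
    if lower <= perf.value <= upper:
        return "compliant"
    return "below standard" if perf.value < lower else "above standard"

# Comparing Performances from different life-cycle stages reveals drift
# from the design intent (numbers invented for illustration):
design = ComfortPerformance("design", "operative temperature (degC)", 22.0)
operation = ComfortPerformance("operation", "operative temperature (degC)", 26.5)
design_rating = comfort_rating(design, 21.0, 25.0)
operation_rating = comfort_rating(operation, 21.0, 25.0)
```

The test case result described in the abstract corresponds to exactly this situation: some Performances rate as compliant while others fall outside the band stipulated by the relevant standard.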

Relevance: 80.00%

Abstract:

© 2013 The Association for the Study of Animal Behaviour. Social complexity, often estimated by group size, is seen as driving the complexity of vocal signals, but its relation to olfactory signals, which arguably arose to function in nonsocial realms, remains underappreciated. That olfactory signals may also mediate within-group interaction, vary with social complexity and promote social cohesion underscores a potentially crucial link with sociality. To examine that link, we integrated chemical and behavioural analyses to ask whether olfactory signals facilitate reproductive coordination in a strepsirrhine primate, Coquerel's sifaka, Propithecus coquereli. Belonging to a clade comprising primarily solitary, nocturnal species, the diurnal, group-living sifaka represents an interesting test case. Convergent with diurnal, group-living lemurids, sifakas expressed chemically rich scent signals, consistent with the social complexity hypothesis for communication. These signals minimally encoded the sex of the signaller and varied with female reproductive state. Likewise, sex and female fertility were reflected in within-group scent investigation, scent marking and overmarking. We further asked whether, within breeding pairs, the stability or quality of the pair's bond influences the composition of glandular signals and patterns of investigatory or scent-marking behaviour. Indeed, reproductively successful pairs tended to show greater similarity in their scent signals than did reproductively unsuccessful pairs, potentially through chemical convergence. Moreover, scent marking was temporally coordinated within breeding pairs and was influenced by past reproductive success. That olfactory signalling reflects social bondedness or reproductive history lends support to recent suggestions that relationship quality may be a more valuable proxy than group size for estimating social complexity. We suggest that olfactory signalling in sifakas is more complex than previously recognized and, as in other socially integrated species, can be a crucial mechanism for promoting group cohesion and maintaining social bonds. Thus, the evolution of sociality may well be reflected in the complexity of olfactory signalling.
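"Similarity in scent signals" is typically computed over relative abundances of the chemical compounds identified in glandular secretions. A minimal sketch using Bray-Curtis similarity, a common choice for such profiles (the abstract does not state which metric the authors used, and the profiles below are invented):

```python
def bray_curtis_similarity(a, b):
    """1 - Bray-Curtis dissimilarity between two compound-abundance profiles."""
    num = sum(abs(x - y) for x, y in zip(a, b))   # total absolute difference
    den = sum(x + y for x, y in zip(a, b))        # total abundance
    return 1.0 - num / den

# Invented relative abundances of three compounds for two dyads:
pair_bonded = bray_curtis_similarity([0.4, 0.3, 0.3], [0.35, 0.35, 0.3])
unrelated = bray_curtis_similarity([0.4, 0.3, 0.3], [0.1, 0.1, 0.8])
```

A finding like the abstract's "chemical convergence" corresponds to bonded pairs scoring systematically higher on such a similarity measure than other dyads.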

Relevance: 80.00%

Abstract:

The high energetic costs of building and maintaining large brains are thought to constrain encephalization. The 'expensive-tissue hypothesis' (ETH) proposes that primates (especially humans) overcame this constraint through reduction of another metabolically expensive tissue, the gastrointestinal tract. Small guts characterize animals specializing on easily digestible diets. Thus, the hypothesis may be tested via the relationship between brain size and diet quality. Platyrrhine primates present an interesting test case, as they are more variably encephalized than other extant primate clades (excluding Hominoidea). We find a high degree of phylogenetic signal in the data for diet quality, endocranial volume and body size. Controlling for phylogenetic effects, we find no significant correlation between relative diet quality and relative endocranial volume. Thus, diet quality fails to account for differences in platyrrhine encephalization. One taxon in particular, Brachyteles, violates the predictions of the ETH in having a large brain and a low-quality diet. Dietary reconstructions of stem platyrrhines further indicate that a relatively high-quality diet was probably in place prior to increases in encephalization. Therefore, it is unlikely that a shift in diet quality was a primary constraint release for encephalization in platyrrhines and, by extrapolation, humans.
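The logic of the test can be sketched simply: regress log brain size on log body size, take the residuals as relative encephalization, and correlate them with diet quality. This sketch omits the phylogenetic correction the authors applied, and all data values are invented:

```python
import math

def linreg(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def residuals(xs, ys):
    a, b = linreg(xs, ys)
    return [y - (a + b * x) for x, y in zip(xs, ys)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

# Invented data: log body mass, log endocranial volume, diet quality index
log_body = [0.5, 0.8, 1.1, 1.4, 1.7]
log_ecv = [1.0, 1.3, 1.5, 1.9, 2.1]
diet_q = [150, 200, 180, 220, 160]

rel_ecv = residuals(log_body, log_ecv)   # relative encephalization
r = pearson(rel_ecv, diet_q)             # ETH predicts a positive correlation
```

A non-significant r, as the authors report after phylogenetic correction, is what fails to support the ETH.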

Relevance: 80.00%

Abstract:

This work comprises an accurate computational analysis of levitated liquid droplet oscillations in AC and DC magnetic fields. The AC magnetic field, interacting with the electric current it induces within the liquid metal droplet, generates intense fluid flow and coupled free-surface oscillations. A pseudo-spectral technique is used to solve the turbulent fluid flow equations for the continuously, dynamically transformed axisymmetric fluid volume. The volume electromagnetic force distribution is updated as the droplet's shape and position change. We start with the ideal-fluid test case of undamped Rayleigh-frequency oscillations in the absence of gravity, and then add viscous damping and DC magnetic field damping. The oscillation frequency spectra are further analysed for droplets levitated against gravity in various combinations of AC and DC magnetic fields. In the extreme case, the levitation dynamics of an electrically poorly conducting, diamagnetic droplet (water) are simulated. Applications are aimed at purely electromagnetic material-processing techniques and at material property measurements under uncontaminated conditions.
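The ideal-fluid benchmark referenced here is the classical Rayleigh frequency of an inviscid droplet's surface oscillations: for surface mode l, omega^2 = l(l-1)(l+2) * sigma / (rho * R^3). A minimal evaluation (material values are round numbers for a liquid-metal droplet, not the paper's):

```python
import math

def rayleigh_frequency(l, sigma, rho, radius):
    """Rayleigh oscillation frequency (Hz) of surface mode l for an inviscid
    droplet: sigma = surface tension (N/m), rho = density (kg/m^3),
    radius in metres."""
    omega = math.sqrt(l * (l - 1) * (l + 2) * sigma / (rho * radius ** 3))
    return omega / (2.0 * math.pi)

# Fundamental (l = 2) mode of a 5 mm radius droplet with
# sigma ~ 1.8 N/m and rho ~ 7000 kg/m^3 (roughly liquid iron)
f2 = rayleigh_frequency(2, 1.8, 7000.0, 0.005)   # ~20 Hz
```

Matching this undamped frequency in zero gravity is the natural first validation step before adding viscous and DC-field damping, as the abstract describes.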

Relevance: 80.00%

Abstract:

Motion transparency provides a challenging test case for our understanding of how visual motion, and other attributes, are computed and represented in the brain. However, previous studies of visual transparency have used subjective criteria which do not confirm the existence of independent representations of the superimposed motions. We have developed measures of performance in motion transparency that require observers to extract information about two motions jointly, and therefore test the information that is simultaneously represented for each motion. Observers judged whether two motions were at 90° to one another; the base direction was randomized so that neither motion taken alone was informative. The precision of performance was determined by the standard deviations (S.D.s) of probit functions fitted to the data. Observers also made judgments of orthogonal directions between a single motion stream and a line, between one of two transparent motions and a line, and between two spatially segregated motions. The data show that direction judgments with transparency can be made with comparable accuracy to segregated (non-transparent) conditions, supporting the idea that transparency involves the equivalent representation of two global motions in the same region. The precision of this joint direction judgment is, however, 2-3 times poorer than that for a single motion stream. The precision of the directional judgment for a single stream is reduced only by a factor of about 1.5 by superimposing a second stream. The major effect on performance, therefore, appears to be associated with the need to compute and compare two global representations of motion, rather than with interference between the dot streams per se. Experiment 2 tested the transparency of motions separated by a range of angles from 5° to 180° by requiring subjects to set a line matching the perceived direction of each motion. The S.D.s of these settings demonstrated that directions of transparent motions were represented independently for separations over 20°. Increasing dot speeds from 1 to 10 deg/s improved directional performance but had no effect on transparency perception. Transparency was also unaffected by variations of density between 0.1 and 19 dots/deg².

Relevance: 80.00%

Abstract:

We present the first detailed kinematical analysis of the planetary nebula Abell 63, which is known to contain the eclipsing close-binary nucleus UU Sge. Abell 63 provides an important test case for investigating the role of close-binary central stars in the evolution of planetary nebulae. Long-slit observations were obtained using the Manchester Echelle Spectrometer combined with the 2.1-m San Pedro Mártir Telescope. The spectra reveal that the central bright rim of Abell 63 has a tube-like structure. A deep image shows collimated lobes extending from the nebula, which are shown to be high-velocity outflows. The kinematic ages of the nebular rim and the extended lobes are calculated to be 8400 +/- 500 and 12900 +/- 2800 yr, respectively, which suggests that the lobes were formed at an earlier stage than the nebular rim. This is consistent with expectations that disc-generated jets form immediately after the common envelope phase. A morphological-kinematical model of the central nebula is presented, and the best-fitting model is found to have the same inclination as the orbital plane of the central binary system; this is the first proof that a close-binary system directly affects the shaping of its nebula. A Hubble-type flow is well established in the morphological-kinematical modelling of the observed line profiles and imagery. Two possible formation models for the elongated lobes of Abell 63 are considered: (i) a low-density, pressure-driven jet excavates a cavity in the remnant asymptotic giant branch (AGB) envelope; (ii) high-density bullets form the lobes in a single ballistic ejection event.
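A kinematic age of the kind quoted here is simply physical size divided by expansion velocity, with the size obtained from the angular size and an assumed distance. A minimal sketch; the distance, angular radius and velocity below are illustrative round numbers, not the paper's measurements:

```python
import math

KM_PER_PC = 3.086e13    # kilometres per parsec
SEC_PER_YR = 3.156e7    # seconds per year

def kinematic_age_yr(angular_radius_arcsec, distance_pc, v_exp_km_s):
    """Kinematic age = physical radius / expansion velocity."""
    angle_rad = math.radians(angular_radius_arcsec / 3600.0)
    radius_km = distance_pc * math.tan(angle_rad) * KM_PER_PC
    return radius_km / v_exp_km_s / SEC_PER_YR

# Illustrative numbers chosen to land on the ~8400 yr scale quoted above:
age = kinematic_age_yr(20.0, 2400.0, 27.0)
```

The quoted uncertainties (+/- 500 and +/- 2800 yr) propagate directly from the uncertainties in distance and expansion velocity in this relation.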

Relevance: 80.00%

Abstract:

The reality of volunteering is extremely complex, to the point that defining and characterizing voluntary work is difficult, given the wide variety of interpretations, motivations, sociodemographic variables and cultural aspects that shape the profile of volunteers. The aim of this work is to analyse the joint influence of certain sociodemographic variables, as well as of secular or traditional cultural values, on the profile of volunteers in Europe. It also investigates which variables orient volunteers towards one particular type of volunteering or another. To this end, a logistic regression methodology was applied to the information available in the European Value Study. The results help to establish a characterization of volunteering in Europe, and confirm the influence of cultural values, first, on whether or not people undertake voluntary work, and second, on the type of activity these people choose to commit to. Analysing two types of volunteering with supposedly very different motivations, we conclude that a group of values influences both, although the direction and intensity of that influence differ; moreover, some values do or do not influence the undertaking of voluntary work, depending on the specific type in question.
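The core tool named above is logistic regression: modelling the probability of volunteering from sociodemographic and cultural covariates. A minimal self-contained sketch fitted by stochastic gradient descent; the covariates and data are invented for illustration and bear no relation to the European Value Study variables:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=5000):
    """Fit intercept + coefficients by stochastic gradient descent."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi          # gradient of the log-loss
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

# Invented covariates: [secular-values score, years of education / 10];
# outcome: 1 = volunteers, 0 = does not
X = [[0.9, 1.6], [0.8, 1.4], [0.7, 1.2], [0.2, 0.8], [0.3, 1.0], [0.1, 0.9]]
y = [1, 1, 1, 0, 0, 0]
w = fit_logistic(X, y)

# Predicted volunteering probability for a new individual
p = sigmoid(w[0] + w[1] * 0.85 + w[2] * 1.5)
```

In the study's setting, the sign and magnitude of each fitted coefficient is what indicates whether, and how strongly, a cultural value influences the decision to volunteer.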

Relevance: 80.00%

Abstract:

Delivering sufficient dose to tumours while sparing surrounding tissue is one of the primary challenges of radiotherapy, and in common practice this is typically achieved by using highly penetrating MV photon beams and spatially shaping the dose. However, there has recently been increased interest in the possibility of using contrast agents of high atomic number to enhance the dose deposited in tumours when used in conjunction with kV x-rays, which see a significant increase in absorption due to the heavy element's high photoelectric cross-section at such energies. Unfortunately, the introduction of such contrast agents significantly complicates the comparison of different source types for treatment efficacy, as the dose deposited now depends very strongly on the exact composition of the spectrum, making traditional metrics such as beam quality less valuable. To address this, a 'figure of merit' is proposed which enables the direct comparison of different source types for tumours at different depths inside a patient. This figure of merit is evaluated for a 15 MV LINAC source and two 150 kVp sources (both of which make use of a tungsten target, one with conventional aluminium filtration, the other with a more aggressive thorium filter) through analytical methods as well as numerical models, considering tissue treated with a realistic concentration and uptake ratio of gold nanoparticle contrast agents (10 mg ml-1 concentration in the 'tumour' volume, 10:1 uptake ratio). Finally, a test case of a human neck phantom with a similar contrast agent is considered, to compare the abstract figure of merit to a more realistic treatment situation. Good agreement was found both between the different approaches to calculating the figure of merit, and between the figure of merit and the effectiveness in the more realistic patient scenario. Together, these observations suggest that contrast-enhanced kilovoltage radiation has the potential to be a useful therapeutic tool for a number of classes of tumour on dosimetric considerations alone, and they point to the need for further research in this area.
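The abstract does not define its figure of merit, but the trade-off it captures can be sketched generically: tumour dose (boosted by the contrast agent) relative to the entrance dose, under a simple exponential depth-dose model. Everything below, the function, the attenuation coefficients and the enhancement factors, is an illustrative assumption, not the paper's formulation:

```python
import math

def relative_tumour_dose(mu_tissue_per_cm, depth_cm, enhancement):
    """Tumour dose (including contrast-agent enhancement) relative to the
    entrance dose, for simple exponential attenuation in tissue."""
    return enhancement * math.exp(-mu_tissue_per_cm * depth_cm)

# A kV beam attenuates faster but gains a large photoelectric boost from
# gold nanoparticles; an MV beam penetrates well but gains almost nothing.
# Coefficients and enhancement factors are invented for illustration.
fom_kv = relative_tumour_dose(0.22, 5.0, 2.0)
fom_mv = relative_tumour_dose(0.05, 5.0, 1.0)
```

The point of a spectrum-aware figure of merit is exactly that neither term dominates a priori: which source wins depends on depth, spectrum and contrast-agent uptake.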

Relevance: 80.00%

Abstract:

We present the first detailed spatiokinematical analysis and modelling of the planetary nebula Abell 41, which is known to contain the well-studied close-binary system MT Ser. This object represents an important test case in the study of the evolution of planetary nebulae with binary central stars, as current evolutionary theories predict that the binary plane should be aligned perpendicular to the symmetry axis of the nebula. Deep narrow-band imaging in the light of [NII] 6584 Å, [OIII] 5007 Å and [SII] 6717+6731 Å, obtained using ACAM on the William Herschel Telescope, has been used to investigate the ionization structure of Abell 41. Long-slit observations of the Hα and [NII] 6584 Å emission were obtained using the Manchester Echelle Spectrometer on the 2.1-m San Pedro Mártir Telescope. These spectra, combined with the narrow-band imagery, were used to develop a spatiokinematical model of the [NII] 6584 Å emission from Abell 41. The best-fitting model reveals Abell 41 to have a waisted, bipolar structure with an expansion velocity of ~40 km s-1 at the waist. The symmetry axis of the model nebula is within 5° of perpendicular to the orbital plane of the central binary system. This provides strong evidence that the close-binary system, MT Ser, has directly affected the shaping of its nebula, Abell 41. Although the theoretical link between bipolar planetary nebulae and binary central stars is long established, this nebula is only the second for which the link between nebular symmetry axis and binary plane has been proved observationally.

Relevance: 80.00%

Abstract:

This paper describes the use of the Euler equations for the generation and testing of tabular aerodynamic models for flight dynamics analysis. Maneuvers for the AGARD Standard Dynamics Model sharp-leading-edge wind-tunnel geometry are considered as a test case. Wind-tunnel data are first used to validate the prediction of static and dynamic coefficients at both low and high angles, featuring complex vortical flow, with good agreement obtained at low to moderate angles of attack. The generation of aerodynamic tables is then described, based on a data fusion approach. Time-optimal maneuvers are generated from these tables, including level flight trim, pull-ups at constant and varying incidence, and level and 90° turns. The maneuver definition includes the aircraft states and also the control deflections required to achieve the motion. The main point of the paper is then to assess the validity of the aerodynamic tables which were used to define the maneuvers. This is done by replaying the maneuvers, including the control surface motions, through the time-accurate computational fluid dynamics code. The resulting forces and moments are compared with the tabular values to assess the presence of inadequately modeled dynamic or unsteady effects. Agreement between the tables and the replay is demonstrated for slow maneuvers. Higher-rate maneuvers show discrepancies, which are ascribed to vortical flow hysteresis in the higher-rate motions. The framework is suitable for application to more complex viscous flow models, and is powerful for assessing the validity of aerodynamic models of the type currently used in studies of flight dynamics.
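At the heart of a tabular aerodynamic model is interpolation over precomputed coefficients. A minimal bilinear lookup in (angle of attack, Mach number), with an invented coarse table (the paper's tables, variables and breakpoints will differ):

```python
from bisect import bisect_right

def bilinear(xs, ys, table, x, y):
    """Bilinear interpolation in a regular grid: table[i][j] holds the
    coefficient at (xs[i], ys[j])."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])

alpha = [0.0, 5.0, 10.0]                      # angle of attack, deg
mach = [0.3, 0.6]
cl_table = [[0.0, 0.0], [0.4, 0.45], [0.75, 0.85]]   # invented lift coefficients

cl = bilinear(alpha, mach, cl_table, 2.5, 0.45)
```

The "replay" test described in the abstract amounts to comparing such interpolated values, evaluated along the maneuver's state history, against the forces from the time-accurate CFD solution; static interpolation cannot capture hysteresis, which is why the high-rate maneuvers disagree.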

Relevance: 80.00%

Abstract:

This paper reports on work in developing a finite element (FE) based die shape optimisation method for the net-shape forging of 3D aerofoil blades for aeroengine applications. Quantitative representations of aerofoil forging tolerances were established to provide a correlation between the conventional dimensional and shape specifications used in forging production and those quantified in FE simulation. A new direct compensation method was proposed, employing variable weighting factors to minimise the total forging tolerances in the optimisation computations. A surface approximation using a B-spline surface was also developed to ensure improved die surface quality for die shape representation and design. For a Ni-alloy blade test case, a substantial reduction in dimensional and shape tolerances was achieved using the developed die shape optimisation system.
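A B-spline surface of the kind used for the die representation is built from tensor products of one-dimensional basis functions. A minimal Cox-de Boor recursion for those basis functions (a textbook sketch, not the authors' die-design code):

```python
def bspline_basis(i, p, t, knots):
    """Value of the i-th B-spline basis function of degree p at parameter t
    (Cox-de Boor recursion)."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, t, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, t, knots))
    return left + right

# Cubic basis on a clamped knot vector; on the interior of the parameter
# range the basis functions form a partition of unity (sum to 1)
knots = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]
total = sum(bspline_basis(i, 3, 1.5, knots) for i in range(6))
```

Smoothing the optimised die shape through such a surface is what guarantees the continuity of the die geometry that direct nodal compensation alone would not.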

Relevance: 80.00%

Abstract:

Background: Many deep-sea benthic animals occur in patchy distributions separated by thousands of kilometres, yet because deep-sea habitats are remote, little is known about their larval dispersal. Our novel method simulates dispersal by combining data from the Argo array of autonomous oceanographic probes, deep-sea ecological surveys, and comparative invertebrate physiology. The predicted particle tracks allow quantitative, testable predictions about the dispersal of benthic invertebrate larvae in the south-west Pacific. Principal Findings: In a test case presented here, using the non-feeding, non-swimming (lecithotrophic trochophore) larvae of polyplacophoran molluscs (chitons), we show that the likely dispersal pathways in a single generation are significantly shorter than the distances between the three known population centres in our study region. The large-scale density of chiton populations throughout our study region is potentially much greater than present survey data suggest, with intermediate 'stepping stone' populations yet to be discovered. Conclusions/Significance: We present a new method that is broadly applicable to studies of the dispersal of deep-sea organisms. This test case demonstrates the power and potential applications of our method in generating quantitative, testable hypotheses at multiple levels to resolve the mismatch between observed and expected distributions: probabilistic predictions of the locations of intermediate populations, potential alternative dispersal mechanisms, and expected population genetic structure. The global Argo data have never previously been used to address benthic biology, and our method can be applied to any non-swimming deep-sea larvae, giving information on dispersal corridors and population densities in habitats that remain intrinsically difficult to assess.
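The simulation idea, advecting passive virtual larvae through current data for the duration of the larval period, can be sketched with a forward-Euler step. This toy version uses a steady, uniform current in degrees/day; the actual method drives the particles with Argo-derived velocity fields:

```python
def advect(lon, lat, u_deg_per_day, v_deg_per_day, days, dt=1.0):
    """Track one passive, non-swimming particle through a steady current.
    Velocities in degrees/day keep the toy example unit-free."""
    t = 0.0
    while t < days:
        lon += u_deg_per_day * dt
        lat += v_deg_per_day * dt
        t += dt
    return lon, lat

# A 20-day lecithotrophic larva drifting in a weak steady current,
# starting from an invented south-west Pacific position
end = advect(170.0, -40.0, 0.05, 0.01, 20)   # approximately (171.0, -39.8)
```

The study's key comparison is between the spread of many such track endpoints per generation and the observed inter-population distances; endpoints falling well short of the next known population is what implies undiscovered stepping-stone populations.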

Relevance: 80.00%

Abstract:

In this article, using Ireland, where debt issues are of particular salience, as a test case, we seek to locate over-indebtedness and the severity of debt problems in the context of the broader economic circumstances of households. In doing so, we first identify an economically vulnerable segment of households and then explore the debt experience of vulnerable and non-vulnerable households. Our analysis reveals a striking contrast between the debt experiences of the less than one in five households defined as economically vulnerable and all others. Financial exclusion, relating to access to a bank account and a credit card, was found to increase debt levels; however, such effects were modest. The impact of economic vulnerability seems to be largely a consequence of its relationship to a wide range of socio-economic attributes and circumstances. The manner in which a potential debt crisis unfolds will be shaped by the broader socio-economic structuring of life-chances. Any attempt to respond to such problems by concentrating on financial exclusion, household behaviour or, indeed, triggering factors, without taking the wider social structuring of economic vulnerability into account, is likely to be both seriously misguided and largely ineffective.

Relevance: 80.00%

Abstract:

Urban areas are pivotal to global adaptation and mitigation efforts. But how do cities actually perform in terms of climate change response? This study sheds light on the state of urban climate change adaptation and mitigation planning across Europe. Europe is an excellent test case, given its advanced environmental policies and high urbanization. We performed a detailed analysis of 200 large and medium-sized cities across 11 European countries and analysed the cities' climate change adaptation and mitigation plans. We investigate the regional distribution of plans, their adaptation and mitigation foci, and the extent to which planned greenhouse gas (GHG) reductions contribute to national and international climate objectives. To our knowledge, this is the first study of its kind, as it does not rely on self-assessment (questionnaires or social surveys). Our results show that 35 % of the European cities studied have no dedicated mitigation plan and 72 % have no adaptation plan. No city has an adaptation plan without a mitigation plan. One quarter of the cities have both an adaptation and a mitigation plan and set quantitative GHG reduction targets, but these vary extensively in scope and ambition. Furthermore, we show that if the planned actions within cities are nationally representative, the 11 countries investigated would achieve a 37 % reduction in GHG emissions by 2050, translating into a 27 % reduction in GHG emissions for the EU as a whole. However, the actions would often be insufficient to reach national targets and fall short of the 80 % reduction in GHG emissions recommended to avoid global mean temperature rising by 2 °C above pre-industrial levels. © 2013 Springer Science+Business Media Dordrecht.
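The step from a 37 % reduction in the 11 studied countries to a 27 % reduction EU-wide is an emissions-weighted aggregation. A minimal sketch; the 0.73 emissions share below is an illustrative assumption chosen to be consistent with the abstract's two figures, not a number taken from the study:

```python
def aggregate_reduction(shares_and_cuts):
    """Overall fractional emissions cut from (emissions share, fractional cut)
    pairs; shares must sum to 1."""
    return sum(share * cut for share, cut in shares_and_cuts)

# If the 11 countries hold ~73% of EU emissions and cut 37%, while the
# remaining ~27% of emissions see no planned cut:
eu_cut = aggregate_reduction([(0.73, 0.37), (0.27, 0.0)])   # ~0.27
```

The same weighting logic explains why city-level plans that look ambitious locally can still fall well short of an economy-wide 80 % target.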