90 results for Computational Geometry and Object Modelling


Abstract:

The integration of processes at different scales is a key problem in the modelling of cell populations. Owing to increased computational resources and the accumulation of data at the cellular and subcellular scales, the use of discrete, cell-level models, which are typically solved using numerical simulations, has become prominent. One of the merits of this approach is that important biological factors, such as cell heterogeneity and noise, can be easily incorporated. However, it can be difficult to efficiently draw generalizations from the simulation results, as, often, many simulation runs are required to investigate model behaviour in typically large parameter spaces. In some cases, discrete cell-level models can be coarse-grained, yielding continuum models whose analysis can lead to the development of insight into the underlying simulations. In this paper we apply such an approach to the case of a discrete model of cell dynamics in the intestinal crypt. An analysis of the resulting continuum model demonstrates that there is a limited region of parameter space within which steady-state (and hence biologically realistic) solutions exist. Continuum model predictions show good agreement with corresponding results from the underlying simulations and experimental data taken from murine intestinal crypts.
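
A minimal sketch of the coarse-graining idea (illustrative only: a generic stochastic birth-death cell model and its mean-field logistic limit, not the authors' crypt model; all parameter values are invented):

```python
import numpy as np

# Toy cell-level model: each cell divides with probability
# b*(1 - n/K)*dt and dies with probability d*dt per step.
# b, d, K and the initial population are hypothetical.
rng = np.random.default_rng(1)
b, d, K = 1.0, 0.2, 500.0
dt, T, n0 = 0.01, 20.0, 10
steps = int(T / dt)

def discrete_run():
    """One stochastic realisation of the discrete model."""
    n, traj = n0, []
    for _ in range(steps):
        births = rng.binomial(n, max(b * (1.0 - n / K) * dt, 0.0))
        deaths = rng.binomial(n, d * dt)
        n = max(n + births - deaths, 0)
        traj.append(n)
    return np.array(traj)

# Coarse-grained continuum limit: dN/dt = b*N*(1 - N/K) - d*N,
# i.e. logistic growth with r = b - d and K_eff = K*(b - d)/b.
t = np.arange(steps) * dt
r, Keff = b - d, K * (b - d) / b
N = Keff / (1.0 + (Keff / n0 - 1.0) * np.exp(-r * t))

mean_traj = np.mean([discrete_run() for _ in range(50)], axis=0)
print(f"final population, discrete mean: {mean_traj[-1]:.1f}, continuum: {N[-1]:.1f}")
```

Averaging many stochastic runs approaches the continuum curve; analysing the continuum equation (here, for the existence of a positive steady state) is what yields the kind of parameter-space insight described above.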

Abstract:

A recent paper published in this journal considers the numerical integration of the shallow-water equations using the leapfrog time-stepping scheme [Sun Wen-Yih, Sun Oliver MT. A modified leapfrog scheme for shallow water equations. Comput Fluids 2011;52:69–72]. The authors of that paper propose using the time-averaged height in the numerical calculation of the pressure-gradient force, instead of the instantaneous height at the middle time step. The authors show that this modification doubles the maximum Courant number (and hence the maximum time step) at which the integrations are stable, doubling the computational efficiency. Unfortunately, the pressure-averaging technique proposed by the authors is not original. It was devised and published by Shuman [5] and has been widely used in the atmosphere and ocean modelling community for over 40 years.
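
For reference, here is the pressure-averaging modification written out for the linearised 1-D shallow-water equations (a sketch of the standard Shuman-type averaging; the notation is illustrative and the cited papers' discretisations may differ in detail):

```latex
% Leapfrog with pressure averaging: replace the instantaneous height
% h^n in the pressure-gradient term by its time average.
\begin{align*}
  \frac{h^{n+1} - h^{n-1}}{2\Delta t} &= -H\,\partial_x u^{n}, \\
  \frac{u^{n+1} - u^{n-1}}{2\Delta t} &= -g\,\partial_x \bar{h}^{\,n},
  \qquad
  \bar{h}^{\,n} = \tfrac{1}{4}\left(h^{n+1} + 2h^{n} + h^{n-1}\right).
\end{align*}
```

The continuity equation is stepped first so that the new height is available for the averaged pressure-gradient term; the averaging treats the gravity-wave terms semi-implicitly, which is what doubles the stable Courant number.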

Abstract:

In situ high-resolution aircraft measurements of cloud microphysical properties were made in coordination with ground-based remote sensing observations (radar and lidar) of a line of small cumulus clouds, as part of the Aerosol Properties, PRocesses And InfluenceS on the Earth's climate (APPRAISE) project. A narrow but extensive line (~100 km long) of shallow convective clouds over the southern UK was studied. Cloud-top temperatures were observed to be higher than −8 °C, but the clouds were seen to consist of supercooled droplets and varying concentrations of ice particles. No ice particles were observed falling into the cloud tops from above. Current parameterisations of ice nuclei (IN) numbers predict that too few particles will be active as ice nuclei to account for the ice particle concentrations at the observed near-cloud-top temperatures (−7.5 °C). The role of mineral dust particles, at concentrations consistent with those observed near the surface, acting as high-temperature IN is considered important in this case. It was found that very high concentrations of ice particles (up to 100 L⁻¹) could be produced by secondary ice particle production, provided the observed small amount of primary ice (about 0.01 L⁻¹) was present to initiate it. This emphasises the need to understand primary ice formation in slightly supercooled clouds. It is shown using simple calculations that the Hallett-Mossop (HM) process is the likely source of the secondary ice. Model simulations of the case study were performed with the Aerosol Cloud and Precipitation Interactions Model (ACPIM). These parcel-model investigations confirmed the HM process to be a very important mechanism for producing the observed high ice concentrations. A key step in generating the high concentrations was collision and coalescence of rain drops, which, once formed, fell rapidly through the cloud, collecting ice particles that caused them to freeze and form large rimed particles almost instantly. The broadening of the droplet size distribution by collision-coalescence was therefore a vital step in this process, as it was required to generate the large number of ice crystals observed in the time available. Simulations were also performed with the WRF (Weather Research and Forecasting) model. The results showed that, while HM does act to increase the mass and number concentration of ice particles in these simulations, it was not found to be critical for the formation of precipitation. However, the WRF simulations produced a cloud top that was too cold, and this, combined with the assumption of continual replenishment of ice nuclei removed by ice crystal formation, resulted in too many ice crystals forming by primary nucleation compared with the observations and parcel modelling.
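
The plausibility of the quoted amplification can be checked with a back-of-envelope calculation of the kind alluded to above (only the two concentrations come from the abstract; the time available is an assumption):

```python
import math

# From ~0.01 per litre of primary ice to ~100 per litre of secondary
# ice is a 10^4 amplification. If splinter production behaves like a
# multiplication chain (each ice particle rimes and spawns splinters
# that grow and rime in turn), the required doubling time follows.
n_primary = 0.01      # L^-1, from the abstract
n_observed = 100.0    # L^-1, from the abstract
t_available = 30.0    # minutes in-cloud; assumed, not from the paper

doublings = math.log2(n_observed / n_primary)   # ~13.3
print(f"required doubling time: {t_available / doublings:.1f} min")
```

A doubling time of roughly two minutes is broadly consistent with laboratory Hallett-Mossop splinter yields once large riming drops are present, which is why the collision-coalescence step is so important.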

Abstract:

At the end of the 20th century, we can look back on a spectacular development of numerical weather prediction, which has continued practically uninterrupted since the middle of the century. High-resolution predictions for more than a week ahead for any part of the globe are now routinely produced, and anyone with an Internet connection can access many of these forecasts for anywhere in the world. Extended predictions for several seasons ahead are also being made; the latest El Niño event in 1997/1998 is an example of such a successful prediction. This great achievement is due to a number of factors, including progress in computational technology and the establishment of global observing systems, combined with a systematic research program whose overall strategy has been to build comprehensive prediction systems for climate and weather. In this article, I will discuss the different evolutionary steps in this development and the way new scientific ideas have contributed to the efficient exploitation of computing power and of observations from new types of observing systems. Weather prediction is not an exact science, owing to unavoidable errors in initial data and in the models. Quantifying the reliability of a forecast is therefore essential, and probably more so the longer the forecast range. Ensemble prediction is thus a new and important concept in weather and climate prediction, which I believe will become a routine aspect of weather prediction in the future. The boundary between weather and climate prediction is becoming more and more diffuse, and in the final part of this article I will outline how I think development may proceed in the future.

Abstract:

This paper will introduce the Baltex research programme and summarize the associated numerical modelling work undertaken during the last five years. The research has broadly managed to clarify the main mechanisms determining the water and energy cycle in the Baltic region, such as the strong dependence upon the large-scale atmospheric circulation. It has further been shown that the Baltic Sea has a positive water balance, albeit with large interannual variations. The focus of the modelling studies has been the use of limited-area models at ultra-high resolution, driven by boundary conditions from global models or from reanalysis data sets. The programme has further initiated a comprehensive integration of atmospheric, land-surface and hydrological modelling, incorporating snow, sea ice and special lake models. Other aspects of the programme include process studies, such as the role of deep convection, air-sea interaction and the handling of land-surface moisture. Studies have also been undertaken to investigate synoptic and sub-synoptic events over the Baltic region, thus exploring the role of transient weather systems in the hydrological cycle. A special aspect has been the strong interest and commitment of the meteorological and hydrological services, because of the potentially large societal interest in operational applications of the research. As a result of this interest, special attention has been paid to data-assimilation aspects and to the use of new types of data, such as SSM/I, GPS measurements and digital radar. A series of high-resolution data sets is being produced; one of these, a 1/6-degree daily precipitation climatology for the years 1996-1999, is a unique contribution. The specific research achievements presented in this volume of Meteorology and Atmospheric Physics are the result of a cooperative venture between 11 European research groups supported under the EU Framework programmes.

Abstract:

The Maritime Continent archipelago, situated on the equator at 95-165°E, has the strongest land-based precipitation on Earth. The latent heat release associated with the rainfall affects the atmospheric circulation throughout the tropics and into the extra-tropics. The greatest source of variability in precipitation is the diurnal cycle. The archipelago is within the convective region of the Madden-Julian Oscillation (MJO), which provides the greatest variability on intra-seasonal time scales: large-scale (∼10⁷ km²) active and suppressed convective envelopes propagate slowly (∼5 m s⁻¹) eastwards between the Indian and Pacific Oceans. High-resolution satellite data show that a strong diurnal cycle is triggered to the east of the advancing MJO envelope, leading the active MJO by one-eighth of an MJO cycle (∼6 days). Where the diurnal cycle is strong, its modulation accounts for 81% of the variability in MJO precipitation. Over land this determines the structure of the diagnosed MJO. This is consistent with the equatorial wave dynamics in existing theories of MJO propagation. The MJO also affects the speed of gravity waves propagating offshore from the Maritime Continent islands. This is largely consistent with changes in static stability during the MJO cycle. The MJO and its interaction with the diurnal cycle are investigated in HiGEM, a high-resolution coupled model. Unlike many models, HiGEM represents the MJO well, with eastward-propagating variability on intra-seasonal time scales at the correct zonal wavenumber, although the inter-tropical convergence zone's precipitation peaks strongly at the wrong time, interrupting the MJO's spatial structure. However, the modelled diurnal cycle is too weak and its phase is too early over land. The modulation of the diurnal amplitude by the MJO is also too weak, and accounts for only 51% of the variability in MJO precipitation. Implications for forecasting and possible causes of the model errors are discussed, and further modelling studies are proposed.
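
The stated link between offshore gravity-wave speed and static stability can be seen from the standard modal phase-speed relation for hydrostatic internal gravity waves (background only; not necessarily the diagnostic used in the study):

```latex
% Horizontal phase speed of vertical mode n in a layer of depth H
% with buoyancy frequency N:
\[
  c_n \approx \frac{N H}{n \pi},
  \qquad
  N^2 = \frac{g}{\theta}\,\frac{\partial \theta}{\partial z},
\]
% so a change in static stability (N) over the MJO cycle changes the
% offshore propagation speed c_n proportionately.
```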

Abstract:

This paper investigates the challenge of representing structural differences in river channel cross-section geometry for regional to global scale river hydraulic models and the effect this can have on simulations of wave dynamics. Classically, channel geometry is defined using data, yet at larger scales the necessary information and model structures do not exist to take this approach. We therefore propose a fundamentally different approach where the structural uncertainty in channel geometry is represented using a simple parameterization, which could then be estimated through calibration or data assimilation. This paper first outlines the development of a computationally efficient numerical scheme to represent generalised channel shapes using a single parameter, which is then validated using a simple straight channel test case and shown to predict wetted perimeter to within 2% for the channels tested. An application to the River Severn, UK is also presented, along with an analysis of model sensitivity to channel shape, depth and friction. The channel shape parameter was shown to improve model simulations of river level, particularly for more physically plausible channel roughness and depth parameter ranges. Calibrating channel Manning’s coefficient in a rectangular channel provided similar water level simulation accuracy in terms of Nash-Sutcliffe efficiency to a model where friction and shape or depth were calibrated. However, the calibrated Manning coefficient in the rectangular channel model was ~2/3 greater than the likely physically realistic value for this reach and this erroneously slowed wave propagation times through the reach by several hours. Therefore, for large scale models applied in data sparse areas, calibrating channel depth and/or shape may be preferable to assuming a rectangular geometry and calibrating friction alone.
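
As an illustration of a single-parameter shape family (not necessarily the paper's scheme), a power-law cross-section lets one exponent morph the channel from triangular towards rectangular; the sketch below, with invented dimensions, computes the wetted perimeter numerically:

```python
import numpy as np

# Power-law channel: half-width y(z) = (W/2) * (z/D)**(1/s), where
# s = 1 gives straight (triangular) banks and s -> infinity tends to
# a rectangle. W (bankfull width) and D (depth) are hypothetical.
W, D = 30.0, 3.0   # metres

def wetted_perimeter(h, s, n=4000):
    """Polyline arc length of both banks up to water depth h."""
    z = h * np.linspace(0.0, 1.0, n) ** 2      # cluster points near the bed
    y = 0.5 * W * (z / D) ** (1.0 / s)
    return 2.0 * float(np.hypot(np.diff(y), np.diff(z)).sum())

for s in (1.0, 2.0, 10.0):
    print(f"s = {s:4.1f}: bankfull wetted perimeter = {wetted_perimeter(D, s):5.1f} m")
# s = 10 approaches the rectangular value W + 2*D = 36 m.
```

With the shape exponent treated as a free parameter, it can be calibrated or assimilated in the same spirit as the parameterization the paper proposes.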

Abstract:

This paper reports the current state of work to simplify our previous model-based methods for visual tracking of vehicles for use in a real-time system intended to provide continuous monitoring and classification of traffic from a fixed camera on a busy multi-lane motorway. The main constraints of the system design were: (i) all low level processing to be carried out by low-cost auxiliary hardware, (ii) all 3-D reasoning to be carried out automatically off-line, at set-up time. The system developed uses three main stages: (i) pose and model hypothesis using 1-D templates, (ii) hypothesis tracking, and (iii) hypothesis verification, using 2-D templates. Stages (i) & (iii) have radically different computing performance and computational costs, and need to be carefully balanced for efficiency. Together, they provide an effective way to locate, track and classify vehicles.
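
A toy skeleton of the three-stage design (matching is stubbed with random scores; the classes, thresholds and vehicle models are placeholders, not the authors' implementation):

```python
import random
from dataclasses import dataclass
from typing import List

@dataclass
class Hypothesis:
    model: str            # vehicle class, e.g. "saloon" or "lorry"
    pose: float           # toy 1-D state: position along the lane
    velocity: float = 1.0

def match_1d(frame, model) -> float:
    return random.random()    # stand-in for cheap 1-D template matching

def verify_2d(frame, hyp) -> float:
    return random.random()    # stand-in for expensive 2-D verification

def process_frame(frame, tracks: List[Hypothesis]) -> List[Hypothesis]:
    # (i) pose/model hypothesis generation: cheap, run densely
    for model in ("saloon", "lorry"):
        if match_1d(frame, model) > 0.8:
            tracks.append(Hypothesis(model, pose=0.0))
    # (ii) hypothesis tracking: propagate each live hypothesis
    for h in tracks:
        h.pose += h.velocity          # constant-velocity prediction
    # (iii) hypothesis verification: expensive, run once per hypothesis
    return [h for h in tracks if verify_2d(frame, h) > 0.5]

tracks: List[Hypothesis] = []
for frame in range(100):              # frames stubbed as integers
    tracks = process_frame(frame, tracks)
print(len(tracks), "hypotheses alive after 100 frames")
```

The efficiency argument in the abstract corresponds to keeping stage (i) dense and cheap while rationing the costly stage (iii) to surviving hypotheses.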

Abstract:

FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.

Abstract:

During deglaciation of the North American Laurentide Ice Sheet, large proglacial lakes developed where proglacial drainage was impeded by the ice margin. For some of these lakes, it is known that subsequent drainage had an abrupt and widespread impact on North Atlantic Ocean circulation and climate, but less is known about the impact that the lakes exerted on ice sheet dynamics. This paper reports palaeogeographic reconstructions of the evolution of proglacial lakes during deglaciation across the northwestern Canadian Shield, covering an area in excess of 1,000,000 km² as the ice sheet retreated some 600 km. The interactions between proglacial lakes and ice sheet flow are explored, with a particular emphasis on whether the disposition of lakes may have influenced the location of the Dubawnt Lake ice stream. This ice stream falls outside the existing paradigm for ice streams in the Laurentide Ice Sheet because it did not operate over fine-grained till or lie in a topographic trough. Ice margin positions and a digital elevation model are utilised to predict the geometry and depth of proglacial lakes impounded at the margin at 30-km increments during deglaciation. The palaeogeographic reconstructions match well with previous independent estimates of lake coverage inferred from field evidence, and results suggest that the development of a deep lake in the Thelon drainage basin may have been influential in initiating the ice stream by inducing calving, drawing down ice and triggering fast ice flow. This is the only location along this sector of the ice sheet where large (>3000 km²), deep (~120 m) lakes were impounded for a significant length of time, and it exactly matches the location of the ice stream. It is speculated that the commencement of calving at the ice sheet margin may have taken the system beyond a threshold and was sufficient to trigger rapid motion, but that once fast flow was initiated, calving processes and losses were insignificant to the functioning of the ice stream. It is thus concluded that proglacial lakes are likely to have been an important control on ice sheet dynamics during deglaciation of the Laurentide Ice Sheet.
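
The lake-prediction step can be sketched as a flood fill over the DEM outward from the ice margin, up to a spill elevation; the toy below uses an invented 6x6 DEM and margin, not the study's data or exact method:

```python
import numpy as np
from collections import deque

# Invented DEM (elevations in metres) and ice mask: ice occupies the
# two eastern columns and impounds drainage towards it.
dem = np.array([
    [120, 115, 110, 108, 130, 140],
    [118, 105, 100,  98, 128, 138],
    [116, 104,  95,  90, 125, 136],
    [114, 103,  96,  92, 124, 135],
    [115, 107, 101,  99, 127, 137],
    [121, 117, 112, 109, 131, 141]], dtype=float)
ice = np.zeros_like(dem, dtype=bool)
ice[:, 4:] = True

def proglacial_lake(dem, ice, spill):
    """Ice-free cells below `spill` that connect to the ice margin."""
    filled = np.zeros_like(ice)
    queue = deque(zip(*np.where(ice)))          # seed from the margin
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < dem.shape[0] and 0 <= nj < dem.shape[1]
                    and not ice[ni, nj] and not filled[ni, nj]
                    and dem[ni, nj] < spill):
                filled[ni, nj] = True
                queue.append((ni, nj))
    return filled

spill = 110.0                                   # assumed spillway elevation
mask = proglacial_lake(dem, ice, spill)
depth = np.where(mask, spill - dem, 0.0)
print(f"lake cells: {mask.sum()}, max depth: {depth.max():.0f} m")
```

Repeating such a fill for each reconstructed margin position (the paper uses 30-km increments) yields the evolving lake geometry and depth.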

Abstract:

Systems Engineering often involves computer modelling the behaviour of proposed systems and their components. Where a component is human, fallibility must be modelled by a stochastic agent. The identification of a model of decision-making over quantifiable options is investigated using the game-domain of Chess. Bayesian methods are used to infer the distribution of players’ skill levels from the moves they play rather than from their competitive results. The approach is used on large sets of games by players across a broad FIDE Elo range, and is in principle applicable to any scenario where high-value decisions are being made under pressure.
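
A minimal sketch of the inference idea (illustrative throughout: the discrete grid of candidate Elo levels and the softmax-over-centipawn-loss move likelihood are assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
skills = np.linspace(1200, 2800, 9)      # candidate Elo levels

def move_probs(cp_losses, s):
    """P(each candidate move | skill s): stronger players choose
    low-loss moves more sharply (assumed softmax likelihood)."""
    beta = (s - 1000.0) / 400.0          # assumed skill-to-sharpness map
    w = np.exp(-beta * cp_losses / 100.0)
    return w / w.sum()

posterior = np.full(len(skills), 1.0 / len(skills))   # uniform prior
true_skill = 2000.0
for _ in range(40):                                   # 40 observed moves
    cp = np.sort(rng.exponential(80.0, 5))            # fake engine evals
    cp -= cp[0]                                       # best move = 0 loss
    observed = rng.choice(5, p=move_probs(cp, true_skill))
    likelihood = np.array([move_probs(cp, s)[observed] for s in skills])
    posterior *= likelihood
    posterior /= posterior.sum()

print("posterior mode:", skills[np.argmax(posterior)])
```

Updating the posterior move by move is what lets skill be estimated from the quality of decisions rather than from competitive results.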

Abstract:

More data will be produced in the next five years than in the entire history of humankind, a digital deluge that marks the beginning of the Century of Information. Through a year-long consultation with UK researchers, a coherent strategy has been developed which will nurture Century-of-Information Research (CIR); it crystallises the ideas developed by the e-Science Directors' Forum Strategy Working Group. This paper is an abridged version of their latest report, which can be found at http://wikis.nesc.ac.uk/escienvoy/Century_of_Information_Research_Strategy and which also records the consultation process and the affiliations of the authors. This document is derived from a paper presented at the Oxford e-Research Conference 2008 and takes into account suggestions made in the ensuing panel discussion. The goals of the CIR Strategy are to facilitate the growth of UK research and innovation that is data- and computationally intensive, and to develop a new culture of 'digital-systems judgement' that will equip research communities, businesses, government and society as a whole with the skills essential to compete and prosper in the Century of Information. The CIR Strategy identifies a national requirement for a balanced programme of coordination, research, infrastructure, translational investment and education to empower UK researchers, industry, government and society. The Strategy is designed to deliver an environment which meets the needs of UK researchers so that they can respond with agility to challenges, create knowledge and skills, and lead new kinds of research. It is a call to action for those engaged in research, those providing data and computational facilities, those governing research and those shaping education policies. The ultimate aim is to help researchers strengthen the international competitiveness of the UK research base and increase its contribution to the economy. The objectives of the Strategy are to better enable UK researchers across all disciplines to contribute world-leading fundamental research; to accelerate the translation of research into practice; and to develop improved capabilities, facilities and context for research and innovation. It envisages a culture that is better able to grasp the opportunities provided by the growing wealth of digital information. Computing has, of course, already become a fundamental tool in all research disciplines. The UK e-Science programme (2001-06), since emulated internationally, pioneered the invention and use of new research methods, and a new wave of innovations in digital-information technologies which have enabled them. The Strategy argues that the UK must now harness and leverage its own, plus the now global, investment in digital-information technology in order to spread the benefits as widely as possible in research, education, industry and government. Implementing the Strategy would deliver the computational infrastructure and its benefits as envisaged in the Science & Innovation Investment Framework 2004-2014 (July 2004), and in the reports developing those proposals.
To achieve this, the Strategy proposes the following actions: support the continuous innovation of digital-information research methods; provide easily used, pervasive and sustained e-Infrastructure for all research; enlarge the productive research community which exploits the new methods efficiently; generate capacity, propagate knowledge and develop skills via new curricula; and develop coordination mechanisms to improve the opportunities for interdisciplinary research and to make digital-infrastructure provision more cost-effective. To gain the best value for money, strategic coordination is required across a broad spectrum of stakeholders. A coherent strategy is essential in order to establish and sustain the UK as an international leader in well-curated national data assets and computational infrastructure, which is expertly used to shape policy, support decisions, empower researchers and roll out the results to the wider benefit of society. The value of data as a foundation for wellbeing and a sustainable society must be appreciated; national resources must be more wisely directed to the collection, curation, discovery, widening of access, analysis and exploitation of these data. Every researcher must be able to draw on skills, tools and computational resources to develop insights, test hypotheses and translate inventions into productive use, or to extract knowledge in support of governmental decision-making. This foundation, plus the skills developed, will launch significant advances in research, in business, in professional practice and in government, with many consequent benefits for UK citizens. The Strategy presented here addresses these complex and interlocking requirements.

Abstract:

Analytic functions have been obtained to represent the potential energy surfaces of C3 and HCN in their ground electronic states. These functions closely reproduce the available data on the energy, geometry, and force constants in all stable conformations, as well as data on the various dissociation products and ab initio calculations of the energy at other conformations. The forms of the resulting surfaces are portrayed in various ways and discussed briefly.
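
Surfaces of this kind are typically written as a many-body expansion; the generic triatomic form is shown below for orientation (this is the shape of the approach, not the fitted functions themselves):

```latex
% Many-body expansion for a triatomic system with internuclear
% distances R_1, R_2, R_3:
\[
  V(R_1, R_2, R_3) = \sum_{i=1}^{3} V^{(2)}_i(R_i) + V^{(3)}(R_1, R_2, R_3),
\]
% where each two-body term has the correct diatomic limit (e.g. a
% Morse or extended-Rydberg form) and the three-body term is a
% polynomial damped to zero at all dissociation limits.
```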

Abstract:

Consumers' attitudes to trust and risk are key issues in food safety research and attention needs to be focused on clearly defining a framework for analysing consumer behaviour in these terms. In order to achieve this, a detailed review of the recent literature surrounding risk, trust and the relationship between the two must be conducted. This paper aims to collate the current social sciences literature in the fields of food safety, trust and risk. It provides an insight into the economic and other modelling procedures available to measure consumers' attitudes to risk and trust in food safety and specifically notes the need for future research to concentrate on examining risk and trust as inter-related variables rather than two distinct, mutually exclusive concepts. A framework is proposed which it is hoped will assist in devising more effective research to support risk communication to consumers.

Abstract:

Three new metal-organic polymeric complexes, [Fe(N₃)₂(bpp)₂] (1), [Fe(N₃)₂(bpe)] (2), and [Fe(N₃)₂(phen)] (3) [bpp = 1,3-bis(4-pyridyl)propane, bpe = 1,2-bis(4-pyridyl)ethane, phen = 1,10-phenanthroline], have been synthesized and characterized by single-crystal X-ray diffraction studies and low-temperature magnetic measurements in the range 300-2 K. Complexes 1 and 2 crystallize in the monoclinic system, space group C2/c, with the following cell parameters: a = 19.355(4) Å, b = 7.076(2) Å, c = 22.549(4) Å, β = 119.50(3)°, Z = 4, and a = 10.007(14) Å, b = 13.789(18) Å, c = 10.377(14) Å, β = 103.50(1)°, Z = 4, respectively. Complex 3 crystallizes in the triclinic system, space group P1̄, with a = 7.155(12) Å, b = 10.066(14) Å, c = 10.508(14) Å, α = 109.57(1)°, β = 104.57(1)°, γ = 105.10(1)°, and Z = 2. All three coordination polymers exhibit octahedral Fe(II) nodes. The structural determination of 1 reveals a parallel interpenetrated structure of 2D layers of (4,4) topology, formed by Fe(II) nodes linked through bpp ligands, while mono-coordinated azide anions are pendant from the corrugated sheet. Complex 2 has a 2D arrangement constructed from 1D double end-to-end azide-bridged iron(II) chains interconnected through bpe ligands. Complex 3 shows a polymeric arrangement in which the metal ions are interlinked through pairs of end-on (EO) and end-to-end (EE) azide ligands, exhibiting a zigzag arrangement of metals (Fe-Fe-Fe angle of 111.18°) and intermetallic separations of 3.347 Å (through the EO azide) and 5.229 Å (EE azide). Variable-temperature magnetic susceptibility data suggest that there is no magnetic interaction between the metal centers in 1, whereas in 2 there is an antiferromagnetic interaction through the end-to-end azide bridge. Complex 3 shows ferro- as well as antiferromagnetic interactions between the metal centers, generated through the alternating end-on and end-to-end azide bridges. Complex 1 has been modeled using the D parameter (considering distorted octahedral Fe(II) geometry and with any possible J value equal to zero), and complex 2 has been modeled as a one-dimensional system with classical and/or quantum spin, for which two full diagonalization approaches were used: without and with the D parameter, considering the important distortions of the Fe(II) ions. For complex 3, the alternating coupling model precludes a mathematical solution for modeling with classical spins; with quantum spin, the modeling has been performed as in 2.
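
For orientation, the classical-spin treatment of a uniform chain such as 2 commonly starts from Fisher's expression for a classical Heisenberg chain, scaled to spin S (the authors' fits additionally include the zero-field-splitting parameter D, which is not shown here):

```latex
\[
  \chi_{\mathrm{chain}}
    = \frac{N g^{2} \mu_{B}^{2} S(S+1)}{3 k_{B} T}\,\frac{1+u}{1-u},
  \qquad
  u = \coth\!\left[\frac{J S(S+1)}{k_{B} T}\right]
      - \frac{k_{B} T}{J S(S+1)},
\]
% with S = 2 for high-spin Fe(II) and J < 0 for the antiferromagnetic
% end-to-end azide exchange reported for complex 2.
```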