951 results for Distributed space-time code


Relevance: 40.00%

Abstract:

Three comprehensive one-dimensional simulators were used on the same PC to simulate the dynamics of different electrophoretic configurations, including two migrating hybrid boundaries, an isotachophoretic boundary and the zone electrophoretic separation of ten monovalent anions. Two simulators, SIMUL5 and GENTRANS, use a uniform grid, while SPRESSO uses a dynamic adaptive grid. The simulators differ in the way components are handled: SIMUL5 and SPRESSO feature one equation for all components, whereas GENTRANS uses separate modules for the different types of monovalent components, a module for multivalent components and a module for proteins. The code for multivalent components executes more slowly than the code for monovalent components. Furthermore, with SIMUL5 the computational time interval becomes smaller when it is operated with a reduced calculation space featuring moving borders, whereas GENTRANS offers optional data smoothing (removal of negative concentrations), which can avoid numerical oscillations and speed up a simulation. SPRESSO, with its adaptive grid, can simulate the same configurations with fewer grid points and is therefore faster in some, but not all, cases. The data reveal that simulations featuring a large number of monovalent components, distributed such that a dense mesh is required throughout a large proportion of the column, execute fastest with GENTRANS.
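The data-smoothing step attributed to GENTRANS above is easy to illustrate. The Python sketch below performs one explicit time step of a deliberately simplified single-component electromigration-diffusion update on a uniform grid, then clips small negative concentrations to zero. The function name, the periodic boundaries and the single-component, constant-field setting are all assumptions for illustration; the real simulators couple every component through local electroneutrality and a spatially varying electric field.

import numpy as np

def advance_uniform_grid(c, mu, E, D, dx, dt):
    # One explicit step of simplified 1-D electromigration-diffusion for a
    # single component on a uniform grid (illustrative sketch only: assumes
    # periodic boundaries, a constant field E and a positive migration
    # velocity for the upwind scheme).
    v = mu * E                                        # migration velocity
    advection = -v * (c - np.roll(c, 1)) / dx         # first-order upwind
    diffusion = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx ** 2
    c_new = c + dt * (advection + diffusion)
    # "Data smoothing" in the sense described for GENTRANS: remove the small
    # negative concentrations that numerical oscillations can produce.
    c_new[c_new < 0.0] = 0.0
    return c_new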

Relevance: 40.00%

Abstract:

Numerous time series studies have provided strong evidence of an association between increased levels of ambient air pollution and increased levels of hospital admissions, typically at 0, 1, or 2 days after an air pollution episode. An important research aim is to extend existing statistical models so that a more detailed understanding of the time course of hospitalization after exposure to air pollution can be obtained. Information about this time course, combined with prior knowledge about biological mechanisms, could provide the basis for hypotheses concerning the mechanism by which air pollution causes disease. Previous studies have identified two important methodological questions: (1) How can we estimate the shape of the distributed lag between increased air pollution exposure and increased mortality or morbidity? and (2) How should we estimate the cumulative population health risk from short-term exposure to air pollution? Distributed lag models are appropriate tools for estimating air pollution health effects that may be spread over several days. However, estimation for distributed lag models in air pollution and health applications is hampered by the substantial noise in the data and the inherently weak signal that is the target of investigation. We introduce a hierarchical Bayesian distributed lag model that incorporates prior information about the time course of pollution effects and combines information across multiple locations. The model has a connection to penalized spline smoothing using a special type of penalty matrix. We apply the model to estimating the distributed lag between exposure to particulate matter air pollution and hospitalization for cardiovascular and respiratory disease using data from a large United States air pollution and hospitalization database of Medicare enrollees in 94 counties covering the years 1999-2002.
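As a concrete, much-simplified illustration of the distributed lag idea, the sketch below builds a matrix of lagged exposures and estimates the lag coefficients by penalized least squares, with a second-difference penalty standing in for the penalized-spline smoothing the abstract connects the model to. This is not the paper's hierarchical Bayesian model: the function names, the Gaussian (rather than Poisson count) likelihood and the single-location setting are assumptions made for brevity.

import numpy as np

def lag_matrix(x, max_lag):
    # Columns are the exposure series lagged by 0..max_lag days.
    n = len(x)
    X = np.full((n, max_lag + 1), np.nan)
    for lag in range(max_lag + 1):
        X[lag:, lag] = x[:n - lag]
    return X

def fit_distributed_lag(y, x, max_lag, lam):
    # Penalized least-squares estimate of the distributed lag curve. The
    # second-difference penalty shrinks the lag coefficients toward a smooth
    # shape, a crude stand-in for penalized spline smoothing.
    X = lag_matrix(x, max_lag)
    keep = ~np.isnan(X).any(axis=1)      # drop rows with incomplete lag history
    X, y = X[keep], np.asarray(y)[keep]
    D = np.diff(np.eye(max_lag + 1), n=2, axis=0)   # 2nd-difference penalty
    beta = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)
    return beta                          # beta[l]: effect of exposure l days earlier

In this toy setting, the cumulative short-term risk asked about in question (2) would correspond to summing the estimated coefficients over the lag window, i.e. beta.sum().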

Relevance: 40.00%

Abstract:

Although we have amassed extensive catalogues of signalling network components, our understanding of the spatiotemporal control of emergent network structures has lagged behind. Dynamic behaviour is starting to be explored throughout the genome, but analysis of spatial behaviours is still confined to individual proteins. The challenge is to reveal how cells integrate temporal and spatial information to determine specific biological functions. Key findings are the discovery of molecular signalling machines such as Ras nanoclusters, spatial activity gradients and flexible network circuitries that involve transcriptional feedback. They reveal design principles of spatiotemporal organization that are crucial for network function and cell fate decisions.

Relevance: 40.00%

Abstract:

Key performance features of a miniature laser ablation time-of-flight mass spectrometer designed for in situ investigations of the chemical composition of planetary surfaces are presented. This mass spectrometer is well suited for elemental and isotopic analysis of raw solid materials with high sensitivity and high spatial resolution. In this study, ultraviolet laser radiation with irradiances suitable for ablation (< 1 GW/cm2) is used to achieve stable ion formation and low sample consumption. In comparison to our previous laser ablation studies at infrared wavelengths, several improvements to the experimental setup have been made, which allow accurate control over the experimental conditions and good reproducibility of measurements. Current performance evaluations indicate significant improvements to several instrumental figures of merit. The mass scale is calibrated with a mass accuracy (Δm/m) of about 100 ppm, and a typical mass resolution (m/Δm) of ~600 is achieved at the lead mass peaks. At lower laser irradiances the mass resolution is better, about (m/Δm) ~900 for lead, and is limited by the laser pulse duration of 3 ns. The effective dynamic range of the instrument was enhanced from about 6 decades, determined in a previous study, to more than 8 decades at present. Current studies show high sensitivity in the detection of both metallic and non-metallic elements; abundances down to tens of ppb can be measured together with their isotopic patterns. Because the experimental parameters, e.g. laser characteristics, ion-optical settings and sample position, are under strict computer control, measurements can be performed with high reproducibility. Copyright © 2012 John Wiley & Sons, Ltd.
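The link between the quoted resolution limit and the 3 ns pulse duration follows from the time-of-flight principle: since m is proportional to t², Δm/m = 2Δt/t, so m/Δm = t/(2Δt). The short Python sketch below evaluates this relation; the flight time used is a hypothetical value chosen only to reproduce the ~900 figure quoted above, not an instrument specification.

def tof_mass_resolution(flight_time_s, peak_width_s):
    # Time-of-flight resolving power: m ~ t**2, so dm/m = 2*dt/t and
    # m/dm = t / (2*dt).
    return flight_time_s / (2.0 * peak_width_s)

# Illustrative numbers only: a 3 ns pulse-width-limited peak and an assumed
# 5.4 us flight time for lead ions give m/dm ~ 900.
print(tof_mass_resolution(5.4e-6, 3.0e-9))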

Relevance: 40.00%

Abstract:

Digital technologies have profoundly changed not only the ways we create, distribute, access, use and re-use information but also many of the governance structures we had in place. Overall, "older" institutions at all governance levels have grappled with, and often failed to master, the multi-faceted and multi-directional issues of the Internet. Regulatory entrepreneurs have yet to discover and fully mobilize the potential of digital technologies, both as an influential factor affecting the regulability of the environment and as a regulatory tool in their own right. At the same time, we have seen a deterioration of some public spaces and a lower prioritization of public objectives when strong private commercial interests are at play, most tellingly in the field of copyright. Less tangibly, private ordering has taken hold and has captured, through contracts, spaces previously regulated by public law. Code embedded in technology often replaces law. Non-state action has in general proliferated and put serious pressure on conventional state-centered, command-and-control models. Under the conditions of this "messy" governance, the provision of key public goods, such as freedom of information, has been made difficult or is indeed jeopardized.

The grand question is how we can navigate this complex multi-actor, multi-issue space and secure the attainment of fundamental public interest objectives. This is also the question that Ian Brown and Chris Marsden seek to answer with their book, Regulating Code, recently published in the "Information Revolution and Global Politics" series of MIT Press. This book review critically assesses the bold effort by Brown and Marsden.