873 results for Multi-scale modeling
Abstract:
Objectives: To assess the potential source of variation that surgeons may add to patient outcomes in a clinical trial of surgical procedures. Methods: Two large (combined n = 1380) parallel multicentre randomized surgical trials, involving 43 surgeons, were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, i.e. a sparse binary outcome variable. A mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted using the method of maximum likelihood in SAS®. Results: There were many convergence problems. These were resolved using a variety of approaches, including: treating all effects as fixed for the initial model building; modelling the variance of a parameter on a logarithmic scale; and centring continuous covariates. The initial model building indicated no significant 'type of operation' by surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial, and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was no difference between surgeons. The statistical test may have lacked sufficient power, and the variance estimates were small with large standard errors, indicating that their precision may be questionable.
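As a rough illustration of the hierarchical structure described above (patients nested within surgeons, a sparse binary outcome, a logit link with a surgeon random intercept), the following minimal Python sketch simulates such data. All parameter values are invented for illustration; this is not the trial's fitted model.

```python
import math
import random

def p_complication(eta):
    """Inverse logit link: P(major complication) from the linear predictor."""
    return 1.0 / (1.0 + math.exp(-eta))

# Illustrative parameters, not the trial's estimates:
beta0 = math.log(0.10 / 0.90)   # baseline ~10% complication rate, on the logit scale
beta_op = 0.5                   # hypothetical effect of operation type
sigma_u = 0.3                   # hypothetical between-surgeon SD (random intercept)

random.seed(1)
surgeon_effects = [random.gauss(0.0, sigma_u) for _ in range(43)]

# Patients nested within surgeons: simulate the hierarchical structure
outcomes = []
for u_j in surgeon_effects:
    for _ in range(32):  # ~1380 patients spread across 43 surgeons
        op = random.choice([0, 1])
        p = p_complication(beta0 + beta_op * op + u_j)
        outcomes.append(1 if random.random() < p else 0)

rate = sum(outcomes) / len(outcomes)   # overall major-complication rate
```

Fitting the random effect back from such sparse binary data is exactly where the convergence problems described in the abstract arise.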
Abstract:
The large-scale fading of wireless mobile communication links is modelled assuming that the motion of the mobile receiver is described by a dynamic linear system in state-space form. The geometric relations involved in the attenuation and multi-path propagation of the electric field are described by a static non-linear mapping. A Wiener-system subspace identification algorithm, in conjunction with polynomial regression, is used to identify a model from time-domain estimates of the field intensity, assuming a multitude of emitters and an antenna array at the receiver end.
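A Wiener system, as identified in this abstract, is a linear dynamic block followed by a static nonlinearity. The toy sketch below shows the structure with a one-state linear system and a cubic output map; the coefficients are illustrative, not an identified channel model.

```python
# Wiener system: linear state-space dynamics, then a static nonlinearity.
# x[k+1] = a*x[k] + b*u[k];  z[k] = c*x[k];  y[k] = f(z[k])
# All coefficients are invented for illustration.
a, b, c = 0.9, 0.5, 1.0

def nonlinearity(z):
    """Static polynomial map, standing in for the field-intensity mapping."""
    return z + 0.2 * z ** 3

x = 0.0
ys = []
for u in [1.0, 0.0, 0.0, 1.0]:   # toy input sequence
    x = a * x + b * u            # linear dynamics
    ys.append(nonlinearity(c * x))
```

Subspace identification estimates (a, b, c) from input-output data, while the polynomial regression recovers the static map.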
Abstract:
In the 1990s the Message Passing Interface Forum defined MPI bindings for Fortran, C, and C++. With the success of MPI, these relatively conservative languages have continued to dominate in the parallel computing community. There are compelling arguments in favour of more modern languages like Java, including portability, better runtime error checking, modularity, and multi-threading. But these arguments have not converted many HPC programmers, perhaps due to the scarcity of full-scale scientific Java codes and the lack of evidence for performance competitive with C or Fortran. This paper tries to redress this situation by porting two scientific applications to Java. Both applications are parallelized using our thread-safe Java messaging system, MPJ Express. The first application is the Gadget-2 code, a massively parallel structure formation code for cosmological simulations. The second application uses the finite-difference time-domain (FDTD) method for simulations in the area of computational electromagnetics. We evaluate and compare the performance of the Java and C versions of these two scientific applications, and demonstrate that the Java codes can achieve performance comparable with legacy applications written in conventional HPC languages. Copyright © 2009 John Wiley & Sons, Ltd.
Abstract:
Recent research in multi-agent systems incorporates fault tolerance concepts, but does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to complete the task successfully. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator, and by implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
Abstract:
We present a novel kinetic multi-layer model that explicitly resolves mass transport and chemical reaction at the surface and in the bulk of aerosol particles (KM-SUB). The model is based on the PRA framework of gas-particle interactions (Pöschl-Rudich-Ammann, 2007), and it includes reversible adsorption, surface reactions and surface-bulk exchange as well as bulk diffusion and reaction. Unlike earlier models, KM-SUB does not require simplifying assumptions about steady-state conditions and radial mixing. The temporal evolution and concentration profiles of volatile and non-volatile species at the gas-particle interface and in the particle bulk can be modeled along with surface concentrations and gas uptake coefficients. In this study we explore and exemplify the effects of bulk diffusion on the rate of reactive gas uptake for a simple reference system, the ozonolysis of oleic acid particles, in comparison to experimental data and earlier model studies. We demonstrate how KM-SUB can be used to interpret and analyze experimental data from laboratory studies, and how the results can be extrapolated to atmospheric conditions. In particular, we show how interfacial and bulk transport, i.e., surface accommodation, bulk accommodation and bulk diffusion, influence the kinetics of the chemical reaction. Sensitivity studies suggest that in fine air particulate matter oleic acid and compounds with similar reactivity against ozone (carbon-carbon double bonds) can reach chemical lifetimes of many hours only if they are embedded in a (semi-)solid matrix with very low diffusion coefficients (< 10⁻¹⁰ cm² s⁻¹). Depending on the complexity of the investigated system, unlimited numbers of volatile and non-volatile species and chemical reactions can be flexibly added and treated with KM-SUB.
We propose and intend to pursue the application of KM-SUB as a basis for the development of a detailed master mechanism of aerosol chemistry as well as for the derivation of simplified but realistic parameterizations for large-scale atmospheric and climate models.
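The headline result of this abstract, that a low bulk diffusivity can stretch chemical lifetimes to hours, can be illustrated with a rough reacto-diffusive scaling argument. This is a hedged back-of-envelope sketch, not KM-SUB itself; the shell-lifetime formula is a standard scaling estimate, and all numbers (loss rate, particle radius) are illustrative assumptions.

```python
import math

def chemical_lifetime(D, k1, radius):
    """Rough reacto-diffusive estimate of a particle-phase reactant's lifetime.

    D      : bulk diffusion coefficient of the reactant (cm^2 s^-1)
    k1     : pseudo-first-order loss rate where the oxidant is present (s^-1)
    radius : particle radius (cm)

    If the reacto-diffusive length l = sqrt(D/k1) exceeds the radius, the
    particle behaves as well mixed and the lifetime is ~1/k1. Otherwise the
    reaction is confined to a surface shell of thickness ~l, and slow
    replenishment by diffusion stretches the lifetime by roughly r/(3l).
    """
    l = math.sqrt(D / k1)
    if l >= radius:
        return 1.0 / k1
    return (radius / (3.0 * l)) / k1

# Illustrative numbers, not fitted values: ozone-driven loss at 1e-2 s^-1
# in a 100 nm particle.
k1, r = 1e-2, 1e-5
tau_liquid = chemical_lifetime(1e-6, k1, r)    # liquid-like: well mixed
tau_solid = chemical_lifetime(1e-16, k1, r)    # (semi-)solid matrix
```

Under these assumed numbers the well-mixed lifetime is minutes, while the low-diffusivity case reaches the hour scale, in line with the qualitative conclusion above.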
Abstract:
The consistency of precipitation variability estimated from multiple satellite-based observing systems is assessed. There is generally good agreement between the TRMM TMI, SSM/I, GPCP and AMSR-E datasets for the inter-annual variability of precipitation since 1997, but the HOAPS dataset appears to overestimate the magnitude of variability. Over the tropical ocean the TRMM 3B42 dataset produces unrealistic variability. Based upon deseasonalised GPCP data for the period 1998-2008, the sensitivity of global mean precipitation (P) to surface temperature (T) changes (dP/dT) is about 6%/K, although a smaller sensitivity of 3.6%/K is found using monthly GPCP data over the longer period 1989-2008. Over the tropical oceans dP/dT ranges from 10-30%/K depending upon time period and dataset, while over tropical land dP/dT is -8 to -11%/K for the 1998-2008 period. Analyzing the response of the tropical ocean precipitation intensity distribution to changes in T, we find that precipitation over the wetter areas shows a strong positive response to T of around 20%/K. The response over the drier tropical regimes is less coherent and varies between datasets, but responses over tropical land show significant negative relationships on interannual time-scales. The spatial and temporal resolutions of the datasets strongly influence the precipitation responses over the tropical oceans and help explain some of the discrepancy between datasets. Consistency between datasets is found to increase on averaging from daily to 5-day time-scales and considering a 1° (or coarser) spatial resolution. Defining the wet and dry tropical ocean regimes by the 60th percentile of P intensity, the 5-day average, 1° TMI data exhibit a coherent drying of the dry regime at a rate of -20%/K, while the wet regime becomes wetter at a similar rate with warming.
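The dP/dT diagnostic reported above amounts to regressing fractional precipitation anomalies on temperature anomalies and reading the slope in %/K. A minimal sketch on synthetic data generated with a known 6%/K sensitivity (not GPCP or TMI values):

```python
import random

random.seed(0)
true_sens = 6.0  # assumed sensitivity, %/K
T = [random.uniform(-0.5, 0.5) for _ in range(120)]      # monthly T anomalies (K)
P = [true_sens * t + random.gauss(0.0, 0.5) for t in T]  # P anomalies (%), with noise

# Ordinary least-squares slope: dP/dT in %/K
n = len(T)
mT = sum(T) / n
mP = sum(P) / n
slope = (sum((t - mT) * (p - mP) for t, p in zip(T, P))
         / sum((t - mT) ** 2 for t in T))
```

On real data the anomalies would first be deseasonalised and converted to percentages of the climatological mean, as the abstract describes.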
Abstract:
Monitoring Earth's terrestrial water conditions is critically important to many hydrological applications such as global food production; assessing water resources sustainability; and flood, drought, and climate change prediction. These needs have motivated the development of pilot monitoring and prediction systems for terrestrial hydrologic and vegetative states, but to date only at rather coarse spatial resolutions (∼10–100 km) over continental to global domains. Adequately addressing critical water cycle science questions and applications requires systems that are implemented globally at much higher resolutions, on the order of 1 km, resolutions referred to as hyperresolution in the context of global land surface models. This opinion paper sets forth the needs and benefits for a system that would monitor and predict the Earth's terrestrial water, energy, and biogeochemical cycles. We discuss six major challenges in developing a system: improved representation of surface‐subsurface interactions due to fine‐scale topography and vegetation; improved representation of land‐atmospheric interactions and resulting spatial information on soil moisture and evapotranspiration; inclusion of water quality as part of the biogeochemical cycle; representation of human impacts from water management; utilizing massively parallel computer systems and recent computational advances in solving hyperresolution models that will have up to 10⁹ unknowns; and developing the required in situ and remote sensing global data sets. We deem the development of a global hyperresolution model for monitoring the terrestrial water, energy, and biogeochemical cycles a “grand challenge” to the community, and we call upon the international hydrologic community and the hydrological science support infrastructure to endorse the effort.
Abstract:
In order to harness the computational capacity of dissociated cultured neuronal networks, it is necessary to understand neuronal dynamics and connectivity on a mesoscopic scale. To this end, this paper uncovers dynamic spatiotemporal patterns emerging from electrically stimulated neuronal cultures, using hidden Markov models (HMMs) to characterize multi-channel spike trains as a progression of patterns of underlying states of neuronal activity. Experimentation aimed at the optimal choice of parameters for such models is essential, and the results are reported in detail. Results derived from ensemble neuronal data revealed highly repeatable patterns of state transitions on the order of milliseconds in response to probing stimuli.
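The HMM approach treats binned multi-channel spike activity as transitions between hidden states. A toy sketch with two hypothetical states ("quiet", "burst"), hand-set probabilities, and Viterbi decoding of a low/high population spike-count sequence; all numbers are invented for illustration:

```python
import math

states = ["quiet", "burst"]
start = {"quiet": 0.8, "burst": 0.2}
trans = {"quiet": {"quiet": 0.9, "burst": 0.1},
         "burst": {"quiet": 0.2, "burst": 0.8}}
# Emission model: probability of a low/high population spike count per bin
emit = {"quiet": {"low": 0.9, "high": 0.1},
        "burst": {"low": 0.3, "high": 0.7}}

def viterbi(obs):
    """Most likely hidden-state path for a sequence of 'low'/'high' symbols."""
    V = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda r: V[-1][r] + math.log(trans[r][s]))
            col[s] = V[-1][prev] + math.log(trans[prev][s]) + math.log(emit[s][o])
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

obs = ["low", "low", "high", "high", "high", "low"]
path = viterbi(obs)
```

In practice the state count, emission model, and binning are exactly the parameters whose choice the paper reports on; here they are fixed by hand.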
Abstract:
We consider two weakly coupled systems and adopt a perturbative approach based on the Ruelle response theory to study their interaction. We propose a systematic way of parameterizing the effect of the coupling as a function of only the variables of a system of interest. Our focus is on describing the impacts of the coupling on the long term statistics rather than on the finite-time behavior. By direct calculation, we find that, at first order, the coupling can be surrogated by adding a deterministic perturbation to the autonomous dynamics of the system of interest. At second order, there are additionally two separate and very different contributions. One is a term taking into account the second-order contributions of the fluctuations in the coupling, which can be parameterized as a stochastic forcing with given spectral properties. The other one is a memory term, coupling the system of interest to its previous history, through the correlations of the second system. If these correlations are known, this effect can be implemented as a perturbation with memory on the single system. In order to treat this case, we present an extension to Ruelle's response theory able to deal with integral operators. We discuss our results in the context of other methods previously proposed for disentangling the dynamics of two coupled systems. We emphasize that our results do not rely on assuming a time scale separation, and, if such a separation exists, can be used equally well to study the statistics of the slow variables and that of the fast variables. By recursively applying the technique proposed here, we can treat the general case of multi-level systems.
Abstract:
Northern Hemisphere tropical cyclone (TC) activity is investigated in multiyear global climate simulations with the ECMWF Integrated Forecast System (IFS) at 10-km resolution forced by the observed records of sea surface temperature and sea ice. The results are compared to analogous simulations with the 16-, 39-, and 125-km versions of the model as well as observations. In the North Atlantic, mean TC frequency in the 10-km model is comparable to the observed frequency, whereas it is too low in the other versions. While spatial distributions of the genesis and track densities improve systematically with increasing resolution, the 10-km model displays qualitatively more realistic simulation of the track density in the western subtropical North Atlantic. In the North Pacific, the TC count tends to be too high in the west and too low in the east for all resolutions. These model errors appear to be associated with the errors in the large-scale environmental conditions that are fairly similar in this region for all model versions. The largest benefits of the 10-km simulation are the dramatically more accurate representation of the TC intensity distribution and the structure of the most intense storms. The model can generate a supertyphoon with a maximum surface wind speed of 68.4 m s⁻¹. The life cycle of an intense TC comprises intensity fluctuations that occur in apparent connection with the variations of the eyewall/rainband structure. These findings suggest that a hydrostatic model with cumulus parameterization and of high enough resolution could be efficiently used to simulate the TC intensity response (and the associated structural changes) to future climate change.
Abstract:
This paper proposes and demonstrates an approach, Skilloscopy, to the assessment of decision makers. In an increasingly sophisticated, connected and information-rich world, decision making is becoming both more important and more difficult. At the same time, modelling decision-making on computers is becoming more feasible and of interest, partly because the information-input to those decisions is increasingly on record. The aims of Skilloscopy are to rate and rank decision makers in a domain relative to each other: the aims do not include an analysis of why a decision is wrong or suboptimal, nor the modelling of the underlying cognitive process of making the decisions. In the proposed method, a decision-maker is characterised by a probability distribution of their competence in choosing among quantifiable alternatives. This probability distribution is derived by classic Bayesian inference from a combination of prior belief and the evidence of the decisions. Thus, decision-makers' skills may be better compared, rated and ranked. The proposed method is applied and evaluated in the game domain of Chess. A large set of games by players across a broad range of the World Chess Federation (FIDE) Elo ratings has been used to infer the distribution of players' ratings directly from the moves they play rather than from game outcomes. Demonstration applications address questions frequently asked by the Chess community regarding the stability of the Elo rating scale, the comparison of players of different eras and/or leagues, and controversial incidents possibly involving fraud. The method of Skilloscopy may be applied in any decision domain where the value of the decision-options can be quantified.
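The core Bayesian step can be sketched as follows: competence is a prior distribution over discrete skill levels, updated by the likelihood of each observed decision. The skill levels and the likelihood model below are invented for illustration; the paper's actual model over quantified move values is richer.

```python
# Each skill level is interpreted here as P(choosing the best option | skill).
skills = [0.2, 0.4, 0.6, 0.8]
prior = {s: 0.25 for s in skills}    # uniform prior belief

def update(belief, chose_best):
    """One Bayesian update from a single observed decision."""
    post = {}
    for s, p in belief.items():
        likelihood = s if chose_best else (1.0 - s)
        post[s] = p * likelihood
    z = sum(post.values())           # normalising constant
    return {s: p / z for s, p in post.items()}

belief = dict(prior)
# Hypothetical evidence: whether each observed decision was the best option
for chose_best in [True, True, False, True, True]:
    belief = update(belief, chose_best)

best_skill = max(belief, key=belief.get)   # posterior mode
```

Ranking decision-makers then reduces to comparing their posterior distributions rather than single point estimates.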
Abstract:
In this paper, we propose a novel online modeling algorithm for nonlinear and nonstationary systems using a radial basis function (RBF) neural network with a fixed number of hidden nodes. Each of the RBF basis functions has a tunable center vector and an adjustable diagonal covariance matrix. A multi-innovation recursive least squares (MRLS) algorithm is applied to update the weights of the RBF network online, while the modeling performance is monitored. When the modeling residual of the RBF network becomes large in spite of the weight adaptation, a node identified as insignificant is replaced with a new node, whose tunable center vector and diagonal covariance matrix are optimized using the quantum particle swarm optimization (QPSO) algorithm. The major contribution is to combine MRLS weight adaptation and QPSO node structure optimization in an innovative way, so that the model can track the local characteristics of the nonstationary system well with a very sparse model. Simulation results show that the proposed algorithm has significantly better performance than existing approaches.
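The model class in this abstract, an RBF network with a fixed number of hidden nodes, each with a tunable centre and a diagonal covariance, can be sketched as follows. The MRLS weight update and QPSO node replacement are omitted, and all values are illustrative.

```python
import math

def rbf_output(x, centers, inv_vars, weights):
    """y(x) = sum_k w_k * exp(-0.5 * sum_d (x_d - c_kd)^2 / var_kd).

    Diagonal covariance per node, stored as inverse variances per dimension.
    """
    y = 0.0
    for c, iv, w in zip(centers, inv_vars, weights):
        d2 = sum(ivd * (xd - cd) ** 2 for xd, cd, ivd in zip(x, c, iv))
        y += w * math.exp(-0.5 * d2)
    return y

# Two hidden nodes in 2-D, hand-set for illustration:
centers = [[0.0, 0.0], [1.0, 1.0]]
inv_vars = [[1.0, 1.0], [2.0, 0.5]]   # 1/variance per dimension (diagonal cov.)
weights = [1.0, -0.5]

y0 = rbf_output([0.0, 0.0], centers, inv_vars, weights)
```

In the online algorithm, `weights` would be updated by MRLS at every step, while `centers` and `inv_vars` of an insignificant node would be re-optimized by QPSO when the residual grows.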
Identifying time lags in the restoration of grassland butterfly communities: a multi-site assessment
Abstract:
Although grasslands are crucial habitats for European butterflies, large-scale declines in quality and area have devastated many species. Grassland restoration can contribute to the recovery of butterfly populations, although there is a paucity of information on the long-term effects of management. Using eight UK data sets (9-21 years), we investigate changes in restoration success for (1) arable reversion sites, where grassland was established on bare ground using seed mixtures, and (2) grassland enhancement sites, where degraded grasslands were restored by scrub removal followed by the re-instigation of cutting/grazing. We also assessed the importance of individual butterfly traits and ecological characteristics in determining colonisation times. Consistent increases in restoration success over time were seen for arable reversion sites, with the most rapid rates of increase seen over the first 10 years. For grassland enhancement sites there were no consistent increases in restoration success over time. Butterfly colonisation times were fastest for species with widespread host plants or whose host plants established well during restoration. Low-mobility butterfly species took longer to colonise. We show that arable reversion is an effective tool for the management of butterfly communities. We suggest that, as restoration takes time to achieve, its use as a mitigation tool against future environmental change (i.e. by decreasing isolation in fragmented landscapes) needs to take such time lags into account.
Abstract:
Whilst hydrological systems can show resilience to short-term streamflow deficiencies during within-year droughts, prolonged deficits during multi-year droughts are a significant threat to water resources security in Europe. This study uses a threshold-based objective classification of regional hydrological drought to qualitatively examine the characteristics, spatio-temporal evolution and synoptic climatic drivers of multi-year drought events in 1962–64, 1975–76 and 1995–97, on a European scale but with particular focus on the UK. Whilst all three events are multi-year, pan-European phenomena, their development and causes can be contrasted. The critical factor in explaining the unprecedented severity of the 1975–76 event is the consecutive occurrence of winter and summer drought. In contrast, 1962–64 was a succession of dry winters, mitigated by quiescent summers, whilst 1995–97 lacked spatial coherence and was interrupted by wet interludes. Synoptic climatic conditions vary within and between multi-year droughts, suggesting that regional factors modulate the climate signal in streamflow drought occurrence. Despite being underpinned by qualitatively similar climatic conditions and commonalities in evolution and characteristics, each of the three droughts has a unique spatio-temporal signature. An improved understanding of the spatio-temporal evolution and characteristics of multi-year droughts has much to contribute to monitoring and forecasting capability, and to improved mitigation strategies.
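The threshold-based classification underlying this study can be sketched as follows: months with streamflow below a percentile threshold count as deficit months, and runs of consecutive deficit months form drought events. The series and the crude 40th-percentile threshold below are illustrative, not the study's data or its exact regional definition.

```python
# Toy monthly streamflow series (arbitrary units), invented for illustration
flows = [5.0, 4.8, 1.2, 1.0, 0.9, 4.5, 5.2, 1.1, 0.8, 0.7, 0.9, 5.0]

# Crude 40th-percentile threshold from the ranked series
ranked = sorted(flows)
threshold = ranked[int(0.4 * len(ranked))]

# Group consecutive below-threshold months into drought events
events, run = [], 0
for q in flows:
    if q < threshold:
        run += 1
    elif run:
        events.append(run)   # event ends; record its duration in months
        run = 0
if run:
    events.append(run)
```

A multi-year drought in this framing is simply an event whose duration spans successive years; the study's regional classification additionally requires spatial coherence across catchments.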