31 results for random number generation
Abstract:
Objective: To assess the effectiveness of absolute risk, relative risk, and number needed to harm (NNH) formats for communicating medicine side effects, with and without the provision of baseline risk information. Methods: A two-factor, risk increase format (relative, absolute and NNH) x baseline (present/absent) between-participants design was used. A sample of 268 women was given a scenario about an increase in side-effect risk with third-generation oral contraceptives and was required to answer written questions assessing their understanding, satisfaction, and likelihood of continuing to take the drug. Results: Provision of baseline information significantly improved risk estimates and increased satisfaction, although the estimates were still considerably higher than the actual risk. No differences between presentation formats were observed when baseline information was presented. Without baseline information, absolute risk led to the most accurate performance. Conclusion: The findings support the importance of informing people about the baseline level of risk when describing risk increases. In contrast, they offer no support for using number needed to harm. Practice implications: Health professionals should provide baseline risk information when presenting information about risk increases or decreases. More research is needed before numbers needed to harm (or treat) should be given to members of the general population. (c) 2005 Elsevier Ireland Ltd. All rights reserved.
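The three formats compared in the study are simple transformations of the same two numbers. A minimal sketch with hypothetical illustrative figures (the abstract does not give the actual risks used in the scenario):

```python
# Hypothetical illustrative numbers: baseline annual risk of the side effect
# and the elevated risk with the drug. These are NOT the figures from the
# study's scenario, which the abstract does not state.
baseline = 15 / 100_000
exposed = 30 / 100_000

relative_risk = exposed / baseline         # "the risk doubles" framing
absolute_increase = exposed - baseline     # extra cases per person per year
nnh = 1 / absolute_increase                # number needed to harm (NNH)

print(f"relative risk:         {relative_risk:.1f}x")
print(f"absolute increase:     {absolute_increase * 100_000:.0f} per 100,000")
print(f"number needed to harm: {nnh:.0f}")
```

The same underlying change reads very differently in the three formats ("risk doubles" vs "15 extra cases per 100,000" vs "one extra harmed per ~6,700 treated"), which is exactly the framing effect the study probes.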
Abstract:
The sampling of a given solid angle is a fundamental operation in realistic image synthesis, where the rendering equation describing light propagation in closed domains is solved. Monte Carlo methods for solving the rendering equation sample the solid angle subtended by the unit hemisphere or unit sphere in order to perform the numerical integration. In this work we consider the problem of generating uniformly distributed random samples over the hemisphere and sphere. Our aim is to construct and study a parallel sampling scheme for the hemisphere and sphere. First we apply a symmetry property to partition the hemisphere and sphere: the solid angle subtended by a hemisphere is divided into a number of equal sub-domains, each representing the solid angle subtended by an orthogonal spherical triangle with fixed vertices and computable parameters. We then introduce two new algorithms for sampling orthogonal spherical triangles. Both algorithms are based on a transformation of the unit square and, similarly to Arvo's algorithm for sampling an arbitrary spherical triangle, they accommodate stratified sampling. We derive the necessary transformations for the algorithms. The first sampling algorithm generates a sample by mapping the unit square onto the orthogonal spherical triangle. The second algorithm directly computes the unit radius vector of a sampling point inside the orthogonal spherical triangle. The sampling of the total hemisphere and sphere is performed in parallel for all sub-domains simultaneously, using the symmetry property of the partitioning. The applicability of the corresponding parallel sampling scheme to Monte Carlo and quasi-Monte Carlo solution of the rendering equation is discussed.
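The building block both algorithms share, mapping the unit square to a uniform distribution of directions, can be illustrated with the standard inverse-CDF mapping for the whole hemisphere. This is a sketch of that classical construction, not the orthogonal-spherical-triangle algorithms of the abstract:

```python
import math
import random

def sample_hemisphere(u, v):
    """Map (u, v) in the unit square to a direction uniformly distributed
    over the unit hemisphere (z >= 0). Standard inverse-CDF construction:
    taking z = cos(theta) uniform in [0, 1] gives uniform density in
    solid angle."""
    z = u
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * v
    return (r * math.cos(phi), r * math.sin(phi), z)

def stratified_samples(n, rng=None):
    """One jittered sample per cell of an n x n grid over the unit square;
    the square-to-hemisphere mapping carries the stratification over to
    the hemisphere, as in the abstract's algorithms."""
    rng = rng or random.Random(0)
    return [sample_hemisphere((i + rng.random()) / n, (j + rng.random()) / n)
            for i in range(n) for j in range(n)]
```

Because the map is area-preserving up to the uniform solid-angle density, stratifying the square stratifies the hemisphere, which is the property the abstract highlights for its triangle samplers.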
Abstract:
An important goal in computational neuroanatomy is the complete and accurate simulation of neuronal morphology. We are developing computational tools to model three-dimensional dendritic structures based on sets of stochastic rules. This paper reports an extensive, quantitative anatomical characterization of simulated motoneurons and Purkinje cells. We used several local and global algorithms implemented in the L-Neuron and ArborVitae programs to generate sets of virtual neurons. Parameter statistics for all algorithms were measured from experimental data, thus providing a compact and consistent description of these morphological classes. We compared the emergent anatomical features of each group of virtual neurons with those of the experimental database in order to gain insight into the plausibility of the model assumptions, potential improvements to the algorithms, and non-trivial relations among morphological parameters. Algorithms mainly based on local constraints (e.g., branch diameter) were successful in reproducing many morphological properties of both motoneurons and Purkinje cells (e.g., total length, asymmetry, number of bifurcations). The addition of global constraints (e.g., trophic factors) improved the angle-dependent emergent characteristics (average Euclidean distance from the soma to the dendritic terminations, dendritic spread). Virtual neurons systematically displayed greater anatomical variability than real cells, suggesting the need for additional constraints in the models. For several emergent anatomical properties, a specific algorithm reproduced the experimental statistics better than the others. However, relative performances were often reversed for different anatomical properties and/or morphological classes. Thus, combining the strengths of alternative generative models could lead to comprehensive algorithms for the complete and accurate simulation of dendritic morphology.
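As a caricature of the local-rule approach, a branch can be grown by repeatedly choosing among terminate / bifurcate / elongate based only on its current diameter. This is a toy sketch with invented parameter values, not the L-Neuron or ArborVitae algorithms:

```python
import random

def grow_dendrite(diameter, rng, min_diam=0.3, taper=0.9, branch_p=0.35):
    """Toy local-rule generator: a branch thinner than min_diam terminates;
    otherwise it bifurcates with probability branch_p into two thinner
    daughters, or elongates while tapering. Returns the number of
    bifurcations in the resulting subtree. (All parameter values here are
    invented for illustration.)"""
    if diameter < min_diam:
        return 0                                    # terminal tip
    if rng.random() < branch_p:
        daughter = diameter * taper * 0.7           # daughters are thinner
        return 1 + grow_dendrite(daughter, rng) + grow_dendrite(daughter, rng)
    return grow_dendrite(diameter * taper, rng)     # elongate and taper

# Repeated runs give a distribution of bifurcation counts -- the kind of
# emergent statistic the paper compares against experimental data.
rng = random.Random(1)
counts = [grow_dendrite(2.0, rng) for _ in range(200)]
```

The paper's point is that such emergent distributions (bifurcation counts, total length, asymmetry) are what get validated against real anatomy, not the rules themselves.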
Abstract:
The crystallization behaviour of a series of random copolymers of varying chemical composition is reported. For polymers containing a high proportion of alternating rigid aromatic units and flexible spacers, conventional liquid crystalline and crystalline phase behaviour is observed. The introduction of a substantial fraction of a second shorter rigid unit containing side-chains leads to a broad endotherm in the d.s.c. scan covering some 150°C. Subsequent isothermal crystallization at any point within the broad endotherm leads to the generation of sharp endotherms at temperatures just above the recrystallization temperature. We attribute this behaviour to the crystallization of clusters of molecules containing similar random sequences. Such crystals are non-periodic along the chain direction.
Abstract:
We perturb the SC, BCC, and FCC crystal structures with a spatial Gaussian noise whose adimensional strength is controlled by the parameter a, and analyze the topological and metrical properties of the resulting Voronoi Tessellations (VT). The topological properties of the VT of the SC and FCC crystals are unstable with respect to the introduction of noise, because the corresponding polyhedra are geometrically degenerate, whereas the tessellation of the BCC crystal is topologically stable even against noise of small but finite intensity. For weak noise, the mean area of the VT of the perturbed BCC and FCC crystals increases quadratically with a. In the case of perturbed SC crystals, there is an optimal amount of noise that minimizes the mean area of the cells. Already for moderate noise (a>0.5), the properties of the three perturbed VT are indistinguishable, and for intense noise (a>2), results converge to the Poisson-VT limit. Notably, 2-parameter gamma distributions are an excellent model for the empirical distributions of all considered properties. The VT of the perturbed BCC and FCC structures are local maxima for the isoperimetric quotient, which measures the degree of sphericity of the cells, among space-filling VT. In the BCC case, this suggests a weaker form of the recently disproved Kelvin conjecture. Due to the fluctuations of the shape of the cells, anomalous scalings with exponents >3/2 are observed between the areas and the volumes of the cells, and, except for the FCC case, also for a->0. In the Poisson-VT limit, the exponent is about 1.67. As the number of faces is positively correlated with the sphericity of the cells, the anomalous scaling is heavily reduced when we perform power-law fits separately on cells with a specific number of faces.
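The isoperimetric quotient used as the sphericity measure has a simple closed form; a quick sketch of the definition with two sanity checks (the unit cube is the Voronoi cell of the unperturbed SC lattice):

```python
import math

def isoperimetric_quotient(surface_area, volume):
    """Q = 36 * pi * V**2 / S**3: equal to 1 for a sphere and strictly
    less than 1 for every other cell shape."""
    return 36.0 * math.pi * volume ** 2 / surface_area ** 3

# A sphere attains the maximum Q = 1 ...
r = 1.0
sphere_q = isoperimetric_quotient(4.0 * math.pi * r ** 2,
                                  4.0 / 3.0 * math.pi * r ** 3)

# ... while the unit cube (S = 6, V = 1), the Voronoi cell of the
# unperturbed SC lattice, is markedly less spherical: Q = pi / 6.
cube_q = isoperimetric_quotient(6.0, 1.0)
```

The abstract's claim is that among space-filling tessellations the perturbed BCC and FCC cells are local maxima of this quotient, i.e. locally "as spherical as possible".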
Abstract:
Various studies investigating the future impacts of integrating high levels of renewable energy make use of historical meteorological (met) station data to produce estimates of future generation. Hourly means of 10 m horizontal wind speed are extrapolated to a standard turbine hub height using the wind profile power law or log law and used to simulate the hypothetical power output of a turbine at that location; repeating this procedure for many viable locations can produce a picture of future electricity generation. However, the estimate of hub height wind speed is dependent on the choice of the wind shear exponent alpha or the roughness length z0, and requires a number of simplifying assumptions. This paper investigates the sensitivity of generation output to this estimation using a case study of a met station in West Freugh, Scotland. The results show that the wind shear exponent is a particularly sensitive parameter, the choice of which can lead to significant variation in estimated hub height wind speed and hence in the estimated future generation potential of a region.
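The extrapolation step described above is a one-line formula. A minimal sketch of the power law and of the sensitivity to alpha (the hub height, reference speed, and the alpha values below are illustrative assumptions, not figures from the paper):

```python
def hub_height_speed(v_ref, h_hub, h_ref=10.0, alpha=1.0 / 7.0):
    """Wind profile power law: v(h) = v_ref * (h / h_ref) ** alpha.
    The default alpha = 1/7 is a common neutral-stability rule of thumb."""
    return v_ref * (h_hub / h_ref) ** alpha

# Sensitivity to alpha: the same 5 m/s reading at 10 m gives noticeably
# different speeds at an assumed 80 m hub height, and the roughly cubic
# speed-to-power relationship amplifies the spread further.
for alpha in (0.10, 0.143, 0.20):
    v = hub_height_speed(5.0, 80.0, alpha=alpha)
    print(f"alpha={alpha:.3f}: v_hub={v:.2f} m/s, power ratio ~{(v / 5.0) ** 3:.2f}")
```

Because power scales roughly with the cube of speed, even a modest spread in alpha translates into a large spread in estimated generation, which is the sensitivity the paper quantifies.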
Abstract:
Traditional vaccines, such as inactivated or live attenuated vaccines, are gradually giving way to more biochemically defined vaccines that are most often based on a recombinant antigen known to possess neutralizing epitopes. Such vaccines can offer improvements in speed, safety and manufacturing process, but an inevitable consequence of their high degree of purification is that immunogenicity is reduced through the lack of the innate triggering molecules present in more complex preparations. Targeting recombinant vaccines to antigen-presenting cells (APCs) such as dendritic cells, however, can improve immunogenicity by ensuring that antigen processing is as efficient as possible. Immune complexes, one of a number of routes of APC targeting, are mimicked by a recombinant approach, crystallizable fragment (Fc) fusion proteins, in which the target immunogen is linked directly to an antibody effector domain capable of interacting with receptors (FcR) on the APC cell surface. A number of virus Fc fusion proteins have been expressed in insect cells using the baculovirus expression system and shown to be efficiently produced and purified. Their use for immunization alongside non-Fc-tagged equivalents shows that they are powerfully immunogenic in the absence of added adjuvant and that immune stimulation is the result of the Fc-FcR interaction.
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change occur mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenge of developing the capability to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computer capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have sufficient scientific workforce to develop and maintain the software and data analysis infrastructure.
Such facilities will make it possible to investigate what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level. Current limitations in computing power have placed severe constraints on such investigations, which are now badly needed. These facilities will also provide the world's scientists with the computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure including hardware, software, and data analysis support, and the scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions based on our best knowledge of science and the most advanced technology.
Abstract:
Design summer years representing near-extreme hot summers have been used in the United Kingdom for the evaluation of thermal comfort and overheating risk. The years have been selected from measured weather data essentially representative of an assumed stationary climate. Recent developments have made available ‘morphed’ equivalents of these years, produced by shifting and stretching the measured variables using change factors from the UKCIP02 climate projections. The release of the latest, probabilistic, climate projections of UKCP09, together with the availability of a weather generator that can produce plausible daily or hourly sequences of weather variables, has opened up the opportunity for generating new design summer years which can be used in risk-based decision-making. There are many possible methods for the production of design summer years from UKCP09 output: in this article, the original concept of the design summer year is largely retained, but a number of alternative methodologies for generating the years are explored. An alternative, more robust measure of warmth (weighted cooling degree hours) is also employed. It is demonstrated that the UKCP09 weather generator is capable of producing years for the baseline period which are comparable with those in current use. Four methodologies for the generation of future years are described, and their output is related to the future (deterministic) years that are currently available. It is concluded that, in general, years produced from the UKCP09 projections are warmer than those generated previously. Practical applications: The methodologies described in this article will enable designers who have access to the output of the UKCP09 weather generator (WG) to generate design summer year hourly files tailored to their needs. The files produced will differ according to the methodology selected, in addition to location, emissions scenario and timeslice.
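The "weighted cooling degree hours" measure of warmth can be sketched as follows; the base temperature and the exact weighting used here are illustrative assumptions, since the abstract does not define them:

```python
def weighted_cdh(hourly_temps, base=22.0):
    """Weighted cooling degree hours over a period: each hour above the base
    temperature contributes its exceedance squared, so hotter hours count
    disproportionately. (A 22 degree C base and the squared weighting are
    assumptions for illustration; the article's definition may differ.)"""
    return sum((t - base) ** 2 for t in hourly_temps if t > base)

# Two candidate periods with identical plain degree-hour totals can rank
# differently once extreme hours are up-weighted -- the robustness the
# abstract alludes to.
mild = [23.0] * 9    # nine hours 1 degree above base
spiky = [25.0] * 3   # three hours 3 degrees above base
```

Both periods total 9 plain cooling degree hours, but the weighted measure ranks the spiky one warmer, which is the behaviour wanted when selecting near-extreme summers.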
Abstract:
Meteorological (met) station data is used as the basis for a number of influential studies into the impacts of the variability of renewable resources. Real turbine output data is often not easy to acquire, whereas meteorological wind data, supplied at a standardised height of 10 m, is widely available. This data can be extrapolated to a standard turbine height using the wind profile power law and used to simulate the hypothetical power output of a turbine. Utilising a number of met sites in such a manner can produce a model of future wind generation output. However, the accuracy of this extrapolation is strongly dependent on the choice of the wind shear exponent alpha. This paper investigates the accuracy of the simulated generation output compared to reality using a wind farm in North Rhins, Scotland and a nearby met station in West Freugh. The results show that while a single annual average value for alpha may be selected to accurately represent the long-term energy generation from a simulated wind farm, there are significant differences between simulation and reality on an hourly power generation basis. This has implications for understanding the impact of the variability of renewables on short timescales, particularly for system balancing and for the way that conventional generation may be asked to respond to a high level of variable renewable generation on the grid in the future.
Abstract:
As wind generation increases, system impact studies rely on predictions of future generation and effective representation of wind variability. A well-established approach to investigating the impact of wind variability is to simulate generation using observations from 10 m meteorological mast data. However, there are problems with relying purely on historical wind-speed records or generation histories: mast data is often incomplete, not sited at relevant wind generation sites, and recorded at the wrong altitude above ground (usually 10 m), each of which may distort the generation profile. A possible complementary approach is to use reanalysis data, in which data assimilation techniques are combined with state-of-the-art weather forecast models to produce complete gridded wind time-series over an area. Previous investigations of reanalysis datasets have placed an emphasis on comparing reanalysis to meteorological site records, whereas this paper compares wind generation simulated using reanalysis data directly against historic wind generation records. Importantly, this comparison is conducted using raw reanalysis data (typical resolution ∼50 km), without relying on a computationally expensive “dynamical downscaling” for a particular target region. Although the raw reanalysis data cannot, by nature of its construction, represent the site-specific effects of sub-gridscale topography, it is nevertheless shown to be comparable to or better than the mast-based simulation in the region considered, and it is therefore argued that raw reanalysis data may offer a number of significant advantages as a data source.
Abstract:
Details are given of the development and application of a 2D depth-integrated, conformal boundary-fitted, curvilinear model for predicting the depth-mean velocity field and the spatial concentration distribution in estuarine and coastal waters. A numerical method for conformal mesh generation, based on a boundary integral equation formulation, has been developed. By this method a general polygonal region with curved edges can be mapped onto a regular polygonal region with the same number of horizontal and vertical straight edges, and a multiply connected region can be mapped onto a regular region with the same connectivity. A stretching transformation on the conformally generated mesh has also been used to provide greater detail where it is needed close to the coast, with larger mesh sizes further offshore, thereby minimizing the computing effort whilst maximizing accuracy. The curvilinear hydrodynamic and solute model has been developed based on a robust rectilinear model. The hydrodynamic equations are approximated using the ADI finite difference scheme with a staggered grid, and the solute transport equation is approximated using a modified QUICK scheme. Three numerical examples have been chosen to test the curvilinear model, with an emphasis placed on complex practical applications.
Abstract:
The flow patterns generated by a pulsating jet used to study hydrodynamic modulated voltammetry (HMV) are investigated. It is shown that the pronounced edge effect reported previously is the result of the generation of a vortex ring from the pulsating jet. This vortex behaviour of the pulsating jet system is imaged using a number of visualisation techniques, including a dye system and an electrochemically generated bubble stream. In each case a toroidal vortex ring was observed. Image analysis revealed that the velocity of this motion was of the order of 250 mm s⁻¹, with a corresponding Reynolds number of the order of 1200. This motion, in conjunction with the electrode structure, is used to explain the strong ‘ring and halo’ features detected by electrochemical mapping of the system reported previously.
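The order-of-magnitude arithmetic behind the quoted Reynolds number can be checked directly; the 5 mm characteristic length and the use of water's viscosity are inferred assumptions, not values stated in the abstract:

```python
def reynolds_number(velocity, length, kinematic_viscosity=1.0e-6):
    """Re = U * L / nu, with nu defaulting to water at about 20 C
    (1e-6 m^2/s)."""
    return velocity * length / kinematic_viscosity

# The reported ring velocity of ~250 mm/s together with Re ~ 1200 implies
# a characteristic length scale of roughly 5 mm (an inferred figure).
re = reynolds_number(0.25, 0.005)
print(f"Re = {re:.0f}")
```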
Abstract:
In the present paper we study the approximation of functions with bounded mixed derivatives by sparse tensor product polynomials in positive order tensor product Sobolev spaces. We introduce a new sparse polynomial approximation operator which exhibits optimal convergence properties in L2 and tensorized H1 simultaneously on a standard k-dimensional cube. In the special case k=2 the suggested approximation operator is also optimal in L2 and tensorized H1 (without essential boundary conditions). This allows the construction of an optimal sparse p-version FEM with sparse piecewise continuous polynomial splines, reducing the number of unknowns from the O(p²) needed for the full tensor product computation to the substantially smaller number required for the suggested sparse technique, while preserving the same optimal convergence rate in terms of p. We apply this result to an elliptic differential equation and an elliptic integral equation with random loading and compute the covariances of the solutions with a correspondingly reduced number of unknowns. Several numerical examples support the theoretical estimates.
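The economy of a sparse construction can be illustrated by counting multi-indices. This sketch uses a hyperbolic-cross-style index set as a stand-in, since the abstract does not spell out the paper's exact construction:

```python
import itertools
import math

def full_count(p, k=2):
    """Full tensor product polynomial space: every multi-index whose entries
    are all at most p, i.e. (p + 1) ** k unknowns."""
    return (p + 1) ** k

def sparse_count(p, k=2):
    """Sparse alternative (illustrative hyperbolic-cross rule: keep a
    multi-index when the product of its 1-based levels is at most p + 1).
    For k = 2 this grows like p * log(p) rather than p ** 2."""
    return sum(1 for idx in itertools.product(range(p + 1), repeat=k)
               if math.prod(i + 1 for i in idx) <= p + 1)

# The gap between full and sparse counts widens quickly with p.
for p in (7, 15, 31):
    print(p, full_count(p), sparse_count(p))
```

The point of the paper's operator is that this drastic reduction in unknowns can be achieved while retaining the optimal convergence rate in p.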
Abstract:
Age-related decline in the integrity of mitochondria is an important contributor to the human ageing process. In a number of ageing stem cell populations, this decline in mitochondrial function is due to clonal expansion of individual mitochondrial DNA (mtDNA) point mutations within single cells. However, the dynamics of this process, and when these mtDNA mutations initially occur, are poorly understood. Using human colorectal epithelium as an exemplar tissue with a well-defined stem cell population, we analysed samples from 207 healthy participants aged 17-78 years using a combination of techniques (Random Mutation Capture, Next Generation Sequencing and mitochondrial enzyme histochemistry), and show that: 1) non-pathogenic mtDNA mutations are present from early embryogenesis or may be transmitted through the germline, whereas pathogenic mtDNA mutations are detected in the somatic cells, providing evidence for purifying selection in humans; 2) pathogenic mtDNA mutations are present from early adulthood (<20 years of age), both at low levels and as clonal expansions; 3) low-level mtDNA mutation frequency does not change significantly with age, suggesting that the mtDNA mutation rate does not increase significantly with age; and 4) clonally expanded mtDNA mutations increase dramatically with age. These data confirm that clonal expansion of mtDNA mutations, some of which are generated very early in life, is the major driving force behind the mitochondrial dysfunction associated with ageing of the human colorectal epithelium.