929 results for Inpatients - statistics and numerical data
Abstract:
In image synthesis, reproducing the complex effects of light on translucent materials, such as wax, marble or skin, contributes greatly to the realism of an image. Unfortunately, this additional realism is computationally expensive. Models based on diffusion theory aim to reduce this cost by simulating the physical behaviour of subsurface light transport while imposing smoothness constraints on the incident and outgoing light. An important component of these models is their use in hierarchically evaluating the numerical integral of illumination over an object's surface. This thesis first reviews the current literature on realistic simulation of translucency, before investigating in greater depth the application and extensions of diffusion models in image synthesis. We propose and evaluate a new hierarchical numerical integration technique that uses a novel frequency analysis of the outgoing and incident light to efficiently adapt the sampling rate during integration. We apply this theory to several state-of-the-art diffusion models, offering a potential improvement to their efficiency and accuracy.
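The hierarchical integration idea can be illustrated with a generic adaptive quadrature sketch, in which a local error estimate stands in for the frequency-based refinement criterion described in the abstract (this is an illustrative analogue, not the thesis's published algorithm):

```python
import math

def adaptive_simpson(f, a, b, tol=1e-8):
    """Recursive adaptive quadrature: subdivide wherever a local error
    estimate indicates the integrand varies too quickly for the current
    sampling rate (a crude stand-in for a frequency-based criterion)."""
    def simpson(fa, fm, fb, h):
        return h / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, m - a)
        right = simpson(fm, frm, fb, b - m)
        # If the refined estimate agrees with the coarse one, stop here;
        # otherwise the local "frequency" is too high: keep subdividing.
        if abs(left + right - whole) < 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, tol / 2.0) +
                recurse(m, b, fm, frm, fb, right, tol / 2.0))

    fa, fb, fm = f(a), f(b), f(0.5 * (a + b))
    whole = simpson(fa, fm, fb, b - a)
    return recurse(a, b, fa, fm, fb, whole, tol)
```

Smooth regions are resolved with few samples while rapidly varying regions receive more, which is the efficiency argument made for the hierarchical evaluation.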
Abstract:
The Kineticist's Workbench is a program that simulates chemical reaction mechanisms by predicting, generating, and interpreting numerical data. Prior to simulation, it analyzes a given mechanism to predict that mechanism's behavior; it then simulates the mechanism numerically; and afterward, it interprets and summarizes the data it has generated. In performing these tasks, the Workbench uses a variety of techniques: graph-theoretic algorithms (for analyzing mechanisms), traditional numerical simulation methods, and algorithms that examine simulation results and reinterpret them in qualitative terms. The Workbench thus serves as a prototype for a new class of scientific computational tools---tools that provide symbiotic collaborations between qualitative and quantitative methods.
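The pairing of numerical simulation with qualitative reinterpretation can be sketched for a toy mechanism A -> B (the rate constant, step size, and summary vocabulary below are illustrative, not the Workbench's actual design):

```python
def simulate_mechanism(k=0.5, a0=1.0, dt=0.01, steps=1000):
    """Forward-Euler simulation of the toy mechanism A -> B with rate
    constant k, followed by a qualitative summary of each species' trend."""
    a, b = a0, 0.0
    traj = [(a, b)]
    for _ in range(steps):
        rate = k * a          # first-order kinetics: d[A]/dt = -k[A]
        a -= rate * dt
        b += rate * dt
        traj.append((a, b))
    # Reinterpret the numerical trajectory in qualitative terms.
    summary = {
        "A": "decreasing" if traj[-1][0] < traj[0][0] else "non-decreasing",
        "B": "increasing" if traj[-1][1] > traj[0][1] else "non-increasing",
    }
    return traj, summary
```

The numeric trajectory answers "how much", while the derived summary answers "what happens", mirroring the quantitative/qualitative collaboration the abstract describes.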
Abstract:
Health care providers, purchasers and policy makers need to make informed decisions regarding the provision of cost-effective care. When a new health care intervention is to be compared with the current standard, an economic evaluation alongside an evaluation of health benefits provides useful information for the decision making process. We consider the information on cost-effectiveness which arises from an individual clinical trial comparing the two interventions. Recent methods for conducting a cost-effectiveness analysis for a clinical trial have focused on the net benefit parameter. The net benefit parameter, a function of costs and health benefits, is positive if the new intervention is cost-effective compared with the standard. In this paper we describe frequentist and Bayesian approaches to cost-effectiveness analysis which have been suggested in the literature and apply them to data from a clinical trial comparing laparoscopic surgery with open mesh surgery for the repair of inguinal hernias. We extend the Bayesian model to allow the total cost to be divided into a number of different components. The advantages and disadvantages of the different approaches are discussed. In January 2001, NICE issued guidance on the type of surgery to be used for inguinal hernia repair. We discuss our example in the light of this information. Copyright © 2003 John Wiley & Sons, Ltd.
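The net benefit parameter mentioned above is conventionally defined as NB = λ·ΔE − ΔC, where ΔE is the incremental health benefit, ΔC the incremental cost, and λ the willingness to pay per unit of health benefit. A minimal sketch (the numbers in the usage note are invented for illustration):

```python
def net_benefit(delta_effect, delta_cost, lam):
    """Incremental net (monetary) benefit: NB = lam * dE - dC.
    lam is the decision-maker's willingness to pay per unit of health
    benefit; NB > 0 means the new intervention is cost-effective
    relative to the standard at that threshold."""
    return lam * delta_effect - delta_cost
```

For example, a gain of 0.1 units of benefit at an extra cost of 500 is cost-effective at a threshold of 20,000 per unit, since 20000 × 0.1 − 500 = 1500 > 0.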
Abstract:
Ice cloud representation in general circulation models remains a challenging task, due to the lack of accurate observations and the complexity of microphysical processes. In this article, we evaluate the ice water content (IWC) and ice cloud fraction statistical distributions from the numerical weather prediction models of the European Centre for Medium-Range Weather Forecasts (ECMWF) and the UK Met Office, exploiting the synergy between the CloudSat radar and CALIPSO lidar. Using the last three weeks of July 2006, we analyse the global ice cloud occurrence as a function of temperature and latitude and show that the models capture the main geographical and temperature-dependent distributions, but overestimate the ice cloud occurrence in the Tropics in the temperature range from −60 °C to −20 °C and in the Antarctic for temperatures higher than −20 °C, while they underestimate ice cloud occurrence at very low temperatures. A global statistical comparison of the occurrence of grid-box mean IWC at different temperatures shows that both the mean and the range of IWC increase with increasing temperature. Globally, the models capture most of the IWC variability in the temperature range between −60 °C and −5 °C, and also reproduce the observed latitudinal dependencies in the IWC distribution due to different meteorological regimes. Two versions of the ECMWF model are assessed. The recent operational version with a diagnostic representation of precipitating snow and mixed-phase ice cloud fails to represent the IWC distribution in the −20 °C to 0 °C range, but a new version with prognostic variables for liquid water, ice and snow is much closer to the observed distribution. The comparison of models and observations provides a much-needed analysis of the vertical distribution of IWC across the globe, highlighting the ability of the models to reproduce much of the observed variability as well as the deficiencies where further improvements are required.
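The temperature-binned statistics described above can be sketched as a simple aggregation over (temperature, IWC) samples; the bin width and sample layout here are illustrative, not those of the study:

```python
from collections import defaultdict

def mean_iwc_by_temperature(samples, bin_width=5.0):
    """Group (temperature_C, iwc) samples into temperature bins and
    return the mean IWC per bin -- the kind of statistic compared
    between model grid boxes and radar/lidar retrievals."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for t, iwc in samples:
        b = int(t // bin_width) * bin_width   # lower edge of the bin
        sums[b] += iwc
        counts[b] += 1
    return {b: sums[b] / counts[b] for b in sums}
```

Computing the same statistic from model output and from observations, bin by bin, yields the kind of temperature-dependent comparison reported in the abstract.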
Abstract:
The MATLAB model is contained within the compressed folders (versions are available as .zip and .tgz). This model uses MERRA reanalysis data (>34 years available) to estimate the hourly aggregated wind power generation for a predefined (fixed) distribution of wind farms. A ready-made example is included for the wind farm distribution of Great Britain, April 2014 ("CF.dat"). This consists of an hourly time series of GB-total capacity factor spanning the period 1980-2013 inclusive. Given the global nature of reanalysis data, the model can be applied to any specified distribution of wind farms in any region of the world. Users are, however, strongly advised to bear in mind the limitations of reanalysis data when using this model/data. This is discussed in our paper: Cannon, Brayshaw, Methven, Coker, Lenaghan. "Using reanalysis data to quantify extreme wind power generation statistics: a 33 year case study in Great Britain". Submitted to Renewable Energy in March, 2014. Additional information about the model is contained in the model code itself, in the accompanying ReadMe file, and on our website: http://www.met.reading.ac.uk/~energymet/data/Cannon2014/
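The aggregation step (fleet-wide capacity factor per hour) can be sketched in a few lines; the data layout and farm names below are hypothetical, and the actual model is MATLAB, not Python:

```python
def aggregate_capacity_factor(hourly_mw_by_farm, capacity_mw_by_farm):
    """Capacity-weighted aggregate capacity factor per hour:
    total fleet generation divided by total installed capacity."""
    total_cap = sum(capacity_mw_by_farm.values())
    n_hours = len(next(iter(hourly_mw_by_farm.values())))
    cf = []
    for h in range(n_hours):
        gen = sum(series[h] for series in hourly_mw_by_farm.values())
        cf.append(gen / total_cap)
    return cf
```

Applied to 34 years of hourly reanalysis-driven generation estimates, this produces a time series of the kind stored in "CF.dat".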
Abstract:
In livestock genetic resource conservation, decision making about conservation priorities is based on the simultaneous analysis of several different criteria that may contribute to long-term sustainable breeding conditions, such as genetic and demographic characteristics, environmental conditions, and the role of the breed in the local or regional economy. Here we address methods to integrate different data sets and highlight problems related to interdisciplinary comparisons. Data integration is based on the use of geographic coordinates and Geographic Information Systems (GIS). In addition to technical problems related to projection systems, GIS have to face the challenging issue of the non-homogeneous scale of their data sets. We give examples of the successful use of GIS for data integration and examine the risk of obtaining biased results when integrating datasets that have been captured at different scales.
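Coordinate-based integration can be sketched as a nearest-neighbour join between two georeferenced data sets; the record layout and attribute names are invented for illustration, and real GIS work would first reproject both sets to a common coordinate system:

```python
import math

def nearest_join(records_a, records_b):
    """Join two georeferenced data sets by matching each record in A to
    the nearest record in B (planar coordinates assumed already in a
    common projection). Each record is (key, (x, y), attributes)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    joined = []
    for key_a, coord_a, attrs_a in records_a:
        key_b, _, attrs_b = min(records_b, key=lambda r: dist(coord_a, r[1]))
        joined.append((key_a, key_b, {**attrs_a, **attrs_b}))
    return joined
```

The scale-mismatch risk discussed in the abstract arises exactly here: if B was captured at a much coarser scale than A, the nearest match may carry attributes that are not representative of A's location.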
Abstract:
This paper presents a comparative analysis between the experimental characterization and the numerical simulation results for a three-dimensional FCC photonic crystal (PhC) based on a self-assembly synthesis of monodisperse latex spheres. Specifically, experimental optical characterization, by means of reflectance measurements at variable angles over the [1,1,1] lattice plane family, is compared to theoretical calculations based on the Finite Difference Time Domain (FDTD) method, in order to investigate the correlation between theoretical predictions and experimental data. The goal is to highlight the influence of crystal defects on the achieved performance.
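The FDTD method at its core is a leapfrog update of electric and magnetic fields on a staggered grid. A minimal 1-D free-space sketch (a toy illustration of the update scheme, not the paper's 3-D photonic-crystal simulation; grid size, source position, and pulse shape are arbitrary):

```python
import math

def fdtd_1d(n_cells=200, n_steps=300, src_pos=100):
    """Minimal 1-D FDTD leapfrog update in free space (Courant number
    0.5) with an additive Gaussian source injected at src_pos."""
    ez = [0.0] * n_cells   # electric field
    hy = [0.0] * n_cells   # magnetic field, staggered half a cell
    for t in range(n_steps):
        for k in range(1, n_cells):
            ez[k] += 0.5 * (hy[k - 1] - hy[k])
        ez[src_pos] += math.exp(-0.5 * ((t - 30) / 8.0) ** 2)
        for k in range(n_cells - 1):
            hy[k] += 0.5 * (ez[k] - ez[k + 1])
    return ez
```

In the full 3-D case the same leapfrog structure is applied on a Yee grid with spatially varying permittivity, which is how the latex-sphere lattice (and its defects) enters the simulation.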
Abstract:
Predicting and mapping productivity areas allows crop producers to improve their planning of agricultural activities. The primary aims of this work were the identification and mapping of specific management areas allowing coffee bean quality to be predicted from soil attributes and their relationships to relief. The study area was located in the Southeast of the Minas Gerais state, Brazil. A grid containing a total of 145 uniformly spaced nodes 50 m apart was established over an area of 31.7 ha from which samples were collected at depths of 0.00-0.20 m in order to determine physical and chemical attributes of the soil. These data were analysed in conjunction with plant attributes including production, proportion of beans retained by different sieves and drink quality. The results of principal component analysis (PCA) in combination with geostatistical data showed the attributes clay content and available iron to be the best choices for identifying four crop production environments. Environment A, which exhibited high clay and available iron contents, and low pH and base saturation, was that providing the highest yield (30.4 l ha-1) and best coffee beverage quality (61 sacks ha-1). Based on the results, we believe that multivariate analysis, geostatistics and the soil-relief relationships contained in the digital elevation model (DEM) can be effectively used in combination for the hybrid mapping of areas of varying suitability for coffee production. © 2012 Springer Science+Business Media New York.
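For two attributes such as clay content and available iron, the PCA step reduces to the eigendecomposition of a 2x2 covariance matrix. A self-contained sketch (the two-attribute restriction is for illustration; the study used a fuller attribute set):

```python
import math

def first_principal_component(xs, ys):
    """First principal component (largest eigenvalue and unit
    eigenvector) of two attributes from their 2x2 covariance matrix."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2.0 + math.sqrt(tr * tr / 4.0 - det)
    # Corresponding eigenvector
    if abs(sxy) > 1e-12:
        v = (lam - syy, sxy)
    else:
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(*v)
    return lam, (v[0] / norm, v[1] / norm)
```

The loading of each soil attribute on the leading components is what identifies which attributes (here, clay and iron) best separate the production environments.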
Abstract:
The major contribution of this paper relates to the practical advantages of combining Ground Control Points (GCPs), Ground Control Lines (GCLs) and orbital data to estimate the exterior orientation parameters of images collected by CBERS-2B (China-Brazil Earth Resources Satellite) HRC (High-resolution Camera) and CCD (High-resolution CCD Camera) sensors. Although the CBERS-2B is no longer operational, its images are still being used in Brazil, and the next generations of the CBERS satellite will have sensors with similar technical features, which motivates the study presented in this paper. The mathematical models that relate the object and image spaces are based on collinearity (for points) and coplanarity (for lines) conditions. These models were created in an in-house developed software package called TMS (Triangulation with Multiple Sensors) with multi-feature control (GCPs and GCLs). Experiments on a block of four CBERS-2B HRC images and on one CBERS-2B CCD image were performed using both models. It was observed that the combination of GCPs and GCLs provided better bundle block adjustment results than conventional bundle adjustment using only GCPs. The results also demonstrate the advantages of using primarily orbital data when the number of control entities is reduced. © 2013 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).
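The collinearity condition for points maps an object-space point through the perspective center onto the image plane. A minimal sketch (principal point at the origin, no lens distortion; the coplanarity model for lines is analogous but omitted here):

```python
def collinearity_project(X, Y, Z, Xs, Ys, Zs, R, f):
    """Project an object-space point (X, Y, Z) onto the image plane via
    the collinearity equations. (Xs, Ys, Zs) is the perspective center,
    R a 3x3 rotation matrix (rows r1, r2, r3), f the focal length."""
    dx, dy, dz = X - Xs, Y - Ys, Z - Zs
    u = R[0][0] * dx + R[0][1] * dy + R[0][2] * dz
    v = R[1][0] * dx + R[1][1] * dy + R[1][2] * dz
    w = R[2][0] * dx + R[2][1] * dy + R[2][2] * dz
    return (-f * u / w, -f * v / w)
```

In a bundle block adjustment, the residuals of these equations at the GCPs (plus the coplanarity residuals at the GCLs) are minimized jointly over the exterior orientation parameters of all images.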
Abstract:
Some escape and dynamical properties of a beam of light inside a corrugated waveguide are discussed by using Fresnel reflectance. The system is described by a mapping and is controlled by a parameter δ defining a transition from integrability (δ = 0) to non-integrability (δ ≠ 0). The phase space is mixed, containing periodic islands, chaotic seas and invariant tori. The histogram of escaping orbits is shown to be scaling invariant with respect to δ. The waveguide is immersed in a region with a different refractive index. Different optical materials are used to corroborate the results. © 2013 Elsevier B.V. All rights reserved.
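The integrable-to-non-integrable transition controlled by δ can be illustrated with a standard-map-style area-preserving mapping (a hypothetical stand-in for the paper's corrugated-waveguide mapping, which additionally involves the Fresnel reflectance and an escape condition):

```python
import math

def waveguide_map(theta, I, delta, n_iter):
    """Iterate a simple area-preserving map of the standard-map family.
    For delta = 0 the map is integrable and the action I is conserved;
    for delta != 0, chaotic seas, islands and invariant tori coexist."""
    orbit = [(theta, I)]
    for _ in range(n_iter):
        I = I + delta * math.sin(theta)          # kick from the corrugation
        theta = (theta + I) % (2.0 * math.pi)    # free rotation
        orbit.append((theta, I))
    return orbit
```

For the physical system, an orbit "escapes" when the angle of incidence drops below the critical angle set by the refractive indices; collecting escape times over many initial conditions yields the histograms whose δ-scaling the paper reports.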
Abstract:
Aedes aegypti is the most important vector of dengue viruses in tropical and subtropical regions. Because vaccines are still under development, dengue prevention depends primarily on vector control. Population genetics is a common approach in research involving Ae. aegypti. In the context of medical entomology, wing morphometric analysis has been proposed as a strong and low-cost complementary tool for investigating population structure. Therefore, we comparatively evaluated the genetic and phenotypic variability of population samples of Ae. aegypti from four sampling sites in the metropolitan area of Sao Paulo city, Brazil. The distances between the sites ranged from 7.1 to 50 km. This area, where knowledge on the population genetics of this mosquito is incipient, was chosen due to the thousands of dengue cases registered yearly. The analysed loci were polymorphic, and they revealed population structure (global F-ST = 0.062; p < 0.05) and low levels of gene flow (Nm = 0.47) between the four locations. Principal component and discriminant analyses of wing shape variables (18 landmarks) demonstrated that wing polymorphisms were only slightly more common between populations than within populations. Whereas microsatellites allowed for geographic differentiation, wing geometry failed to distinguish the samples. These data suggest that microevolution in this species may affect genetic and morphological characters to different degrees. In this case, wing shape was not validated as a marker for assessing population structure. According to the interpretation of a previous report, the wing shape of Ae. aegypti does not vary significantly because it is stabilised by selective pressure. (C) 2011 Elsevier B.V. All rights reserved.
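One common way to relate F-ST to gene flow is Wright's island-model approximation, Nm ≈ (1 − F-ST) / (4 F-ST). A sketch follows; note this simple formula need not reproduce the Nm reported in the abstract, which may come from a different estimator (for example, the private-alleles method):

```python
def island_model_nm(fst):
    """Wright's island-model approximation Nm ~ (1 - Fst) / (4 * Fst),
    relating the fixation index Fst to the effective number of
    migrants per generation under idealized assumptions."""
    if not 0.0 < fst < 1.0:
        raise ValueError("Fst must lie strictly between 0 and 1")
    return (1.0 - fst) / (4.0 * fst)
```

The approximation assumes an island model at migration-drift equilibrium, which real urban mosquito populations rarely satisfy, hence the spread among estimators.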
Abstract:
The need for biodiversity conservation is increasing at a rate much faster than the acquisition of knowledge of biodiversity, such as descriptions of new species and mapping species distributions. As global changes are winning the race against the acquisition of knowledge, many researchers resort to the use of surrogate groups to aid in conservation decisions. Reductions in taxonomic and numerical resolution are also desirable, because they could allow more rapid acquisition of knowledge while requiring less effort, if little important information is lost. In this study, we evaluated the congruence among 22 taxonomic groups sampled in a tropical forest in the Amazon basin. Our aim was to evaluate if any of these groups could be used as surrogates for the others in monitoring programs. We also evaluated if the taxonomic or numerical resolution of possible surrogates could be reduced without greatly reducing the overall congruence. Congruence among plant groups was high, whereas the congruence among most animal groups was very low, except for anurans, in which congruence values were only slightly lower than for plants. Lianas (Bignoniaceae) were the group with the highest congruence, even using genus presence-absence data. The congruence among groups was related to environmental factors, specifically the clay and phosphorus contents of soil. Several groups showed strong spatial clumping, but this was unrelated to the congruence among groups. The high degree of congruence of lianas with the other groups suggests that they may be a reasonable surrogate group, mainly for the other plant groups analyzed, if soil data are not available. Although lianas are difficult to count and identify, the number of studies on the ecology of lianas is increasing. Most of these studies have concluded that lianas are increasing in abundance in tropical forests.
In addition to the high congruence, lianas are worth monitoring in their own right because they are sensitive to global warming and the increasing frequency and severity of droughts in tropical regions. Our findings suggest that the use of data on surrogate groups with relatively low taxonomic and numerical resolutions can be a reliable shortcut for biodiversity assessments, especially in megadiverse areas with high rates of habitat conversion, where the lack of biodiversity knowledge is pervasive. (c) 2012 Elsevier Ltd. All rights reserved.
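Congruence between taxonomic groups is often quantified by correlating their site-by-site dissimilarity structures. A Mantel-style sketch (Bray-Curtis dissimilarity plus Pearson correlation, without the permutation test; the choice of dissimilarity is an assumption, not taken from the study):

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(a + b for a, b in zip(u, v))
    return num / den if den else 0.0

def congruence(group1, group2):
    """Pearson correlation between the pairwise Bray-Curtis
    dissimilarities of two taxonomic groups sampled at the same sites
    (a Mantel-style congruence statistic)."""
    n = len(group1)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    d1 = [bray_curtis(group1[i], group1[j]) for i, j in pairs]
    d2 = [bray_curtis(group2[i], group2[j]) for i, j in pairs]
    m = len(d1)
    m1, m2 = sum(d1) / m, sum(d2) / m
    cov = sum((a - m1) * (b - m2) for a, b in zip(d1, d2))
    s1 = sum((a - m1) ** 2 for a in d1) ** 0.5
    s2 = sum((b - m2) ** 2 for b in d2) ** 0.5
    return cov / (s1 * s2) if s1 and s2 else 0.0
```

A surrogate group is useful to the extent that its dissimilarity structure correlates highly with those of the groups it is meant to stand in for, even after coarsening to genus-level presence-absence data.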
Abstract:
A study of maar-diatreme volcanoes has been performed by inversion of gravity and magnetic data. The geophysical inverse problem has been solved by means of the damped nonlinear least-squares method. To ensure stability and convergence of the solution of the inverse problem, a mathematical tool, consisting of data weighting and model scaling, has been worked out. Theoretical gravity and magnetic modeling of maar-diatreme volcanoes has been conducted in order to obtain information that can be used for a simple, rough qualitative and/or quantitative interpretation. The information also serves as a priori information to design models for the inversion and/or to assist the interpretation of inversion results. The results of theoretical modeling have been used to roughly estimate the heights and the dip angles of the walls of eight Eifel maar-diatremes, each taken as a whole. Inverse modeling has been conducted for the Schönfeld Maar (magnetics) and the Hausten-Morswiesen Maar (gravity and magnetics). The geometrical parameters of these maars, as well as the density and magnetic properties of the rocks filling them, have been estimated. For a reliable interpretation of the inversion results, besides the knowledge from theoretical modeling, other tools such as field transformations and spectral analysis were used for complementary information. Geologic models, based on the synthesis of the respective interpretation results, are presented for the two maars mentioned above. The results give more insight into the genesis, physics and posteruptive development of maar-diatreme volcanoes. A classification of maar-diatreme volcanoes into three main types has been elaborated. Relatively high magnetic anomalies are indicative of scoria cones embedded within maar-diatremes, provided they are not caused by a strong remanent component of the magnetization.
Smaller (weaker) secondary gravity and magnetic anomalies on the background of the main anomaly of a maar-diatreme, especially in the boundary areas, are indicative of subsidence processes, which probably occurred in the late sedimentation phase of the posteruptive development. Contrary to postulates referring to kimberlite pipes, there is no general systematic relationship between diameter and height, nor between the geophysical anomaly and the dimensions of the maar-diatreme volcanoes. Although both maar-diatreme volcanoes and kimberlite pipes are products of phreatomagmatism, they probably formed in different thermodynamic and hydrogeological environments. In the case of kimberlite pipes, large amounts of magma and groundwater, certainly supplied by deep and large reservoirs, interacted under high pressure and temperature conditions. This led to a long-period phreatomagmatic process and hence to the formation of large structures. Concerning the maar-diatreme and tuff-ring-diatreme volcanoes, the phreatomagmatic process takes place through an interaction between magma from small and shallow magma chambers (probably segregated magmas) and small amounts of near-surface groundwater under low pressure and temperature conditions. This leads to shorter eruptions and consequently to structures of smaller size in comparison with kimberlite pipes. Nevertheless, the results show that the diameter-to-height ratio for 50% of the studied maar-diatremes is around 1, whereby the dip angle of the diatreme walls is similar to that of the kimberlite pipes and lies between 70 and 85°. Note that these numerical characteristics, especially the dip angle, hold for the maars whose diatremes, as estimated by modeling, have the shape of a truncated cone. This indicates that the diatreme cannot be completely resolved by inversion.
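The damped nonlinear least-squares idea can be reduced to its scalar essence: each iteration updates the model parameter by Δp = JᵀrI/ (JᵀJ + λ), where r is the data misfit, J the sensitivity, and λ the damping that stabilizes the step. A one-parameter sketch (the study inverted multi-parameter gravity/magnetic models with data weighting and model scaling, which this toy omits):

```python
def damped_gauss_newton(residual, jacobian, p0, lam=1.0, n_iter=50):
    """One-parameter damped (Levenberg-Marquardt-style) nonlinear least
    squares: p <- p + J^T r / (J^T J + lam). The damping term lam
    shrinks the step, trading speed for stability."""
    p = p0
    for _ in range(n_iter):
        r = residual(p)    # list of residuals: d_obs - d_model(p)
        J = jacobian(p)    # list of sensitivities: d(d_model)/dp
        jtj = sum(j * j for j in J)
        jtr = sum(j * ri for j, ri in zip(J, r))
        p = p + jtr / (jtj + lam)
    return p
```

In the multi-parameter case, J becomes a matrix, λ multiplies the identity (or a scaling matrix), and the data are weighted by their uncertainties, which is the stabilizing "mathematical tool" the abstract refers to.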
Abstract:
As land is developed, the impervious surfaces that are created increase the amount of runoff during rainfall events, disrupting the natural hydrologic cycle and increasing both the volume of runoff and the pollutant loadings. Pollutants deposited on the land surface or derived from activities on it, such as nutrients, sediment, heavy metals, hydrocarbons, gasoline additives, pathogens, deicers, herbicides and pesticides, will likely end up in stormwater runoff in some concentration. Several of these pollutants are particulate-bound, so sediment removal can provide significant water-quality improvements, and knowledge of the ability of stormwater treatment devices to retain particulate matter is therefore important. For this reason, three different units that remove sediments have been tested in the laboratory. In particular, a roadside gully pot has been tested under steady hydraulic conditions, varying the characteristics of the influent solids (diameter, particle size distribution and specific gravity). The efficiency in terms of particles retained has been evaluated as a function of influent flow rate and particle characteristics; results have been compared to the efficiency evaluated by applying an overflow rate model. Furthermore, the role of particle settling velocity in determining efficiency has been investigated. After the experimental runs on the gully pot, a standard full-scale model of a hydrodynamic separator (HS) has been tested under unsteady influent flow rate conditions and constant solid concentration at the input. The results presented in this study illustrate that the particle separation efficiency of the unit is predominantly influenced by the operating flow rate, which strongly affects the particle and hydraulic residence times of the system.
The efficiency data have been compared to results obtained from a modified overflow rate model; moreover, the residence time distribution has been experimentally determined through tracer analyses for several steady flow rates. Finally, three testing experiments have been performed for two different configurations of a full-scale model of a clarifier (linear and crenulated) under unsteady influent flow rate conditions and constant solid concentration at the input. The results illustrate that the particle separation efficiency of the unit is predominantly influenced by the configuration of the unit itself. Turbidity measurements have been compared with the suspended sediment concentration in order to find a correlation between the two quantities, which would allow sediment concentration to be measured simply by installing a turbidity probe.
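The classical (Hazen-type) overflow rate model against which the measured efficiencies are compared can be sketched in a few lines; the paper uses a modified version, so this is only the baseline form:

```python
def overflow_rate_efficiency(settling_velocity, flow_rate, surface_area):
    """Ideal overflow-rate (Hazen-type) removal efficiency: a particle
    is fully captured if its settling velocity vs exceeds the surface
    loading Q/A; slower particles are removed in the proportion
    vs / (Q/A), capped at 1."""
    surface_loading = flow_rate / surface_area   # Q/A, the "overflow rate"
    return min(1.0, settling_velocity / surface_loading)
```

This makes explicit why efficiency is predominantly flow-rate dependent: for a fixed unit footprint A, raising Q raises the surface loading and proportionally lowers the removal of every particle class that settles slower than Q/A.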