952 results for Three models
Abstract:
The viscoelastic properties of edible films can provide information about the structural organization of the biopolymers used. The objective of this work was to test three simple models from linear viscoelastic theory (Maxwell, Generalized Maxwell with two units in parallel, and Burgers) against the results of stress relaxation tests on edible films made from myofibrillar proteins of Nile Tilapia. The films were prepared by a casting technique and pre-conditioned at 58% relative humidity and 22°C for 4 days. The test samples (15 mm x 118 mm) were submitted to stress relaxation tests in a TA.XT2i texture analyser. The strain imposed on the samples was 1%, ensuring that the tests remained within the linear viscoelastic domain. The models were fitted to the experimental data (stress vs. time) by nonlinear regression. The Generalized Maxwell model with two units in parallel and the Burgers model represented the stress relaxation curves satisfactorily. The viscoelastic properties varied in a way that made them only weakly dependent on the thickness of the films.
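The nonlinear-regression step described above can be sketched as follows. This is an illustrative example with synthetic data, not the paper's measurements; the parameter names E1, tau1, E2, tau2 and their values are assumptions.

```python
# Sketch: fitting a two-unit Generalized Maxwell model to stress-relaxation
# data by nonlinear regression. Data are synthetic; parameters are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def maxwell2(t, E1, tau1, E2, tau2):
    """Relaxation response of two Maxwell units in parallel."""
    return E1 * np.exp(-t / tau1) + E2 * np.exp(-t / tau2)

# Synthetic "stress vs. time" data at 1% strain
t = np.linspace(0, 100, 200)
rng = np.random.default_rng(0)
sigma = maxwell2(t, 5.0, 3.0, 2.0, 40.0) + rng.normal(0, 0.02, t.size)

popt, _ = curve_fit(maxwell2, t, sigma, p0=[4.0, 1.0, 1.0, 30.0])
print(popt)  # fitted E1, tau1, E2, tau2
```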
Abstract:
A series of inquiries and reports suggests considerable failings in the care provided to some patients in the NHS. Although the Bristol Inquiry report of 2001 led to the creation of many new regulatory bodies to supervise the NHS, these have never enjoyed consistent support from government, and the Mid Staffordshire Inquiry of 2013 suggests they made little difference. Why do some parts of the NHS disregard patients' interests, and how should we respond to the challenge? The following discusses the evolution of approaches to NHS governance through the Hippocratic, Managerial and Commercial models, and assesses their risks and benefits. Apart from the ethical imperative, the need for effective governance is driven both by the growth in information available to the public and by the resources wasted by ineffective systems of care. Appropriate solutions depend on an understanding of the perverse incentives inherent in each model and on the need for greater sensitivity to the voices of patients and the public.
Abstract:
Article published in the journal Int Fam Plan Perspect. 2003 Sep;29(3):112-20
Abstract:
BACKGROUND: Both compulsory detoxification treatment and community-based methadone maintenance treatment (MMT) exist for heroin addicts in China. We aimed to examine the effectiveness of three intervention models for referring heroin addicts released from compulsory detoxification centers to community MMT clinics in Dehong prefecture, Yunnan province, China. METHODS: Using a quasi-experimental study design, three different referral models were assigned to four detoxification centers. Heroin addicts were enrolled based on fulfillment of the eligibility criteria and provision of informed consent. Two months prior to their release, information on demographic characteristics, history of heroin use, and prior participation in intervention programs was collected via a survey, and blood samples were obtained for HIV testing. All subjects were followed for six months after release from the detoxification centers. Multi-level logistic regression analysis was used to examine factors predicting successful referral to MMT clinics. RESULTS: Of the 226 participants who were released and followed, 9.7% were successfully referred to MMT (16.2% of HIV-positive participants and 7.0% of HIV-negative participants). A higher proportion of successful referrals was observed among participants who received both referral cards and MMT treatment while still in detoxification centers (25.8%) than among those who received both referral cards and police-assisted MMT enrollment (5.4%) and those who received referral cards only (0%). Furthermore, those who received referral cards and MMT treatment while still in detoxification had increased odds of successful referral to an MMT clinic (adjusted OR = 1.2, CI = 1.1-1.3). Having participated in an MMT program prior to detention (OR = 1.5, CI = 1.3-1.6) was the only baseline covariate associated with increased odds of successful referral.
CONCLUSION: Findings suggest that providing MMT within detoxification centers promotes successful referral of heroin addicts to community-based MMT upon their release.
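The reported odds ratios can be illustrated with a minimal sketch of an (unadjusted) odds-ratio calculation from a 2x2 table; the counts below are hypothetical, not the study's data.

```python
# Sketch: odds ratio and 95% CI for successful referral from a 2x2 table.
# Counts are hypothetical, chosen only to illustrate the calculation.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b: referred / not referred in exposed group; c,d: in comparison group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: in-centre MMT group vs. referral-card-only comparison
result = odds_ratio_ci(16, 46, 3, 97)
print(result)
```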
Abstract:
Canopy interception of incident precipitation is a critical component of the forest water balance during each of the four seasons. Models have been developed to predict precipitation interception from standard meteorological variables because of the acknowledged difficulty of extrapolating direct measurements of interception loss from forest to forest. No known study has compared and validated canopy interception models for a leafless deciduous forest stand in the eastern United States. Interception measurements from an experimental plot in a leafless deciduous forest in northeastern Maryland (39°42'N, 75°5'W) for 11 rainstorms in winter and early spring 2004/05 were compared to predictions from three models. The Mulder model maintains a moist canopy between storms. The Gash model requires few input variables and is formulated for a sparse canopy. The WiMo model optimizes the canopy storage capacity for the maximum wind speed during each storm. All models showed marked under- and overestimates for individual storms when the measured ratio of interception to gross precipitation was, respectively, far above or far below the specified fraction of canopy cover. The models predicted the percentage of total gross precipitation (PG) intercepted to within the probable standard error (8.1%) of the measured value: the Mulder model overestimated the measured value by 0.1% of PG; the WiMo model underestimated by 0.6% of PG; and the Gash model underestimated by 1.1% of PG. The WiMo model's advantage over the Gash model indicates that the canopy storage capacity increases logarithmically with the maximum wind speed. This study has demonstrated that dormant-season precipitation interception in a leafless deciduous forest may be satisfactorily predicted by existing canopy interception models.
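The sparse Gash model mentioned above can be sketched in a much-simplified form, omitting trunk interception and assuming a canopy cover fraction c, a canopy storage per unit cover S_c, and a mean evaporation-to-rainfall-rate ratio; all parameter values below are illustrative only.

```python
# Simplified sketch of the sparse (Gash) analytical interception model.
# Trunk terms are omitted; inputs and values are assumptions for illustration.
import math

def gash_interception(P_g, c, S_c, E_over_R):
    """Interception loss (mm) for one storm of gross precipitation P_g (mm)."""
    # Gross precipitation needed to saturate the canopy
    P_sat = -(S_c / E_over_R) * math.log(1.0 - E_over_R)
    if P_g < P_sat:
        # Canopy never saturates: all intercepted water evaporates
        return c * P_g
    # Wetting-up loss plus wet-canopy evaporation after saturation
    return c * (P_sat + E_over_R * (P_g - P_sat))

print(gash_interception(10.0, 0.4, 0.8, 0.15))  # a saturating storm
print(gash_interception(0.5, 0.4, 0.8, 0.15))   # a small, non-saturating storm
```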
Abstract:
The performance of three urban land surface models, run in offline mode with their default external parameters, is evaluated for two distinctly different sites in Helsinki: Torni and Kumpula. The former is a dense city centre site with 22% vegetation, while the latter is a suburban site with over 50% vegetation. At both locations the models are compared against sensible and latent heat fluxes measured using the eddy covariance technique, along with snow depth observations. The cold climate experienced by the city causes strong seasonal variations that include snow cover and stable atmospheric conditions. Most of the time the three models are able to account for the differences between the study areas as well as the seasonal and diurnal variability of the energy balance components. However, performance is not consistent across the modelled components, seasons and surface types. The net all-wave radiation is well simulated, with the greatest uncertainties related to snowmelt timing, when the fraction of snow cover plays a key role, particularly in determining the surface albedo. For the turbulent fluxes, more variation between the models is seen, which can be explained partly by the different methods in their calculation and partly by the surface parameter values. For the sensible heat flux, simulation of wintertime values was the main problem, which also leads to difficulties in predicting near-surface stability, particularly at the dense city centre site. All models have the most difficulty simulating the latent heat flux. This study particularly emphasizes that improvements are needed in the parameterization of the anthropogenic heat flux and thermal parameters in winter, of snow cover in spring, and of evapotranspiration, in order to improve surface energy balance modelling in cold-climate cities.
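The eddy-covariance flux used for the evaluation can be illustrated with a minimal sketch: the sensible heat flux is proportional to the covariance of vertical wind and temperature fluctuations. The time series, air density, and heat capacity below are assumptions for illustration, not measurements from the Helsinki sites.

```python
# Sketch of the eddy-covariance sensible heat flux: H = rho * c_p * mean(w'T').
# Synthetic, artificially correlated w and T series stand in for real data.
import numpy as np

def sensible_heat_flux(w, T, rho=1.2, cp=1005.0):
    """H (W m-2) from vertical wind w (m/s) and air temperature T (K)."""
    w_prime = w - w.mean()
    T_prime = T - T.mean()
    return rho * cp * np.mean(w_prime * T_prime)

rng = np.random.default_rng(1)
T = 285.0 + rng.normal(0, 0.5, 10_000)
w = 0.02 * (T - T.mean()) + rng.normal(0, 0.1, 10_000)  # correlated w and T
H = sensible_heat_flux(w, T)
print(H)
```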
Abstract:
Regional climate change projections for the last half of the twenty-first century have been produced for South America as part of the CREAS (Cenarios REgionalizados de Clima Futuro da America do Sul) regional project. Three regional climate models (RCMs: Eta CCS, RegCM3 and HadRM3P) were nested within the HadAM3P global model. The simulations cover a 30-year period representing the present climate (1961-1990) and projections for the IPCC A2 high emission scenario for 2071-2100. The focus was on changes in the mean circulation and surface variables, in particular surface air temperature and precipitation. There is a consistent pattern of changes in circulation, rainfall and temperature as depicted by the three models. The HadRM3P shows an intensification and a more southward position of the subtropical Pacific high, while a pattern of intensification/weakening during summer/winter is projected by the Eta CCS/RegCM3. There is a tendency for a weakening of the subtropical westerly jet in the Eta CCS and HadRM3P, consistent with other studies. There are indications that regions such as Northeast Brazil and central-eastern and southern Amazonia may experience rainfall deficiency in the future, while the Northwest coast of Peru-Ecuador and northern Argentina may experience rainfall excesses in a warmer future, and these changes may vary with the seasons. The three models show that warming in the A2 scenario is stronger in the tropical region, especially in the 5°N-15°S band, both in summer and especially in winter, reaching up to 6-8°C above the present. In southern South America, the warming in summer varies between 2 and 4°C and in winter between 3 and 5°C across the three models. These changes are consistent with changes in the low-level circulation from the models, and they are comparable with changes in rainfall and temperature extremes reported elsewhere.
In summary, some aspects of projected future climate change are quite robust across this set of model runs for some regions, such as the Northwest coast of Peru-Ecuador, northern Argentina, Eastern Amazonia and Northeast Brazil, whereas for other regions, such as the Pantanal region of West Central Brazil and southeastern Brazil, they are less robust.
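The change signal behind such projections is typically the difference between the future (2071-2100) and present (1961-1990) climatological means, computed per model and then compared across the ensemble. A hypothetical sketch with synthetic annual means (the temperatures and per-model offsets are invented, not CREAS output):

```python
# Sketch: per-model warming signal as future-minus-present mean, then the
# ensemble mean. All values are synthetic; model names follow the text.
import numpy as np

rng = np.random.default_rng(2)
# Synthetic annual-mean temperatures (deg C), 30 years per period
present = {m: 25.0 + rng.normal(0, 0.4, 30) for m in ("Eta", "RegCM3", "HadRM3P")}
future = {m: 25.0 + dT + rng.normal(0, 0.4, 30)
          for m, dT in (("Eta", 6.0), ("RegCM3", 7.0), ("HadRM3P", 8.0))}

anomalies = {m: future[m].mean() - present[m].mean() for m in present}
print(anomalies)                          # per-model warming signal
print(np.mean(list(anomalies.values())))  # ensemble-mean change
```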
Abstract:
The growing interest in quantifying the cultural and creative industries and in visualizing the economic contribution of culture-related activities demands, first of all, the construction of internationally comparable analysis frameworks. Currently there are three major bodies that address this issue, and their comparative study is the focus of this article: the UNESCO Framework for Cultural Statistics (FCS-2009), the European Framework for Cultural Statistics (ESSnet-Culture 2012) and the methodological resource of the "Convenio Andrés Bello" group for working with the Satellite Accounts on Culture in Ibero-America (CAB-2015). Cultural sector measurements provide the information necessary for the correct planning of cultural policies, which in turn sustains industries and promotes cultural diversity. The text identifies the differences among the three models at three levels of analysis: the sectors, the cultural activities, and the criteria each one uses to determine the distribution of activities by sector. The end result is that the cultural statistics of countries that implement different frameworks cannot be directly compared.
Abstract:
In this paper, we look at three models (mixture, competing risk and multiplicative) involving two inverse Weibull distributions. We study the shapes of the density and failure-rate functions and discuss graphical methods to determine if a given data set can be modelled by one of these models. (C) 2001 Elsevier Science Ltd. All rights reserved.
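The mixture model discussed above can be sketched as a weighted sum of two inverse Weibull (Fréchet) densities; the mixing weight and shape/scale parameters below are illustrative, not taken from the paper.

```python
# Sketch: density of a two-component inverse Weibull mixture,
# p*f1 + (1-p)*f2, using scipy's invweibull (Frechet) distribution.
import numpy as np
from scipy.stats import invweibull
from scipy.integrate import trapezoid

def mixture_pdf(x, p, c1, s1, c2, s2):
    """Mixture of two inverse Weibull densities with shapes c, scales s."""
    return (p * invweibull.pdf(x, c1, scale=s1)
            + (1 - p) * invweibull.pdf(x, c2, scale=s2))

x = np.linspace(0.01, 10, 1000)
f = mixture_pdf(x, 0.4, 2.0, 1.0, 5.0, 3.0)
mass = trapezoid(f, x)
print(mass)  # close to 1 (small tail mass lies outside [0.01, 10])
```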
Abstract:
The use of bit error models in communication simulation has been widely studied. In this technical report we present three models: the Independent Channel Model, the Gilbert-Elliot Model, and the Burst-Error Periodic Model.
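The Gilbert-Elliot model named above can be sketched as a two-state Markov chain that alternates between a Good state with a low bit error rate and a Bad state with a high one; all transition and error probabilities below are illustrative assumptions.

```python
# Sketch of a Gilbert-Elliot bit error model: a two-state Markov chain
# (Good/Bad) with a per-state bit error rate. Parameters are illustrative.
import random

def gilbert_elliot(n_bits, p_gb=0.01, p_bg=0.3,
                   ber_good=1e-4, ber_bad=0.2, seed=42):
    """Return a list of 0/1 error indicators for n_bits transmitted bits."""
    rng = random.Random(seed)
    state = "G"
    errors = []
    for _ in range(n_bits):
        ber = ber_good if state == "G" else ber_bad
        errors.append(1 if rng.random() < ber else 0)
        # State transition after each bit
        if state == "G" and rng.random() < p_gb:
            state = "B"
        elif state == "B" and rng.random() < p_bg:
            state = "G"
    return errors

errs = gilbert_elliot(100_000)
ber = sum(errs) / len(errs)
print(ber)  # overall (bursty) bit error rate
```

Errors cluster while the chain sits in the Bad state, which is what distinguishes this model from the Independent Channel Model's memoryless errors.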
Abstract:
Dissertation presented to obtain the degree of Doctor in Electrical and Computer Engineering – Digital and Perceptional Systems at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models.
A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales. (C) 2012 Elsevier B.V. All rights reserved.
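The fixed-lid planar water surface approximation mentioned above can be sketched as follows: flow depth is taken as the non-negative difference between a planar water surface, defined by a small downstream slope, and the bed elevation. The grid, elevations, and slope below are synthetic assumptions, not Rio Parana data.

```python
# Sketch: estimating flow depths from a fixed planar water surface
# sloping gently down-valley, as in the RC model description above.
import numpy as np

def planar_depths(bed, x, ws_upstream, slope):
    """Depths (m) below a planar water surface along streamwise coordinate x."""
    water_surface = ws_upstream - slope * x
    return np.maximum(0.0, water_surface - bed)

x = np.linspace(0, 30_000, 7)                          # 30 km reach (m)
bed = np.array([1.0, 0.5, 2.0, -1.0, 0.0, 1.5, 0.5])   # bed elevation (m)
d = planar_depths(bed, x, ws_upstream=5.0, slope=2e-5) # very low slope
print(d)
```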
Abstract:
Occupational exposure modeling is widely used in the context of the E.U. regulation on the registration, evaluation, authorization, and restriction of chemicals (REACH). First-tier tools, such as the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) targeted risk assessment (TRA) or Stoffenmanager, are used to screen a wide range of substances. Those of concern are investigated further using second-tier tools, e.g., the Advanced REACH Tool (ART). Local sensitivity analysis (SA) methods are used here to determine the dominant factors for three models commonly used within the REACH framework: ECETOC TRA v3, Stoffenmanager 4.5, and ART 1.5. Based on the results of the SA, the robustness of the models is assessed. For ECETOC TRA, the process category (PROC) is the most important factor: a failure to identify the correct PROC has severe consequences for the exposure estimate. Stoffenmanager is the most balanced model, and decision-making uncertainties in one modifying factor are less severe in Stoffenmanager. ART requires a careful evaluation of the decisions in the source compartment, since it constitutes ∼75% of the total exposure range, which corresponds to an exposure range of 20-22 orders of magnitude. Our results indicate that there is a trade-off between the accuracy and the precision of the models. Previous studies suggested that ART may lead to more accurate results in well-documented exposure situations. However, the choice of the adequate model should ultimately be determined by the quality of the available exposure data: if the practitioner is uncertain concerning two or more decisions in the entry parameters, Stoffenmanager may be more robust than ART.
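The one-at-a-time local sensitivity idea can be sketched for a generic multiplicative exposure model: when exposure is a product of modifying factors, each factor's share of the total log-scale range is directly comparable. The factor names and ranges below are hypothetical, not taken from ECETOC TRA, Stoffenmanager, or ART.

```python
# Sketch: one-at-a-time (OAT) local sensitivity for a multiplicative
# exposure model. Factor names and (min, max) multipliers are hypothetical.
import math

factors = {
    "substance_emission": (0.01, 10.0),
    "local_controls": (0.1, 1.0),
    "ventilation": (0.3, 3.0),
}

# Log10 range contributed by each factor when varied alone
ranges = {k: math.log10(hi / lo) for k, (lo, hi) in factors.items()}
total = sum(ranges.values())
for k, r in ranges.items():
    print(f"{k}: {100 * r / total:.1f}% of total log-range")
```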
Abstract:
This thesis surveys temporal and stochastic software reliability models and examines a few of the models in practice. The theoretical part covers the key definitions and metrics used in describing and assessing software reliability, as well as descriptions of the models themselves. Two groups of software reliability models are presented. The first group consists of models based on failure risk (hazard rate). The second group comprises models based on fault "seeding" and significance. The empirical part contains the descriptions and results of the experiments. The experiments were carried out using three models from the first group: the Jelinski-Moranda model, the first geometric model, and the simple exponential model. The purpose of the experiments was to study how the distribution of the input data affects the performance of the models, and how sensitive the models are to changes in the amount of input data. The Jelinski-Moranda model proved to be the most sensitive to the distribution, owing to convergence problems, while the first geometric model was the most sensitive to changes in the amount of data.
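The Jelinski-Moranda model examined in the experiments can be sketched via maximum likelihood: inter-failure time i is exponentially distributed with rate phi*(N - i + 1), where N is the unknown initial fault count and phi a per-fault hazard. The data below are synthetic, and the fit is only a sketch (Jelinski-Moranda estimates of N are known to be unstable for small samples, which matches the convergence problems noted above).

```python
# Sketch: maximum-likelihood fit of the Jelinski-Moranda model to
# synthetic inter-failure times. N and phi values are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
N_true, phi_true = 30, 0.02
# 20 observed inter-failure times; the i-th has rate phi*(N - i)
t = np.array([rng.exponential(1.0 / (phi_true * (N_true - i)))
              for i in range(20)])

def neg_log_lik(params):
    N, phi = params
    n = len(t)
    if N < n or phi <= 0:       # outside the model's valid region
        return np.inf
    rates = phi * (N - np.arange(n))  # N, N-1, ..., N-n+1
    return -np.sum(np.log(rates) - rates * t)

res = minimize(neg_log_lik, x0=[40.0, 0.01], method="Nelder-Mead")
print(res.x)  # estimated N and phi
```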