992 results for Incomplete model
Abstract:
BACKGROUND AND PURPOSE: Endovascular treatment of wide-neck bifurcation aneurysms often results in incomplete occlusion or aneurysm recurrence. The goals of this study were to compare results of coil embolization with or without the assistance of self-expandable stents and to examine how stents may influence neointima formation. MATERIALS AND METHODS: Wide-neck bifurcation aneurysms were constructed in 24 animals and, after 4-6 weeks, were randomly allocated to 1 of 5 groups: 1) coil embolization using the assistance of 1 braided stent (n = 5); 2) coil embolization using the assistance of 2 braided stents in a Y configuration (n = 5); 3) coil embolization without stent assistance (n = 6); 4) Y-stenting alone (n = 4); and 5) untreated controls (n = 4). Angiographic results were compared at baseline and at 12 weeks, by using an ordinal scale. Neointima formation at the neck at 12 weeks was compared among groups by using a semiquantitative grading scale. Bench studies were performed to assess stent porosities. RESULTS: Initial angiographic results were improved with single stent-assisted coiling compared with simple coiling (P = .013). Angiographic results at 12 weeks were improved with any stent assistance (P = .014). Neointimal closure of the aneurysm neck was similar with or without stent assistance (P = .908), with neointima covering coil loops but rarely stent struts. Y-stent placement alone had no therapeutic effect. Bench studies showed that porosities can be decreased with stent compaction, but a relatively stable porous transition zone was a limiting factor. CONCLUSIONS: Stent-assisted coiling may improve results of embolization by allowing more complete initial coiling, but these high-porosity stents did not provide a scaffold for more complete neointimal closure of aneurysms.
Abstract:
This article investigates the allocation of demand risk within an incomplete contract framework. We consider an incomplete contractual relationship between a public authority and a private provider (i.e. a public-private partnership), in which the latter invests in non-verifiable cost-reducing efforts and the former invests in non-verifiable adaptation efforts to respond to changing consumer demand over time. We show that the party that bears the demand risk has fewer hold-up opportunities and that this leads the other contracting party to make more effort. Thus, in our model, bearing less risk can lead to more effort, which we describe as a new example of 'counter-incentives'. We further show that when the benefits of adaptation are important, it is socially preferable to design a contract in which the demand risk remains with the private provider, whereas when the benefits of cost-reducing efforts are important, it is socially preferable to place the demand risk on the public authority. We then apply these results to explain two well-known case studies.
Abstract:
The temporal dynamics of species diversity are shaped by variations in the rates of speciation and extinction, and there is a long history of inferring these rates using first and last appearances of taxa in the fossil record. Understanding diversity dynamics critically depends on unbiased estimates of the unobserved times of speciation and extinction for all lineages, but the inference of these parameters is challenging due to the complex nature of the available data. Here, we present a new probabilistic framework to jointly estimate species-specific times of speciation and extinction and the rates of the underlying birth-death process based on the fossil record. The rates are allowed to vary through time independently of each other, and the probability of preservation and sampling is explicitly incorporated in the model to estimate the true lifespan of each lineage. We implement a Bayesian algorithm to assess the presence of rate shifts by exploring alternative diversification models. Tests on a range of simulated data sets reveal the accuracy and robustness of our approach against violations of the underlying assumptions and various degrees of data incompleteness. Finally, we demonstrate the application of our method with the diversification of the mammal family Rhinocerotidae and reveal a complex history of repeated and independent temporal shifts of both speciation and extinction rates, leading to the expansion and subsequent decline of the group. The estimated parameters of the birth-death process implemented here are directly comparable with those obtained from dated molecular phylogenies. Thus, our model represents a step towards integrating phylogenetic and fossil information to infer macroevolutionary processes.
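To make the preservation component of such a framework concrete, here is a minimal sketch of the standard constant-rate fossil-preservation likelihood (the notation and the constant-rate simplification are assumptions for illustration; the abstract itself gives no formulas):

```latex
% For a lineage with speciation time t_s and extinction time t_e
% (lifespan s = t_s - t_e), fossil occurrences are modelled as a
% Poisson process with preservation rate q, so the k observed
% occurrences have probability
\[
  P(k \mid t_s, t_e, q) = \frac{(q s)^{k} e^{-q s}}{k!},
  \qquad s = t_s - t_e ,
\]
% conditioned, for lineages that are observed at all, on being
% sampled at least once, i.e. on 1 - e^{-q s}.
```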
Abstract:
The objective of this work was to compare the relative efficiency of initial selection and genetic parameter estimation, using the augmented block design (ABD), the twice-replicated augmented block design (DABD) and groups of randomised block design experiments with common treatments (ERBCT), by simulation, considering a fixed effect model and a mixed model with regular treatment effects as random. Eight different conditions (scenarios) were considered for the simulations. From the 600 simulations in each scenario, the mean percentage of selection coincidence, the Pearson's correlation estimates between adjusted means for the fixed effects model, and the heritability estimates for the mixed model were evaluated. DABD and ERBCT were very similar in their comparisons and slightly superior to ABD. Considering the initial stages of selection in a plant breeding program, ABD is a good alternative for selecting superior genotypes, although none of the designs was effective in estimating heritability in all the different scenarios evaluated.
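For reference, a minimal sketch of the entry-mean heritability typically estimated from such a mixed model (a textbook form under assumed variance components; the paper's exact estimator is not given in the abstract):

```latex
% sigma^2_g : genotypic variance, sigma^2_e : residual variance,
% r : number of replicates of an entry (r = 1 for unreplicated
% test entries in an augmented block design).
\[
  \hat{h}^2 = \frac{\hat{\sigma}_g^2}{\hat{\sigma}_g^2 + \hat{\sigma}_e^2 / r}
\]
```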
Abstract:
General Introduction. These three chapters, while fairly independent from each other, study economic situations in incomplete contract settings. They are the product both of the academic freedom my advisors granted me, and in this sense reflect my personal interests, and of their interested feedback. The content of each chapter can be summarized as follows: Chapter 1: Inefficient durable-goods monopolies. In this chapter we study the efficiency of an infinite-horizon durable-goods monopoly model with a finite number of buyers. We find that, while all pure-strategy Markov Perfect Equilibria (MPE) are efficient, there also exist previously unstudied inefficient MPE where high-valuation buyers randomize their purchase decision while trying to benefit from the low prices which are offered once a critical mass has purchased. Real-time delay, an unusual monopoly distortion, is the result of this attrition behavior. We conclude that neither technological constraints nor concern for reputation are necessary to explain inefficiency in monopolized durable-goods markets. Chapter 2: Downstream mergers and producer's capacity choice: why bake a larger pie when getting a smaller slice? In this chapter we study the effect of downstream horizontal mergers on the upstream producer's capacity choice. Contrary to conventional wisdom, we find a non-monotonic relationship: horizontal mergers induce a higher upstream capacity if the cost of capacity is low, and a lower upstream capacity if this cost is high. We explain this result by decomposing the total effect into two competing effects: a change in hold-up and a change in bargaining erosion. Chapter 3: Contract bargaining with multiple agents. In this chapter we study a bargaining game between a principal and N agents when the utility of each agent depends on all agents' trades with the principal. We show, using the Potential, that equilibrium payoffs coincide with the Shapley value of the underlying coalitional game with an appropriately defined characteristic function, which under common assumptions coincides with the principal's equilibrium profit in the offer game. Since the problem accounts for differences in information and agents' conjectures, the outcome can be either efficient (e.g. public contracting) or inefficient (e.g. passive beliefs).
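As a generic illustration of the solution concept invoked in Chapter 3, here is a minimal Python sketch of a Shapley-value computation; the characteristic function below is hypothetical, not the dissertation's:

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    """Shapley value of each player for characteristic function v,
    averaged over marginal contributions in all join orders."""
    n = len(players)
    values = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            values[p] += v(frozenset(coalition)) - before
    return {p: val / factorial(n) for p, val in values.items()}

def v(S):
    # Hypothetical game: the principal (player 1) creates one unit
    # of surplus with each agent who trades with her.
    return float(len(S) - 1) if 1 in S and len(S) >= 2 else 0.0

# Principal gets 1.0; each of the two symmetric agents gets 0.5.
print(shapley_values([1, 2, 3], v))
```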
Abstract:
View angle and directional effects significantly affect reflectance and vegetation indices, especially when daily images collected by large field-of-view (FOV) sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS) are used. In this study, the PROSAIL radiative transfer model was chosen to evaluate the impact of the geometry of data acquisition on soybean reflectance and two vegetation indices (Normalized Difference Vegetation Index - NDVI and Enhanced Vegetation Index - EVI) by varying biochemical and biophysical parameters of the crop. Input values for the PROSAIL simulation were based on the literature and were adjusted by comparing simulated and real satellite soybean spectra acquired by MODIS/Terra and the hyperspectral Hyperion/Earth Observing-One (EO-1). Results showed that the influence of the view angle and view direction on reflectance was stronger with decreasing leaf area index (LAI) and chlorophyll concentration. Because of its greater dependence on near-infrared reflectance, the EVI was much more sensitive to viewing geometry than the NDVI, presenting larger values in the backscattering direction; the contrary was observed for the NDVI in the forward-scattering direction. In relation to the LAI, the NDVI was much more isotropic for closed soybean canopies than for incomplete canopies, and the contrary behavior was verified for the EVI.
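For reference, the two indices discussed above have standard definitions (MODIS-convention coefficients for EVI; the reflectance values below are hypothetical inputs, not values from this study). A minimal Python sketch:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index, standard MODIS coefficients.
    The NIR term in the denominator drives its angular sensitivity."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Hypothetical reflectances for a dense soybean canopy:
print(ndvi(nir=0.45, red=0.05))            # 0.80
print(evi(nir=0.45, red=0.05, blue=0.03))  # ~0.66
```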
Abstract:
The problem of using information available from one variable X to make inference about another Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ. Here µ(x) is the mean response at the predictor variable value X = x, and ɛ = Y - µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant, but the actual amount X absorbed by the plant is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants Z can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural or medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others. In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is the observable d-dimensional random variable.
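A minimal Python simulation contrasting the two error models (illustrative only; the linear mean function and the variance values are assumptions, not from the talk): with a linear µ, regressing Y on Z leaves the slope unbiased under Berkson error, whereas regressing Y on W suffers attenuation under classical error.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0

# Berkson model: Z is observed, the true X scatters around it.
Z = rng.normal(0.0, 1.0, n)
X = Z + rng.normal(0.0, 0.5, n)            # X = Z + eta
Y = beta * X + rng.normal(0.0, 0.5, n)     # Y = mu(X) + eps, mu linear
slope_berkson = np.polyfit(Z, Y, 1)[0]     # ~2.0: no attenuation

# Classical model: X is latent, W is a noisy unbiased measurement.
X2 = rng.normal(0.0, 1.0, n)
W = X2 + rng.normal(0.0, 0.5, n)           # W = X + u
Y2 = beta * X2 + rng.normal(0.0, 0.5, n)
slope_classical = np.polyfit(W, Y2, 1)[0]  # ~2.0 / (1 + 0.25) = 1.6

print(slope_berkson, slope_classical)
```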
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th-century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one-thousand-year-long, idealized 2× and 4× CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the pre-industrial portions of the last-millennium simulations are used to assess historical model carbon-climate feedbacks. Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of the forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
Abstract:
Incomplete understanding of three aspects of the climate system—equilibrium climate sensitivity, rate of ocean heat uptake and historical aerosol forcing—and of the physical processes underlying them leads to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century1,2. Explorations of these uncertainties have so far relied on scaling approaches3,4, large ensembles of simplified climate models1,2, or small ensembles of complex coupled atmosphere–ocean general circulation models5,6, which under-represent uncertainties in key climate system properties derived from independent sources7–9. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4–3 K by 2050, relative to 1961–1990, under a mid-range forcing scenario. This range of warming is broadly consistent with the expert assessment provided by the Intergovernmental Panel on Climate Change Fourth Assessment Report10, but extends towards larger warming than observed in ensembles-of-opportunity5 typically used for climate impact assessments. From our simulations, we conclude that warming by the middle of the twenty-first century that is stronger than earlier estimates is consistent with recent observed temperature changes and a mid-range 'no mitigation' scenario for greenhouse-gas emissions.
Abstract:
Gastrointestinal (GI) models that mimic physiological conditions in vitro are important tools for developing and optimizing biopharmaceutical formulations. Oral administration of live attenuated bacterial vaccines (LBV) can safely and effectively promote mucosal immunity, but new formulations are required that provide controlled release of optimal numbers of viable bacterial cells, which must survive gastrointestinal transit by overcoming various antimicrobial barriers. Here, we use a gastro-small intestine model of human GI conditions to study the survival and release kinetics of two oral LBV formulations: the licensed typhoid fever vaccine Vivotif, comprising enteric-coated capsules; and an experimental formulation of the model vaccine Salmonella Typhimurium SL3261 dried directly onto cast enteric polymer films and laminated to form a polymer film laminate (PFL). Neither formulation released significant numbers of viable cells when tested in the complete gastro-small intestine model. The poor performance in delivering viable cells could be attributed to a combination of acid and bile toxicity plus incomplete release of cells for Vivotif capsules, and to bile toxicity alone for PFL. To achieve effective protection from intestinal bile in addition to effective acid resistance, bile-adsorbent resins were incorporated into the PFL to produce a new formulation, termed BR-PFL. Efficient and complete release of 4.4 × 10^7 live cells per dose was achieved from BR-PFL at distal intestinal pH, with release kinetics controlled by the composition of the enteric polymer film, and no loss in viability was observed in any stage of the GI model. Use of this in vitro GI model thereby allowed rational design of an oral LBV formulation to maximize viable cell release.
Abstract:
In 2004 the National Household Survey (Pesquisa Nacional por Amostra de Domicílios - PNAD) estimated the prevalence of food and nutrition insecurity in Brazil. However, PNAD data cannot be disaggregated at the municipal level. The objective of this study was to build a statistical model to predict severe food insecurity for Brazilian municipalities based on the PNAD dataset. Exclusion criteria were: incomplete food security data (19.30%); informants younger than 18 years old (0.07%); collective households (0.05%); and households headed by indigenous persons (0.19%). The modeling was carried out in three stages, beginning with the selection of variables related to food insecurity using univariate logistic regression. The variables chosen to construct the municipal estimates were selected from those included in the PNAD as well as the 2000 Census. Multivariate logistic regression was then carried out, removing the non-significant variables, with odds ratios adjusted by multiple logistic regression. The Wald test was applied to check the significance of the coefficients in the logistic equation. The final model included the following variables: per capita income; years of schooling; race and gender of the household head; urban or rural residence; access to public water supply; presence of children; total number of household inhabitants; and state of residence. The adequacy of the model was tested using the Hosmer-Lemeshow test (p=0.561) and the ROC curve (area=0.823). The tests indicated that the model has strong predictive power and can be used to determine household food insecurity in Brazilian municipalities, suggesting that similar predictive models may be useful tools in other Latin American countries.
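A minimal Python sketch of the modeling steps described above, assuming hypothetical column names and predictors already dummy-coded (the PNAD microdata layout is not given in the abstract):

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

df = pd.read_csv("household_survey.csv")  # hypothetical extract
predictors = ["per_capita_income", "schooling_years", "head_race",
              "head_gender", "urban", "public_water", "has_children",
              "n_residents"]  # categorical columns assumed dummy-coded

X = sm.add_constant(df[predictors])
y = df["severe_food_insecurity"]  # 1 = severely food-insecure household

model = sm.Logit(y, X).fit()
print(model.summary())  # per-coefficient Wald z-tests

# Discrimination: area under the ROC curve (the paper reports 0.823).
# The Hosmer-Lemeshow calibration test would be computed separately.
print(f"ROC AUC = {roc_auc_score(y, model.predict(X)):.3f}")
```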
Abstract:
In the context of either Bayesian or classical sensitivity analyses of over-parametrized models for incomplete categorical data, it is well known that the dependence of posterior inferences for nonidentifiable parameters on the prior, or the choice of too-parsimonious over-parametrized models, may lead to erroneous conclusions. Nevertheless, some authors either pay no attention to which parameters are nonidentifiable or do not appropriately account for possible prior-dependence. We review the literature on this topic and consider simple examples to emphasize that in both inferential frameworks the subjective components can influence results in nontrivial ways, irrespective of the sample size. Specifically, we show that prior distributions commonly regarded as slightly informative or noninformative may actually be too informative for nonidentifiable parameters, and that the choice of over-parametrized models may drastically impact the results, suggesting that a careful examination of their effects should be considered before drawing conclusions.
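A one-line illustration of the phenomenon (a standard nonresponse example, assumed here for concreteness rather than taken from the paper): with a binary outcome Y observed only when the response indicator R equals 1, the observed data identify only part of the target quantity.

```latex
% P(Y=1 | R=1) and P(R=1) are identifiable from the observed data,
% but P(Y=1 | R=0) is not: no sample size informs it, so its prior
% is carried straight into the posterior of P(Y=1).
\[
  P(Y{=}1)
  = \underbrace{P(Y{=}1 \mid R{=}1)\,P(R{=}1)}_{\text{identifiable}}
  + \underbrace{P(Y{=}1 \mid R{=}0)}_{\text{nonidentifiable}} P(R{=}0)
\]
```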
Abstract:
The Grubbs' measurement model is frequently used to compare several measuring devices. It is common to assume that the random terms have a normal distribution. However, such an assumption makes the inference vulnerable to outlying observations, whereas scale mixtures of normal distributions have been an interesting alternative for producing robust estimates, keeping the elegance and simplicity of maximum likelihood theory. The aim of this paper is to develop an EM-type algorithm for parameter estimation, and to use the local influence method to assess the robustness of these parameter estimates under some usual perturbation schemes. In order to identify outliers and to criticize the model building, we use the local influence procedure in a study to compare the precision of several thermocouples.
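For intuition, a minimal Python sketch of the standard EM algorithm for a univariate Student-t location-scale model with fixed degrees of freedom, the simplest scale-mixture-of-normals case; this illustrates the general EM-type idea, not the paper's Grubbs-model algorithm:

```python
import numpy as np

def t_em(x, nu=4.0, iters=50):
    """EM for a univariate Student-t location/scale model.
    Outliers receive small weights w, robustifying the estimates."""
    mu, sigma2 = np.median(x), np.var(x)
    for _ in range(iters):
        d2 = (x - mu) ** 2 / sigma2          # squared standardized residuals
        w = (nu + 1.0) / (nu + d2)           # E-step: mixing weights
        mu = np.sum(w * x) / np.sum(w)       # M-step: weighted location
        sigma2 = np.mean(w * (x - mu) ** 2)  # M-step: weighted scale
    return mu, sigma2

x = np.concatenate([np.random.default_rng(1).normal(10.0, 1.0, 200),
                    [25.0, 30.0]])           # two gross outliers
print(t_em(x))  # location stays near 10 despite the outliers
```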
Abstract:
The p-median model is used to locate P facilities to serve a geographically distributed population. Conventionally, it is assumed that the population always travels to the nearest facility. Drezner and Drezner (2006, 2007) provide three arguments for why this assumption might be incorrect, and they introduce the gravity p-median model to relax it. We favour the gravity p-median model, but we note that in an applied setting Drezner and Drezner's arguments are incomplete. In this communication, we point to the existence of a fourth compelling argument for the gravity p-median model.
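For context, a sketch of the gravity (Huff-style) allocation rule that distinguishes the gravity p-median model from the classical one (a standard form with notation assumed here, not taken from the communication):

```latex
% Classical p-median: each demand node i travels to its nearest open
% facility. Gravity p-median: demand w_i splits over all open
% facilities J with probabilities decaying in the distance d_ij:
\[
  \min_{J,\ |J| = P} \; \sum_{i} w_i \sum_{j \in J}
    \frac{e^{-\lambda d_{ij}}}{\sum_{k \in J} e^{-\lambda d_{ik}}} \, d_{ij}
\]
```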
Abstract:
BACKGROUND: Misoprostol is established for the treatment of incomplete abortion but has not been systematically assessed when provided by midwives at district level in a low-resource setting. We investigated the effectiveness and safety of midwives diagnosing and treating incomplete abortion with misoprostol, compared with physicians. METHODS: We did a multicentre randomised controlled equivalence trial at district level at six facilities in Uganda. Eligibility criteria were women with signs of incomplete abortion. We randomly allocated women with first-trimester incomplete abortion to clinical assessment and treatment with misoprostol either by a physician or a midwife. The randomisation (1:1) was done in blocks of 12 and was stratified by study site. The primary outcome was complete abortion not needing surgical intervention within 14-28 days after initial treatment. The study was not masked. Analysis of the primary outcome was done on the per-protocol population with a generalised linear mixed-effects model. The predefined equivalence range was -4% to 4%. The trial was registered at ClinicalTrials.gov, number NCT01844024. FINDINGS: From April 30, 2013, to July 21, 2014, 1108 women were assessed for eligibility. 1010 women were randomly assigned (506 to the midwife group and 504 to the physician group). 955 women (472 in the midwife group and 483 in the physician group) were included in the per-protocol analysis. 452 (95·8%) of women in the midwife group had complete abortion, as did 467 (96·7%) in the physician group. The model-based risk difference for the midwife versus physician group was -0·8% (95% CI -2·9 to 1·4), falling within the predefined equivalence range (-4% to 4%). The overall proportion of women with incomplete abortion was 3·8% (36/955), similarly distributed between the two groups (4·2% [20/472] in the midwife group, 3·3% [16/483] in the physician group). No serious adverse events were recorded. INTERPRETATION: Diagnosis and treatment of incomplete abortion with misoprostol by midwives is as safe and effective as when provided by physicians in a low-resource setting. Scaling up midwives' involvement in the treatment of incomplete abortion with misoprostol at district level would increase access to safe post-abortion care. FUNDING: The Swedish Research Council, Karolinska Institutet, and Dalarna University.
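As a quick check on the equivalence claim, the crude (unadjusted) risk difference and a Wald 95% CI can be recomputed in Python from the reported counts; this is a back-of-envelope sketch, whereas the paper's -0·8% (-2·9 to 1·4) comes from the generalised linear mixed-effects model:

```python
import math

# Reported per-protocol counts:
s1, n1 = 452, 472   # midwife group: complete abortions / analysed
s2, n2 = 467, 483   # physician group

p1, p2 = s1 / n1, s2 / n2
rd = p1 - p2                                      # crude risk difference
se = math.sqrt(p1*(1-p1)/n1 + p2*(1-p2)/n2)       # Wald standard error
ci_lo, ci_hi = rd - 1.96 * se, rd + 1.96 * se

# Equivalence requires the whole CI inside the +/-4% margin:
# RD ~ -0.9%, 95% CI ~ (-3.3%, +1.5%) -- consistent with the paper.
print(f"RD = {rd:+.3%}, 95% CI ({ci_lo:+.3%}, {ci_hi:+.3%})")
```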