845 results for declarative, procedural, and reflective (DPR) model
Abstract:
The Gallus gallus (chicken) embryo is a central model organism in evolutionary developmental biology. Its anatomy and developmental genetics have been extensively studied, and many relevant evolutionary inferences have been drawn from this work. However, important questions about the developmental origin of the chicken skull bones remain unresolved, so that no solid homologies can be established across organisms. This precludes evolutionary comparisons between this and other avian model systems in which skull anatomy has evolved substantially over the last few million years.(...)
Abstract:
Published in "AIP Conference Proceedings", Vol. 1648
Abstract:
A summary is presented of ATLAS searches for gluinos and first- and second-generation squarks in final states containing jets and missing transverse momentum, with or without leptons or b-jets, in the √s = 8 TeV data set collected at the Large Hadron Collider in 2012. This paper reports the results of new interpretations and statistical combinations of previously published analyses, as well as a new analysis. Since no significant excess of events over the Standard Model expectation is observed, the data are used to set limits in a variety of models. In all the considered simplified models that assume R-parity conservation, the limit on the gluino mass exceeds 1150 GeV at 95% confidence level, for an LSP mass smaller than 100 GeV. Furthermore, exclusion limits are set for left-handed squarks in a phenomenological MSSM model, a minimal Supergravity/Constrained MSSM model, R-parity-violation scenarios, a minimal gauge-mediated supersymmetry breaking model, a natural gauge mediation model, a non-universal Higgs mass model with gaugino mediation and a minimal model of universal extra dimensions.
Abstract:
Doctoral thesis (Doctoral Programme in Biomedical Engineering)
Abstract:
Master's internship report in Informatics Teaching
Abstract:
The main objective of this thesis was to produce a detailed report on flooding, with specific reference to the Clare River catchment. Past flooding in the catchment was assessed, focusing on the November 2009 flood event, and a Geographic Information System was used to produce a graphical representation of the spatial distribution of that flood. Flood risk is prominent within the Clare River catchment, especially around Claregalway. The November 2009 event produced significant fluvial flooding from the Clare River, resulting in considerable damage to property as well as hidden costs, such as the economic impact of closing the N17 until floodwater subsided. Land use and channel conditions have long been recognised for their effect on flooding processes; these factors were examined in the context of the Clare River catchment to determine whether they had any significant effect on flood flows. Climate change has also become recognised as a factor that may produce larger and more frequent flood events in the future. Many experts expect climate change to increase the intensity and duration of rainfall in western Ireland, which would have significant implications for the Clare River catchment, already vulnerable to flooding. Flood estimation techniques are a key aspect of understanding and preparing for flood events. This study uses methods based on the statistical analysis of recorded data, and methods based on a design rainstorm and a rainfall-runoff model, to estimate flood flows. These provide a mathematical basis for evaluating the impact of various factors on flooding and for generating practical design floods that can be used in the design of flood relief measures. The final element of the thesis presents the author's recommendations on how flood risk management techniques can reduce existing flood risk in the Clare River catchment. Future implications for flood risk due to factors such as climate change and poor planning practices are also considered.
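The statistical leg of flood estimation of this kind is commonly an annual-maximum flood-frequency analysis. The short Python sketch below fits a Gumbel (EV1) distribution to an invented annual-maximum flow series and reads off design floods for several return periods; the flow values, and the choice of the Gumbel distribution, are illustrative assumptions, not the thesis's own gauged data or rainfall-runoff modelling.

```python
# Minimal flood-frequency sketch: fit a Gumbel (EV1) distribution to an
# annual-maximum flow series and estimate design floods for chosen return periods.
import numpy as np
from scipy import stats

# Hypothetical annual-maximum flows (m^3/s), invented for illustration
annual_max_flow = np.array([62, 88, 95, 71, 104, 83, 120, 77, 99, 131,
                            68, 92, 110, 85, 74, 126, 101, 89, 117, 96])

loc, scale = stats.gumbel_r.fit(annual_max_flow)   # fit location and scale by MLE
for T in (10, 50, 100):                            # return periods in years
    q = stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
    print(f"{T:>3}-year design flood ~ {q:.0f} m^3/s")
```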
Abstract:
Background: End-stage kidney disease patients continue to have markedly increased cardiovascular disease morbidity and mortality. Analysis of genetic factors connected with the renin-angiotensin system that influence the survival of patients with end-stage kidney disease supports the ongoing search for improved outcomes. Objective: To assess survival and its association with polymorphisms of renin-angiotensin system genes (angiotensin I-converting enzyme insertion/deletion and angiotensinogen M235T) in patients undergoing hemodialysis. Methods: This was an observational study designed to examine the role of renin-angiotensin system genes. We analyzed 473 chronic hemodialysis patients in four dialysis units in the state of Rio de Janeiro. Survival rates were calculated by the Kaplan-Meier method and the differences between the curves were evaluated by the Tarone-Ware, Peto-Prentice and log-rank tests. We also used logistic regression analysis and a multinomial model. A p value ≤ 0.05 was considered statistically significant. The local medical ethics committee approved the study. Results: The mean age of the patients was 45.8 years. The overall survival rate was 48% at 11 years. The major causes of death were cardiovascular diseases (34%) and infections (15%). Logistic regression analysis found statistical significance for the following variables: age (p = 0.000038), TT angiotensinogen genotype (p = 0.08261), and family income greater than five times the minimum wage (p = 0.03089), the latter being a protective factor. Conclusions: The survival of hemodialysis patients is likely to be influenced by the TT genotype of the angiotensinogen M235T gene.
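As a point of reference for the survival analysis described above, the following minimal Python sketch computes a Kaplan-Meier curve from toy follow-up data; the patient data are invented, and the Tarone-Ware, Peto-Prentice and log-rank comparisons are not reproduced here.

```python
# Minimal Kaplan-Meier estimator: at each event time, multiply the running
# survival probability by (1 - deaths / number at risk).
import numpy as np

def kaplan_meier(time, event):
    """time: follow-up time per patient; event: 1 = death observed, 0 = censored."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = len(time)
    surv, curve = 1.0, []
    for t in np.unique(time):
        deaths = np.sum((time == t) & (event == 1))
        if deaths > 0:
            surv *= 1.0 - deaths / at_risk
        curve.append((t, surv))
        at_risk -= np.sum(time == t)   # deaths and censorings both leave the risk set
    return curve

# Toy follow-up times (years), 1 = died, 0 = censored
print(kaplan_meier([1, 2, 2, 3, 5, 8, 11], [1, 0, 1, 1, 0, 1, 0]))
```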
Abstract:
Introduction: Obesity-related comorbidities are present in young obese children, providing a platform for early adult cardiovascular disorders. Objectives: To compare and correlate markers of adiposity with metabolic disturbances and with vascular and cardiac morphology in a European pediatric obese cohort. Methods: We carried out an observational, cross-sectional analysis of a cohort of 121 obese children of both sexes, aged between 6 and 17 years. The control group consisted of 40 children with normal body mass index within the same age range. Markers of adiposity, plasma lipids and lipoproteins, homeostasis model assessment-insulin resistance, common carotid artery intima-media thickness and left ventricular diameters were analyzed. Results: There were statistically significant differences between the control and obese groups for the variables analyzed, all higher in the obese group, except for age, high-density lipoprotein cholesterol and adiponectin, which were higher in the control group. In the obese group, body mass index was directly correlated with left ventricular mass (r = 0.542; p = 0.001), homeostasis model assessment-insulin resistance (r = 0.378; p < 0.001) and mean common carotid artery intima-media thickness (r = 0.378; p < 0.001). In the same group, insulin resistance was present in 38.1%, 12.5% had a combined dyslipidemic pattern, and eccentric hypertrophy was the most common left ventricular geometric pattern. Conclusions: These results suggest that these markers may be used in clinical practice to stratify cardiovascular risk, as well as to assess the impact of weight control programs.
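The correlations reported above (for example, between body mass index and left ventricular mass) are Pearson coefficients with p-values. The snippet below shows the calculation on simulated values; the numbers and variable names are invented for illustration and are not the cohort's data.

```python
# Pearson correlation with p-value, as used for the adiposity-cardiac associations above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
bmi = rng.normal(28, 4, size=121)                        # hypothetical BMI values (kg/m^2)
lv_mass = 60 + 3.5 * bmi + rng.normal(0, 15, size=121)   # hypothetical LV mass (g)

r, p = stats.pearsonr(bmi, lv_mass)
print(f"r = {r:.3f}, p = {p:.4g}")
```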
Abstract:
This paper assesses empirically the importance of size discrimination and disaggregate data for deciding where to locate a start-up concern. We compare three econometric specifications using Catalan data: a multinomial logit with 4 and 41 alternatives (provinces and comarques, respectively) in which firm size is the main covariate; a conditional logit with 4 and 41 alternatives including attributes of the sites as well as size-site interactions; and a Poisson model on the comarques and the full spatial choice set (942 municipalities) with site-specific variables. Our results suggest that if these two issues are ignored, conclusions may be misleading. We provide evidence that large and small firms behave differently and conclude that Catalan firms tend to choose between comarques rather than between municipalities. Moreover, labour-intensive firms seem more likely to be located in the city of Barcelona.
Keywords: Catalonia, industrial location, multinomial response model. JEL: C250, E30, R00, R12
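Of the three specifications compared, the count-data one is the simplest to sketch. The example below fits a Poisson model of start-up counts per site on site-specific covariates using statsmodels; the covariate names and simulated data are illustrative assumptions, not the Catalan dataset or the exact specification used in the paper.

```python
# Poisson count model of start-ups per site on site-specific attributes (illustrative).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_sites = 200
land_price = rng.normal(0, 1, n_sites)      # hypothetical standardized site attributes
labour_pool = rng.normal(0, 1, n_sites)
mu = np.exp(0.5 - 0.4 * land_price + 0.7 * labour_pool)
startups = rng.poisson(mu)                  # simulated start-up counts per site

X = sm.add_constant(np.column_stack([land_price, labour_pool]))
result = sm.GLM(startups, X, family=sm.families.Poisson()).fit()
print(result.summary())
```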
Abstract:
We motivate procedural fairness for matching mechanisms and study two procedurally fair and stable mechanisms: employment by lotto (Aldershof et al., 1999) and the random order mechanism (Roth and Vande Vate, 1990; Ma, 1996). For both mechanisms we give various examples of probability distributions on the set of stable matchings and discuss properties that differentiate employment by lotto and the random order mechanism; in doing so, we address open questions of Aldershof et al. (1999) and Ma (1996) concerning the probability distributions induced by both mechanisms. Finally, we consider an adjustment of the random order mechanism, the equitable random order mechanism, that combines aspects of procedural and "end-state" fairness.
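Below is a minimal, illustrative sketch of one way a random order mechanism can be implemented for a one-to-one ("marriage") matching market: agents arrive in a random order and stability is restored after each arrival by a proposal chain on the newcomer's side, in the spirit of Roth and Vande Vate (1990) and Ma (1996). The data structures, the assumption of strict preferences, and the assumption that all agents are mutually acceptable are simplifications introduced here, not taken from the paper.

```python
# Sketch of a random order mechanism: random arrival order plus incremental
# stabilization via proposal chains started by each newcomer.
import random

def random_order_mechanism(men_prefs, women_prefs, rng=random.Random(0)):
    """men_prefs / women_prefs: dicts mapping each agent to a full strict
    preference list over the other side (most preferred first)."""
    match = {}                     # current partner (or None) of every arrived agent
    arrived = set()
    order = [("M", m) for m in men_prefs] + [("W", w) for w in women_prefs]
    rng.shuffle(order)             # the random arrival order

    def rank(prefs, agent, partner):
        return prefs[agent].index(partner)

    for side, newcomer in order:
        arrived.add((side, newcomer))
        match[(side, newcomer)] = None
        own, other = (men_prefs, women_prefs) if side == "M" else (women_prefs, men_prefs)
        other_side = "W" if side == "M" else "M"
        proposer, start = newcomer, 0          # newcomer starts the proposal chain
        while proposer is not None:
            next_proposer = None
            for candidate in own[proposer][start:]:
                if (other_side, candidate) not in arrived:
                    continue
                current = match[(other_side, candidate)]
                if current is None or rank(other, candidate, proposer) < rank(other, candidate, current):
                    if current is not None:    # displaced agent continues the chain
                        match[(side, current)] = None
                        next_proposer = current
                        next_start = own[current].index(candidate) + 1
                    match[(side, proposer)] = candidate
                    match[(other_side, candidate)] = proposer
                    break
            if next_proposer is None:
                break
            proposer, start = next_proposer, next_start
    return {m: match[("M", m)] for m in men_prefs}

# Tiny example market
men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(random_order_mechanism(men, women))
```

Running the mechanism with different random seeds induces a probability distribution over the stable matchings of the market, which is the object the abstract compares across mechanisms.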
Abstract:
INTRODUCTION: Hip fractures are responsible for excessive mortality, decreasing the 5-year survival rate by about 20%. From an economic perspective, they represent a major source of expense, with direct costs in hospitalization, rehabilitation, and institutionalization. The incidence rate sharply increases after the age of 70, but it can be reduced in women aged 70-80 years by therapeutic interventions. Recent analyses suggest that the most efficient strategy is to implement such interventions in women at the age of 70 years. As several guidelines recommend bone mineral density (BMD) screening of postmenopausal women with clinical risk factors, our objective was to assess the cost-effectiveness of two screening strategies applied to elderly women aged 70 years and older. METHODS: A cost-effectiveness analysis was performed using decision-tree analysis and a Markov model. Two alternative strategies, one measuring BMD of all women, and one measuring BMD only of those having at least one risk factor, were compared with the reference strategy "no screening". Cost-effectiveness ratios were measured as cost per year gained without hip fracture. Most probabilities were based on data observed in the EPIDOS, SEMOF and OFELY cohorts. RESULTS: In this model, which is mostly based on observed data, the strategy "screen all" was more cost-effective than "screen women at risk". For one woman screened at the age of 70 and followed for 10 years, the incremental (additional) cost-effectiveness ratios of these two strategies compared with the reference were 4,235 euros and 8,290 euros, respectively. CONCLUSION: The results of this model, under the assumptions described in the paper, suggest that in women aged 70-80 years, screening all women with dual-energy X-ray absorptiometry (DXA) would be more effective than no screening or screening only women with at least one risk factor. Cost-effectiveness studies based on decision-analysis trees may be useful tools for helping decision makers, and further models based on different assumptions should be performed to improve the level of evidence on cost-effectiveness ratios of the usual screening strategies for osteoporosis.
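The decision-analytic structure described above can be illustrated with a deliberately simplified Markov cohort model and an incremental cost-effectiveness ratio (ICER). In the sketch below, all transition probabilities and costs are invented placeholders, not the EPIDOS/SEMOF/OFELY-based inputs used in the paper.

```python
# Two-state Markov cohort model (no hip fracture / hip fracture) with annual cycles,
# followed by an ICER comparing "screen all" against "no screening".
import numpy as np

def run_cohort(p_fracture_per_year, years=10, cost_screening=0.0, cost_fracture=10000.0):
    states = np.array([1.0, 0.0])             # [fracture-free, fractured]
    transition = np.array([[1 - p_fracture_per_year, p_fracture_per_year],
                           [0.0, 1.0]])       # fracture treated as absorbing here
    cost, fracture_free_years = cost_screening, 0.0
    for _ in range(years):
        fracture_free_years += states[0]      # person-years without hip fracture
        new_states = states @ transition
        cost += (new_states[1] - states[1]) * cost_fracture   # cost of new fractures
        states = new_states
    return cost, fracture_free_years

cost_ref, eff_ref = run_cohort(0.020)                           # "no screening" (placeholder risk)
cost_scr, eff_scr = run_cohort(0.015, cost_screening=500.0)     # "screen all" (DXA + treatment effect)
icer = (cost_scr - cost_ref) / (eff_scr - eff_ref)
print(f"ICER ~ {icer:.0f} euros per hip-fracture-free year gained")
```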
Abstract:
The paper incorporates house prices within an NEG framework leading to the spatial distributions of wages, prices and income. The model assumes that all expenditure goes to firms under a monopolistic competition market structure, that labour efficiency units are appropriate, and that spatial equilibrium exists. The house price model coefficients are estimated outside the NEG model, allowing an econometric analysis of the significance of relevant covariates. The paper illustrates the methodology by estimating wages, income and prices for small administrative areas in Great Britain, and uses the model to simulate the effects of an exogenous employment shock.
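The abstract does not reproduce the model's equations. For background, the nominal-wage equation of the standard Fujita-Krugman-Venables NEG framework, on which wage estimations of this kind are typically built, takes the form below; the paper's exact specification (with house prices and labour efficiency units) may differ.

```latex
% Standard NEG nominal-wage equation (background only, not the paper's exact specification):
% w_i = nominal wage in area i, Y_j = income in area j, P_j = price index in area j,
% T_{ij} = iceberg trade cost between i and j, sigma = elasticity of substitution.
\[
  w_i \;=\; \Bigl[\, \sum_{j} Y_j \, T_{ij}^{\,1-\sigma} \, P_j^{\,\sigma-1} \Bigr]^{1/\sigma}
\]
```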
Abstract:
Report for the scientific sojourn carried out at the Université Catholique de Louvain, Belgium, from March until June 2007. In the first part, the impact of important geometrical parameters such as source and drain thickness, fin spacing, spacer width, etc. on the parasitic fringing capacitance component of multiple-gate field-effect transistors (MuGFETs) is analyzed in depth using finite element simulations. Several architectures, such as single-gate, FinFET (double-gate) and triple-gate devices represented by Pi-gate MOSFETs, are simulated and compared in terms of channel and fringing capacitances for the same occupied die area. The simulations highlight the large impact of reducing the spacing between fins in MuGFETs, and the trade-off between the reduction of parasitic source and drain resistances and the increase of fringing capacitances when Selective Epitaxial Growth (SEG) technology is introduced. The impact of these technological solutions on the transistor cut-off frequencies is also discussed. The second part deals with the effect of volume inversion (VI) on the capacitances of undoped Double-Gate (DG) MOSFETs. For that purpose, we present simulation results for the capacitances of undoped DG MOSFETs using an explicit, analytical compact model. It demonstrates that the transition from the volume inversion regime to dual-gate behaviour is well simulated. The model shows an accurate dependence on the silicon layer thickness, consistent with two-dimensional numerical simulations, for both thin and thick silicon films. Whereas the current drive and transconductance are enhanced in the volume inversion regime, our results show that intrinsic capacitances present higher values as well, which may limit the high-speed (delay time) behaviour of DG MOSFETs under the volume inversion regime.
Abstract:
This paper develops methods for Stochastic Search Variable Selection (currently popular with regression and Vector Autoregressive models) for Vector Error Correction models where there are many possible restrictions on the cointegration space. We show how this allows the researcher to begin with a single unrestricted model and either do model selection or model averaging in an automatic and computationally efficient manner. We apply our methods to a large UK macroeconomic model.
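For readers unfamiliar with Stochastic Search Variable Selection, its core device is a hierarchical mixture ("spike and slab") prior of the George-McCulloch type, sketched below in its generic regression form; the paper's adaptation of this idea to restrictions on the VECM cointegration space is not reproduced here.

```latex
% Generic SSVS mixture prior on a coefficient beta_j: a latent indicator gamma_j
% selects between a tight "spike" (variable effectively excluded) and a diffuse
% "slab" (variable included). Posterior simulation over the gamma's performs
% model selection or model averaging automatically.
\[
  \beta_j \mid \gamma_j \;\sim\; (1-\gamma_j)\,\mathcal{N}\!\bigl(0,\tau_{0j}^{2}\bigr)
  \;+\; \gamma_j\,\mathcal{N}\!\bigl(0,\tau_{1j}^{2}\bigr),
  \qquad \gamma_j \sim \mathrm{Bernoulli}(q_j),
  \qquad \tau_{0j}^{2} \ll \tau_{1j}^{2}.
\]
```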
Abstract:
This paper develops and estimates a model of demand for environmental public goods that allows consumers to learn about their preferences through consumption experiences. We develop a theoretical model of Bayesian updating, perform comparative statics on the model, and show how the theoretical model can be consistently incorporated into a reduced-form econometric model. We then estimate the model using data collected for two environmental goods. We find that the theoretical prediction that additional experience makes consumers more certain about their preferences, in both mean and variance, is supported in each case.
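The Bayesian-updating logic behind this result can be illustrated with a conjugate normal learning sketch; the conjugate setup is an expository assumption made here, not necessarily the paper's own specification.

```latex
% Conjugate normal learning: a consumer with prior belief N(mu_0, sigma_0^2) about the
% value theta of an environmental good observes n consumption experiences
% x_1, ..., x_n ~ N(theta, sigma^2). The posterior is
\[
  \theta \mid x_{1:n} \;\sim\; \mathcal{N}\!\left(
    \frac{\sigma_0^{-2}\,\mu_0 + n\,\sigma^{-2}\,\bar{x}}{\sigma_0^{-2} + n\,\sigma^{-2}},\;
    \bigl(\sigma_0^{-2} + n\,\sigma^{-2}\bigr)^{-1}
  \right),
\]
% so the posterior variance falls monotonically in n: more experience makes stated
% preferences more certain, in line with the empirical finding above.
```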