936 results for errors-in-variables model
Abstract:
VEGF inhibition can promote renal vascular and parenchymal injury, causing proteinuria, hypertension and thrombotic microangiopathy. The mechanisms underlying these side effects are unclear. We investigated the renal effects of administering sunitinib (Su), a VEGF receptor inhibitor, for 45 days to rats with 5/6 renal ablation (Nx). Adult male Munich-Wistar rats were distributed among groups: S+V, sham-operated rats receiving vehicle only; S+Su, sham-operated rats given Su, 4 mg/kg/day; Nx+V, Nx rats receiving vehicle; and Nx+Su, Nx rats receiving Su. Su caused no change in the sham-operated groups. Seven and 45 days after renal ablation, the renal cortical interstitium was expanded, in association with rarefaction of peritubular capillaries. Su did not worsen hypertension, proteinuria or interstitial expansion, nor did it affect capillary rarefaction, suggesting little angiogenic activity in this model. Nx animals exhibited glomerulosclerosis (GS), which was aggravated by Su. This effect could not be explained by podocyte damage, nor could it be ascribed to tuft hypertrophy or hyperplasia. GS may have derived from the organization of capillary microthrombi, frequently observed in Group Nx+Su. Treatment with Su did not reduce the fractional glomerular endothelial area, suggesting functional rather than structural cell injury. Chronic VEGF inhibition has little effect on normal rats, but can affect the glomerular endothelium when renal damage is already present.
Abstract:
The gene XRCC3 (X-ray repair cross-complementing group 3) repairs damage that occurs during recombination between homologous chromosomes. Homologous recombination repair plays an important role in maintaining genome integrity, and double-strand breaks are known to be the main inducers of chromosomal aberrations. Changes in the XRCC3 protein lead to an increase in chromosome segregation errors due to centrosome defects, resulting in aneuploidy and other chromosomal aberrations, such as small increases in telomere length. We examined the XRCC3 Thr241Met polymorphism using PCR-RFLP in 80 astrocytoma and glioblastoma samples. The control group (N = 100) was selected from the general population of São Paulo State. Odds ratios and 95% confidence intervals were calculated using a logistic regression model. Patients carrying the Met allele of the XRCC3 Thr241Met polymorphism had a significantly increased risk of tumor development (odds ratio = 3.13; 95% confidence interval = 1.50-6.50). There were no significant differences in overall patient survival. We suggest that the XRCC3 Thr241Met polymorphism is involved in susceptibility to developing astrocytomas and glioblastomas.
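As a hedged illustration of the statistical analysis described above, the sketch below computes an odds ratio and a Woolf-method 95% confidence interval from a 2x2 allele-carrier table; the counts are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical 2x2 table: Met-allele carriers vs. non-carriers in cases and controls.
# The actual counts are not reported in the abstract; these are placeholders.
cases_met, cases_thr = 48, 32      # patients with/without the Met allele
ctrl_met, ctrl_thr = 38, 62        # controls with/without the Met allele

or_hat = (cases_met * ctrl_thr) / (cases_thr * ctrl_met)

# Woolf's method: 95% CI constructed on the log-odds scale.
se_log_or = np.sqrt(1/cases_met + 1/cases_thr + 1/ctrl_met + 1/ctrl_thr)
lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se_log_or)

print(f"OR = {or_hat:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
```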
Abstract:
This study performed an exploratory analysis of the anthropometric and muscle morphology variables related to one-repetition maximum (1RM) performance, and assessed the capacity of these variables to predict force production. Fifty active males underwent the following experimental procedures: vastus lateralis muscle biopsy, quadriceps magnetic resonance imaging, body mass assessment and a 1RM test in the leg-press exercise. K-means cluster analysis was performed on body mass, the sum of the left and right quadriceps muscle cross-sectional areas (Sigma CSA), the percentage of type II fibers and 1RM performance. The number of clusters was defined a priori, and the clusters were labeled the high strength performance (HSP1RM) group and the low strength performance (LSP1RM) group. Stepwise multiple regressions were performed with body mass, Sigma CSA, percentage of type II fibers and cluster membership as predictor variables and 1RM performance as the response variable. The cluster means +/- SD were 292.8 +/- 52.1 kg, 84.7 +/- 17.9 kg, 19249.7 +/- 1645.5 mm^2 and 50.8 +/- 7.2% for the HSP1RM, and 254.0 +/- 51.1 kg, 69.2 +/- 8.1 kg, 15483.1 +/- 1104.8 mm^2 and 51.7 +/- 6.2% for the LSP1RM, in 1RM, body mass, Sigma CSA and muscle fiber type II percentage, respectively. The most important variable in the cluster division was the Sigma CSA. In addition, Sigma CSA and muscle fiber type II percentage explained the variance in 1RM performance for all participants (adjusted R^2 = 0.35, p = 0.0001) and for the LSP1RM (adjusted R^2 = 0.25, p = 0.002). For the HSP1RM, only the Sigma CSA entered the model, and it showed the highest capacity to explain the variance in 1RM performance (adjusted R^2 = 0.38, p = 0.01). In conclusion, muscle CSA was the most relevant variable for predicting force production in individuals with no strength-training background.
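A minimal sketch of the clustering-plus-regression pipeline described above, using scikit-learn; all data are simulated placeholders, and a plain linear regression stands in for the stepwise procedure reported in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Placeholder data for 50 participants: body mass (kg), summed quadriceps
# CSA (mm^2), type II fiber percentage, and 1RM leg-press load (kg).
body_mass = rng.normal(75, 12, 50)
sigma_csa = rng.normal(17000, 2000, 50)
type2_pct = rng.normal(51, 7, 50)
one_rm = 0.012 * sigma_csa + 0.5 * type2_pct + rng.normal(0, 30, 50)

X = np.column_stack([one_rm, body_mass, sigma_csa, type2_pct])

# Two clusters fixed a priori, as in the study (high vs. low 1RM performance).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Regression of 1RM on CSA and fiber type percentage.
Z = np.column_stack([sigma_csa, type2_pct])
model = LinearRegression().fit(Z, one_rm)
print("R^2 =", model.score(Z, one_rm))
```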
Abstract:
Objective: 3,4-Methylenedioxymethamphetamine (MDMA), or ecstasy, is a synthetic drug used recreationally, mainly by young people. It has been suggested that MDMA has a Th-cell skewing effect, in which Th1 cell activity is suppressed and Th2 cell activity is increased. Experimental allergic airway inflammation in ovalbumin (OVA)-sensitized rodents is a useful model of the Th2 response; therefore, based on the Th2 skewing effect of MDMA, we studied MDMA in a model of allergic lung inflammation in OVA-sensitized mice. Methods: We evaluated cell trafficking in the bronchoalveolar lavage fluid, blood and bone marrow; cytokine production; L-selectin expression; and lung histology. We also investigated the effects of MDMA on tracheal reactivity in vitro and on mast cell degranulation. Results: We found that MDMA given prior to OVA challenge in OVA-sensitized mice decreased leukocyte migration into the lung, as revealed by a lower cell count in the bronchoalveolar lavage fluid and by lung histologic analysis. We also showed that MDMA decreased expression of both Th2-like cytokines (IL-4, IL-5 and IL-10) and adhesion molecules (L-selectin). Moreover, we showed that the hypothalamus-pituitary-adrenal axis is partially involved in the MDMA-induced reduction in leukocyte migration into the lung. Finally, we showed that MDMA decreased tracheal reactivity to methacholine as well as mast cell degranulation in situ. Conclusions: MDMA given prior to OVA challenge in OVA-sensitized allergic mice decreases lung inflammation and airway reactivity, with partial involvement of hypothalamus-pituitary-adrenal axis activation. Together, the data strongly suggest the involvement of a neuroimmune mechanism in the effects of MDMA on the lung inflammatory response and cell recruitment to the lungs of allergic animals.
Abstract:
Background: The importance of the lung parenchyma in the pathophysiology of asthma has previously been demonstrated. Since nitric oxide synthases (NOS) and arginases compete for the same substrate, it is worthwhile to elucidate the effects of combined NOS-arginase dysfunction on the pathophysiology of asthma, particularly in distal lung tissue. We evaluated the effects of arginase and iNOS inhibition on distal lung mechanics and oxidative stress pathway activation in a model of chronic pulmonary allergic inflammation in guinea pigs. Methods: Guinea pigs were exposed to repeated ovalbumin inhalations (twice a week for 4 weeks). The animals received 1400W (an iNOS-specific inhibitor) for 4 days beginning at the last inhalation. Afterwards, the animals were anesthetized and exsanguinated; a slice of the distal lung was then evaluated by oscillatory mechanics, with an arginase inhibitor (nor-NOHA) or vehicle infused in the Krebs solution bath. Tissue resistance (Rt) and elastance (Et) were assessed before and after ovalbumin challenge (0.1%), and lung strips were submitted to histopathological studies. Results: Ovalbumin-exposed animals presented increases in the maximal Rt and Et responses after antigen challenge (p<0.001), in the number of iNOS-positive cells (p<0.001) and in the expression of arginase 2, 8-isoprostane and NF-kB (p<0.001) in distal lung tissue. Administration of 1400W reduced all of these responses (p<0.001) in alveolar septa. Ovalbumin-exposed animals that received nor-NOHA had reductions in Rt and Et after antigen challenge, in iNOS-positive cells, and in 8-isoprostane and NF-kB expression (p<0.001) in lung tissue. Arginase 2 activity was reduced only in the groups treated with nor-NOHA (p<0.05). 8-Isoprostane expression was lower in OVA-NOR-W than in OVA-NOR (p<0.001). Conclusions: In this experimental model, increased arginase content and iNOS-positive cells were associated with constriction of the distal lung parenchyma. This functional alteration may be due to high expression of 8-isoprostane, which has a procontractile effect. The mechanism involved in this response is likely related to the modulation of NF-kB expression, which contributed to activation of the arginase and iNOS pathways. The combination of both inhibitors potentiated the reduction in 8-isoprostane expression in this animal model.
Abstract:
The viscoelasticity of the mammalian lung is determined by the mechanical properties and structural regulation of the airway smooth muscle (ASM). Exposure to polluted air may deteriorate these properties, with harmful consequences for individual health. Formaldehyde (FA) is an important indoor pollutant found among volatile organic compounds. This pollutant permeates through smooth muscle tissue, forming covalent bonds between proteins in the extracellular matrix and intracellular protein structures, changing the mechanical properties of the ASM and inducing asthma symptoms, such as airway hyperresponsiveness, even at low concentrations. Experimentally, the mechanical effect of FA is a stiffening of the tissue, but the mechanism behind this effect is not fully understood. Thus, the aim of this study is to reproduce the mechanical behavior of the ASM, such as contraction and stretching, with and without FA action. To this end, we created a two-dimensional viscoelastic network model based on a Voronoi tessellation, solved using the fourth-order Runge-Kutta method. The equilibrium configuration is reached when the forces in different parts of the network are equal. This model simulates the mechanical behavior of the ASM through a network of dashpots and springs. This dashpot-spring coupling mimics the actomyosin machinery of the ASM through the contraction of springs to a minimum length. We hypothesized that the formation of covalent bonds due to FA action can be represented in the model by a simple change in the elastic constant of the springs, while the action of methacholine (MCh) reduces the equilibrium length of the springs. A sigmoid curve of tension as a function of MCh dose was obtained, showing increased tension when the muscle strip was exposed to FA. Our simulations suggest that FA, at a concentration of 0.1 ppm, can affect the elastic properties of the smooth muscle fibers by a factor of 120%. We also analyzed the dynamic mechanical properties, observing the viscous and elastic behavior of the network. Finally, the proposed model, although simple, incorporates the phenomenology of both MCh and FA and reproduces experimental results observed with in vitro exposure of smooth muscle to FA. Thus, this new mechanical approach incorporates several well-known features of the contractile system of the cells into a tissue-level model. The model can also be used at different biological scales.
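A minimal sketch, under the abstract's own hypothesis, of a single spring-dashpot (Kelvin-Voigt) element integrated with fourth-order Runge-Kutta: FA is represented as a multiplier on the spring constant and MCh as a reduced rest length. The full Voronoi network is omitted and all parameter values are illustrative.

```python
import numpy as np

def kelvin_voigt_rhs(x, k, c, x_rest, f_ext):
    # Force balance for one spring-dashpot pair: c * dx/dt = f_ext - k * (x - x_rest)
    return (f_ext - k * (x - x_rest)) / c

def rk4_step(x, dt, *args):
    # Classical fourth-order Runge-Kutta update for dx/dt = rhs(x).
    k1 = kelvin_voigt_rhs(x, *args)
    k2 = kelvin_voigt_rhs(x + 0.5 * dt * k1, *args)
    k3 = kelvin_voigt_rhs(x + 0.5 * dt * k2, *args)
    k4 = kelvin_voigt_rhs(x + dt * k3, *args)
    return x + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0

k, c, f_ext = 1.0, 0.5, 0.2   # stiffness, damping, external load (arbitrary units)
fa_factor = 2.2               # FA modeled as increased spring stiffness (assumed)
x_rest_mch = 0.8              # MCh modeled as reduced spring rest length (assumed)

x = 1.0  # initial element length
for _ in range(2000):
    x = rk4_step(x, 0.01, k * fa_factor, c, x_rest_mch, f_ext)
print("equilibrium length:", x)  # tends to x_rest + f_ext / (k * fa_factor)
```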
Abstract:
Wave breaking is an important coastal process, influencing hydro-morphodynamic processes such as turbulence generation and wave energy dissipation, run-up on the beach and overtopping of coastal defence structures. During breaking, waves are complex mixtures of air and water ("white water") whose properties affect the velocity and pressure fields in the vicinity of the free surface, and, depending on the breaker characteristics, different mechanisms for air entrainment are usually observed. Several laboratory experiments have investigated the role of air bubbles in the wave breaking process (Chanson & Cummings, 1994, among others) and in wave loading on vertical walls (Oumeraci et al., 2001; Peregrine et al., 2006, among others), showing that the air phase is not negligible, since the turbulent energy dissipation involves the air-water mixture. Recent advances in numerical modelling have given valuable insights into wave transformation and interaction with coastal structures. Among these models, some solve the RANS equations coupled with a free-surface tracking algorithm and describe the velocity, pressure, turbulence and vorticity fields (Lara et al., 2006a-b; Clementi et al., 2007). Single-phase numerical models, in which the constitutive equations are solved only for the liquid phase, neglect the effects induced by air movement and by air bubbles trapped in the water. Numerical approximations at the free surface may induce errors in predicting the breaking point and wave height; moreover, entrapped air bubbles and water splashing in air are not properly represented. The aim of the present thesis is to develop a new two-phase model called COBRAS2 (Cornell Breaking waves And Structures, 2 phases), an enhancement of the single-phase code COBRAS0 originally developed at Cornell University (Lin & Liu, 1998). In the first part of the work both fluids are considered incompressible, while the second part treats air compressibility modelling. The mathematical formulation and numerical resolution of the governing equations of COBRAS2 are derived, and model-experiment comparisons are shown. In particular, validation tests are performed to prove model stability and accuracy. The simulation of a large air bubble rising in an otherwise quiescent water pool reveals the model's capability to reproduce the physics of the process realistically. Analytical solutions for stationary and internal waves are compared with the corresponding numerical results, in order to test processes involving a wide range of density differences. Waves induced by dam break in different scenarios (on dry and wet beds, as well as on a ramp) are studied, focusing on the role of air as the medium through which the water wave propagates and on the numerical representation of bubble dynamics. Simulations of solitary and regular waves, characterized by both spilling and plunging breakers, are analyzed and compared with experimental data and other numerical models, in order to investigate the influence of air on wave breaking mechanisms and to assess model capability and accuracy. Finally, air compressibility modelling is included in the newly developed model and validated, revealing an accurate reproduction of the processes. Some preliminary tests on wave impact on vertical walls are also performed: since air flow modelling allows a more realistic reproduction of breaking wave propagation, the dependence of impact pressure values on breaker shape and aeration characteristics is studied, and, on the basis of a qualitative comparison with experimental observations, the numerical simulations achieve good results.
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected over many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual-level epidemiological studies, which avoid the biases carried by aggregated analyses. Starting from the collected disease counts, with expected disease counts calculated from reference population disease rates, an SMR is derived in each area as the MLE under a Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because the population underlying the area is small or because the disease under study is rare. Disease mapping models and other techniques for screening disease rates across the map, aimed at detecting anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, without abandoning the preliminary-study perspective that an analysis of SMR indicators calls for. We implement control of the FDR, a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a fully Bayesian hierarchical model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model has the advantage that a single test (i.e. a test in a single area) is evaluated by means of all the observations in the map under study, rather than just the single observation. This improves the power of tests in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data (FDR-hat) can be calculated for any set of b_i's relative to areas declared high-risk (where the null hypothesis is rejected) by averaging the b_i's themselves. The FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the FDR-hat does not exceed a prefixed value; we call these FDR-hat based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, and under-estimation of the FDR produces a loss of specificity. Moreover, our model retains the interesting feature of providing an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true and the risk level in the remaining areas. In summarizing the simulation results we always consider FDR estimation in sets constituted by all the b_i's below a threshold t. We show plots of FDR-hat and of the true FDR (known by simulation) against the threshold t to assess FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner applying the model (from the closeness between FDR-hat and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of these estimates in the high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all the simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aim. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low, but their specificity is high; in these scenarios selection rules based on FDR-hat = 0.05 or FDR-hat = 0.10 can be suggested. In cases where the number of true alternative hypotheses (true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on FDR-hat = 0.15 gain power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except at very small values (much lower than 0.05), resulting in a loss of specificity of an FDR-hat = 0.05 based decision rule. In such scenarios decision rules based on FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 cannot be recommended, because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both relative risk estimation and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
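A hedged sketch of the FDR-hat selection rule described above: the estimated FDR of a rejection set is the average of the posterior null probabilities b_i in that set, and areas are added in order of increasing b_i as long as the estimate stays below a prefixed level. The b_i values are hypothetical placeholders.

```python
import numpy as np

def fdr_hat_selection(b, alpha=0.05):
    """Select areas (reject nulls) so that the estimated FDR stays <= alpha.

    b : posterior probabilities of the null hypothesis (absence of risk),
        one per area, e.g. averaged over MCMC draws.
    Returns the indices of the areas declared high-risk.
    """
    order = np.argsort(b)  # most convincing rejections first
    # Running FDR-hat of the first k selected areas = mean of their b_i's.
    running_fdr = np.cumsum(b[order]) / np.arange(1, len(b) + 1)
    n_reject = np.searchsorted(running_fdr, alpha, side="right")
    return order[:n_reject]

# Hypothetical posterior null probabilities for 12 areas.
b = np.array([0.01, 0.02, 0.03, 0.20, 0.04, 0.60,
              0.90, 0.05, 0.30, 0.70, 0.08, 0.95])
print(fdr_hat_selection(b, alpha=0.05))  # indices of selected high-risk areas
```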
Abstract:
Satellite SAR (Synthetic Aperture Radar) interferometry is a valid technique for digital elevation model (DEM) generation, providing metric accuracy even without ancillary data of good quality. Depending on the situation, the interferometric phase can be interpreted both as topography and as a displacement that may have occurred between the two acquisitions. Once these two components have been separated, it is possible to produce a DEM from the first or a displacement map from the second. InSAR DEM generation in the cryosphere is not a straightforward operation, because almost every interferometric pair also contains a displacement component which, even if small, can introduce huge errors in the final product when it is interpreted as topography during the phase-to-height conversion step. For a glacier, assuming the linearity of its velocity flux, it is therefore necessary to differentiate at least two pairs in order to isolate the topographic residue alone. In the case of an ice shelf, the displacement component of the interferometric phase is determined not only by the flux of the glacier but also by the different tide heights. In fact, even if the two scenes of the interferometric pair are acquired at the same time of day, only the main tidal terms disappear in the interferogram, while the smaller ones do not cancel completely and thus appear as displacement fringes. Given the availability of tide gauges (or, alternatively, of an accurate tidal model), it is possible to calculate a tidal correction to be applied to the differential interferogram. It is important to be aware that the tidal correction is applicable only if the position of the grounding line is known, which is often a controversial matter. This thesis describes the methodology applied for the generation of the DEM of the Drygalski ice tongue in Northern Victoria Land, Antarctica. The displacement was determined both interferometrically and from the coregistration offsets of the two scenes. Particular attention was devoted to investigating the role of parameters such as timing annotations and orbit reliability. Results were validated in a GIS environment by comparison with GPS displacement vectors (for the displacement map and InSAR DEM) and ICESat GLAS points (for the InSAR DEM).
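To illustrate why a small unmodeled displacement produces large DEM errors, the sketch below applies the standard InSAR phase-to-height relations with a hypothetical ERS-like geometry; all numerical values are placeholders, not the thesis's parameters.

```python
import numpy as np

# Hypothetical C-band repeat-pass geometry (placeholder values).
wavelength = 0.056          # radar wavelength, m
slant_range = 850e3         # sensor-target distance, m
incidence = np.radians(23)  # incidence angle
b_perp = 150.0              # perpendicular baseline, m

# Height of ambiguity: the height change producing one 2*pi topographic fringe,
# h_amb = lambda * R * sin(theta) / (2 * B_perp).
h_amb = wavelength * slant_range * np.sin(incidence) / (2 * b_perp)

# A residual line-of-sight displacement d, wrongly converted to height,
# produces an error h_err = d * R * sin(theta) / B_perp.
d = 0.01  # 1 cm of unmodeled motion (e.g., a residual tidal term)
h_err = d * slant_range * np.sin(incidence) / b_perp

print(f"height of ambiguity ~ {h_amb:.0f} m; 1 cm of motion -> ~{h_err:.0f} m DEM error")
```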
Abstract:
The present work concerns the study of debris flows and, in particular, the related hazard in the Alpine environment. In recent years several methodologies have been developed to evaluate the hazard associated with such a complex phenomenon, whose velocity, impact force and poor temporal predictability are responsible for the high associated hazard level. This research focuses on the depositional phase of debris flows through the application of a numerical model (DFlowz), and on hazard evaluation based on the morphometric, morphological and geological characterization of watersheds. The main aims are to test the validity of DFlowz simulations and assess sources of error, in order to understand how the empirical uncertainties influence the predictions, and to explore the possibility of performing hazard analysis starting from the identification of debris-flow-susceptible catchments and the definition of their activity level. 25 well-documented debris flow events were back-analyzed with the model DFlowz (Berti and Simoni, 2007): built on empirical relations between event volume and the planimetric and cross-sectional inundated areas, the code delineates the area affected by an event from information about the volume, the preferential flow path and a digital elevation model (DEM) of the fan area. The analysis uses an objective methodology for evaluating the accuracy of the predictions and involves calibrating the model through factors describing the uncertainty associated with the semi-empirical relationships. The general assumptions on which the model is based were verified, although the predictive capabilities are influenced by the uncertainties of the empirical scaling relationships, which must necessarily be taken into account and depend mostly on errors in the estimation of the deposited volume. In addition, in order to test the predictive capabilities of physically based models, some events were simulated with RAMMS (RApid Mass MovementS). The model, developed by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) in Birmensdorf and the Swiss Federal Institute for Snow and Avalanche Research (SLF), takes a one-phase approach based on the Voellmy rheology (Voellmy, 1955; Salm et al., 1990). The input file combines the total volume of the debris flow, located in a release area, with a mean depth. The model predicts the affected area, the maximum depth and the flow velocity in each cell of the input DTM. Regarding the hazard analysis based on watershed characterization, the database collected by the Alto Adige Province offers an opportunity to examine debris-flow sediment dynamics at the regional scale and to analyze lithologic controls. With the aim of advancing the current understanding of debris flows, this study focuses on 82 events in order to characterize the topographic conditions associated with their initiation, transport and deposition, as well as seasonal patterns of occurrence, and to examine the role played by bedrock geology in sediment transfer.
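A hedged sketch of the semi-empirical scaling on which DFlowz-type delineation rests: cross-sectional and planimetric inundated areas grow with event volume to the 2/3 power. The coefficients below are indicative values from the debris-flow literature, not the calibrated DFlowz parameters.

```python
def inundated_areas(volume_m3, k_cross=0.1, k_plan=20.0):
    """Semi-empirical scaling of inundated areas with event volume.

    Cross-sectional area A and planimetric area B both scale as V^(2/3);
    k_cross and k_plan are indicative debris-flow coefficients (assumed),
    to be calibrated against back-analyzed events in practice.
    """
    a_cross = k_cross * volume_m3 ** (2 / 3)
    b_plan = k_plan * volume_m3 ** (2 / 3)
    return a_cross, b_plan

for v in (1e3, 1e4, 1e5):  # event volumes in m^3
    a, b = inundated_areas(v)
    print(f"V = {v:8.0f} m^3 -> A = {a:7.1f} m^2, B = {b:9.0f} m^2")
```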
Abstract:
The present research aims to shed light on the demanding puzzle characterizing the issue of child undernutrition in India. The so-called 'Indian development paradox' identifies the phenomenon whereby higher per capita income is recorded alongside a lethargic reduction in the proportion of underweight children aged below three years. In the period from 2000 to 2005, real Gross Domestic Product per capita grew at 5.4% annually, whereas the proportion of underweight children declined from 47% to 46%, a mere one percentage point. Such a trend opens up space for discussing the traditionally assumed linkage between income poverty and undernutrition, as well as food intervention as the main focus of policies designed to fight child hunger. It also unlocks the door for evaluating the role of an alternative economic approach to explaining undernutrition, the Capability Approach. The Capability Approach argues for widening the informational basis to account not only for resources, but also for variables related to liberties, opportunities and autonomy in pursuing what individuals value. The econometric analysis highlights the relevance of including behavioral factors when explaining child undernutrition. In particular, the ability of the mother to move freely in the community without having to ask permission from her husband or mother-in-law is statistically significant when included in the model, which also accounts for confounding traditional variables such as economic wealth and food security. Focusing on agency, the results indicate the necessity of measuring autonomy in different domains and the need to improve the measurement scale for agency data, especially with regard to the domain of household duties. Finally, future research is required to investigate policy venues for increasing the agency of women, and of the communities they live in, as a viable strategy for reducing the plague of child undernutrition in India.
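A hedged sketch of the kind of model described above: a logit of underweight status on a maternal freedom-of-movement indicator, controlling for wealth and food security. All data, coefficients and variable names are simulated placeholders, not the study's survey data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000

# Placeholder survey-style covariates.
wealth = rng.normal(0, 1, n)           # household asset index
food_secure = rng.binomial(1, 0.6, n)  # household food security indicator
mobility = rng.binomial(1, 0.5, n)     # mother can move freely without permission

# Simulated outcome: underweight more likely with low wealth and low autonomy.
logit = -0.5 - 0.6 * wealth - 0.4 * food_secure - 0.5 * mobility
underweight = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([wealth, food_secure, mobility]))
fit = sm.Logit(underweight, X).fit(disp=False)
print(fit.summary(xname=["const", "wealth", "food_secure", "mobility"]))
```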
Abstract:
The subject of this thesis is the area of Applied Mathematics known as Inverse Problems. Inverse problems are those where a set of measured data is analysed in order to get as much information as possible about a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data, and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, within rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of the assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. EIT (electrical impedance tomography) is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations; this method uses only a single set of measurements for the reconstruction. The second is an algorithm based on linearisation which uses more than one set of measurements. A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung, and noninvasive monitoring of heart function and blood flow.
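A minimal sketch of the linearisation idea, assuming a known sensitivity (Jacobian) matrix J mapping conductivity perturbations to boundary voltage changes: a Tikhonov-regularised least-squares step recovers the perturbation from noisy data. Dimensions and data are arbitrary placeholders, not the algorithm developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder linearized EIT problem: m boundary measurements, n image pixels.
m, n = 208, 576
J = rng.normal(size=(m, n))   # sensitivity (Jacobian) matrix, assumed known
d_sigma_true = np.zeros(n)
d_sigma_true[100:120] = 0.5   # a small conductivity anomaly
dv = J @ d_sigma_true + 0.01 * rng.normal(size=m)  # noisy voltage differences

# Tikhonov-regularized solution of J d_sigma ~ dv:
# d_sigma = (J^T J + lam * I)^(-1) J^T dv
lam = 1.0
d_sigma = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dv)

print("recovered anomaly mean:", d_sigma[100:120].mean())
```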
Abstract:
This thesis is mainly concerned with a model calculation for generalized parton distributions (GPDs). We calculate vector and axial-vector GPDs for the N → N and N → Δ transitions in the framework of a light-front quark model. This requires elaborating a connection between transition amplitudes and GPDs. We provide the first quark model calculations for N → Δ GPDs. The examination of transition amplitudes leads to various model-independent consistency relations. These relations are not exactly obeyed by our model calculation, since the use of the impulse approximation in the light-front quark model leads to a violation of Poincaré covariance. We explore the impact of this covariance breaking on the GPDs and form factors determined in our model calculation and find large effects. The reference-frame dependence of our results, which originates from the breaking of Poincaré covariance, can be eliminated by introducing spurious covariants. We extend this formalism in order to obtain frame-independent results from our transition amplitudes.
Abstract:
The thesis comprises 4 papers, and its main goal is to provide evidence on the prominent role that behavioral analysis can play in the personnel economics domain. The research tool prevalently used in the thesis is experimental analysis. The first paper provides laboratory evidence on how the standard screening model, based on the assumption that the pecuniary dimension represents the main choice variable of workers, fails when intrinsic motivation is introduced into the analysis. The second paper explores workers' behavioral reactions when dealing with supervisors who may make errors in the assessment of their job performance. In particular, deserving agents who have exerted high effort may not be rewarded (Type I errors), and undeserving agents who have exerted low effort may be rewarded (Type II errors). Although a standard neoclassical model predicts both errors to be equally detrimental to effort provision, this prediction fails when tested through a laboratory experiment. Findings from this study suggest that failing to reward deserving agents is significantly more detrimental than rewarding undeserving agents. The third paper investigates the performance of two antithetical non-monetary incentive schemes on schooling achievement. The study is conducted through a field experiment. Students randomized to the main treatments were incentivized to cooperate or to compete in order to earn additional exam points. Consistently with the theoretical model proposed in the paper, the level of effort in the competitive scheme proved to be higher than in the cooperative setting; interestingly, however, this result is characterized by a strong gender effect. The fourth paper exploits a natural experiment setting generated by the credit crunch that occurred in the UK in 2007. The economic turmoil negatively affected the private sector, while public sector employees were not directly hit by the crisis. This shock, through the rise of the unemployment rate and the increasing labor market uncertainty, generated an exogenous variation in the opportunity cost of maternity leave for the private sector labor force. This paper identifies the different responses.