17 results for Errors-in-variables model

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

The quality of temperature and humidity retrievals from the infrared SEVIRI sensors on the geostationary Meteosat Second Generation (MSG) satellites is assessed by means of a one-dimensional variational (1D-Var) algorithm. The study is performed with the aim of improving the spatial and temporal resolution of available observations to feed analysis systems designed for high-resolution regional-scale numerical weather prediction (NWP) models. The non-hydrostatic forecast model COSMO (COnsortium for Small scale MOdelling) in the ARPA-SIM operational configuration is used to provide background fields. Only clear-sky observations over sea are processed. An optimised 1D-Var set-up comprising the two water vapour and the three window channels is selected; it maximises the reduction of errors in the model backgrounds while ensuring ease of operational implementation through accurate bias-correction procedures and correct radiative transfer simulations. The 1D-Var retrieval quality is first quantified in relative terms, using statistics to estimate the reduction in the background model errors. Additionally, the absolute retrieval accuracy is assessed by comparing the analyses with independent radiosonde and satellite observations. The inclusion of satellite data brings a substantial reduction in the warm and dry biases present in the forecast model. Moreover, it is shown that the retrieval profiles generated by the 1D-Var are well correlated with the radiosonde measurements. Subsequently, the 1D-Var technique is applied to two three-dimensional case studies: a false-alarm case that occurred in Friuli-Venezia Giulia on 8 July 2004 and a heavy-precipitation case that occurred in the Emilia-Romagna region between 9 and 12 April 2005. The impact of satellite data for these two events is evaluated in terms of increments in the column-integrated water vapour and saturation water vapour, in the 2-metre temperature and specific humidity, and in the surface temperature. To improve the 1D-Var technique, a method to calculate flow-dependent model error covariance matrices is also assessed. The approach employs members from an ensemble forecast system generated by perturbing physical parameterisation schemes inside the model. The improved set-up applied to the case of 8 July 2004 shows a substantially neutral impact.
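
For orientation, the quantity a 1D-Var retrieval of this kind minimises for each profile is the standard variational cost function below, balancing the departure from the background profile x_b (with error covariance B, the matrix that the flow-dependent extension estimates from ensemble perturbations) against the departure of the simulated radiances H(x) from the observed SEVIRI radiances y (with error covariance R). This is the general textbook form, given as an assumption rather than the thesis' exact formulation.

```latex
J(\mathbf{x}) \;=\; \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\,\mathbf{B}^{-1}\,(\mathbf{x}-\mathbf{x}_b)
\;+\; \tfrac{1}{2}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathrm{T}}\,\mathbf{R}^{-1}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)
```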

Relevance: 100.00%

Abstract:

The β-amyloid (βA) peptide is the major component of the senile plaques that are one of the hallmarks of Alzheimer's Disease (AD). It is well recognized that Aβ exists in multiple assembly states, such as soluble oligomers or insoluble fibrils, which affect neuronal viability and may contribute to disease progression. In particular, common βA neurotoxic mechanisms are Ca2+ dyshomeostasis, reactive oxygen species (ROS) formation, altered signaling, mitochondrial dysfunction and neuronal death by necrosis and apoptosis. Recent studies show that the ubiquitin-proteasome pathway plays a crucial role in the degradation of short-lived and regulatory proteins that are important in a variety of basic and pathological cellular processes, including apoptosis. Guanosine (Guo) is a purine nucleoside present extracellularly in the brain that shows a spectrum of biological activities under both physiological and pathological conditions. It has recently become recognized that both neurons and glia release guanine-based purines. However, the role of Guo in AD is still not well established. In this study, we investigated the mechanistic basis of the neuroprotective effects of Guo against Aβ peptide-induced toxicity in neuronal SH-SY5Y cells, in terms of mitochondrial dysfunction and translocation of phosphatidylserine (PS), a marker of apoptosis, using the MTT and Annexin-V assays, respectively. In particular, treatment of SH-SY5Y cells with Guo (12.5-75 μM) in the presence of monomeric βA25-35 (the neurotoxic core of Aβ) or of oligomeric and fibrillar βA1-42 peptides showed strong dose-dependent inhibitory effects on βA-induced toxic events. The maximum inhibition of mitochondrial function loss and PS translocation was observed with 75 μM Guo. Subsequently, to investigate whether the neuroprotection by Guo can be ascribed to its ability to modulate proteasome activity levels, we used lactacystin, a specific proteasome inhibitor. We found that the antiapoptotic effects of Guo were completely abolished by lactacystin. To assess whether these effects resulted from an increase in proteasome activity induced by Guo, the chymotrypsin-like activity was measured employing the fluorogenic substrate Z-LLL-AMC. Treatment of SH-SY5Y cells with Guo (75 μM for 0-6 h) induced a strong, time-dependent increase in proteasome activity. In parallel, no increase in ubiquitinated protein levels was observed under the same experimental conditions. We then evaluated the involvement of anti- and pro-apoptotic proteins such as Bcl-2, Bad and Bax by western blot analysis. Interestingly, Bax levels decreased after 2 h of treatment of SH-SY5Y cells with Guo. Taken together, these results demonstrate that the neuroprotective effects of Guo against βA-induced apoptosis are mediated, at least in part, via proteasome activation. In particular, these findings suggest a novel neuroprotective pathway mediated by Guo, which involves rapid degradation of pro-apoptotic proteins by the proteasome. In conclusion, the present data raise the possibility that Guo could be used as an agent for the treatment of AD.

Relevance: 100.00%

Abstract:

In the present study we analyzed new neuroprotective therapeutic strategies for PD (Parkinson's disease) and AD (Alzheimer's disease). Current therapeutic strategies for treating PD and AD offer mainly transient symptomatic relief, but it is still impossible to block the loss of neurons and thus the progression of PD and AD. There is considerable consensus that the increased production and/or aggregation of α-synuclein (α-syn) and β-amyloid peptide (Aβ) plays a central role in the pathogenesis of PD, related synucleinopathies and AD. Therefore, we identified antiamyloidogenic compounds and tested their effect as neuroprotective drug-like molecules against α-syn and β-amyloid cytotoxicity in PC12 cells. Herein, we show that two nitro-catechol compounds (entacapone and tolcapone) and five catechol-containing compounds (dopamine, pyrogallol, gallic acid, caffeic acid and quercetin) with antioxidant and anti-inflammatory properties are potent inhibitors of α-syn and β-amyloid oligomerization and fibrillization. Subsequently, we show that the inhibition of α-syn and β-amyloid oligomerization and fibrillization is correlated with the neuroprotection these compounds confer against α-syn- and β-amyloid-induced cytotoxicity in PC12 cells. Finally, we focused on the neuroprotective role of microglia and on the possibility that the neuroprotective properties of these cells could be used as a therapeutic strategy in PD and AD. Here, we used an in vitro model to demonstrate the neuroprotection conferred by a 48-h microglial conditioned medium (MCM) on cerebellar granule neurons (CGNs) challenged with the neurotoxin 6-hydroxydopamine (6-OHDA), which induces a Parkinson-like neurodegeneration, with Aβ42, which induces an Alzheimer-like neurodegeneration, and with glutamate, which is involved in the major neurodegenerative diseases. We show that MCM almost completely protects CGNs from 6-OHDA neurotoxicity, partially from glutamate excitotoxicity, but not from Aβ42 toxicity.

Relevance: 100.00%

Abstract:

During my PhD I developed an innovative technique to reproduce in vitro the 3D thymic microenvironment, to be used for the growth and differentiation of thymocytes and for possible transplantation in conditions of depressed thymic immune regulation. The work was developed in the laboratory of Tissue Engineering at the University Hospital in Basel, Switzerland, under the tutorship of Prof. Ivan Martin. Since a number of studies have suggested that the 3D structure of the thymic microenvironment might play a key role in regulating the survival and functional competence of thymocytes, I focused my efforts on the isolation and purification of the extracellular matrix of the mouse thymus. Specifically, based on the assumption that thymic epithelial cells (TECs) can favour the differentiation of pre-T lymphocytes, I developed a specific decellularization protocol to obtain the intact, DNA-free extracellular matrix of the adult mouse thymus. Two different protocols satisfied the main requirements of a decellularized matrix, according to qualitative and quantitative assays. In particular, the residual DNA content was less than 10%, no positive staining for cells was found, and the 3D structure and composition of the ECM were maintained. In addition, I was able to prove that the decellularized matrices were not cytotoxic for the cells themselves and were able to increase the expression of MHC II antigens compared to control cells grown under standard conditions. I was also able to prove that TECs grow and proliferate for up to ten days on top of the decellularized matrix. After a complete characterization of the culture system, these innovative natural scaffolds could be used to improve the standard culture conditions of TECs, to study in vitro the action of different factors on their differentiation genes, and to test the ability of TECs to induce in vitro maturation of seeded T lymphocytes.

Relevance: 100.00%

Abstract:

The use of electronic nicotine delivery systems (ENDS) has recently grown. E-cigarettes generate carcinogenic chemical compounds and reactive oxygen species (ROS): carbonyls and ROS are formed when the liquid comes into contact with the heating element. In this study the chemical and biological effects of the coil resistance used in the same device were investigated. A preliminary in vivo study of the new heat-not-burn device (IQOS®) was also conducted to evaluate its effect on antioxidant biomarkers. The amounts of formaldehyde, acetaldehyde and acrolein were measured by GC-MS analysis. The two e-liquids used for carbonyl detection differed only in the presence of nicotine. The nicotine-free liquid was then used for the detection of ROS in the aerosol. The impact of the nicotine-free vapor on cell viability in H1299 human lung carcinoma cells, as well as the biological effects in a rat model of e-cig aerosol exposure, were also evaluated. After the exposure of Sprague Dawley rats to e-cig and IQOS® aerosols, the effect of the 28-day treatment was examined on the enzymatic and non-enzymatic antioxidant response, lung inflammation, blood homeostasis and tissue damage, using scanning electron microscopy (SEM). The results show a significant correlation between low coil resistance and the generation of higher concentrations of the selected carbonyls and ROS in the aerosols. Cell viability was reduced in inverse relation to coil resistance. The experimental model highlighted an impairment of the pulmonary antioxidant and detoxifying machinery. SEM images show disorganization of the alveolar and bronchial epithelium. IQOS®-exposed animals show a significant production of ROS, related to an imbalance of antioxidant defences and alteration of macromolecule integrity. This research demonstrates how several toxicological effects can potentially occur in e-cig consumers who use a low-resistance device coupled with a nicotine-free liquid. ENDS may expose users to hazardous compounds which may promote chronic pathologies and degenerative diseases.
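
The inverse relation between coil resistance and carbonyl/ROS generation is consistent with simple Joule heating: at a fixed supply voltage, a lower-resistance coil dissipates more electrical power and therefore runs hotter. The numbers below are purely illustrative assumptions (a nominal 3.7 V battery and two hypothetical resistances), not values measured in the study.

```latex
P = \frac{V^2}{R}:\qquad
P_{0.5\,\Omega} = \frac{(3.7\,\mathrm{V})^2}{0.5\,\Omega} \approx 27\,\mathrm{W},
\qquad
P_{1.5\,\Omega} = \frac{(3.7\,\mathrm{V})^2}{1.5\,\Omega} \approx 9\,\mathrm{W}
```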

Relevance: 100.00%

Abstract:

Gliomas are among the most frequent primary malignant brain tumors. The acquisition of stem-like features likely contributes to the malignant nature of high-grade gliomas and may be responsible for the initiation, growth, and recurrence of these tumors. In this regard, although the traditional 2D cell culture system has been widely used in cancer research, it shows limitations in maintaining the stemness properties of cancer cells and in mimicking the in vivo microenvironment. In order to overcome these limitations, different three-dimensional (3D) culture systems have been developed to better mimic the tumor microenvironment. Cancer cells cultured in 3D structures may represent a more reliable in vitro model due to increased cell-cell and cell-extracellular matrix (ECM) interactions. Several attempts to recreate brain cancer tissue in vitro are described in the literature. However, to date, it is still unclear which main characteristics the ideal model should reproduce. The overall goal of this project was the development of a 3D in vitro model able to reproduce the brain ECM microenvironment and to recapitulate pathological conditions for the study of tumor-stroma interactions, tumor invasion ability, and the molecular phenotype of glioma cells. We performed an in silico bioinformatic analysis using the GEPIA2 software to compare the expression levels of seven matrix proteins in LGG tumors and healthy tissues. Then, we carried out an FFPE retrospective study in order to evaluate the percentage of expression of the selected proteins. We then developed a 3D scaffold composed of hyaluronic acid and collagen IV in a 50:50 ratio. We used two astrocytoma cell lines, HTB-12 and HTB-13. In conclusion, we developed an in vitro 3D model able to reproduce the composition of the brain tumor ECM, demonstrating that it is a feasible platform to investigate the interaction between tumor cells and the matrix.

Relevance: 100.00%

Abstract:

Wave breaking is an important coastal process, influencing hydro-morphodynamic processes such as turbulence generation and wave energy dissipation, run-up on the beach and overtopping of coastal defence structures. During breaking, waves are complex mixtures of air and water ("white water") whose properties affect the velocity and pressure fields in the vicinity of the free surface and, depending on the breaker characteristics, different mechanisms for air entrainment are usually observed. Several laboratory experiments have been performed to investigate the role of air bubbles in the wave breaking process (Chanson & Cummings, 1994, among others) and in wave loading on vertical walls (Oumeraci et al., 2001; Peregrine et al., 2006, among others), showing that the air phase is not negligible, since the turbulent energy dissipation involves the air-water mixture. The recent advancement of numerical models has given valuable insights into wave transformation and interaction with coastal structures. Among these models, some solve the RANS equations coupled with a free-surface tracking algorithm and describe the velocity, pressure, turbulence and vorticity fields (Lara et al. 2006 a-b, Clementi et al., 2007). A single-phase numerical model, in which the constitutive equations are solved only for the liquid phase, neglects the effects induced by air movement and by air bubbles trapped in the water. Numerical approximations at the free surface may induce errors in predicting the breaking point and wave height; moreover, entrapped air bubbles and water splashing in air are not properly represented. The aim of the present thesis is to develop a new two-phase model called COBRAS2 (Cornell Breaking waves And Structures, 2 phases), which is an enhancement of the single-phase code COBRAS0, originally developed at Cornell University (Lin & Liu, 1998). In the first part of the work both fluids are considered incompressible, while the second part treats the modelling of air compressibility. The mathematical formulation and the numerical solution of the governing equations of COBRAS2 are derived and some model-experiment comparisons are shown. In particular, validation tests are performed in order to prove the stability and accuracy of the model. The simulation of the rise of a large air bubble in an otherwise quiescent water pool reveals the capability of the model to reproduce the physics of the process in a realistic way. Analytical solutions for stationary and internal waves are compared with the corresponding numerical results, in order to test processes involving a wide range of density differences. Waves induced by dam break in different scenarios (on dry and wet beds, as well as on a ramp) are studied, focusing on the role of air as the medium in which the water wave propagates and on the numerical representation of bubble dynamics. Simulations of solitary and regular waves, characterized by both spilling and plunging breakers, are analyzed and compared with experimental data and other numerical models in order to investigate the influence of air on wave breaking mechanisms and to underline the capability and accuracy of the model. Finally, the modelling of air compressibility is included in the newly developed model and validated, revealing an accurate reproduction of the processes.
Some preliminary tests on wave impact on vertical walls are also performed: since air-flow modelling allows a more realistic reproduction of breaking wave propagation, the dependence of the impact pressure values on the breaker shape and aeration characteristics is studied and, on the basis of a qualitative comparison with experimental observations, the numerical simulations achieve good results.
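
For orientation, the sketch below shows the generic form of the incompressible two-fluid (volume-of-fluid, VOF) equations that a two-phase RANS solver of this kind discretises: a single velocity field shared by air and water, fluid properties weighted by the water volume fraction F, and an advection equation for F. This is a textbook formulation assumed for illustration, not the exact set of equations derived in the thesis (which also includes a turbulence closure and, in its final part, air compressibility).

```latex
\nabla \cdot \mathbf{u} = 0, \qquad
\frac{\partial (\rho\,\mathbf{u})}{\partial t} + \nabla \cdot (\rho\,\mathbf{u}\mathbf{u})
  = -\nabla p + \nabla \cdot \left[(\mu + \mu_t)\left(\nabla \mathbf{u} + \nabla \mathbf{u}^{\mathrm{T}}\right)\right] + \rho\,\mathbf{g},
\qquad
\frac{\partial F}{\partial t} + \nabla \cdot (F\,\mathbf{u}) = 0,
\qquad
\rho = F\,\rho_w + (1-F)\,\rho_a,\quad \mu = F\,\mu_w + (1-F)\,\mu_a
```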

Relevance: 100.00%

Abstract:

In this work we aim to propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from the collected disease counts and from expected disease counts calculated by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method, which focuses on the control of multiple testing, without however abandoning the preliminary-study perspective that an analysis of SMR indicators is meant to serve. We implement control of the FDR, a quantity largely used to address multiple-comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-area issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed to be independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical, fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just the single observation. This improves the test power in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (denoted FDR-hat) can be calculated for any set of areas declared at high risk (where the null hypothesis is rejected) by averaging the corresponding b_i's. FDR-hat can then be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that FDR-hat does not exceed a prefixed value; we call these FDR-hat based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation of the FDR produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true and the risk level in the remaining areas. In summarizing the simulation results we always consider the FDR estimation in the sets constituted by all the b_i's lower than a threshold t. We show graphs of FDR-hat and of the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between FDR-hat and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) versus FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing of the relative risk estimates we compare box plots of such estimates in the high-risk areas (known by simulation), obtained both by our model and by the classic Besag, York and Mollié model. All the summary tools are worked out for all the simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aim. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low but the specificity is high; in such scenarios the use of an FDR-hat = 0.05 or FDR-hat = 0.10 based selection rule can be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values of 0.15 are also well estimated, and an FDR-hat = 0.15 based decision rule gains power while maintaining a high specificity. On the other hand, in the scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of an FDR-hat = 0.05 based decision rule. In such scenarios FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 based decision rules cannot be suggested because the true FDR is actually much higher. As regards the relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of the relative risk values and the FDR control, except in non-small area and large risk level scenarios. A case study is finally presented to show how the method can be used in epidemiology.
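
The decision rule described above can be stated very compactly: given the posterior null probabilities b_i, the estimated FDR of any candidate rejection set is the average of its b_i's, and areas are added in order of increasing b_i until that average would exceed the chosen level. The sketch below is a generic illustration of this rule with made-up inputs, not code from the thesis.

```python
import numpy as np

def fdr_hat_selection(b, level=0.05):
    """Select high-risk areas from posterior null probabilities b_i.

    b     : array of posterior probabilities of 'absence of risk' (one per area),
            e.g. estimated from the MCMC output of a hierarchical BYM-type model.
    level : target estimated FDR.

    Returns the indices of the areas declared at high risk: the largest set,
    built by adding areas in order of increasing b_i, whose average b_i
    (the estimated FDR of the set) does not exceed `level`.
    """
    b = np.asarray(b, dtype=float)
    order = np.argsort(b)                      # most likely high-risk areas first
    running_fdr = np.cumsum(b[order]) / np.arange(1, b.size + 1)
    n_selected = int(np.sum(running_fdr <= level))
    return order[:n_selected]

# Illustrative posterior null probabilities for 8 hypothetical areas.
b_i = [0.01, 0.03, 0.20, 0.60, 0.02, 0.95, 0.04, 0.50]
print(fdr_hat_selection(b_i, level=0.05))      # -> [0 4 1 6]
```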

Relevance: 100.00%

Abstract:

Satellite SAR (Synthetic Aperture Radar) interferometry is a valid technique for the generation of digital elevation models (DEM), providing metric accuracy even without ancillary data of good quality. Depending on the situation, the interferometric phase can be interpreted both as topography and as a displacement that may have occurred between the two acquisitions. Once these two components have been separated, it is possible to produce a DEM from the former or a displacement map from the latter. InSAR DEM generation in the cryosphere is not a straightforward operation, because almost every interferometric pair also contains a displacement component which, even if small, could introduce large errors in the final product when interpreted as topography during the phase-to-height conversion step. For a glacier, assuming the linearity of its velocity flux, it is therefore necessary to difference at least two pairs in order to isolate the topographic residual only. In the case of an ice shelf, the displacement component in the interferometric phase is determined not only by the flux of the glacier but also by the different tide heights at the two acquisitions. In fact, even if the two scenes of the interferometric pair are acquired at the same time of day, only the main tidal terms disappear in the interferogram, while the smaller ones do not cancel completely and therefore correspond to displacement fringes. Provided tide gauges (or, alternatively, an accurate tidal model) are available, it is possible to calculate a tidal correction to be applied to the differential interferogram. It is important to be aware that the tidal correction is applicable only if the position of the grounding line is known, which is often a controversial matter. This thesis describes the methodology applied for the generation of the DEM of the Drygalski ice tongue in Northern Victoria Land, Antarctica. The displacement has been determined both interferometrically and from the coregistration offsets of the two scenes. Particular attention has been devoted to investigating the role of some parameters, such as timing annotations and orbit reliability. The results have been validated in a GIS environment by comparison with GPS displacement vectors (displacement map and InSAR DEM) and ICESat GLAS points (InSAR DEM).
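
As a generic reminder of why even a small displacement or residual tidal signal matters, the standard repeat-pass relations between interferometric phase, height and line-of-sight motion are sketched below (λ wavelength, B⊥ perpendicular baseline, R slant range, θ look angle). This is the textbook form, assumed here rather than quoted from the thesis.

```latex
\varphi_{\mathrm{topo}} = \frac{4\pi}{\lambda}\,\frac{B_{\perp}}{R\sin\theta}\,h
\quad\Longrightarrow\quad
h = \frac{\lambda\,R\sin\theta}{4\pi B_{\perp}}\,\varphi_{\mathrm{topo}},
\qquad
\varphi_{\mathrm{disp}} = \frac{4\pi}{\lambda}\,\Delta r
```

Since the displacement term does not scale with the baseline, a centimetre-level line-of-sight motion (or an uncompensated tidal term) interpreted as topography can map into height errors of tens of metres when B⊥ is small, which is why at least two pairs must be differenced and, on an ice shelf, a tidal correction applied.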

Relevance: 100.00%

Abstract:

The present work concerns the study of debris flows and, in particular, the related hazard in the Alpine environment. During the last years several methodologies have been developed to evaluate the hazard associated with such a complex phenomenon, whose velocity, impact force and poor temporal predictability are responsible for the related high hazard level. This research focuses on the depositional phase of debris flows through the application of a numerical model (DFlowz), and on hazard evaluation based on the morphometric, morphological and geological characterization of watersheds. The main aims are to test the validity of the DFlowz simulations and to assess the sources of error in order to understand how the empirical uncertainties influence the predictions; on the other hand, the research addresses the possibility of performing hazard analysis starting from the identification of debris-flow susceptible catchments and the definition of their activity level. 25 well documented debris flow events have been back-analyzed with the model DFlowz (Berti and Simoni, 2007): derived from the implementation of empirical relations between the event volume and the planimetric and cross-sectional inundated areas, the code allows the areas affected by an event to be delineated by taking into account information about the volume, the preferential flow path and a digital elevation model (DEM) of the fan area. The analysis uses an objective methodology for evaluating the accuracy of the prediction and involves the calibration of the model based on factors describing the uncertainty associated with the semi-empirical relationships. The general assumptions on which the model is based have been verified, although the predictive capabilities are influenced by the uncertainties of the empirical scaling relationships, which necessarily have to be taken into account and depend mostly on errors in the estimation of the deposited volume. In addition, in order to test the prediction capabilities of physics-based models, some events have been simulated with RAMMS (RApid Mass MovementS). The model, developed by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) in Birmensdorf and the Swiss Federal Institute for Snow and Avalanche Research (SLF), adopts a one-phase approach based on the Voellmy rheology (Voellmy, 1955; Salm et al., 1990). The input file combines the total volume of the debris flow, located in a release area, with a mean depth. The model predicts the affected area, the maximum depth and the flow velocity in each cell of the input DTM. As regards the hazard analysis based on watershed characterization, the database collected by the Alto Adige Province represents an opportunity to examine debris-flow sediment dynamics at the regional scale and to analyze lithologic controls. With the aim of advancing the current understanding of debris flows, this part of the study focuses on 82 events in order to characterize the topographic conditions associated with their initiation, transport and deposition, the seasonal patterns of occurrence, and the role played by bedrock geology in sediment transfer.
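
For reference, semi-empirical mobility relations of the kind implemented in DFlowz typically link the inundated cross-sectional area A and the planimetric area B of the deposit to the event volume V through power laws with a 2/3 exponent; the form below (with calibration coefficients k_A and k_B) is the one commonly used in LAHARZ-type approaches and is given as an assumed illustration, not as the calibrated relations of the thesis.

```latex
A = k_A\,V^{2/3}, \qquad B = k_B\,V^{2/3}
```

With this form, uncertainty in the estimated volume V propagates directly (with exponent 2/3) into the predicted inundated areas, consistent with the observation above that prediction errors depend mostly on the deposited-volume estimate.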

Relevance: 100.00%

Abstract:

The present research aims to shed light on the demanding puzzle of child undernutrition in India. The so-called 'Indian development paradox' identifies the phenomenon whereby higher levels of income per capita are recorded alongside a sluggish reduction in the proportion of underweight children aged below three years. Thus, in the period from 2000 to 2005, real Gross Domestic Product per capita grew annually at 5.4%, whereas the proportion of underweight children declined from 47% to 46%, a mere one percentage point. Such a trend opens up space for discussing the traditionally assumed link between income poverty and undernutrition, as well as food intervention as the main focus of policies designed to fight child hunger. It also opens the door to evaluating the role of an alternative economic approach to explaining undernutrition, the Capability Approach. The Capability Approach argues for widening the informational basis to account not only for resources, but also for variables related to liberties, opportunities and autonomy in pursuing what individuals value. The econometric analysis highlights the relevance of including behavioral factors when explaining child undernutrition. In particular, the ability of the mother to move freely in the community without the need to ask permission of her husband or mother-in-law is statistically significant when included in the model, which also accounts for confounding traditional variables such as economic wealth and food security. Also, focusing on agency, the results indicate the necessity of measuring autonomy in different domains and the need to improve the measurement scale for agency data, especially with regard to the domain of household duties. Finally, future research is required to investigate policy avenues for increasing agency among women and in the communities they live in as a viable strategy for reducing the plague of child undernutrition in India.
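
Purely as an illustration of the kind of specification the econometric analysis describes (an undernutrition outcome regressed on wealth, food security and a maternal-autonomy indicator), one could fit a logit model as below; all variable names and the data file are hypothetical, not those used in the thesis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: one row per child under three (assumed file name).
df = pd.read_csv("child_nutrition_survey.csv")

# underweight      : 1 if weight-for-age z-score < -2, else 0   (assumed coding)
# wealth_index     : household asset-based wealth score
# food_secure      : 1 if the household is food secure
# mother_mobility  : 1 if the mother can move freely without asking permission
model = smf.logit(
    "underweight ~ wealth_index + food_secure + mother_mobility",
    data=df,
).fit()
print(model.summary())
```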

Relevance: 100.00%

Abstract:

The thesis comprises 4 papers and its main goal is to provide evidence on the prominent impact that behavioral analysis can have in the personnel economics domain. The research tool prevalently used in the thesis is experimental analysis. The first paper provides laboratory evidence on how the standard screening model, based on the assumption that the pecuniary dimension is the workers' main choice variable, fails when intrinsic motivation is introduced into the analysis. The second paper explores workers' behavioral reactions when dealing with supervisors that may make errors in the assessment of their job performance. In particular, deserving agents that have exerted high effort may not be rewarded (Type-I errors) and undeserving agents that have exerted low effort may be rewarded (Type-II errors). Although a standard neoclassical model predicts both errors to be equally detrimental to effort provision, this prediction fails when tested through a laboratory experiment. Findings from this study suggest that failing to reward deserving agents is significantly more detrimental than rewarding undeserving agents. The third paper investigates the performance of two antithetic non-monetary incentive schemes on schooling achievement. The study is conducted through a field experiment: students randomized to the main treatments were incentivized to cooperate or to compete in order to earn additional exam points. Consistently with the theoretical model proposed in the paper, the level of effort in the competitive scheme proved to be higher than in the cooperative setting. Interestingly, however, this result is characterized by a strong gender effect. The fourth paper exploits a natural-experiment setting generated by the credit crunch that occurred in the UK in 2007. The economic turmoil negatively affected the private sector, while public-sector employees were not directly hit by the crisis. This shock, through the rise of the unemployment rate and the increasing labor-market uncertainty, generated an exogenous variation in the opportunity cost of maternity leave in the private-sector labor force. This paper identifies the different responses.

Relevance: 100.00%

Abstract:

The aim of this thesis is to apply multilevel regression models in the context of household surveys. The hierarchical structure in this type of data is characterized by many small groups. In recent years, comparative and multilevel analyses in the field of perceived health have grown in number. The purpose of this thesis is to develop a multilevel analysis with three levels of hierarchy for the Physical Component Summary outcome in order to: evaluate the magnitude of the within- and between-group variance at each level (individual, household and municipality); explore which covariates affect perceived physical health at each level; compare the model-based and design-based approaches in order to establish the informativeness of the sampling design; and estimate a quantile regression for hierarchical data. The target population is Italian residents aged 18 years and older. Our study shows a high degree of homogeneity among level-1 units belonging to the same group, with an intraclass correlation of 27% in a level-2 null model. Almost all the variance is explained by level-1 covariates: in our model, the explanatory variables with the greatest impact on the outcome are disability, inability to work, age and chronic diseases (18 pathologies). An additional analysis is performed using a novel procedure, the Linear Quantile Mixed Model, here termed Multilevel Linear Quantile Regression. This gives us the possibility of describing more generally the conditional distribution of the response through the estimation of its quantiles, while accounting for the dependence among the observations, which represents a great advantage of our models with respect to classic multilevel regression. The median regression with random effects proves to be more efficient than the mean regression in representing the central tendency of the outcome. A more detailed analysis of the conditional distribution of the response at other quantiles highlighted a differential effect of some covariates along the distribution.
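
For reference, the level-2 null (intercept-only) model behind the 27% intraclass correlation quoted above can be written as below, with individuals i nested in level-2 groups j (households in this application); the notation is the generic textbook form, and the full three-level model of the thesis adds a municipality-level random effect, so this is an assumed illustration rather than the thesis' exact specification.

```latex
y_{ij} = \beta_0 + u_j + \varepsilon_{ij}, \qquad
u_j \sim N(0,\sigma_u^2),\quad \varepsilon_{ij} \sim N(0,\sigma_\varepsilon^2),
\qquad
\rho = \frac{\sigma_u^2}{\sigma_u^2 + \sigma_\varepsilon^2} \approx 0.27
```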

Relevance: 100.00%

Abstract:

In the last few years the resolution of numerical weather prediction (NWP) models has become higher and higher with the progress of technology and knowledge. As a consequence, a large amount of initial data has become fundamental for a correct initialization of the models. The potential of radar observations for improving the initial conditions of high-resolution NWP models has long been recognized, and their operational application is becoming more frequent. The fact that many NWP centres have recently put into operations convection-permitting forecast models, many of which assimilate radar data, emphasizes the need for an approach to providing quality information, in order to avoid radar errors degrading the model's initial conditions and, therefore, its forecasts. Environmental risks can be related to various causes: meteorological, seismic, hydrological/hydraulic. Flash floods have a horizontal dimension of 1-20 km and belong to the meso-gamma scale; this scale can be modelled only with NWP models of the highest resolution, such as the COSMO-2 model. One of the problems in modelling extreme convective events is related to the atmospheric initial conditions: the scale at which atmospheric conditions are assimilated in a high-resolution model is about 10 km, a value too coarse for a correct representation of the initial conditions of convection. Assimilation of radar data, with a resolution of about one kilometre every 5 or 10 minutes, can be a solution to this problem. In this contribution a pragmatic and empirical approach to deriving a radar data quality description is proposed, to be used in radar data assimilation and more specifically in the latent heat nudging (LHN) scheme. The convective capabilities of the COSMO-2 model are then investigated through some case studies. Finally, this work shows some preliminary experiments on the coupling of a high-resolution meteorological model with a hydrological one.
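
As a rough sketch of the idea behind latent heat nudging, the scheme that the proposed radar quality information is intended to feed: the model's latent-heating profile is scaled by the ratio between the radar-derived and the modelled surface precipitation rates, so that too-weak (too-strong) modelled precipitation receives extra (reduced) heating. The formula below is the commonly cited general form of the scheme, given as an assumption rather than the thesis' exact implementation.

```latex
\Delta\!\left(\frac{\partial T}{\partial t}\right)_{\mathrm{LHN}}
= \left(\frac{RR_{\mathrm{radar}}}{RR_{\mathrm{model}}} - 1\right)
  \left(\frac{\partial T}{\partial t}\right)_{\mathrm{LH}}
```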

Relevance: 100.00%

Abstract:

Although errors might foster learning, they can also be perceived as something to avoid if they are associated with negative consequences (e.g., receiving a bad grade or being mocked by classmates). Such adverse perceptions may trigger negative emotions and error-avoidance attitudes, limiting the possibility of using errors for learning. These students' reactions may be influenced by relational and cultural aspects of errors that characterise the learning environment. Accordingly, the main aim of this research was to investigate whether relational and cultural characteristics associated with errors affect the psychological mechanisms triggered by making mistakes. In the theoretical part, we described the role of errors in learning using an integrated multilevel approach (i.e., psychological, relational, and cultural levels of analysis). We then presented three studies that analysed how cultural and relational error-related variables affect psychological aspects. Each study adopted a specific empirical methodology (qualitative, experimental, and correlational) and investigated a different sample (teachers, primary school pupils, and middle school students). The findings of study one (cultural level) highlighted that errors acquire different meanings, which are associated with different teacher error-handling strategies (e.g., supporting or penalising errors). Study two (relational level) demonstrated that teachers' supportive error-handling strategies promote students' perceptions of being in a positive error climate. The findings of study three (relational and psychological levels) showed that a positive error climate fosters students' adaptive reactions towards errors and learning outcomes. Overall, our findings indicate that different variables influence students' learning-from-errors process and that teachers play an important role in conveying specific meanings of errors during learning activities, dealing with students' mistakes supportively, and establishing an error-friendly classroom environment.