17 results for Probability of fixation
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This Thesis is devoted to the study of the optical companions of Millisecond Pulsars in Galactic Globular Clusters (GCs), as part of a large project started at the Department of Astronomy of the University of Bologna, in collaboration with other institutions (Astronomical Observatories of Cagliari and Bologna, University of Virginia), specifically dedicated to the study of the environmental effects on passive stellar evolution in Galactic GCs. Globular Clusters are very efficient "kilns" for generating exotic objects, such as Millisecond Pulsars (MSPs), low-mass X-ray binaries (LMXBs) or Blue Straggler Stars (BSSs). In particular, MSPs are formed in binary systems containing a Neutron Star which is spun up through mass accretion from the evolving companion (e.g. Bhattacharya & van den Heuvel 1991). The final stage of this recycling process is either the core of a peeled star (generally a Helium white dwarf) or a very light, almost exhausted star, orbiting a very fast rotating Neutron Star (an MSP). Despite the large difference in total mass between the disk of the Galaxy and the Galactic GC system (up to a factor of 10^3), the percentage of fast rotating pulsars in binary systems found in the latter is much higher. MSPs in GCs show spin periods in the range 1.3-30 ms, spin-down rates Pdot ~ 10^-19 s/s and magnetic fields B ~ 10^8 gauss, lower than those of "normal" radio pulsars. The high probability of disruption of a binary system after a supernova explosion explains why we expect only a low percentage of recycled millisecond pulsars with respect to the whole pulsar population: in fact, only about 10% of the ~1800 known radio pulsars are MSPs. It is not surprising that MSPs are overabundant in GCs with respect to the Galactic field, since in the Galactic disk MSPs can only form through the evolution of primordial binaries, and only if the binary survives the supernova explosion which leads to the formation of the neutron star.
On the other hand, the extremely high stellar density in the core of GCs, relative to most of the rest of the Galaxy, favors the formation of several different binary systems suitable for the recycling of NSs (Davies et al. 1998). In this thesis we present the properties of two millisecond pulsar companions discovered in two globular clusters: the Helium white dwarf orbiting the MSP PSR 1911-5958A in NGC 6752, and the second known case of a tidally deformed star orbiting an eclipsing millisecond pulsar, PSR J1701-3006B in NGC 6266.
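The spin parameters quoted above are linked by the standard magnetic-dipole braking model, in which the surface field scales as B ≈ 3.2×10^19 √(P·Pdot) gauss. A minimal sketch of this estimate (the coefficient is the conventional dipole value; the input numbers are typical MSP figures from the abstract, not measurements of a specific pulsar):

```python
import math

def surface_b_field(p_s, p_dot):
    """Standard magnetic-dipole estimate of the surface field (gauss)
    from the spin period P (s) and the spin-down rate Pdot (s/s)."""
    return 3.2e19 * math.sqrt(p_s * p_dot)

# Typical MSP values as in the abstract: P of a few ms, Pdot ~ 1e-19 s/s
b_msp = surface_b_field(3e-3, 1e-19)
```

With P = 3 ms and Pdot = 10^-19 s/s this gives a field of a few × 10^8 gauss, consistent with the low fields of recycled pulsars quoted above.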
Abstract:
In Performance-Based Earthquake Engineering (PBEE), evaluating the seismic performance (or seismic risk) of a structure at a given site has gained major attention, especially in the past decade. One of the objectives of PBEE is to quantify the seismic reliability of a structure (due to future random earthquakes) at a site. For that purpose, Probabilistic Seismic Demand Analysis (PSDA) is utilized as a tool to estimate the Mean Annual Frequency (MAF) of exceeding a specified value of a structural Engineering Demand Parameter (EDP). This dissertation focuses mainly on applying the average of a certain number of spectral acceleration ordinates over a certain interval of periods, Sa,avg(T1,…,Tn), as a scalar ground motion Intensity Measure (IM) when assessing the seismic performance of inelastic structures. Since the interval of periods over which Sa,avg is computed reflects the greater or lesser influence of higher vibration modes on the inelastic response, it is appropriate to speak of improved IMs. The results using these improved IMs are compared with conventional elastic-based scalar IMs (e.g., pseudo-spectral acceleration, Sa(T1), or peak ground acceleration, PGA) and with an advanced inelastic-based scalar IM (i.e., inelastic spectral displacement, Sdi). The advantages of applying improved IMs are: (i) "computability" of the seismic hazard according to traditional Probabilistic Seismic Hazard Analysis (PSHA), because ground motion prediction models are already available for Sa(Ti), and hence it is possible to employ existing models to assess hazard in terms of Sa,avg; and (ii) "efficiency", i.e. smaller variability of structural response, which was minimized in order to identify the optimal range over which to compute Sa,avg. More work is needed to also assess the desirable properties of "sufficiency" and "scaling robustness", which are disregarded in this dissertation.
However, for ordinary records (i.e., with no pulse-like effects), using the improved IMs is found to be more accurate than using the elastic- and inelastic-based IMs. For structural demands that are dominated by the first mode of vibration, the advantage of using Sa,avg is negligible relative to the conventionally used Sa(T1) and the advanced Sdi. For structural demands with significant higher-mode contribution, an improved scalar IM that incorporates higher modes needs to be utilized. In order to fully understand the influence of the IM on the seismic risk, a simplified closed-form expression for the probability of exceeding a limit state capacity was chosen as a reliability measure under seismic excitations and implemented for Reinforced Concrete (RC) frame structures. This closed-form expression is particularly useful for the seismic assessment and design of structures, taking into account the uncertainty in the generic variables, structural "demand" and "capacity", as well as the uncertainty in the seismic excitations. The assumed framework employs nonlinear Incremental Dynamic Analysis (IDA) procedures in order to estimate the variability in the response of the structure (demand) to seismic excitations, conditioned on the IM. The estimate of the seismic risk obtained with the simplified closed-form expression is affected by the choice of IM: the final seismic risk is not constant across IMs, although it remains within the same order of magnitude. Possible reasons concern the assumed non-linear model, or the insufficiency of the selected IM. Since it is impossible to state what the "real" probability of exceeding a limit state is by looking at the total risk alone, the only way forward is the optimization of the desirable properties of an IM.
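As a concrete illustration of the improved IM, Sa,avg over an interval of periods is commonly defined in the literature as the geometric mean of the spectral ordinates; a minimal sketch under that assumption (an arithmetic mean would be an equally simple variant, and the spectrum values here are placeholders):

```python
import numpy as np

def sa_avg(periods, sa_values, t_lo, t_hi, n=10):
    """Averaged intensity measure Sa,avg(T1..Tn): geometric mean of
    n spectral-acceleration ordinates sampled on [t_lo, t_hi].
    `periods`/`sa_values` define one record's response spectrum,
    linearly interpolated between the given ordinates."""
    ts = np.linspace(t_lo, t_hi, n)
    sa = np.interp(ts, periods, sa_values)
    return float(np.exp(np.mean(np.log(sa))))
```

Hazard in terms of Sa,avg can then reuse existing Sa(Ti) prediction models, which is the "computability" advantage noted above.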
Abstract:
Precipitation retrieval over high latitudes, particularly snowfall retrieval over ice and snow, using satellite-based passive microwave spectrometers, is currently an unsolved problem. The challenge results from the large variability of microwave emissivity spectra for snow and ice surfaces, which can mimic, to some degree, the spectral characteristics of snowfall. This work focuses on the development of a new snowfall detection algorithm specific to high-latitude regions, based on a combination of active and passive sensors able to discriminate between snowing and non-snowing areas. The space-borne Cloud Profiling Radar (on CloudSat), the Advanced Microwave Sounding Units A and B (on NOAA-16) and the infrared spectrometer MODIS (on AQUA) have been co-located for 365 days, from October 1st, 2006 to September 30th, 2007. CloudSat products have been used as truth to calibrate and validate all the proposed algorithms. The methodological approach can be summarised in two steps. In the first step, an empirical search for a threshold, aimed at discriminating the no-snow cases, was performed, following Kongoli et al. [2003]. Since this single-channel approach did not produce satisfactory results, a more statistically sound approach was attempted. Two different techniques, which allow the probability above and below a Brightness Temperature (BT) threshold to be computed, have been applied to the available data. The first technique is based upon a Logistic Distribution to represent the probability of snow given the predictors. The second technique, termed Bayesian Multivariate Binary Predictor (BMBP), is a fully Bayesian technique that requires no hypothesis on the shape of the probabilistic model (such as, for instance, the Logistic), but only the estimation of the BT thresholds.
The results obtained show that both proposed methods are able to discriminate snowing and non-snowing conditions over the Polar regions with a probability of correct detection larger than 0.5, highlighting the importance of a multispectral approach.
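The flavour of a binary-pattern Bayesian detector can be sketched as follows: each channel's BT is reduced to an above/below-threshold bit, and P(snow | pattern) is estimated from co-located CloudSat labels via Bayes' rule. This is only an illustrative Laplace-smoothed sketch, not the thesis's actual BMBP estimator:

```python
from collections import Counter

def train_bmbp(bt_samples, labels, thresholds):
    """Train a toy Bayesian binary predictor. Each sample is a vector of
    brightness temperatures (K); `labels` are CloudSat-derived snow flags;
    `thresholds` binarize each channel. Returns P(snow | BT vector)."""
    patt = lambda bts: tuple(b > t for b, t in zip(bts, thresholds))
    counts = {True: Counter(), False: Counter()}
    for bts, snow in zip(bt_samples, labels):
        counts[snow][patt(bts)] += 1
    n_snow = sum(counts[True].values())
    n_clear = sum(counts[False].values())

    def p_snow(bts):
        k = patt(bts)
        # Laplace-smoothed pattern likelihoods under each class
        like_s = (counts[True][k] + 1) / (n_snow + 2)
        like_c = (counts[False][k] + 1) / (n_clear + 2)
        prior = n_snow / (n_snow + n_clear)
        return like_s * prior / (like_s * prior + like_c * (1 - prior))

    return p_snow
```

A detection is declared when the returned posterior exceeds 0.5, which mirrors the probability-of-correct-detection evaluation described above.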
Abstract:
The present study carries out an analysis of rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A literature review reveals a general lack of studies dealing with the modelling of the rural built environment, hence a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has driven the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two types of transformation dynamics mainly affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of the present study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of a number of conceptual steps covering the different aspects of the development of a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the most suitable algorithm to be adopted in relation to the statistical theory and method used, and the calibration and evaluation of the model.
A different combination of factors in various parts of the territory generated conditions that were more or less favourable for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents which is not suitable for building allocation. The presence or absence of buildings can thus be adopted as an indicator of these driving conditions, since it represents the expression of the action of the driving forces in the land suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, makes it possible to associate the concept of presence and absence with point features, generating a point process. The presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and are therefore generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for the analysis of the explanatory variables and for the identification of the key driving variables behind the site selection process for new building allocation. The model developed by following the methodology is applied to a case study to test the validity of the methodology. In particular, the study area chosen for testing the methodology is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics have occurred intensively.
The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model was carried out on spatial data regarding the periurban and rural parts of the study area within the 1975-2005 time period, by means of a generalised linear model. The resulting output of the model fit is a continuous grid surface whose cells assume probability values, ranging from 0 to 1, of building occurrence across the rural and periurban parts of the study area. The response variable thus captures the changes in the rural built environment that occurred in this time interval, and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretative capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends which occurred in other study areas, and with reference to different time intervals, depending on the availability of data. The use of suitable data in terms of time, information and spatial resolution, and the costs related to data acquisition, pre-processing and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the rural built environment distribution in the study area. In order to predict future scenarios, it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
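The presence/absence response described above is the classical setting for a logistic-regression GLM. A minimal self-contained sketch (plain gradient ascent on the log-likelihood; a real application would use a statistics package, and the predictor columns stand in for the geomorphologic and socio-economic variables of the study):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Fit a logistic-regression GLM for building presence (y=1) vs.
    absence (y=0) given explanatory variables X, by gradient ascent
    on the Bernoulli log-likelihood. Returns [intercept, coefs...]."""
    X = np.column_stack([np.ones(len(X)), X])   # add intercept column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # fitted probabilities
        w += lr * X.T @ (y - p) / len(y)        # average score gradient
    return w

def predict_prob(X, w):
    """Probability surface: P(building occurrence) at each location."""
    X = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-X @ w))
```

Evaluating `predict_prob` on a grid of cell-wise predictors yields exactly the 0-to-1 probability surface the abstract describes.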
Abstract:
In the post-genomic era, with the massive production of biological data, understanding the factors affecting protein stability is one of the most important and challenging tasks for highlighting the role of mutations in relation to human maladies. The problem is at the basis of what is referred to as molecular medicine, with the underlying idea that pathologies can be detailed at a molecular level. To this purpose, scientific efforts focus on characterising mutations that hamper protein functions and thereby affect the biological processes at the basis of cell physiology. New techniques have been developed with the aim of detailing single nucleotide polymorphisms (SNPs) at large in all the human chromosomes, and as a consequence the information stored in specific databases is increasing exponentially. Mutations found at the DNA level, when occurring in transcribed regions, may lead to mutated proteins, and this can be a serious medical problem, largely affecting the phenotype. Bioinformatics tools are urgently needed to cope with the flood of genomic data stored in databases and to analyse the role of SNPs at the protein level. In principle, several experimental and theoretical observations suggest that protein stability in the solvent-protein space is responsible for correct protein functioning. Mutations found to be disease-related during DNA analysis are therefore often assumed to perturb protein stability as well. However, so far no extensive analysis at the proteome level has investigated whether this is the case. Computational methods have also been developed to infer whether a mutation is disease-related and, independently, whether it affects protein stability. Therefore, whether the perturbation of protein stability is related to what is routinely referred to as a disease is still a big question mark.
In this work we have tried, for the first time, to explore the relation between mutations at the protein level and their relevance to disease, with a large-scale computational study of the data from different databases. To this aim, in the first part of the thesis, for each mutation type we have derived two probabilistic indices (for 141 out of 150 possible SNPs): the perturbing index (Pp), which indicates the probability that a given mutation affects protein stability, considering all the "in vitro" thermodynamic data available, and the disease index (Pd), which indicates the probability of a mutation being disease-related, given all the mutations that have been clinically associated so far. We find, with robust statistics, that the two indices correlate, with the exception of the mutations that are somatic cancer-related. In this way, each of the 150 mutation types can be coded by two values that allow a direct comparison with database information. Furthermore, we also implement a computational method which, starting from the protein structure, predicts the effect of a mutation on protein stability, and find that it outperforms a set of other predictors performing the same task. The predictor is based on support vector machines and takes protein tertiary structures as input. We show that the predicted data correlate well with the data from the databases. All our efforts therefore add to the SNP annotation process and, more importantly, establish the relationship between protein stability perturbation and the human variome, leading to the diseasome.
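In spirit, each index is an empirical fraction over the relevant database entries, and the relation between Pp and Pd can be checked with a plain correlation. A hypothetical sketch (the 1 kcal/mol ΔΔG cutoff and the toy data are illustrative assumptions, not the thesis's actual definitions):

```python
def perturbing_index(ddg_values):
    """Toy Pp for one mutation type: fraction of measured stability
    changes (DDG, kcal/mol) whose magnitude exceeds an assumed
    1 kcal/mol perturbation cutoff."""
    hits = sum(1 for d in ddg_values if abs(d) > 1.0)
    return hits / len(ddg_values)

def pearson(xs, ys):
    """Pearson correlation, e.g. between per-mutation-type Pp and Pd."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Computing `pearson` over the 141 (Pp, Pd) pairs is the kind of check behind the correlation statement above.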
Abstract:
Hydrologic risk (and the closely related hydro-geologic risk) is, and has always been, a very relevant issue, due to the severe consequences that floods, and waters in general, may provoke in terms of human and economic losses. Floods are natural phenomena, often catastrophic, and cannot be avoided, but their damages can be reduced if they are predicted sufficiently in advance. For this reason, flood forecasting plays an essential role in hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, flood forecasting has made significant progress in recent decades; nonetheless, models are imperfect, which means that we are still left with a residual uncertainty on what will actually happen. In this thesis, this type of uncertainty is what will be discussed and analyzed. In operational problems, it is possible to affirm that the ultimate aim of forecasting systems is not to reproduce the river behavior: this is only a means for reducing the uncertainty associated with what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to clearly define what is meant by uncertainty, since in the literature there is often confusion on this issue. Therefore, the first objective of this thesis is to clarify this concept, starting with a key question: should the choice of the intervention strategy be based on the evaluation of the model prediction, i.e. on its ability to represent reality, or on the evaluation of what will actually happen on the basis of the information given by the model forecast?
Once the previous idea is made unambiguous, the other main concern of this work is to develop a tool that can provide effective decision support, making it possible to perform objective and realistic risk evaluations. In particular, such a tool should be able to provide an uncertainty assessment that is as accurate as possible. This means primarily three things: it must be able to correctly combine all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity, and it must quantify the flooding probability. Furthermore, given that the time to implement prevention strategies is often limited, the flooding probability must be linked to the time of occurrence. For this reason, it is necessary to quantify the flooding probability within a time horizon related to that required to implement the intervention strategy, and it is also necessary to assess the probability distribution of the flooding time.
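Given an ensemble of forecast hydrographs, both quantities of interest (the flooding probability within the horizon and the distribution of the flooding time) reduce to simple counts over the members. A hedged sketch, assuming equally likely ensemble members and a fixed flooding threshold:

```python
def flooding_probability(ensemble, threshold):
    """P(flood within the horizon): fraction of ensemble members (each a
    list of forecast water levels over the horizon) whose peak level
    exceeds the flooding threshold at any lead time."""
    exceed = sum(1 for series in ensemble if max(series) > threshold)
    return exceed / len(ensemble)

def flooding_time_distribution(ensemble, threshold):
    """Empirical sample of the first exceedance time (time-step index),
    taken over the members that do flood."""
    return [next(t for t, h in enumerate(series) if h > threshold)
            for series in ensemble if max(series) > threshold]
```

A probabilistic post-processor would first turn the deterministic forecasts into such an ensemble; these counts then give the horizon-conditional probabilities the abstract calls for.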
Abstract:
From the institutional point of view, the legal system of intellectual property rights (hereafter, IPR) is one of the incentive institutions of innovation, and it plays a very important role in economic development. According to the law, the owner of an IPR enjoys a kind of exclusive right to use his intellectual property (hereafter, IP); in other words, he enjoys a kind of legal monopoly position in the market. How to properly protect IPR and at the same time regulate its abuse is a topic of great interest in this knowledge-oriented market, and it is the basic research question of this dissertation. In this work, by way of comparative study and of law-and-economics analysis, and based on the theories of the Austrian School of Economics, the writer claims that there is no contradiction between IPR and competition law. However, in the new economy (high-technology industries), there is a real possibility that the owner of an IPR will abuse his dominant position. Given the characteristics of the new economy, such as high rates of innovation, "instant scalability", network externalities and lock-in effects, IPR "will vest the dominant undertakings with the power not just to monopolize the market but to shift such power from one market to another, to create strong barriers to enter and, in so doing, granting the perpetuation of such dominance for quite a long time."1 Therefore, in order to keep order in the market, to vitalize competition and innovation, and to benefit the customer, it is common practice in the EU and the US to apply competition law to regulate IPR abuse. From the perspective of the Austrian School of Economics, especially the Schumpeterian theories, innovation, competition, monopoly and entrepreneurship are inter-correlated; therefore, we should apply a dynamic antitrust model based on these theories to analyse the relationship between IPR and competition law.
China is still a developing country with a relatively low capacity for innovation. Therefore, at present, protecting IPR and making good use of the incentive mechanism of the IPR legal system is the most important task for the Chinese government. However, according to the investigation reports,2 some multinational companies, on the basis of their IPR and capital advantages, have in fact obtained a dominant or monopoly market position in some sectors of some industries, and some IPR abuses have been conducted by such companies. The Chinese government should therefore pay close attention to regulating any IPR abuse. However, as to how to effectively regulate IPR abuse by way of competition law in the Chinese situation, from the perspectives of law-and-economics theory, legislation and judicial practice, there is still a long way for China to go!
Abstract:
One of the ways in which the legal system has responded to different sets of problems is the blurring of the traditional boundaries of criminal law, both procedural and substantive. This study aims to explore under what conditions this trend leads to an improvement of society's welfare, by focusing on two distinctive sanctions in criminal law: incarceration and social stigma. In analyzing how incarceration affects an individual's incentive to violate a legal standard, we considered the crucial role of the time constraint. This aspect has not been fully explored in the law-and-economics literature, especially with respect to the analysis of the relative merits of imposing either a fine or a prison term. We observed that when individuals are heterogeneous with respect to wealth and wage income, and when the level of activity can be considered a normal good, only the middle-wage and middle-income groups can be adequately deterred by a regime of fixed fines alone. The existing literature only considers the case of the very poor, deemed judgment-proof. However, since imprisonment is a socially costly way to deprive individuals of their time, other alternatives may be sought, such as the imposition of discriminatory monetary fines, partial incapacitation and other alternative sanctions. According to traditional legal theory, the reason why criminal law is obeyed is not mainly the monetary sanctions but the stigma arising from the community's moral condemnation that accompanies conviction or mere suspicion. However, it is not sufficiently clear whether social stigma always accompanies a criminal conviction. We addressed this issue by identifying the circumstances in which a criminal conviction carries an additional social stigma.
Our results show that social stigma accompanies a conviction under the following conditions: first, when the law coincides with society's social norms; and second, when the prohibited act provides information on an unobservable attribute or trait of an individual that is crucial in establishing or maintaining social relationships beyond mere economic relationships. Thus, even if the social planner does not impose the social sanction directly, the impact of social stigma can still be influenced by the probability of conviction and the level of the monetary fine imposed, as well as by the varying degree of correlation between the legal standard violated and the social traits or attributes of the individual. In this respect, criminal law serves as an institution that facilitates cognitive efficiency in the process of imposing the social sanction, to the extent that the rest of society is boundedly rational and uses judgment heuristics. Paradoxically, using criminal law in order to invoke stigma for the violation of a legal standard may also serve to undermine its strength. To sum up, the results of our analysis reveal that the scope of criminal law is narrow both for the purposes of deterrence and for cognitive efficiency. While there are certain conditions under which the enforcement of criminal law may lead to an increase in social welfare, particularly with respect to incarceration and stigma, we have also identified the channels through which they could affect behavior. Since such mechanisms can be replicated in less costly ways, society should first seek to employ these alternative legal institutions, turning to criminal law only as a last resort.
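The interplay of conviction probability, fine and stigma described above can be written as a textbook expected-sanction condition: a risk-neutral individual is deterred when p·(f + s) outweighs the gain from the violation. A minimal sketch under that standard assumption (the numeric values in the test are arbitrary):

```python
def deters(benefit, p_conviction, fine, stigma_loss):
    """Expected-sanction deterrence check for a risk-neutral individual:
    the violation is deterred when the expected total sanction, i.e. the
    monetary fine plus the stigma-related social loss, weighted by the
    probability of conviction, is at least the gain from violating."""
    return p_conviction * (fine + stigma_loss) >= benefit
```

The sketch makes the abstract's point concrete: when stigma accompanies conviction, the same deterrence level can be reached with a lower fine or a lower conviction probability.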
Abstract:
During the last few years, a great deal of interest has arisen concerning the application of stochastic methods to several biochemical and biological phenomena. Phenomena like gene expression, cellular memory, bet-hedging strategies in bacterial growth and many others cannot be described by continuous stochastic models, due to their intrinsic discreteness and randomness. In this thesis I have used the Chemical Master Equation (CME) technique to model some feedback cycles and analyze their properties, including experimental data. In the first part of this work, the effect of stochastic stability is discussed on a toy model of the genetic switch that triggers cellular division, whose malfunctioning is known to be one of the hallmarks of cancer. The second system I have worked on is the so-called futile cycle, a closed cycle of two enzymatic reactions that adds and removes a chemical compound, called a phosphate group, on a specific substrate. I have thus investigated how adding noise to the enzyme (which is usually present on the order of a few hundred molecules) modifies the probability of observing a specific number of phosphorylated substrate molecules, and confirmed theoretical predictions with numerical simulations. In the third part, the results of the study of a chain of multiple phosphorylation-dephosphorylation cycles are presented. We discuss an approximation method for the exact solution in the bidimensional case, and the relationship this method has with the thermodynamic properties of the system, which is an open system far from equilibrium. In the last section, the agreement between the theoretical prediction of the total protein quantity in a mouse cell population and the quantity observed via fluorescence microscopy is shown.
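The discrete stochastic simulations used to confirm the CME predictions are typically built on Gillespie's algorithm. A sketch on a simplified one-step version of the futile cycle, where the enzymes are folded into effective rate constants (the rates and copy numbers are illustrative, not the thesis's values; the full model would track the enzyme species explicitly):

```python
import random

def gillespie_futile_cycle(n_sub, n_phos, k_kin, k_pho, t_end, seed=0):
    """Gillespie simulation of a simplified futile cycle: n_sub substrate
    molecules, n_phos of which are phosphorylated, with effective
    mass-action rates k_kin (phosphorylation) and k_pho
    (dephosphorylation). Returns the phosphorylated count at t_end."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        a1 = k_kin * (n_sub - n_phos)   # phosphorylation propensity
        a2 = k_pho * n_phos             # dephosphorylation propensity
        a0 = a1 + a2
        if a0 == 0.0:
            return n_phos               # no reaction can fire
        t += rng.expovariate(a0)        # exponential waiting time
        if t > t_end:
            return n_phos
        if rng.random() * a0 < a1:      # choose reaction by propensity
            n_phos += 1
        else:
            n_phos -= 1
```

Running many seeded replicas and histogramming the returned counts gives the probability of observing a specific number of phosphorylated molecules, which is the quantity compared with the CME in the abstract.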
Abstract:
The dissertation is structured in three parts. The first part compares US and EU agricultural policies since the end of WWII. There is not enough evidence to claim that agricultural support has a negative impact on obesity trends. I discuss the possibility of an exchange of best practices to fight obesity. There are relevant economic, societal and legal differences between the US and the EU; however, partnerships against obesity are welcome. The second part presents a socio-ecological model of the determinants of obesity. I employ an interdisciplinary model because it captures the simultaneous influence of several variables. Obesity is an interaction of pre-birth, primary and secondary socialization factors. To test the significance of each factor, I use data from the National Longitudinal Survey of Adolescent Health. I compare the average body mass index across different populations; the differences in means are statistically significant. In the last part I use the National Survey of Children's Health. I analyze the effect that family characteristics, the built environment, cultural norms and individual factors have on the body mass index (BMI). I use Ordered Probit models and calculate the marginal effects, with State and ethnicity fixed effects to control for unobserved heterogeneity. I find that children in southern US States tend to have, on average, a higher probability of being obese. On the ethnicity side, White Americans have a lower BMI than Black Americans, Hispanics and American Indians/Native Islanders; being Asian is associated with a lower probability of being obese. In neighborhoods where the trust level and the perception of safety are higher, children are less overweight and obese. Similar results are found for higher levels of parental income and education. Breastfeeding has a negative impact on BMI. Higher values of measures of behavioral disorders have a positive and significant impact on obesity, as predicted by the theory.
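The mean-BMI comparisons of the second part come down to two-sample tests. A minimal sketch of the Welch t-statistic, which allows unequal variances across the compared populations (the data in the test are placeholders, not survey values):

```python
import math

def welch_t(xs, ys):
    """Welch t-statistic for comparing mean BMI between two groups
    with possibly unequal variances and sample sizes."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)  # sample variances
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)
```

Large |t| values against the Welch-Satterthwaite degrees of freedom correspond to the statistically significant mean differences reported above.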
Abstract:
Many psychophysical studies suggest that target depth and direction during reaches are processed independently, but the neurophysiological support for this view is so far limited. Here, we investigated the representation of reach depth and direction by single neurons in an area of the medial posterior parietal cortex (V6A). Single-unit activity was recorded from V6A in two Macaca fascicularis monkeys performing a fixation-to-reach task to targets at different depths and directions. We found that in a substantial percentage of V6A neurons depth and direction signals jointly influenced fixation, planning and arm movement-related activity in 3D space. While target depth and direction were equally encoded during fixation, depth tuning became stronger during arm movement planning, execution and target holding. The spatial tuning of fixation activity was often maintained across epochs, and this occurred more frequently in depth. These findings support, for the first time, the existence of a common neural substrate for the encoding of target depth and direction during reaching movements in the posterior parietal cortex. The present results also highlight the presence in V6A of several types of cells that process eye position and arm movement planning and execution signals, independently or jointly, in order to control reaches in 3D space. It is possible that depth and direction also influence the metrics of the reach action, and that this effect on the reach kinematic variables accounts for the spatial tuning we found in V6A neural activity. For this reason, we recorded and analyzed behavioral data while one monkey performed reaching movements in 3D space, and evaluated how the spatial position of the target, in particular its depth and direction, affected the kinematic parameters and the trajectories describing the properties of the motor action.
Abstract:
CdTe and Cu(In,Ga)Se2 (CIGS) thin film solar cells are fabricated, electrically characterized and modelled in this thesis. We start from the fabrication of CdTe thin film devices, where an R.F. magnetron sputtering system is used to deposit the CdS/CdTe based solar cells. The chlorine post-growth treatment is modified in order to cover the cell surface uniformly and to reduce the probability of creating pinholes and shunting pathways, which, in turn, reduces the series resistance. Deionized water etching is proposed, for the first time, as the simplest solution to optimize the effect of shunt resistance, stability and metal-semiconductor inter-diffusion at the back contact. Next, oxygen incorporation during CdTe layer deposition is proposed; this technique has rarely been examined for R.F. sputtering deposition of such devices. The above experiments are characterized electrically and optically by current-voltage characterization, scanning electron microscopy, x-ray diffraction and optical spectroscopy. Furthermore, for the first time, the degradation rate of CdTe devices over time is numerically simulated with the AMPS and SCAPS simulators. It is proposed that the instability of the electrical parameters is coupled with the material properties and external stresses (bias, temperature and illumination). Then, CIGS materials are simulated and characterized by several techniques: surface photovoltage spectroscopy is used (as a novel idea) to extract the band gap of graded band gap CIGS layers and to probe surface and bulk defect states. The surface roughness is scanned by atomic force microscopy on the nanometre scale to obtain the surface topography of the film. Modified equivalent circuits are proposed, and graded band gap profiles are simulated with the AMPS simulator; several graded profiles are examined in order to optimize their thickness, grading strength and electrical parameters.
Furthermore, the transport mechanisms and Auger generation phenomenon are modelled in CIGS devices.
Abstract:
During recent decades, economists' interest in gender-related issues has risen. Researchers aim to show how economic theory can be applied to gender-related topics such as peer effects, labor market outcomes, and education. This dissertation aims to contribute to our understanding of the interaction, inequality and sources of differences across genders, and it consists of three empirical papers in the research area of gender economics. The aim of the first paper ("Separating gender composition effect from peer effects in education") is to demonstrate the importance of considering endogenous peer effects in order to identify the gender composition effect. This fact is analytically illustrated by employing Manski's (1993) linear-in-means model. The paper derives an innovative solution to the simultaneous identification of endogenous and exogenous peer effects: the gender composition effect of interest is estimated from auxiliary reduced-form estimates after identifying the endogenous peer effect with Graham's (2008) variance restriction method. The paper applies this methodology to two different data sets from American and Italian schools. The motivation of the second paper ("Gender differences in vulnerability to an economic crisis") is to analyze the differential effect of the recent economic crisis on the labor market outcomes of men and women. Using a triple-differences method (before-after crisis, harder-milder hit sectors, men-women) on British data at the occupation level, the paper shows that men suffer more than women in terms of the probability of losing their job. Several explanations for the findings are proposed. The third paper ("Gender gap in educational outcome") is concerned with a controversial academic debate on the existence, degree and origin of the gender gap in test scores. The existence of a gap both in mean scores and in the variability around the mean is documented and analyzed. The origins of the gap are investigated by looking at a wide range of possible explanations.
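The triple-differences estimator of the second paper can be sketched on simulated data. This is a hedged illustration only: the cell structure, effect sizes and outcome scale are invented for the example, not taken from the British data.

```python
import numpy as np

# Simulated worker-level data (all magnitudes hypothetical)
rng = np.random.default_rng(1)
n = 4000
post = rng.integers(0, 2, n)   # observed after the crisis
hard = rng.integers(0, 2, n)   # harder-hit sector
man = rng.integers(0, 2, n)    # male worker
ddd_true = -0.3                # extra outcome loss for men in hard-hit sectors

y = (0.1 * post + 0.2 * hard + 0.05 * man
     + 0.15 * post * hard + 0.02 * post * man + 0.03 * hard * man
     + ddd_true * post * hard * man
     + rng.normal(0, 0.5, n))

# OLS with all lower-order interactions included;
# the coefficient on the triple interaction is the DDD estimate
X = np.column_stack([np.ones(n), post, hard, man,
                     post * hard, post * man, hard * man,
                     post * hard * man])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
ddd_hat = coef[-1]
```

The lower-order dummies and pairwise interactions absorb time, sector and gender shocks, so the triple interaction isolates the gender-specific crisis effect in the harder-hit sectors.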
Abstract:
The objective of this study is to measure the impact of the national subsidy scheme on the olive and fruit sector in two regions of Albania, Shkodra and Fier. From the methodological point of view, we use a non-parametric approach based on propensity score matching. This method overcomes the problem of missing data by creating a counterfactual scenario. In the first step, the conditional probability of participating in the program was computed. Afterwards, different matching estimators were applied to establish whether the subsidies have affected the sector's performance. One of the strengths of this study lies in the data: cross-sectional primary data was gathered through about 250 interviews. We have not found empirical evidence of significant effects of the government aid program on production. Differences in production found between beneficiaries and non-beneficiaries disappear after adjustment by the conditional probability of participating in the program. This suggests that subsidized farmers would have performed better than non-subsidized households even in the absence of production grants, revealing program self-selection. On the other hand, the scheme has positively affected the farm structure by increasing the area under cultivation, but yields have not increased for beneficiaries compared to non-beneficiaries. These combined results shed light on the reason for the missing impact: it is reasonable to believe that the new plantations, in particular in the case of olives, have not yet reached full production. Therefore, we have reason to expect positive impacts in the future. Concerning some qualitative results, the extension of the area under cultivation is strongly conditioned by the small farm size; this, together with a thin land market, makes expansion beyond farm boundaries extremely difficult.
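The matching logic described above can be sketched in Python. This is a minimal illustration on simulated data under stated assumptions (a single confounder, a logistic propensity model, one-nearest-neighbour matching with replacement); it is not the study's exact estimator, which applied several matching estimators to the survey data.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated farms: a confounder (here labelled "size") drives both subsidy
# receipt and production, with a true treatment effect of zero.
rng = np.random.default_rng(2)
n = 2000
size = rng.normal(size=n)                            # confounder
d = rng.binomial(1, 1 / (1 + np.exp(-0.8 * size)))   # subsidy indicator
y = 1.0 * size + rng.normal(0, 0.5, n)               # production; no causal effect of d

# Step 1: propensity score from a logistic regression fit by maximum likelihood
def nll(b):
    z = b[0] + b[1] * size
    return np.sum(np.log1p(np.exp(z)) - d * z)

b = minimize(nll, np.zeros(2)).x
ps = 1 / (1 + np.exp(-(b[0] + b[1] * size)))

# Step 2: one-nearest-neighbour matching on the score, with replacement
treated = np.where(d == 1)[0]
control = np.where(d == 0)[0]
nearest = control[np.abs(ps[treated][:, None] - ps[control][None, :]).argmin(axis=1)]

naive = y[d == 1].mean() - y[d == 0].mean()   # raw comparison, biased upward
att = np.mean(y[treated] - y[nearest])        # matched estimate, close to zero
```

The raw beneficiary/non-beneficiary gap reflects self-selection on the confounder; after matching on the estimated propensity score, the gap largely disappears, which mirrors the pattern the study reports.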