10 results for Biases

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

10.00%

Abstract:

The quality of temperature and humidity retrievals from the infrared SEVIRI sensors on the geostationary Meteosat Second Generation (MSG) satellites is assessed by means of a one-dimensional variational (1D-Var) algorithm. The study is performed with the aim of improving the spatial and temporal resolution of the observations available to analysis systems designed for high-resolution, regional-scale numerical weather prediction (NWP) models. The non-hydrostatic forecast model COSMO (COnsortium for Small scale MOdelling), in the ARPA-SIM operational configuration, is used to provide background fields. Only clear-sky observations over sea are processed. An optimised 1D-Var set-up comprising the two water vapour channels and the three window channels is selected. It maximises the reduction of errors in the model backgrounds while ensuring ease of operational implementation through accurate bias correction procedures and correct radiative transfer simulations. The quality of the 1D-Var retrievals is first quantified in relative terms, using statistics to estimate the reduction in the background model errors. The absolute retrieval accuracy is then assessed by comparing the analyses with independent radiosonde and satellite observations. The inclusion of satellite data brings a substantial reduction in the warm and dry biases present in the forecast model. Moreover, the retrieval profiles generated by the 1D-Var are shown to be well correlated with the radiosonde measurements. Subsequently, the 1D-Var technique is applied to two three-dimensional case studies: a false-alarm case that occurred in Friuli-Venezia Giulia on 8 July 2004 and a heavy-precipitation case that occurred in the Emilia-Romagna region between 9 and 12 April 2005. The impact of the satellite data for these two events is evaluated in terms of increments in the column-integrated water vapour and saturation water vapour, in the 2-metre temperature and specific humidity, and in the surface temperature. To improve the 1D-Var technique, a method to calculate flow-dependent model error covariance matrices is also assessed. The approach employs members of an ensemble forecast system generated by perturbing the physical parameterisation schemes inside the model. The improved set-up, applied to the case of 8 July 2004, shows a substantially neutral impact.
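
The retrieval scheme described above rests on the standard 1D-Var analysis equation, in which the background profile is corrected by the observation-minus-background departures weighted by the background (B) and observation (R) error covariances. The snippet below is only a minimal sketch of that update, plus a flow-dependent B estimated from ensemble perturbations; it is not the operational code used in the thesis, and the array shapes and the linearized observation operator H are assumptions made for illustration.

```python
import numpy as np

def one_d_var_analysis(xb, B, y, R, H, h):
    """One linearized 1D-Var analysis step (single outer loop).

    xb : background profile (n,)          B : background error covariance (n, n)
    y  : observed radiances (m,)          R : observation error covariance (m, m)
    H  : Jacobian of the observation operator (m, n)
    h  : nonlinear observation operator, h(profile) -> simulated radiances (m,)
    """
    d = y - h(xb)                          # observation-minus-background departures
    S = H @ B @ H.T + R                    # innovation covariance
    K = np.linalg.solve(S, H @ B).T        # gain B H^T S^-1 (B and S symmetric)
    xa = xb + K @ d                        # analysed (retrieved) profile
    Ba = (np.eye(len(xb)) - K @ H) @ B     # analysis error covariance
    return xa, Ba

def ensemble_covariance(members):
    """Flow-dependent B from ensemble perturbations.
    members : array (n_members, n_levels) of perturbed forecasts at one location."""
    pert = members - members.mean(axis=0)
    return pert.T @ pert / (members.shape[0] - 1)
```

Replacing a static B with `ensemble_covariance` of perturbed-physics members is one simple way to make the background errors flow dependent, in the spirit of the last paragraph of the abstract.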

Relevance:

10.00%

Abstract:

The continuous increase in genome sequencing projects has produced a huge amount of data over the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with assigning biological information to each sequence. Annotation is carried out at every level of the biological information-processing mechanism, from DNA to protein, and cannot be accomplished by in vitro analysis alone, which is extremely expensive and time consuming when applied at such a large scale. In silico methods are therefore needed to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow fast, reliable, and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine learning based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is that it is independent of biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by a modification, made by myself, to the standard Support Vector Machine (SVM) algorithm, resulting in the so-called Balanced SVM. BaCelLo predicts the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was shown to outperform all the currently available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to annotate 5 eukaryotic genomes in full, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine learning based method was implemented for the prediction of GPI-anchored proteins. The method efficiently predicts, from the raw amino acid sequence, both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model, HMM). The method, called GPIPE, was shown to greatly improve the prediction of GPI-anchored proteins over all previously developed methods. GPIPE predicted up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive rate as low as 0.1%. GPIPE was used to annotate 81 eukaryotic genomes in full, and more than 15,000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, which allowed the definition of specific amino acid abundances in the different regions considered. Furthermore, the hypothesis, proposed in the literature, that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello eSLDB http://gpcr.biocomp.unibo.it/esldb GPIPE http://gpcr.biocomp.unibo.it/gpipe
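
The abstract attributes BaCelLo's robustness to a "Balanced SVM", i.e. an SVM whose error term is reweighted so that under-represented localization classes are not overwhelmed by the most frequent ones. The snippet below is only an illustrative sketch of that general idea using scikit-learn's class-weighted SVC on made-up toy data; it is not the authors' actual modification, and the feature encoding shown (residue composition) is an assumption, not the BaCelLo input representation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical toy data: X holds fixed-length feature vectors derived from
# amino acid sequences (e.g., residue composition), y the localization class.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))           # 300 proteins, 20 compositional features
y = np.repeat([0, 1, 2], [200, 70, 30])  # strongly imbalanced localization classes

# class_weight='balanced' rescales the per-class penalty inversely to class
# frequency, so errors on rare localizations weigh as much as errors on common ones.
clf = SVC(kernel="rbf", C=1.0, class_weight="balanced")
scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
print(scores.mean())
```

Scoring with balanced accuracy rather than plain accuracy follows the same logic: performance on the rare classes counts as much as on the dominant one.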

Relevance:

10.00%

Abstract:

Humans process numbers in a way similar to animals. There are countless studies reporting similar performance between animals and humans (adults and/or children). Three models have been developed to explain the cognitive mechanisms underlying number processing. The triple-code model (Dehaene, 1992) posits a mental number line as the preferred way to represent magnitude. The mental number line is associated with three characteristic effects: the distance, magnitude and SNARC effects. The SNARC effect reveals a spatial association between number and space representations: small numbers are associated with the left side of space while large numbers are associated with the right side. Recently a vertical SNARC effect has also been found (Ito & Hatta, 2004; Schwarz & Keus, 2004), reflecting a bottom-to-top spatial representation of numbers. Horizontal and vertical magnitude representations could influence performance in explicit and implicit digit tasks. This research project aimed to investigate the spatial components of number representation using different experimental designs and tasks. Experiment 1 focused on horizontal and vertical number representations, in within- and between-subjects designs, with parity and magnitude comparison tasks, presenting positive or negative Arabic digits (1-9 without 5). Experiment 1A replicated the SNARC and distance effects in both spatial arrangements. Experiment 1B showed a horizontal reversed SNARC effect in both tasks, while a vertical reversed SNARC effect was found only in the comparison task. In Experiment 1C two groups of subjects performed both tasks under two different instruction-responding hand assignments with positive numbers. The results did not show any significant differences between the two assignments, even if the vertical number line seemed to be more flexible than the horizontal one. On the whole, Experiment 1 seemed to demonstrate a contextual (i.e. task-set) influence on the nature of the SNARC effect. Experiment 2 focused on the effect of horizontal and vertical number representations on spatial biases in paper-and-pencil bisection tasks. In Experiment 2A the participants were asked to bisect physical lines and digit strings (2 or 9) horizontally and vertically. The findings showed that, horizontally, digit 9 strings tended to generate a more rightward bias than digit 2 strings. In the vertical condition, however, digit 2 strings generated a greater upward bias than digit 9 strings, suggesting a top-to-bottom number line. In Experiment 2B the participants were asked to bisect lines flanked by numbers (i.e. 1 or 7) in four spatial arrangements: horizontal, vertical, right-diagonal and left-diagonal lines. Four number conditions were created according to congruent or incongruent number line representations: 1-1, 1-7, 7-1 and 7-7. The main results showed a more reliable rightward bias in the horizontal congruent condition (1-7) than in the incongruent condition (7-1). Vertically, the incongruent condition (1-7) produced a significant bias towards the bottom of the line compared with the congruent condition (7-1). Experiment 2 suggested a rather rigid horizontal number line, whereas in the vertical condition the number representation could be more flexible. In Experiment 3 we adopted the materials of Experiment 2B in order to look for a number line effect on temporal (motor) performance. The participants were presented with horizontal, vertical, right-diagonal and left-diagonal lines flanked by the same digits (i.e. 1-1 or 7-7) or by different digits (i.e. 1-7 or 7-1). The digits were spatially congruent or incongruent with their hypothesized mental representations. Participants were instructed to touch the lines either close to the large digit, close to the small digit, or to bisect them. Number processing influenced movement execution more than movement planning. Number congruency influenced spatial biases mostly along the horizontal but also along the vertical dimension. These results support a two-dimensional magnitude representation. Finally, Experiment 4 addressed the visuo-spatial manipulation of number representations for accessing and retrieving arithmetic facts. The participants were asked to perform a number-matching task and an addition verification task. The findings showed an interference effect between sum nodes and neutral nodes only with a horizontal presentation of the digit cues in the number-matching task. In the addition verification task, performance was similar for horizontal and vertical presentations of the arithmetic problems. In conclusion, the data seem to show an automatic activation of the horizontal number line, which is also used to retrieve arithmetic facts. The horizontal number line appears to be more rigid and the preferred way to order numbers, from left to right; a possible explanation is the left-to-right direction of reading and writing. The vertical number line appears to be more flexible and more task-dependent, perhaps reflecting the many examples in the environment in which numbers are represented either from bottom to top or from top to bottom. However, the bottom-to-top number line seemed to be activated only by explicit task demands.

Relevance:

10.00%

Abstract:

This thesis focuses on the limits that may prevent an entrepreneur from maximizing her value, and on the benefits of diversification in reducing her cost of capital. After reviewing the relevant literature on the differences between traditional corporate finance and entrepreneurial finance, we focus on the biases that occur when traditional finance techniques are applied to the entrepreneurial context. In particular, using the portfolio theory framework, we determine the degree of under-diversification of entrepreneurs. Borrowing the methodology developed by Kerins et al. (2004), we test a model for the cost of capital according to the firm's industry and the entrepreneur's wealth commitment to the firm. This model takes three market inputs (the standard deviation of market returns, the expected return of the market, and the risk-free rate) and two firm-specific inputs (the standard deviation of the firm's returns and the correlation between firm and market returns) as parameters, and returns an appropriate cost of capital as output. We determine the expected market return and the risk-free rate according to the large literature on the market risk premium. The market return volatility is estimated with a GARCH specification for the market index returns. Furthermore, we assume that the firm-specific inputs can be obtained from newly listed firms similar in risk to the firm being evaluated. After building a database including all the data needed for our analysis, we perform an empirical investigation to understand how much of a firm's total risk depends on market risk, and which explanatory variables can explain it. Our results show that the cost of capital declines as the entrepreneur's level of commitment decreases. Therefore, maximizing value for the entrepreneur depends on the fraction of the entrepreneur's wealth invested in the firm and on the fraction she sells to outside investors. These results are of interest both to entrepreneurs and to policy makers: the former can benefit from an unbiased model for their valuation; the latter can obtain guidelines to help overcome the recent financial market crisis.
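
As described, the model maps three market inputs and two firm-specific inputs to a cost of capital that rises with the entrepreneur's commitment to the firm. The sketch below illustrates one way such a calculation can look, in the spirit of the Kerins et al. (2004) framework: a CAPM rate when the investor is fully diversified, a rate that prices the firm's total risk when all wealth is in the firm, and a simple blend in between. The blending rule and every number are assumptions made for illustration, not the thesis model or its estimates.

```python
def cost_of_capital(rf, erm, sigma_m, sigma_f, rho, commitment):
    """Illustrative cost of capital for a partially diversified entrepreneur.

    rf, erm    : risk-free rate and expected market return
    sigma_m    : market return volatility (e.g., from a GARCH(1,1) fit)
    sigma_f    : volatility of the firm's returns (from comparable new listings)
    rho        : correlation between firm and market returns
    commitment : fraction of the entrepreneur's wealth tied to the firm (0..1)
    """
    price_of_risk = (erm - rf) / sigma_m         # market Sharpe ratio
    beta = rho * sigma_f / sigma_m               # only market risk priced
    k_diversified = rf + beta * (erm - rf)       # CAPM benchmark
    k_total_risk = rf + price_of_risk * sigma_f  # all of the firm's risk priced
    # Simple linear blend in the wealth committed to the firm (an assumption
    # for illustration; the thesis model may weight the two risks differently).
    return (1 - commitment) * k_diversified + commitment * k_total_risk

print(cost_of_capital(rf=0.03, erm=0.09, sigma_m=0.18,
                      sigma_f=0.60, rho=0.25, commitment=0.8))
```

Lowering `commitment` moves the result from the total-risk rate towards the CAPM rate, which is the qualitative behaviour the abstract reports.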

Relevance:

10.00%

Abstract:

The quest for universal memory is driving the rapid development of memories with superior all-round capabilities in non-volatility, high speed, high endurance and low power. The memory subsystem accounts for a significant share of the cost and power budget of a computer system, and current DRAM-based main memory systems are starting to hit the power and cost limit. To resolve this issue the industry is improving existing technologies such as Flash and exploring new ones. Among the new technologies is Phase Change Memory (PCM), which overcomes some of the shortcomings of Flash, such as durability and scalability. This alternative non-volatile memory technology, which exploits the resistance contrast of phase-change materials, offers higher density than DRAM and can help increase the main memory capacity of future systems while remaining within cost and power constraints. Chalcogenide materials are well suited to manufacturing phase-change memory devices. Charge transport in the amorphous chalcogenide GST used for memory devices is modeled using two contributions: hopping of trapped electrons and motion of band electrons in extended states. Crystalline GST exhibits an almost ohmic I(V) curve. In contrast, amorphous GST shows a high resistance at low biases while, above a threshold voltage, a transition takes place from a highly resistive to a conductive state, characterized by negative differential resistance. A clear and complete understanding of the threshold behavior of the amorphous phase is fundamental for exploiting such materials in the fabrication of innovative non-volatile memories. The feedback that produces the snapback phenomenon is described as a filamentation in energy, controlled by electron-electron interactions between trapped electrons and band electrons. The model thus derived is implemented within a state-of-the-art device simulator. An analytical version of the model is also derived and is useful for discussing the snapback behavior and the scaling properties of the device.
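
The two-contribution transport picture (hopping of trapped electrons plus band conduction) can be visualised with a toy I(V) in which a field-enhanced hopping term dominates the highly resistive subthreshold branch and an ohmic term accounts for the band contribution. The functional forms and every parameter value below are illustrative assumptions, not the model developed in the thesis, which additionally includes the electron-electron feedback responsible for snapback.

```python
import numpy as np

def iv_two_contributions(V, thickness=50e-9, area=1e-14, sigma_band=1e-6,
                         n_trap=1e25, hop_dist=5e-9, nu0=1e13, barrier=0.3, T=300.0):
    """Toy current-voltage curve for an amorphous phase-change layer built from
    two contributions: field-enhanced hopping of trapped electrons plus a small
    ohmic band-electron term. Forms and numbers are illustrative only."""
    q, kB = 1.602176634e-19, 1.380649e-23
    kT = kB * T
    F = np.asarray(V) / thickness                    # electric field (V/m)
    # Hopping: thermally activated rate whose barrier is lowered by the field
    # over half a hop distance (a common toy form, not the thesis model).
    j_hop = (q * n_trap * hop_dist * nu0 * np.exp(-q * barrier / kT)
             * 2.0 * np.sinh(q * F * hop_dist / (2.0 * kT)))
    j_band = sigma_band * F                          # ohmic band contribution
    return (j_hop + j_band) * area                   # current (A)

V = np.linspace(0.0, 1.5, 151)
I = iv_two_contributions(V)                          # subthreshold branch only
```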

Relevance:

10.00%

Abstract:

Comparing latent constructs (loaded by reflective and congeneric measures) across cultures means studying how these unobserved variables vary, and covary with each other, after controlling for potentially confounding cultural forces. This leads to the so-called 'measurement invariance' issue, which refers to the extent to which data collected with the same multi-item measurement instrument (i.e., a self-reported questionnaire whose items load on common latent constructs) are comparable across different cultural environments. It would indeed be unthinkable to explore heterogeneity in latent variables (e.g., latent means; latent deviations from the means, i.e., latent variances; latent shared variation around the respective means, i.e., latent covariances; the magnitude of structural path coefficients describing causal relations among latent variables) across different populations without controlling for cultural bias in the underlying measures. Furthermore, it would be unrealistic to apply this correction without a framework able to take all these potential cultural biases across populations into account simultaneously, since the real world 'acts' simultaneously as well. As a consequence, I, as a researcher, may want to control for cultural forces by hypothesizing that they are all acting at the same time across the groups being compared, and therefore examine whether they inflate or suppress the new estimates obtained under hierarchically nested constraints on the originally estimated parameters. Multi-sample Structural Equation Modeling-based Confirmatory Factor Analysis (MS-SEM-based CFA) still represents a dominant and flexible statistical framework for working out this potential cultural bias in a simultaneous way. With this dissertation I attempt to introduce new viewpoints on measurement invariance handled within the covariance-based SEM framework, by means of a consumer behavior modeling application on functional food choices.
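
In practice, the hierarchically nested constraints mentioned above (configural, metric, scalar invariance, and so on) are compared through likelihood-ratio, i.e. chi-square difference, tests between multi-group CFA models. Below is a minimal sketch of that comparison step; the fit statistics in the example are made up for illustration, and the function is not tied to any particular SEM package.

```python
from scipy.stats import chi2

def chi2_difference_test(chisq_restricted, df_restricted, chisq_free, df_free):
    """Chi-square difference test between two nested multi-group CFA models,
    e.g. metric (loadings constrained equal across groups) vs. configural
    (freely estimated). Returns (delta_chi2, delta_df, p)."""
    d_chi2 = chisq_restricted - chisq_free
    d_df = df_restricted - df_free
    p = chi2.sf(d_chi2, d_df)      # non-significant p supports the constraints
    return d_chi2, d_df, p

# Hypothetical fit statistics for two cultural groups (illustrative numbers only):
print(chi2_difference_test(chisq_restricted=312.4, df_restricted=172,
                           chisq_free=301.9, df_free=164))
```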

Relevance:

10.00%

Abstract:

The body is represented in the brain at levels that incorporate multisensory information. This thesis focused on interactions between vision and cutaneous sensations (i.e., touch and pain). Experiment 1 revealed that there are partially dissociable pathways for visual enhancement of touch (VET) depending upon whether one sees one’s own body or the body of another person. This indicates that VET, a seemingly low-level effect on spatial tactile acuity, is actually sensitive to body identity. Experiments 2-4 explored the effect of viewing one’s own body on pain perception. They demonstrated that viewing the body biases pain intensity judgments irrespective of actual stimulus intensity, and, more importantly, reduces the discriminative capacities of the nociceptive pathway encoding noxious stimulus intensity. The latter effect only occurs if the pain-inducing event itself is not visible, suggesting that viewing the body alone and viewing a stimulus event on the body have distinct effects on cutaneous sensations. Experiment 5 replicated an enhancement of visual remapping of touch (VRT) when viewing fearful human faces being touched, and further demonstrated that VRT does not occur for observed touch on non-human faces, even fearful ones. This suggests that the facial expressions of non-human animals may not be simulated within the somatosensory system of the human observer in the same way that the facial expressions of other humans are. Finally, Experiment 6 examined the enfacement illusion, in which synchronous visuo-tactile inputs cause another’s face to be assimilated into the mental self-face representation. The strength of enfacement was not affected by the other’s facial expression, supporting an asymmetric relationship between processing of facial identity and facial expressions. Together, these studies indicate that multisensory representations of the body in the brain link low-level perceptual processes with the perception of emotional cues and body/face identity, and interact in complex ways depending upon contextual factors.

Relevance:

10.00%

Abstract:

A free and plural press is a founding element of every democratic system and is essential for creating an informed public opinion capable of exercising control over, and pressure on, the ruling classes. Since their creation, newspapers have established themselves as a highly important source of information for public opinion. Moreover, the second half of the twentieth century saw technological innovations that brought great changes to the role of print as a vehicle for transmitting news. From the spread of television to the digital revolution of the 1990s and 2000s, the speed at which information is created and transmitted has increased exponentially, the costs of producing and acquiring news have collapsed, and an enormous amount of data, which can provide a great deal of information about the ideas and content proposed by different authors over time, is now available to readers and researchers. However, even though the digital revolution has greatly reduced the material costs of periodicals, news production entails other expenses and therefore takes place in a market context, subject to the logic of supply and demand. This work analyses the role of demand and of readers' imperfect rationality in the news market, starting from the assumption that differences in consumers' opinions push newspapers to adjust their supply of content to meet market demand, in order to test the applicability of the model used (Mullainathan and Shleifer, 2005) to the Italian context. To this end, the behaviour of several national newspapers was analysed during two events that deeply affected Italian public opinion: the migratory flows from the southern shore of the Mediterranean in October 2013 and the H1N1 influenza epidemic of 2009.

Relevance:

10.00%

Abstract:

Kinematics is a fundamental tool for inferring the dynamical structure of galaxies and understanding their formation and evolution. Spectroscopic observations of gas emission lines are often used to derive rotation curves and velocity dispersions. It is, however, difficult to disentangle these two quantities in low spatial-resolution data because of beam smearing. In this thesis, we present 3D-Barolo, a new software tool that derives the gas kinematics of disk galaxies from emission-line data-cubes. The code builds tilted-ring models in the 3D observational space and compares them with the actual data-cubes. 3D-Barolo works with data at a wide range of spatial resolutions without being affected by instrumental biases. We use 3D-Barolo to derive rotation curves and velocity dispersions of several galaxies in both the local and the high-redshift Universe. We run our code on HI observations of nearby galaxies and compare our results with traditional 2D approaches. We show that a 3D approach to the derivation of the gas kinematics is to be preferred to a 2D approach whenever a galaxy is resolved with fewer than about 20 elements across the disk. We moreover analyze a sample of galaxies at z~1, observed in the H-alpha line with the KMOS/VLT spectrograph. Our 3D modeling reveals that the kinematics of these high-z systems is comparable to that of local disk galaxies, with steeply rising rotation curves followed by a flat part and H-alpha velocity dispersions of 15-40 km/s over the whole disks. This evidence suggests that disk galaxies were already fully settled about 7-8 billion years ago. In summary, 3D-Barolo is a powerful and robust tool for separating physical and instrumental effects and deriving reliable kinematics. The analysis of large samples of galaxies at different redshifts with 3D-Barolo will provide new insights into how galaxies assemble and evolve throughout cosmic time.
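
At the heart of a tilted-ring model is the projection of rings of given inclination and position angle onto the sky, each carrying a rotation velocity seen along the line of sight. The snippet below is a simplified 2D sketch of that projection and of a residual between a toy "observed" velocity field and a one-ring model; it is not 3D-Barolo's implementation, which builds full 3D model cubes, convolves them with the beam and compares them with the data-cube, and all names and parameter values here are illustrative.

```python
import numpy as np

def ring_los_velocity(x, y, x0, y0, inc_deg, pa_deg, vrot, vsys=0.0):
    """Line-of-sight velocity of a single tilted ring (thin-disk approximation).

    x, y       : sky-plane coordinate grids (pixels)
    x0, y0     : kinematic centre;  inc_deg, pa_deg : inclination, position angle
    vrot, vsys : ring rotation velocity and systemic velocity (km/s)
    """
    inc, pa = np.radians(inc_deg), np.radians(pa_deg)
    # Rotate sky coordinates into the disk frame (PA measured from the +y axis).
    xr = -(x - x0) * np.sin(pa) + (y - y0) * np.cos(pa)
    yr = -((x - x0) * np.cos(pa) + (y - y0) * np.sin(pa)) / np.cos(inc)
    theta = np.arctan2(yr, xr)                 # azimuth in the disk plane
    return vsys + vrot * np.sin(inc) * np.cos(theta)

# Toy comparison between a one-ring model and a fake "observed" velocity field.
y, x = np.mgrid[0:64, 0:64]
model = ring_los_velocity(x, y, x0=32, y0=32, inc_deg=60.0, pa_deg=45.0, vrot=150.0)
observed = model + np.random.default_rng(1).normal(0.0, 5.0, model.shape)
chi2_like = np.sum((observed - model) ** 2)    # quantity a ring fit would minimise
```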