13 results for Multiple-trait model

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

90.00%

Publisher:

Abstract:

The aim of the thesis is to formulate a suitable Item Response Theory (IRT) based model to measure HRQoL (as a latent variable) using a mixed-responses questionnaire while relaxing the hypothesis of a normally distributed latent variable. The new model combines two models already presented in the literature: a latent trait model for mixed responses and an IRT model for a skew-normal latent variable. It is developed in a Bayesian framework; a Markov chain Monte Carlo (MCMC) procedure is used to generate samples from the posterior distribution of the parameters of interest. The proposed model is tested on a questionnaire composed of five discrete items and one continuous item, the EQ-5D-Y questionnaire, used to measure HRQoL in children. A large sample of children collected in schools was used. In comparison with a model for discrete responses only and a model for mixed responses with a normal latent variable, the new model performs better in terms of deviance information criterion (DIC), chain convergence times and precision of the estimates.
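
A minimal sketch of the model's structure is given below in PyMC (an assumption: the abstract does not name any software). The skew-normal latent trait drives five graded ordinal items and one continuous item; the priors, data and dimensions are illustrative placeholders, not the thesis specification.

```python
# Illustrative sketch only: a skew-normal latent HRQoL trait measured by
# five 3-level ordinal items and one continuous item, fitted by MCMC.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n, k = 200, 5                                  # respondents, ordinal items
y_ord = rng.integers(0, 3, size=(n, k))        # fake EQ-5D-Y-style responses
y_cont = rng.normal(70, 15, size=n)            # fake continuous item (VAS-like)

with pm.Model():
    skew = pm.HalfNormal("skew", 2.0)          # relaxes the normality hypothesis
    theta = pm.SkewNormal("theta", mu=0.0, sigma=1.0, alpha=skew, shape=n)

    # ordinal items: graded-response-style link on the latent trait
    disc = pm.HalfNormal("disc", 1.0, shape=k)
    cuts = pm.Normal("cuts", mu=[-1.0, 1.0], sigma=2.0, shape=(k, 2),
                     transform=pm.distributions.transforms.ordered,
                     initval=np.tile([-1.0, 1.0], (k, 1)))
    for j in range(k):
        pm.OrderedLogistic(f"item_{j}", eta=disc[j] * theta,
                           cutpoints=cuts[j], observed=y_ord[:, j])

    # the continuous item loads on the same latent trait
    lam = pm.HalfNormal("lam", 10.0)
    sigma = pm.HalfNormal("sigma", 10.0)
    pm.Normal("cont_item", mu=70.0 + lam * theta, sigma=sigma, observed=y_cont)

    idata = pm.sample(1000, tune=1000)         # MCMC posterior samples
```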

Relevance:

80.00%

Publisher:

Abstract:

Carbon fluxes and allocation patterns, and their relationship with the main environmental and physiological parameters, were studied in an apple orchard for one year (2010). I combined three widely used methods: eddy covariance, soil respiration and biometric measurements, and I applied a measurement protocol allowing a cross-check between C fluxes estimated using different methods. I attributed net primary production (NPP) components to standing biomass increment, detritus cycle and lateral export. The influence of environmental and physiological parameters on net ecosystem exchange (NEE), gross primary production (GPP) and ecosystem respiration (Reco) was analyzed with a multiple regression model approach. I found that both net ecosystem production (NEP) and GPP of the apple orchard were of similar magnitude to those of forests growing in similar climate conditions, while large differences occurred in the allocation pattern and in the fate of the produced biomass. Apple production accounted for 49% of annual NPP, organic material contributing to the detritus cycle (leaves, fine root litter, pruned wood and early fruit drop) accounted for 46%, and only 5% went to standing biomass increment. The carbon use efficiency (CUE), with an annual average of 0.68 ± 0.10, was higher than the previously suggested constant values of 0.47-0.50. Light and leaf area index had the strongest influence on both NEE and GPP. On a diurnal basis, NEE and GPP reached their peak approximately at noon, while they appeared to be limited by high values of VPD and air temperature in the afternoon. The proposed models can be used to explain and simulate the current relations between carbon fluxes and environmental parameters at daily and yearly time scales. On average, the annual NEP balanced the carbon exported annually with the harvested apples. These data support the hypothesis of a minimal or null impact of the apple orchard ecosystem on net C emissions to the atmosphere.
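
As a simple illustration of the bookkeeping behind these figures, the sketch below applies the definition CUE = NPP/GPP together with the allocation fractions reported above; the absolute GPP value is a hypothetical placeholder, not the measured flux.

```python
# Only CUE = NPP/GPP, the annual CUE of 0.68 and the 49/46/5% allocation
# come from the abstract; the GPP value is a made-up placeholder.
gpp = 1500.0                     # hypothetical annual GPP, g C m-2 yr-1
cue = 0.68                       # reported carbon use efficiency
npp = cue * gpp                  # net primary production
ra = gpp - npp                   # autotrophic respiration = (1 - CUE) * GPP

allocation = {                   # NPP partitioning reported above
    "harvested fruit": 0.49,
    "detritus (leaves, fine roots, pruned wood, fruit drop)": 0.46,
    "standing biomass increment": 0.05,
}
for sink, frac in allocation.items():
    print(f"{sink}: {frac * npp:.0f} g C m-2 yr-1")
print(f"autotrophic respiration: {ra:.0f} g C m-2 yr-1")
```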

Relevance:

30.00%

Publisher:

Abstract:

The progress of electron device integration has proceeded for more than 40 years following the well-known Moore's law, which states that the transistor density on chip doubles every 24 months. This trend has been made possible by the downsizing of MOSFET dimensions (scaling); however, new issues and new challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them. In order to overcome the limitations of conventional structures, the research community is evaluating different solutions, which need to be assessed. Those currently under scrutiny include:
• devices incorporating materials with properties different from those of silicon for the channel and the source/drain regions;
• new architectures such as Silicon-On-Insulator (SOI) transistors: the body thickness of Ultra-Thin-Body SOI devices is a new design parameter, and it makes it possible to keep short-channel effects under control without adopting high doping levels in the channel.
Among the solutions proposed to overcome the difficulties related to scaling, we can highlight heterojunctions at the channel edges, obtained by adopting, for the source/drain regions, materials with a band gap different from that of the channel material. This solution increases the injection velocity of the particles travelling from the source into the channel, and therefore improves the performance of the transistor in terms of delivered drain current. The first part of this thesis addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of this approach in older technologies such as heterojunction bipolar transistors; it also describes the modifications introduced in the Monte Carlo code to simulate conduction band discontinuities, together with the simulations performed on simplified one-dimensional structures to validate them. Chapter 4 presents the results obtained from Monte Carlo simulations of double-gate SOI transistors featuring conduction band offsets between the source and drain regions and the channel. In particular, attention is focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities are considered. The scaling of device dimensions and the adoption of innovative architectures have consequences for power dissipation as well. In SOI technologies the channel is thermally insulated from the underlying substrate by a SiO2 buried-oxide layer; this SiO2 layer has a thermal conductivity two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self-heating effects (SHE), which detrimentally impact carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated circuit engineering. The second part of this thesis discusses the problem of self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices, and it provides a brief overview of the methods that have been proposed to model these phenomena. To understand how this problem impacts the performance of different SOI architectures, three-dimensional electro-thermal simulations have been applied to the analysis of SHE in planar single- and double-gate SOI transistors as well as FinFETs featuring the same isothermal electrical characteristics. In chapter 6 the same simulation approach is extensively employed to study the impact of SHE on the performance of a FinFET representative of the high-performance transistor of the 45 nm technology node. Its effects on the ON-current, the maximum temperatures reached inside the device and the thermal resistance associated with the device itself, as well as the dependence of SHE on the main geometrical parameters, are analyzed. Furthermore, the consequences for self-heating of technological solutions such as raised source/drain extension regions or a reduced fin height are explored as well. Finally, conclusions are drawn in chapter 7.
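
As a toy illustration of the self-heating problem discussed above, the sketch below applies the first-order lumped relation ΔT = R_th · P_diss. The current, voltage and thermal resistance values are hypothetical; the thesis relies on full three-dimensional electro-thermal simulation rather than such a lumped model.

```python
# First-order lumped self-heating estimate: channel temperature rise equals
# the device thermal resistance times the dissipated power. Numbers are
# hypothetical placeholders with plausible orders of magnitude for SOI.
def channel_temperature(i_on, v_dd, r_th, t_ambient=300.0):
    """Return (dissipated power in W, channel temperature in K)."""
    p_diss = i_on * v_dd
    return p_diss, t_ambient + r_th * p_diss

# e.g. a FinFET-like device drawing 50 uA at 1.0 V through a 1e6 K/W
# thermal resistance (the buried oxide impedes heat dissipation)
p, t_ch = channel_temperature(i_on=50e-6, v_dd=1.0, r_th=1e6)
print(f"P = {p * 1e6:.0f} uW, channel T = {t_ch:.0f} K ({t_ch - 300:.0f} K rise)")
```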

Relevance:

30.00%

Publisher:

Abstract:

In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from the collected disease counts and the expected disease counts calculated from reference population disease rates, in each area an SMR is derived as the MLE under a Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, without abandoning the preliminary-study perspective that an analysis of SMR indicators is meant to serve. We implement control of the false discovery rate (FDR), a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation in testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié (1991) model, often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data (denoted FDR-hat) can be calculated for any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that FDR-hat does not exceed a prefixed value; we call these FDR-hat based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, and under-estimation produces a loss of specificity. Moreover, our model retains the ability to provide an estimate of the relative risk values, as in the Besag, York and Mollié (1991) model. A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider FDR estimation in the sets constituted by all b_i's lower than a threshold t. We show graphs of FDR-hat and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. Varying the threshold, one can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between FDR-hat and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of such estimates in the high-risk areas (known by simulation), obtained both from our model and from the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aim. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low, but the specificity is high; in this setting, selection rules based on FDR-hat = 0.05 or FDR-hat = 0.10 can be suggested. In cases where the number of true alternative hypotheses (true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on FDR-hat = 0.15 gain power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity for a decision rule based on FDR-hat = 0.05. In such scenarios, decision rules based on FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 cannot be suggested, because the true FDR is actually much higher. As regards the relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in the scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
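
A minimal sketch of the FDR-hat based selection rule described above, on synthetic posterior null probabilities b_i (in the thesis these are estimated by MCMC from the hierarchical model):

```python
# FDR-hat of a set of rejected areas is the average of their posterior
# null probabilities b_i; the rule flags as many areas as possible while
# keeping FDR-hat below a prefixed target. The b_i values are synthetic.
import numpy as np

def fdr_selection(b, target=0.05):
    order = np.argsort(b)                      # strongest rejections first
    fdr_hat = np.cumsum(b[order]) / np.arange(1, len(b) + 1)
    # cumulative means of an ascending sequence are non-decreasing, so the
    # largest admissible set can be found with a sorted search
    k = int(np.searchsorted(fdr_hat, target, side="right"))
    return order[:k], (fdr_hat[k - 1] if k else 0.0)

rng = np.random.default_rng(1)
b = np.concatenate([rng.uniform(0.00, 0.05, 8),    # plausible high-risk areas
                    rng.uniform(0.30, 1.00, 92)])  # plausible null areas
areas, fdr = fdr_selection(b, target=0.05)
print(f"{len(areas)} areas flagged at estimated FDR = {fdr:.3f}")
```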

Relevance:

30.00%

Publisher:

Abstract:

High spectral resolution radiative transfer (RT) codes are essential tools in the study of radiative energy transfer in the Earth's atmosphere and a support for the development of parameterizations for the fast RT codes used in climate and weather prediction models. Cirrus clouds permanently cover 30% of the Earth's surface, representing an important contribution to the Earth-atmosphere radiation balance. The work focused on the development of the RT model LBLMS. The model, widely tested in the infrared spectral range, has been extended to the short-wave spectrum and used in comparisons with airborne and satellite measurements to study the optical properties of cirrus clouds. A new database of single scattering properties has been developed for mid-latitude cirrus clouds, in which ice clouds are treated as a mixture of ice crystals with various habits. The optical properties of the mixture are tested against radiometric measurements in selected case studies. Finally, a parameterization of the mixture has been developed for application to weather prediction and global circulation models: the bulk optical properties of the ice crystals are parameterized as functions of the effective dimension of measured particle size distributions representative of mid-latitude cirrus clouds. Tests with the limited-area weather prediction model COSMO have shown the impact of the new parameterization with respect to cirrus cloud optical properties based on ice spheres.
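
The sketch below illustrates the general form of such a parameterization: bulk optical properties expressed as simple functions of the effective dimension De of the particle size distribution. The functional forms are typical of the literature, and the coefficients are placeholders, not the values fitted in the thesis.

```python
# Placeholder parameterization of bulk ice-cloud optics vs effective
# dimension De (um): extinction per unit ice water content, single
# scattering albedo and asymmetry factor for one short-wave band.
import numpy as np

EXT_COEF = [1.0e-2, 2.5, -8.0]       # ext = a0 + a1/De + a2/De**2
SSA_COEF = [0.999, -1.2e-3, 1.0e-6]  # ssa = b0 + b1*De + b2*De**2
G_COEF = [0.74, 8.0e-4, -2.0e-7]     # g   = c0 + c1*De + c2*De**2

def bulk_optics(de):
    ext = EXT_COEF[0] + EXT_COEF[1] / de + EXT_COEF[2] / de**2
    ssa = np.polyval(SSA_COEF[::-1], de)   # polyval wants high degree first
    g = np.polyval(G_COEF[::-1], de)
    return ext, ssa, g

for de in (30.0, 60.0, 120.0):       # typical mid-latitude cirrus De, um
    print(de, bulk_optics(de))
```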

Relevance:

30.00%

Publisher:

Abstract:

The β-amyloid (βA) peptide is the major component of the senile plaques that are one of the hallmarks of Alzheimer's disease (AD). It is well recognized that βA exists in multiple assembly states, such as soluble oligomers or insoluble fibrils, which affect neuronal viability and may contribute to disease progression. In particular, common βA neurotoxic mechanisms are Ca2+ dyshomeostasis, reactive oxygen species (ROS) formation, altered signaling, mitochondrial dysfunction and neuronal death by necrosis and apoptosis. Recent studies show that the ubiquitin-proteasome pathway plays a crucial role in the degradation of short-lived and regulatory proteins that are important in a variety of basic and pathological cellular processes, including apoptosis. Guanosine (Guo) is a purine nucleoside present extracellularly in the brain that shows a spectrum of biological activities under both physiological and pathological conditions. It has recently become recognized that both neurons and glia release guanine-based purines. However, the role of Guo in AD is still not well established. In this study, we investigated the mechanistic basis of the neuroprotective effects of Guo against βA peptide-induced toxicity in neuronal (SH-SY5Y) cells, in terms of mitochondrial dysfunction and translocation of phosphatidylserine (PS), a marker of apoptosis, using the MTT and Annexin-V assays, respectively. In particular, treatment of SH-SY5Y cells with Guo (12.5-75 μM) in the presence of monomeric βA25-35 (the neurotoxic core of βA), or of oligomeric and fibrillar βA1-42 peptides, showed strong dose-dependent inhibitory effects on βA-induced toxic events. The maximum inhibition of mitochondrial function loss and PS translocation was observed with 75 μM Guo. Subsequently, to investigate whether the neuroprotection by Guo can be ascribed to its ability to modulate proteasome activity levels, we used lactacystin, a specific proteasome inhibitor. We found that the antiapoptotic effects of Guo were completely abolished by lactacystin. To test whether these effects resulted from an increase in proteasome activity induced by Guo, the chymotrypsin-like activity was assessed employing the fluorogenic substrate Z-LLL-AMC. Treatment of SH-SY5Y cells with Guo (75 μM for 0-6 h) induced a strong, time-dependent increase in proteasome activity. In parallel, no increase in ubiquitinated protein levels was observed under the same experimental conditions. We then evaluated the involvement of anti- and pro-apoptotic proteins such as Bcl-2, Bad and Bax by western blot analysis. Interestingly, Bax levels decreased after 2 h of treatment of SH-SY5Y cells with Guo. Taken together, these results demonstrate that the neuroprotective effects of Guo against βA-induced apoptosis are mediated, at least partly, via proteasome activation. In particular, these findings suggest a novel neuroprotective pathway mediated by Guo, which involves a rapid degradation of pro-apoptotic proteins by the proteasome. In conclusion, the present data raise the possibility that Guo could be used as an agent for the treatment of AD.

Relevance:

30.00%

Publisher:

Abstract:

Images of a scene, static or dynamic, are generally acquired at different epochs and from different viewpoints. They potentially gather information about the whole scene and its relative motion with respect to the acquisition device. Data from different visual sources (in the spatial or temporal domain) can be fused together to provide a unique, consistent representation of the whole scene, even recovering the third dimension and permitting a more complete understanding of the scene content. Moreover, the pose of the acquisition device can be obtained by estimating the relative motion parameters linking different views, thus providing localization information for automatic guidance purposes. Image registration is based on the use of pattern recognition techniques to match corresponding parts of different views of the acquired scene. Depending on hypotheses or prior information about the sensor model, the motion model and/or the scene model, this information can be used to estimate global or local geometrical mapping functions between different images or different parts of them. These mapping functions contain the relative motion parameters between the scene and the sensor(s) and can be used to integrate the information coming from the different sources accordingly, so as to build a wider or even augmented representation of the scene. Thanks to their scene reconstruction and pose estimation capabilities, multi-view image registration techniques are nowadays attracting increasing interest from the scientific and industrial community. Depending on the application domain, the accuracy, robustness and computational load of the algorithms are important issues to be addressed, and generally a trade-off among them has to be reached. Moreover, on-line performance is desirable in order to guarantee the direct interaction of the vision device with human actors or control systems. This thesis follows a general research approach to cope with these issues, almost independently of the scene content, under the constraint of rigid motions. The approach is motivated by portability across very different domains, a highly desirable property. A general image registration approach suitable for on-line applications has been devised and assessed through two challenging case studies in different application domains. The first case study regards scene reconstruction through on-line mosaicing of optical microscopy cell images acquired with non-automated equipment, while the microscope holder is moved manually. By registering the images, the field of view of the microscope can be widened while preserving the resolution, reconstructing the whole cell culture and permitting the microscopist to explore it interactively. In the second case study, the registration of terrestrial satellite images acquired by a camera integral with the satellite is used to estimate its three-dimensional orientation from visual data, for automatic guidance purposes. Critical aspects of these applications are emphasized and the choices adopted are motivated accordingly. Results are discussed in view of promising future developments.
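
A minimal sketch of a registration pipeline of this kind, assuming OpenCV (the abstract does not name an implementation): local features are matched between two overlapping views, a global mapping (here a homography, one possible choice under the rigid-motion constraint) is estimated robustly with RANSAC, and the views are mosaiced. File names are placeholders.

```python
# Feature-based registration and mosaicing of two overlapping views.
import cv2
import numpy as np

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)                     # local feature detector
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC discards outlier correspondences while estimating the mapping
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# warp view 1 into view 2's frame to build a wider (mosaiced) view
h, w = img2.shape
mosaic = cv2.warpPerspective(img1, H, (2 * w, h))
mosaic[:h, :w] = np.maximum(mosaic[:h, :w], img2)
cv2.imwrite("mosaic.png", mosaic)
```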

Relevance:

30.00%

Publisher:

Abstract:

Background: Multiple primary lung cancer (MPLC) represents a diagnostic challenge. The central question is how to distinguish metastatic from multifocal disease in these patients. While there is little doubt when the histologies differ, in the case of identical histology it is mandatory to investigate other clinical features to settle the question. Materials and Methods: A retrospective review identified all patients treated surgically for a presumed diagnosis of SPLC. Pre-operative staging was obtained with total-body CT scan, fluorodeoxyglucose positron emission tomography and mediastinoscopy. Patients with nodal involvement or extra-thoracic disease were excluded from this study. Epidermal growth factor receptor (EGFR) expression was evaluated with a complete immunohistochemical analysis. Survival was estimated using the Kaplan-Meier method, and clinical features were assessed using the log-rank test or the Cox proportional hazards model for categorical and continuous variables, respectively. Results: According to the American College of Chest Physicians criteria, 18 patients underwent surgical resection for a diagnosis of MPLC. Of these, 8 patients had 3 or more nodules while 10 patients had fewer than 3 nodules. Pathologic examination demonstrated adenocarcinoma in 13/18 patients (70%), squamous carcinoma in 2/18 (10%), large cell carcinoma in 2/18 (10%) and adenosquamous carcinoma in 1/18 (5%). EGFR expression was evaluated in all nodules: in 7 of 18 patients (38%) the percentage of expression differed between nodules. Conclusions: MPLC represents a multifocal disease in which combining clinical information with biological studies reinforces the diagnosis. EGFR expression could help differentiate the nodules. However, further research is necessary to validate this hypothesis.

Relevance:

30.00%

Publisher:

Abstract:

The thesis is organized in three parts, according to the respective experimental models. First, the domestic pig (Sus scrofa) is the subject of the study on reproductive biotechnologies: the Sperm Mediated Gene Transfer transgenesis technique is studied in depth, starting from semen quality, through the study of multiple uptakes of exogenous DNA, and finally in the production of multi-transgenic blastocysts. We also managed to couple the transgenesis pipeline with sperm sorting and thereby produced transgenic embryos of predetermined sex. In the second part of the thesis the attention is on the fruit fly (Drosophila melanogaster) and on its derived cell line, the S2 cells. The in vitro and in vivo models are used to develop and validate an efficient way to knock down the myc gene. An efficient in vitro protocol is described first; we then demonstrate how the decrease in myc transcript remarkably affects ribosome biogenesis through the study of polysome gradients, rRNA content and qPCR. In vivo, we identified two optimal drivers for the conditional silencing of myc once the flies are fed RU486: the first drives expression throughout the whole body (Tubulin), while the second is a head fat-body driver (S32). With these results we present a very efficient model to study the role of myc in multiple aspects of translation. In the third and last part, the focus is on human-derived lung fibroblasts (hLF-1), mouse tail fibroblasts and mouse tissues. We developed an efficient assay to quantify the total protein content of the nucleus at the single-cell level via fluorescence. We coupled the protocol with classical immunofluorescence so as to obtain general and specific information at the same time, demonstrating that during senescence nuclear proteins increase 1.8-fold in human cells, mouse cells and mouse tissues alike.

Relevance:

30.00%

Publisher:

Abstract:

This study focuses on the processes of change that firms undertake to overcome conditions of organizational rigidity and develop new dynamic capabilities, thanks to the contribution of external knowledge. When external contingencies highlight firms' core rigidities, external actors can intervene in change projects, providing new competences to firms' managers. Knowledge transfer and organizational learning processes can then lead to the development of new dynamic capabilities. The existing literature does not completely explain how these processes develop and how external knowledge providers, such as management consultants, influence them. The dynamic capabilities literature has grown very rich in recent years; however, the models that explain how dynamic capabilities evolve remain little investigated. Adopting a qualitative approach, this research proposes four relevant case studies in which external actors introduce new knowledge within organizations, activating processes of change. Each case study consists of a management consulting project. Data were collected through in-depth interviews with consultants and managers, and a large number of documents supports the evidence from the interviews. A narrative approach is adopted to account for the change processes, and a synthetic approach is proposed to compare the case studies along relevant dimensions. This study presents a model of capability evolution, supported by empirical evidence, to explain how external knowledge intervenes in capability evolution processes: first, external actors close gaps between environmental demands and firms' capabilities, changing organizational structures and routines; second, knowledge transfer between consultants and managers leads to the creation of new ordinary capabilities; third, managers can develop new dynamic capabilities through a deliberate learning process that internalizes new tacit knowledge from the consultants. After the end of the consulting project, two elements can influence the deliberate learning process: new external contingencies and changes in the perceptions of the external actors.

Relevance:

30.00%

Publisher:

Abstract:

The instability of river banks can result in considerable human and land losses. The Po River is the most important river in Italy, characterized by main embankments of significant and constantly increasing height. This study presents multilayer perceptron artificial neural network (ANN) models for the stability analysis of river banks along the Po River under various river and groundwater boundary conditions. To this aim, a number of threshold-logic-unit networks are tested using different combinations of the input parameters. The factor of safety (FS), as an index of slope stability, is formulated in terms of several influential geometrical and geotechnical parameters. In order to obtain a comprehensive geotechnical database, several cone penetration tests from the study site have been interpreted. The proposed models are developed from stability analyses performed with a finite element code on different representative sections of the river embankments. To verify their validity, the ANN models are employed to predict the FS values for a part of the database beyond the calibration data domain. The results indicate that the proposed ANN models are effective tools for evaluating slope stability, and they notably outperform the derived multiple linear regression models.
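
A minimal sketch of such a prediction model, assuming scikit-learn: a multilayer perceptron trained to map geometrical, geotechnical and boundary-condition inputs to FS. The features and the synthetic target below are placeholders; the thesis trains on FS values from finite element stability analyses and CPT-derived parameters.

```python
# MLP regression of the factor of safety from illustrative inputs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
# hypothetical inputs: bank height, slope angle, river level, groundwater
# level, friction angle, cohesion
X = rng.uniform([5, 15, 0, 0, 20, 0], [25, 45, 10, 8, 40, 30], size=(n, 6))
fs = (1.0 + 0.02 * X[:, 4] + 0.01 * X[:, 5] - 0.015 * X[:, 1]
      - 0.03 * (X[:, 3] - X[:, 2]) + rng.normal(0, 0.05, n))  # toy FS target

X_tr, X_te, y_tr, y_te = train_test_split(X, fs, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("R2 on held-out sections:", model.score(X_te, y_te))
```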

Relevance:

30.00%

Publisher:

Abstract:

Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and addresses practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizeable training set and notable computational effort. Methods for cross-domain text categorization have been proposed, allowing a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving parameter tuning. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their respective representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvement, but models generated in one domain prove to be effectively reusable in a different one.
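
A minimal sketch of the iterative nearest-centroid adaptation, assuming scikit-learn for TF-IDF vectors and cosine similarity: category profiles are built from the labeled source domain, then repeatedly rebuilt from the target-domain documents they attract, shifting them toward the new domain. This illustrates the general scheme, not the exact algorithm of the thesis.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def cross_domain_centroids(src_texts, src_labels, tgt_texts, n_iter=5):
    vec = TfidfVectorizer(stop_words="english")
    X_all = vec.fit_transform(list(src_texts) + list(tgt_texts))
    X_src, X_tgt = X_all[:len(src_texts)], X_all[len(src_texts):]
    names = sorted(set(src_labels))
    y = np.array([names.index(l) for l in src_labels])
    # initial category profiles from the known (source) domain
    C = np.vstack([np.asarray(X_src[y == k].mean(axis=0)).ravel()
                   for k in range(len(names))])
    for _ in range(n_iter):        # iteratively adapt to the unknown domain
        pred = cosine_similarity(X_tgt, C).argmax(axis=1)
        C = np.vstack([np.asarray(X_tgt[pred == k].mean(axis=0)).ravel()
                       if (pred == k).any() else C[k]
                       for k in range(len(names))])
    return names, pred

# usage: names, pred = cross_domain_centroids(docs_src, labels_src, docs_tgt)
```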