13 results for Check-In
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Primary stability of stems in cementless total hip replacements is recognized to play a critical role in long-term survival and thus in the success of the overall surgical procedure. In the literature, several studies have addressed this important issue. Different approaches have been explored to evaluate the extent of stability achieved during surgery. Some of these are in-vitro protocols, while other tools are conceived for the post-operative assessment of prosthesis migration relative to the host bone. The in-vitro protocols reported in the literature are not exportable to the operating room, although most of them show good overall accuracy. RSA, EBRA and radiographic analysis are currently used to check the healing process of the implanted femur at different follow-ups, evaluating implant migration and the occurrence of bone resorption or osteolysis at the interface. These methods are important for follow-up and clinical studies, but they do not assist the surgeon during implantation. At the time I started my Ph.D. study in Bioengineering, only one study had been undertaken to measure stability intra-operatively, and no follow-up had been presented to describe further results obtained with that device. In this scenario, it was believed that an instrument able to measure intra-operatively the stability achieved by an implanted stem would considerably improve the rate of success. Such an instrument should be accurate and should give the surgeon, during implantation, a quick answer concerning the stability of the implanted stem. With this aim, an intra-operative device was designed, developed and validated. The device is meant to help the surgeon decide how much to press-fit the implant. It essentially consists of a torsional load cell, able to measure the torque applied by the surgeon to test primary stability; an angular sensor that measures the relative angular displacement between stem and femur; a rigid connector that enables connecting the device to the stem; and all the electronics for signal conditioning. The device was successfully validated in-vitro, showing good overall accuracy in discriminating stable from unstable implants. Repeatability tests showed that the device was reliable. A calibration procedure was then performed to convert the angular readout into a linear displacement measurement, which is clinically relevant information that the surgeon can read in real time. The second study reported in my thesis concerns the possibility of obtaining predictive information on the primary stability of a cementless stem by measuring the micromotion of the last rasp used by the surgeon to prepare the femoral canal. This information would be very useful to the surgeon, who could check before implantation whether the planned stem size can achieve a sufficient degree of primary stability under optimal press-fitting conditions. An intra-operative tool was developed to this aim. It was derived from the previously validated device, adapted for this specific purpose. The device is able to measure the relative micromotion between the femur and the rasp when a torsional load is applied. An in-vitro protocol was developed and validated on both composite and cadaveric specimens. High correlation was observed between one of the parameters extracted from the acquisitions made on the rasp and the stability of the corresponding stem when optimally press-fitted by the surgeon.
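The first device combines a torque reading with the stem-femur angular displacement, later converted by calibration into a linear micromotion. As a rough illustration of that conversion (not the validated calibration of the thesis), the sketch below uses a simple arc-length relation with an assumed stem radius and a purely illustrative stability threshold:

```python
import numpy as np

# Illustrative constants (assumptions, not the thesis calibration)
STEM_RADIUS_MM = 7.5      # assumed radius at which peripheral micromotion is evaluated
THRESHOLD_UM   = 150.0    # thresholds of ~100-150 um are often cited in the press-fit literature

def peripheral_micromotion_um(angle_deg, radius_mm=STEM_RADIUS_MM):
    """Arc-length conversion of the measured stem-femur rotation into a
    linear micromotion at the stem periphery: s = r * theta."""
    return radius_mm * 1_000.0 * np.deg2rad(angle_deg)

def stability_readout(torque_nm, angle_deg):
    """Toy intra-operative readout: peak torque applied by the surgeon and the
    corresponding peak-to-peak micromotion; the classification threshold is illustrative."""
    motion = peripheral_micromotion_um(np.ptp(angle_deg))
    return {
        "peak_torque_Nm": float(np.max(np.abs(torque_nm))),
        "micromotion_um": float(motion),
        "verdict": "stable" if motion < THRESHOLD_UM else "check press-fit",
    }

# Hypothetical usage with signals from the torsional load cell and angular sensor:
# readout = stability_readout(torque_samples_nm, angle_samples_deg)
```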
After tuning the protocol in-vitro in a closed-loop fashion, verification was made on two hip-replacement patients, confirming the results obtained in-vitro and highlighting the independence of the rasp indicator from the bone quality, anatomy and preservation conditions of the tested specimens, and from the sharpness of the rasp blades. The third study is related to an approach that has recently been explored in the orthopaedic community but was already in use in other scientific fields: the vibration analysis technique. This method has been successfully used to investigate the mechanical properties of bone, and its application to evaluate the extent of fixation of dental implants has been explored, even if its validity in this field is still under discussion. Several studies have been published recently on the stability assessment of hip implants by vibration analysis. The aim of the reported study was to develop and validate a prototype device based on the vibration analysis technique to measure intra-operatively the extent of implant stability. The expected advantages of a vibration-based device are easier clinical use, smaller dimensions and lower overall cost with respect to other devices based on direct micromotion measurement. The prototype developed consists of a piezoelectric exciter connected to the stem and an accelerometer attached to the femur. Preliminary tests were performed on four composite femurs implanted with a conventional stem. The results showed that the input signal was repeatable and the output could be recorded accurately. The fourth study concerns the application of the vibration-based device to several cases, considering both composite and cadaveric specimens. Different degrees of bone quality were tested, as well as different femur anatomies, and several levels of press-fitting were considered. The aim of the study was to verify whether it is possible to discriminate between stable and quasi-stable implants, because this is the most challenging distinction for the surgeon in the operating room. Moreover, it was possible to validate the measurement protocol by comparing the results of the acquisitions made with the vibration-based tool to two reference measurements made by means of a validated technique and a validated device. The results highlighted that the parameter most sensitive to stability is the shift in resonance frequency of the stem-bone system, which showed high correlation with residual micromotion on all the tested specimens. Thus, it seems possible to discriminate between many levels of stability, from the grossly loosened implant, through the quasi-stable implants, to the definitely stable one. Finally, an additional study was performed on a different type of hip prosthesis, which has recently gained great interest and has become fairly popular in some countries in the last few years: the hip resurfacing prosthesis. The study was motivated by the following rationale: although bone-prosthesis micromotion is known to influence the stability of total hip replacement, its effect on the outcome of resurfacing implants had been investigated only clinically, not in-vitro. Thus, the work was aimed at verifying whether one of the intra-operative devices just validated could be applied to the measurement of micromotion in resurfacing implants.
To do that, a preliminary study was performed in order to evaluate the extent of migration and the typical elastic movement of an epiphyseal prosthesis. An in-vitro procedure was developed to measure micromotions of resurfacing implants. This included a set of in-vitro loading scenarios spanning the range of directions of the hip resultant force in the most typical motor tasks. The applicability of the protocol was assessed on two different commercial designs and on different head sizes. The repeatability and reproducibility were excellent (comparable to the best previously published protocols for standard cemented hip stems). Results showed that the procedure is accurate enough to detect micromotions of the order of a few microns. The proposed protocol was thus completely validated. The results of the study demonstrated that the application of an intra-operative device to resurfacing implants is not necessary, as the typical micromotion associated with this type of prosthesis can be considered negligible and thus not critical for the stabilization process. In conclusion, four intra-operative tools were developed and fully validated during these three years of research activity. The use in the clinical setting was tested for one of the devices, which could be used right now by the surgeon to evaluate the degree of stability achieved through the press-fitting procedure. The tool adapted to be used on the rasp was a good predictor of the stability of the stem; thus it could be useful to the surgeon for checking whether the pre-operative planning was correct. The device based on the vibration technique showed great accuracy and small dimensions, and thus has great potential to become an instrument appreciated by the surgeon; it still needs clinical evaluation and industrialization. The in-vitro tool worked very well and can be applied for assessing resurfacing implants pre-clinically.
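The vibration-based indicator singled out above is the shift in resonance frequency of the stem-bone system. The following sketch shows one generic way such a shift could be extracted from the piezoelectric drive signal and the accelerometer response; the frequency band, sampling rate and variable names are assumptions, not the thesis protocol:

```python
import numpy as np

def resonance_frequency(excitation, response, fs, f_min=100.0, f_max=5000.0):
    """Estimate the dominant resonance of the stem-bone system from the frequency
    response function H(f) = FFT(response) / FFT(excitation).
    excitation, response: 1-D arrays (piezo drive, accelerometer); fs: sampling rate in Hz."""
    n = len(excitation)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    H = np.fft.rfft(response) / (np.fft.rfft(excitation) + 1e-12)
    band = (freqs > f_min) & (freqs < f_max)        # assumed band of interest
    return freqs[band][np.argmax(np.abs(H[band]))]

# Stability indicator: shift between two press-fit conditions (hypothetical recordings)
# fs = 20_000  # Hz
# f_loose = resonance_frequency(drive_loose, accel_loose, fs)
# f_fit   = resonance_frequency(drive_fit, accel_fit, fs)
# shift = f_fit - f_loose   # a larger upward shift suggests a stiffer, more stable interface
```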
Abstract:
Lipolysis and oxidation of lipids in foods are the major biochemical and chemical processes that cause food quality deterioration, leading to the characteristic, unpalatable odour and flavour called rancidity. In addition to unpalatability, rancidity may give rise to toxic levels of certain compounds such as aldehydes, hydroperoxides, epoxides and cholesterol oxidation products. In this PhD study, chromatographic and spectroscopic techniques were employed to determine the degree of rancidity in different animal products and its relationship with technological parameters such as feeding fat sources, packaging, processing and storage conditions. To achieve this goal, capillary gas chromatography (CGC) was employed not only to determine the fatty acid profile but also, after solid phase extraction, the amounts of free fatty acids (FFA), diglycerides (DG), sterols (cholesterol and phytosterols) and cholesterol oxidation products (COPs). UV/VIS absorbance spectroscopy was applied to determine hydroperoxides, the primary products of oxidation, and to quantify the secondary products. Most of the foods analysed in this study were meat products. In fact, lipid oxidation is a major deterioration reaction in meat and meat products and results in adverse changes in the colour, flavour and texture of meat; the development of rancidity has long been recognized as a serious problem during meat handling, storage and processing. For a dairy-type product, a vegetal cream, a study of the lipid fraction and of the development of rancidity during storage was carried out to evaluate its shelf-life and some nutritional features such as the saturated/unsaturated fatty acid ratio and the phytosterol content. Then, in line with the growing interest in functional foods in recent years, a new electrophoretic method was optimized and compared with HPLC to check the quality of a beehive product, royal jelly. This manuscript reports the main results obtained in the five activities briefly summarized as follows: 1) comparison between HPLC and a new electrophoretic method in the evaluation of the authenticity of royal jelly; 2) study of the lipid fraction of a vegetal cream under different storage conditions; 3) study of lipid oxidation in minced beef during storage under modified atmosphere packaging, before and after cooking; 4) evaluation of the influence of dietary fat and processing on the lipid fraction of chicken patties; 5) study of the lipid fraction of typical Italian and Spanish pork dry sausages and cured hams.
Abstract:
In this thesis we focused on the characterization of the reaction center (RC) protein purified from the photosynthetic bacterium Rhodobacter sphaeroides. In particular, we discussed the effects of native and artificial environments on the light-induced electron transfer processes. The native environment consists of the inner antenna LH1 complex, which copurifies with the RC forming the so-called core complex, and the lipid phase tightly associated with it. In parallel, we analyzed the role of saccharidic glassy matrices on the interplay between electron transfer processes and internal protein dynamics. As a different artificial matrix, we incorporated the RC protein in a layer-by-layer structure with a twofold aim: to check the behaviour of the protein in such an unusual environment and to test the response of the system to herbicides. By examining the RC in its native environment, we found that the light-induced charge-separated state P+QB- is markedly stabilized (by about 40 meV) in the core complex as compared to the RC-only system over a physiological pH range. We also verified that, as compared to the average composition of the membrane, the core complex copurifies with a tightly bound lipid complement of about 90 phospholipid molecules per RC, which is strongly enriched in cardiolipin. In parallel, a large ubiquinone pool was found in association with the core complex, giving rise to a quinone concentration about ten times larger than the average one in the membrane. Moreover, this quinone pool is fully functional, i.e. it is promptly available at the QB site during multiple-turnover excitation of the RC. The latter two observations suggest important heterogeneities and anisotropies in the native membranes which can in principle account for the stabilization of the charge-separated state in the core complex. The thermodynamic and kinetic parameters obtained in the RC-LH1 complex are very close to those measured in intact membranes, indicating that the electron transfer properties of the RC in vivo are essentially determined by its local environment. The studies performed by incorporating the RC into saccharidic matrices evidenced the relevance of solvent-protein interactions and dynamical coupling in determining the kinetics of electron transfer processes. The usual approach when studying the interplay between internal motions and protein function consists in freezing the degrees of freedom of the protein at cryogenic temperature. We proved that the “trehalose approach” offers distinct advantages with respect to this traditional methodology. We showed, in fact, that the RC conformational dynamics, coupled to specific electron transfer processes, can be modulated by varying the hydration level of the trehalose matrix at room temperature, thus making it possible to disentangle solvent from temperature effects. The comparison between different saccharidic matrices revealed that the structural and dynamical protein-matrix coupling depends strongly upon the sugar. The analyses performed in RCs embedded in polyelectrolyte multilayer (PEM) structures have shown that the electron transfer from QA- to QB, a conformationally gated process extremely sensitive to the RC environment, can be strongly modulated by the hydration level of the matrix, confirming analogous results obtained for this electron transfer reaction in sugar matrices. We found that PEM-RCs are a very stable system, particularly suitable to study the thermodynamics and kinetics of herbicide binding to the QB site.
These features make PEM-RC structures quite promising in the development of herbicide biosensors. The studies discussed in the present thesis have shown that, although the effects on electron transfer induced by the native and artificial environments tested are markedly different, they can be described on the basis of a common kinetic model which takes into account the static conformational heterogeneity of the RC and the interconversion between conformational substates. Interestingly, the same distribution of rate constants (i.e. a Gamma distribution function) can describe charge recombination processes in solutions of purified RC, in RC-LH1 complexes, in wet and dry RC-PEM structures and in glassy saccharidic matrices over a wide range of hydration levels. In conclusion, the results obtained for RCs in different physico-chemical environments emphasize the relevance of the structure/dynamics solvent/protein coupling in determining the energetics and the kinetics of electron transfer processes in a membrane protein complex.
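For reference, the kinetic model mentioned above can be written explicitly: if the charge-recombination rate constant k is distributed according to a Gamma distribution, the survival probability of the charge-separated state is the Laplace transform of that distribution, which gives a simple power-law decay. This is the standard mathematical consequence of a Gamma-distributed rate, recalled here only as an illustration:

\[
p(k) \;=\; \frac{k^{\,n-1}\,e^{-k/\theta}}{\Gamma(n)\,\theta^{\,n}},
\qquad
N(t) \;=\; \int_0^{\infty} p(k)\,e^{-kt}\,dk \;=\; \bigl(1+\theta t\bigr)^{-n} \;=\; \Bigl(1+\frac{\langle k\rangle\, t}{n}\Bigr)^{-n},
\]

where \(\langle k\rangle = n\theta\) is the mean rate constant; for \(n \to \infty\) the distribution collapses to a single rate and the pure exponential decay \(e^{-\langle k\rangle t}\) is recovered.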
Abstract:
This thesis presents a creative and practical approach to dealing with the problem of selection bias. Selection bias may be the most vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized the problem of selection bias, with the resulting approaches – Rubin’s Potential Outcome Approach (Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004) or Heckman’s Selection Model (Heckman, 1979) – being widely accepted and used as the best fixes. These solutions to the bias that arises in particular from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inference for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows measuring and testing, in an automatic and multivariate way, the presence of selection bias. The approach involves the construction of a multi-dimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose the use of a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias according to their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given the measure of selection bias, we propose a multivariate test of imbalance to check whether the detected bias is significant, by using the asymptotic distribution of the inertia due to T (Estadella et al., 2005) and by preserving the multivariate nature of the data. Further, we propose the use of a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, and the use of the multivariate test of imbalance as a stopping rule in choosing the best cluster solution. The method is non-parametric: it does not call for modeling the data based on some underlying theory or assumption about the selection process, but instead exploits the existing variability within the data, letting the data speak. The idea of proposing this multivariate approach to measure selection bias and test balance comes from the consideration that, in applied research, all aspects of multivariate balance not represented in the univariate variable-by-variable summaries are ignored. The first part contains an introduction to evaluation methods as part of public and private decision processes and a review of the literature on evaluation methods. The attention is focused on Rubin’s Potential Outcome Approach, matching methods, and briefly on Heckman’s Selection Model. The second part focuses on some resulting limitations of conventional methods, with particular attention to the problem of how to test balance in the correct way. The third part contains the proposed original contribution, a simulation study that allows checking the performance of the method for a given dependence setting, and an application to a real data set. Finally, we discuss, conclude and outline our future perspectives.
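To make the notion of inertia concrete, the toy sketch below measures the dependence between categorical pre-treatment covariates X and the treatment indicator T as chi-square divided by the sample size (the total inertia of their cross-tabulation). It is only a joint-profile illustration under assumed column names, not the partial dependence decomposition of the X-space proposed in the thesis:

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def selection_bias_inertia(X: pd.DataFrame, T: pd.Series):
    """Toy measure of dependence between categorical pre-treatment covariates X
    and a treatment indicator T, expressed as inertia = chi-square / n."""
    profile = X.astype(str).apply("|".join, axis=1)   # joint covariate profile per unit
    table = pd.crosstab(profile, T)                   # profile-by-treatment contingency table
    chi2, p_value, dof, _ = chi2_contingency(table)
    inertia = chi2 / len(T)                           # dependence that the transformation should remove
    return inertia, p_value

# Hypothetical usage (column names are assumptions):
# inertia, p = selection_bias_inertia(df[["sex", "age_class", "region"]], df["treated"])
# a small p-value would flag significant imbalance between treated and control groups.
```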
Abstract:
In case of severe osteoarthritis at the knee causing pain, deformity, and loss of stability and mobility, clinicians consider the substitution of the articular surfaces by means of joint prostheses. The objectives to be pursued by this surgery are: complete pain elimination, restoration of normal physiological mobility and joint stability, and correction of all deformities and, thus, of limping. Knee surgical navigation systems have been developed in computer-aided surgery in order to improve the final surgical outcome in total knee arthroplasty. These systems provide the surgeon with quantitative, real-time information about each surgical action, such as bone cut execution and prosthesis component alignment, by means of tracking tools rigidly fixed onto the femur and the tibia. Nevertheless, there is still a margin of error due to incorrect surgical procedures and to the still limited kinematic information provided by current systems. In particular, patello-femoral joint kinematics is not considered in knee surgical navigation. It is also unclear, and thus a source of misunderstanding, what the most appropriate methodology is to study patellar motion. In addition, the knee ligamentous apparatus is only superficially considered in navigated total knee arthroplasty, without taking into account how its physiological behavior is altered by this surgery. The aim of the present research work was to provide new functional and biomechanical assessments for the improvement of surgical navigation systems for joint replacement in the human lower limb. This was mainly realized by means of the identification and development of new techniques that allow a thorough comprehension of the functioning of the knee joint, with particular attention to the patello-femoral joint and to the main knee soft tissues. A knee surgical navigation system with active markers was used in all research activities presented in this work. In particular, preliminary tests were performed in order to assess the system accuracy and the robustness of a number of navigation procedures. Four studies were performed in-vivo on patients requiring total knee arthroplasty, randomly implanted by means of traditional and navigated procedures, in order to check the real efficacy of the latter with respect to the former. To assess patello-femoral joint kinematics in the intact and replaced knees, twenty in-vitro tests were performed using a prototype tracking tool also for the patella. In addition to standard anatomical and articular recommendations, original proposals for defining the patellar anatomically-based reference frame and for studying patello-femoral joint kinematics were reported and used in these tests. These definitions were applied to two further in-vitro tests in which, for the first time, the implantation of the patellar component was also fully navigated. In addition, an original technique to analyze the main knee soft tissues by means of anatomically-based fiber mappings was also reported and used in the same tests. The preliminary instrumental tests revealed a system accuracy within the millimeter and a good inter- and intra-observer repeatability in defining all anatomical reference frames. In the in-vivo studies, the general alignment of the femoral and tibial prosthesis components and of the lower limb mechanical axis, as measured on radiographs, was more satisfactory, i.e.
within ±3°, in those patients in whom total knee arthroplasty was performed by navigated procedures. As for the in-vitro tests, consistent patello-femoral joint kinematic patterns were observed over the specimens throughout the knee flexion arc. Generally, the physiological patellar motion of the intact knee was not restored after the implant. This restoration was successfully achieved in the two further tests in which all component implantations, including the patellar insert, were fully navigated, i.e. by means of intra-operative assessment also of patellar component positioning and of general tibio-femoral and patello-femoral joint behavior. The tests for assessing the behavior of the main knee ligaments revealed the complexity of the latter and the different functional roles played by the several sub-bundles composing each ligament. Also in this case, total knee arthroplasty altered the physiological behavior of these knee soft tissues. These results reveal in-vitro the relevance and the feasibility of applying new techniques for accurate knee soft tissue monitoring, patellar tracking assessment and navigated patellar resurfacing intra-operatively, in the context of the most modern operative techniques. The present research work contributes to the much debated knowledge of normal and replaced knee kinematics by testing the reported new methodologies. The consistency of these results provides fundamental information for the comprehension and improvement of knee orthopedic treatments. In the future, the reported new techniques can be safely applied in-vivo and also adopted in other joint replacements.
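The patellar tracking work described above relies on expressing the pose of one tracked reference frame relative to another. The abstract does not spell out the computation; a common, generic approach is a least-squares rigid registration (Kabsch/SVD) of marker clusters, sketched below with hypothetical marker coordinates:

```python
import numpy as np

def rigid_pose(markers_ref, markers_cur):
    """Least-squares rigid transform (Kabsch/SVD) mapping a marker cluster from a
    reference pose to the current pose; returns rotation R and translation t.
    Marker arrays are Nx3; coordinates here are purely hypothetical."""
    p0, q0 = markers_ref.mean(axis=0), markers_cur.mean(axis=0)
    H = (markers_ref - p0).T @ (markers_cur - q0)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = q0 - R @ p0
    return R, t

def relative_pose(R_fem, t_fem, R_pat, t_pat):
    """Pose of the patellar frame expressed in the femoral frame, from which
    patellar flexion/tilt/shift components could then be extracted."""
    R_rel = R_fem.T @ R_pat
    t_rel = R_fem.T @ (t_pat - t_fem)
    return R_rel, t_rel
```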
Abstract:
The recent widespread diffusion of radio-frequency identification (RFID) applications operating in the UHF band has been driven by the request for both greater interrogation ranges and greater and faster data exchange. UHF-RFID systems, exploiting a physical interaction based on electromagnetic propagation, introduce many problems that had not been fully explored for the previous generations of RFID systems (e.g. HF). Therefore, reliable tools for modeling and evaluating the radio communication between Reader and Tag within an RFID radio link are needed. The first part of the thesis discusses the impact of the real environment on system performance. In particular, an analytical closed-form formulation for the field back-scattered from the Tag antenna and a formulation for the lower bound of the BER achievable at the Reader side will be presented, considering different possible electromagnetic impairments. By means of these formulations, of the analysis of the RFID link operating in near-field conditions, and of some electromagnetic/system-level co-simulations, an in-depth study of the dimensioning parameters and of the actual performance of the systems will be discussed and analyzed, showing some relevant properties and trade-offs in transponder and reader design. Moreover, a new low-cost approach to extend the read range of passive UHF RFID systems will be discussed. In order to check the reliability of the analysis approaches and of the innovative proposals, some reference transponder antennas were designed and an extensive measurement campaign was carried out, with satisfactory results. Finally, some commercial ad-hoc transponders for industrial applications were designed in cooperation with Datalogic S.p.A.; some guidelines and results will be briefly presented.
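For orientation, the first-order link budget that usually bounds the read range of a passive UHF tag is the Friis forward-link limit sketched below; all numerical values are typical assumptions, and the closed-form back-scatter and BER formulations derived in the thesis are not reproduced here:

```python
import numpy as np

# Illustrative link-budget values (assumptions, not the thesis measurements)
FREQ_HZ    = 866e6     # European UHF RFID band
EIRP_W     = 3.28      # 2 W ERP regulatory limit expressed as EIRP
G_TAG_DBI  = 2.0       # tag antenna gain
TAU        = 0.8       # power transmission coefficient (chip/antenna impedance match)
P_CHIP_DBM = -18.0     # tag chip sensitivity

def forward_link_range_m():
    """Friis-based read range of a passive UHF tag, limited by the power needed to
    energize the chip: P_chip = EIRP * G_tag * tau * (lambda / (4*pi*d))**2."""
    lam = 3e8 / FREQ_HZ
    g_tag = 10 ** (G_TAG_DBI / 10)
    p_chip = 10 ** (P_CHIP_DBM / 10) / 1000.0        # dBm -> W
    return (lam / (4 * np.pi)) * np.sqrt(EIRP_W * g_tag * TAU / p_chip)

print(f"estimated forward-link read range: {forward_link_range_m():.1f} m")
```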
Abstract:
In this work we aim to propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the biases carried by aggregated analyses. Starting from the collected disease counts and from expected disease counts calculated by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature according to both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple-testing control, without however abandoning the preliminary-study perspective that an analysis of SMR indicators is meant to serve. We implement control of the FDR, a quantity largely used to address multiple-comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-area issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical, fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié (1991) model, often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just by means of the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (the estimated FDR) can be calculated for any set of areas declared at high risk (where the null hypothesis is rejected) by averaging the corresponding b_i. The estimated FDR can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the estimated FDR does not exceed a prefixed value; we call these estimated-FDR-based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation of the FDR produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié (1991) model. A simulation study was set up to evaluate the model performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true, and the risk level in the remaining areas. In summarizing the simulation results we always consider FDR estimation in the sets constituted by all areas whose b_i is lower than a threshold t. We show graphs of the estimated FDR and of the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between the estimated and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the estimated FDR we can check the sensitivity and specificity of the corresponding estimated-FDR-based decision rules. For investigating the over-smoothing of the relative risk estimates, we compare box-plots of such estimates in high-risk areas (known by simulation), obtained both by our model and by the classical Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary targets. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of estimated-FDR-based decision rules is generally low, but specificity is high; in these scenarios, selection rules based on an estimated FDR of 0.05 or 0.10 can be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on an estimated FDR of 0.15 gain power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of a decision rule based on an estimated FDR of 0.05. In such scenarios, decision rules based on an estimated FDR of 0.05 or, even worse, of 0.10 cannot be suggested, because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classical Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
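A minimal sketch of the estimated-FDR-based selection rule described above, assuming the posterior null probabilities b_i are already available from the MCMC output (area indices and values are hypothetical):

```python
import numpy as np

def select_high_risk_areas(b, target_fdr=0.10):
    """Estimated-FDR-based selection rule: b[i] is the MCMC posterior probability of
    'absence of risk' in area i. Areas are added in order of increasing b; the running
    mean of the selected b's is the estimated FDR, and we keep the largest set whose
    estimated FDR does not exceed the target."""
    order = np.argsort(b)                                    # most convincing areas first
    running_fdr = np.cumsum(b[order]) / np.arange(1, len(b) + 1)
    k = int(np.sum(running_fdr <= target_fdr))               # size of the selected set
    return order[:k], (float(running_fdr[k - 1]) if k else 0.0)

# Hypothetical posterior null probabilities for 8 areas:
b = np.array([0.01, 0.03, 0.04, 0.20, 0.35, 0.60, 0.80, 0.95])
areas, est_fdr = select_high_risk_areas(b, target_fdr=0.10)
print(areas, est_fdr)   # -> first four areas selected, estimated FDR = 0.07
```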
Abstract:
Self-incompatibility (SI) systems have evolved in many flowering plants to prevent self-fertilization and thus promote outbreeding. Pear and apple, like many of the species belonging to the Rosaceae, exhibit RNase-mediated gametophytic self-incompatibility, a widespread system carried also by the Solanaceae and Plantaginaceae. For this reason, pear orchards must contain at least two different cultivars that pollinate each other; to guarantee an efficient cross-pollination, they should have overlapping flowering periods and must be genetically compatible. This compatibility is determined by the S-locus, containing at least two genes encoding a female (pistil) and a male (pollen) determinant. The female determinant in the Rosaceae, Solanaceae and Plantaginaceae system is a stylar glycoprotein with ribonuclease activity (S-RNase), which acts as a specific cytotoxin in incompatible pollen tubes by degrading cellular RNAs. Since its identification, the S-RNase gene has been intensively studied and the sequences of a large number of alleles are available in online databases. On the contrary, the male determinant has been identified only recently as a pollen-expressed protein containing an F-box motif, called S-Locus F-box (abbreviated SLF or SFB). Since F-box proteins are best known for their participation in the SCF (Skp1 - Cullin - F-box) E3 ubiquitin ligase enzymatic complex, which is involved in protein degradation through the 26S proteasome pathway, the male determinant is supposed to act by mediating the ubiquitination of the S-RNases, targeting them for degradation in compatible pollen tubes. Attempts to clone SLF/SFB genes in the Pyrinae produced no results until very recently; in apple, the use of genomic libraries allowed the detection of two F-box genes linked to each S haplotype, called SFBB (S-locus F-Box Brothers). In Japanese pear, three SFBB genes linked to each haplotype were cloned from pollen cDNA. The SFBB genes exhibit S-haplotype-specific sequence divergence and pollen-specific expression; their multiplicity is a feature whose interpretation is unclear: it has been hypothesized that all of them participate in the S-specific interaction with the RNase, but it is also possible that only one of them is involved in this function. Moreover, even if the S-locus male and female determinants are the only ones responsible for the specificity of the pollen-pistil recognition, many other factors are supposed to play a role in GSI; these are not linked to the S locus and act in an S-haplotype-independent manner. They can have a function in regulating the expression of the S determinants (group 1 factors), in modulating their activity (group 2), or in acting downstream, in the accomplishment of the reaction of acceptance or rejection of the pollen tube (group 3). This study was aimed at the elucidation of the molecular mechanism of GSI in European pear (Pyrus communis) as well as in the other Pyrinae; it was divided into two parts, the first focusing on the characterization of male determinants, and the second on factors external to the S locus. The search for S-locus F-box genes was primarily aimed at the identification of such genes in European pear, for which sequence data are still not available; moreover, it also allowed investigation of the S-locus structure in the Pyrinae.
The analysis was carried out on a pool of varieties of the three species Pyrus communis (European pear), Pyrus pyrifolia (Japanese pear), and Malus × domestica (apple); varieties carrying S haplotypes whose RNases are highly similar were chosen, in order to check whether the same level of similarity is also maintained between the male determinants. A total of 82 sequences was obtained, 47 of which represent the first S-locus F-box genes sequenced from European pear. The sequence data strongly support the hypothesis that the S-locus structure is conserved among the three species, and presumably among all the Pyrinae; at least five genes have homologs in the analysed S haplotypes, but the number of F-box genes surrounding the S-RNase could be even greater. The high level of sequence divergence and the similarity between alleles linked to highly conserved RNases suggest a shared ancestral polymorphism also for the F-box genes. The F-box genes identified in European pear were mapped on a segregating population of 91 individuals from the cross 'Abbé Fétel' × 'Max Red Bartlett'. All the genes were placed on linkage group 17, where the S locus has been placed in both the pear and apple maps, and they resulted strongly associated with the S-RNase gene. The linkage with the RNase was perfect for some of the F-box genes, while for others very rare single recombination events were identified. The second part of this study focused on the search for other genes involved in the SI response in pear; it was aimed, on the one hand, at the identification of genes differentially expressed in compatible and incompatible crosses, and, on the other, at the cloning and characterization of the transglutaminase (TGase) gene, whose role may be crucial in pollen rejection. For the identification of differentially expressed genes, controlled pollinations were carried out in four combinations (self-pollination, incompatible, half-compatible and fully compatible cross-pollination); expression profiles were compared through cDNA-AFLP. Twenty-eight fragments displaying an expression pattern related to compatibility or incompatibility were identified, cloned and sequenced; the sequence analysis allowed a putative annotation to be assigned to part of them. The identified genes are involved in very different cellular processes or in defense mechanisms, suggesting a very complex change in gene expression following pollen/pistil recognition. The pool of genes identified with this technique offers a good basis for further studies toward a better understanding of how the SI response is carried out. Among the factors involved in the SI response, moreover, an important role may be played by transglutaminase (TGase), an enzyme involved both in post-translational protein modification and in protein cross-linking. The TGase activity detected in pear styles was significantly higher when pollinated in incompatible combinations than in compatible ones, suggesting a role of this enzyme in the abnormal cytoskeletal reorganization observed during the pollen rejection reaction. The aim of this part of the work was thus to identify and clone the pear TGase gene; the PCR amplification of fragments of this gene was achieved using primers designed on the alignment between the Arabidopsis TGase gene sequence and several apple EST fragments; the full-length coding sequence of the pear TGase gene was then cloned from cDNA, providing a valuable tool for further studies of the in vitro and in vivo action of this enzyme.
Abstract:
The research performed during the PhD course was intended to assess innovative applications of near-infrared reflectance spectroscopy (NIR) in the beer production chain. The purpose is to measure by NIR the "malting quality" (MQ) parameter of barley, to monitor the malting process, and to know whether a certain type of barley is suitable for the production of beer and spirits. Moreover, NIR will be applied to monitor the brewing process. First of all, it was possible to check the quality of the raw materials, such as barley, maize and barley malt, using a rapid, non-destructive and reliable method with a low error of prediction. The most interesting result obtained at this level was that the repeatability of the NIR calibration models developed was comparable with that of the reference method. Moreover, for malt, new kinds of validation were used in order to estimate the real predictive power of the proposed calibration models and to understand the long-term effects. Furthermore, the precision of all the calibration models developed for malt evaluation was estimated and statistically compared with the reference methods, with good results. Then, new calibration models were developed for monitoring the malting process, measuring the moisture content and other malt quality parameters during germination. Moreover, it was possible to obtain by NIR an estimate of the "malting quality" (MQ) of barley and to predict whether its germination will be rapid and uniform and whether a certain type of barley is suitable for the production of beer and spirits. Finally, the NIR technique was applied to monitor the brewing process, using correlations between the NIR spectra of beer and analytical parameters, and to assess beer quality. These innovative results are potentially very useful for the actors involved in the beer production chain, especially the calibration models suitable for the control of the malting process and for the assessment of the "malting quality" of barley, which should be further investigated in future studies.
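The abstract does not state which chemometric algorithm underlies the calibration models; partial least squares (PLS) regression is the usual choice for NIR calibration, so the sketch below only illustrates a PLS calibration and its error of prediction (RMSEP) on placeholder spectra and reference moisture values:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Hypothetical data: NIR reflectance spectra (rows = samples, cols = wavelength channels)
# and reference moisture values from the official method; all values are placeholders.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(120, 700))
moisture = rng.uniform(4.0, 45.0, size=120)     # % moisture during germination

X_cal, X_val, y_cal, y_val = train_test_split(
    spectra, moisture, test_size=0.3, random_state=0
)

# Fit the calibration model and evaluate its error of prediction on the validation set.
pls = PLSRegression(n_components=10)
pls.fit(X_cal, y_cal)
rmsep = np.sqrt(np.mean((pls.predict(X_val).ravel() - y_val) ** 2))
print(f"RMSEP (error of prediction): {rmsep:.2f} % moisture")
```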
Abstract:
Carbon fluxes and allocation pattern, and their relationship with the main environmental and physiological parameters, were studied in an apple orchard for one year (2010). I combined three widely used methods: eddy covariance, soil respiration and biometric measurements, and I applied a measurement protocol allowing a cross-check between C fluxes estimated using different methods. I attributed NPP components to standing biomass increment, detritus cycle and lateral export. The influence of environmental and physiological parameters on NEE, GPP and Reco was analyzed with a multiple regression model approach. I found that both NEP and GPP of the apple orchard were of similar magnitude to those of forests growing in similar climate conditions, while large differences occurred in the allocation pattern and in the fate of produced biomass. Apple production accounted for 49% of annual NPP, organic material (leaves, fine root litter, pruned wood and early fruit drop) contributing to detritus cycle was 46%, and only 5% went to standing biomass increment. The carbon use efficiency (CUE), with an annual average of 0.68 ± 0.10, was higher than the previously suggested constant values of 0.47-0.50. Light and leaf area index had the strongest influence on both NEE and GPP. On a diurnal basis, NEE and GPP reached their peak approximately at noon, while they appeared to be limited by high values of VPD and air temperature in the afternoon. The proposed models can be used to explain and simulate current relations between carbon fluxes and environmental parameters at daily and yearly time scale. On average, the annual NEP balanced the carbon annually exported with the harvested apples. These data support the hypothesis of a minimal or null impact of the apple orchard ecosystem on net C emission to the atmosphere.
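For reference, the standard flux identities behind the figures quoted above, together with the reported partitioning of NPP (percentages from the abstract; R_eco and R_a denote ecosystem and autotrophic respiration):

\[
NEP = GPP - R_{eco}, \qquad NPP = GPP - R_{a}, \qquad CUE = \frac{NPP}{GPP} \approx 0.68,
\]
\[
NPP \;\approx\; \underbrace{0.49\,NPP}_{\text{apple harvest}} \;+\; \underbrace{0.46\,NPP}_{\text{detritus cycle}} \;+\; \underbrace{0.05\,NPP}_{\text{standing biomass increment}}.
\]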
Abstract:
The Thermodynamic Bethe Ansatz analysis is carried out for the extended-CP^N class of integrable 2-dimensional Non-Linear Sigma Models related to the low-energy limit of the AdS_4xCP^3 type IIA superstring theory. The principal aim of this program is to obtain further non-perturbative consistency checks of the S-matrix proposed to describe the scattering processes between the fundamental excitations of the theory, by analyzing the structure of the Renormalization Group flow. As a noteworthy byproduct we eventually obtain a novel class of TBA models which fits into the known classification but with several important differences. The TBA framework allows the evaluation of some exact quantities related to the conformal UV limit of the model: the effective central charge, the conformal dimension of the perturbing operator and the field content of the underlying CFT. The knowledge of these physical quantities has made it possible to conjecture a perturbed-CFT realization of the integrable models in terms of coset Kac-Moody CFTs. The set of numerical tools and programs developed ad hoc to solve the problem at hand is also discussed in some detail, with references to the code.
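For orientation, the generic form of the TBA equations for a theory with diagonal scattering, and the effective central charge they yield in the UV limit, are recalled below; the specific mass spectrum and kernels \(\phi_{ab}\) of the extended-CP^N models are not reproduced here:

\[
\varepsilon_a(\theta) \;=\; m_a R\cosh\theta \;-\; \sum_b \int_{-\infty}^{+\infty}\frac{d\theta'}{2\pi}\,\phi_{ab}(\theta-\theta')\,\ln\!\bigl(1+e^{-\varepsilon_b(\theta')}\bigr),
\]
\[
c_{\mathrm{eff}} \;=\; \lim_{R\to 0}\; \frac{3}{\pi^{2}}\sum_a \int_{-\infty}^{+\infty} d\theta\; m_a R\cosh\theta\,\ln\!\bigl(1+e^{-\varepsilon_a(\theta)}\bigr).
\]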
Abstract:
The research concerns the study of the early Byzantine building site, with particular reference to the marble working cycle, which is analyzed from the administrative, technical, social and craft points of view. The guiding element of the research is the marble workers' marks, signs applied by officials and workers during the production process. First, literary and epigraphic sources, including quarry and workshop marks on marble, are examined in order to reconstruct the early imperial system of quarry administration and of management of the marble flows, as well as the technical and craft procedures adopted for the production of artefacts. The comparison with the data available for late antiquity, with particular reference to the Proconnesian quarries, highlights a substantial continuity of the bureaucratic-administrative practice, while some changes can be observed in the productive and craft sphere. The functioning of the marble workshops is investigated in depth through the study of the marble workers' marks. These are single or multiple Greek characters, or monograms. A systematic survey of the marks from the pars Orientalis of the empire, retrieved from the literature or from direct inspections, has led to the collection of about 2360 attestations. A typological classification of these marks into quarry, storage and workshop marks is proposed. Quarry marks include control marks, destination/commission marks, and assembly/positioning marks. Particular attention is devoted to the workshop marks referable to a personal name, namely to the πρωτομαΐστωρ, the workshop master who supervised the work of his craftsmen and acted as guarantor of the product delivered to the client. Through the comparative study of the marks found in Constantinople and in other contexts, the operating practice adopted by the workshops in the manufacturing processes is brought to light, also addressing the problem of itinerant workers. Finally, written sources of various kinds are analyzed in order to place the marble phenomenon in a broader socio-economic context, with particular reference to the professional and craft figures involved in the building sites and to the problem of patronage.
Abstract:
This work first presents a study of the national and international laws in the fields of safety, security and safeguards. The international treaties and the recommendations issued by the IAEA, as well as the national regulations in force in France, the United States and Italy, are analyzed, and a comparison among them is presented. Given the interest of the Japan Atomic Energy Agency in the aspects of criminal and monetary penalties, the Japanese case is also analyzed. The main part of this work was carried out at the JAEA in the field of proliferation resistance (PR) and physical protection (PP) of a GEN IV sodium fast reactor. For this purpose the design of the system is completed and the PR&PP methodology is applied to obtain data usable by designers for the improvement of the system itself. Due to the presence of sensitive data, not all the details can be disclosed. The reactor site of a hypothetical commercial sodium-cooled fast neutron nuclear reactor system (SFR) is used as the target NES for the application of the methodology. The methodology is applied to all the PR and PP scenarios: diversion, misuse and breakout; theft and sabotage. The methodology is first applied to the SFR to check whether this system meets the PR and PP targets described in the GIF goals; secondly, a comparison between the SFR and an LWR is performed to evaluate whether and how it would be possible to improve the PR&PP of the SFR. The comparison is implemented according to the example development target: achieving PR&PP similar or superior to that of domestic and international ALWRs. Three main actions were performed: implementing the evaluation methodology, characterizing the PR&PP of the nuclear energy system, and identifying recommendations for system designers through the comparison.