AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Primary stability of stems in cementless total hip replacement is recognized to play a critical role in long-term survival, and thus in the success of the overall surgical procedure. Several studies in the literature have addressed this important issue, and different approaches have been explored to evaluate the extent of stability achieved during surgery. Some of these are in-vitro protocols, while other tools are conceived for the post-operative assessment of prosthesis migration relative to the host bone. The in-vitro protocols reported in the literature are not exportable to the operating room, although most of them show good overall accuracy. RSA, EBRA and radiographic analysis are currently used to check the healing process of the implanted femur at different follow-ups, evaluating implant migration and the occurrence of bone resorption or osteolysis at the interface. These methods are important for follow-up and clinical studies, but do not assist the surgeon during implantation. At the time I started my Ph.D. study in Bioengineering, only one study had been undertaken to measure stability intra-operatively, and no follow-up describing further results obtained with that device had been presented. In this scenario, it was believed that an instrument able to measure intra-operatively the stability achieved by an implanted stem would consistently improve the rate of success. Such an instrument should be accurate and should give the surgeon, during implantation, a quick answer concerning the stability of the implanted stem. With this aim, an intra-operative device was designed, developed and validated. The device is meant to help the surgeon decide how much to press-fit the implant.
It essentially consists of a torsional load cell, able to measure the torque applied by the surgeon to test primary stability; an angular sensor that measures the relative angular displacement between stem and femur; a rigid connector that enables attaching the device to the stem; and the electronics for signal conditioning. The device was successfully validated in-vitro, showing good overall accuracy in discriminating stable from unstable implants. Repeatability tests showed that the device was reliable. A calibration procedure was then performed to convert the angular readout into a linear displacement measurement, which is clinically relevant information that the surgeon can read in real time. The second study reported in my thesis concerns the possibility of obtaining predictive information on the primary stability of a cementless stem by measuring the micromotion of the last rasp used by the surgeon to prepare the femoral canal. This information would be very useful to the surgeon, who could check prior to implantation whether the planned stem size can achieve a sufficient degree of primary stability under optimal press-fitting conditions. An intra-operative tool was developed to this aim. It was derived from a previously validated device, adapted for this specific purpose. The device measures the relative micromotion between the femur and the rasp when a torsional load is applied. An in-vitro protocol was developed and validated on both composite and cadaveric specimens. High correlation was observed between one of the parameters extracted from the acquisitions made on the rasp and the stability of the corresponding stem, when optimally press-fitted by the surgeon.
After tuning the protocol in-vitro in a closed loop, verification was made on two hip patients, confirming the in-vitro results and highlighting the independence of the rasp indicator from the bone quality, anatomy and preservation conditions of the tested specimens, and from the sharpness of the rasp blades. The third study is related to an approach that has recently been explored in the orthopaedic community, but that was already in use in other scientific fields: vibration analysis. This method has been successfully used to investigate the mechanical properties of bone, and its application to evaluate the fixation of dental implants has been explored, even if its validity in this field is still under discussion. Several studies have recently been published on the stability assessment of hip implants by vibration analysis. The aim of the reported study was to develop and validate a prototype device based on vibration analysis to measure the extent of implant stability intra-operatively. The expected advantages of a vibration-based device are easier clinical use, smaller dimensions and lower overall cost with respect to devices based on direct micromotion measurement. The prototype consists of a piezoelectric exciter connected to the stem and an accelerometer attached to the femur. Preliminary tests were performed on four composite femurs implanted with a conventional stem. The results showed that the input signal was repeatable and the output could be recorded accurately. The fourth study concerns the application of the vibration-based device to several cases, considering both composite and cadaveric specimens. Different degrees of bone quality were tested, as well as different femur anatomies, and several levels of press-fitting were considered.
The aim of the study was to verify whether it is possible to discriminate between stable and quasi-stable implants, because this is the most challenging detection for the surgeon in the operating room. Moreover, it was possible to validate the measurement protocol by comparing the acquisitions made with the vibration-based tool to two reference measurements made by means of a validated technique and a validated device. The results highlighted that the parameter most sensitive to stability is the shift in resonance frequency of the stem-bone system, which showed high correlation with residual micromotion on all the tested specimens. Thus, it seems possible to discriminate between many levels of stability, from the grossly loosened implant, through the quasi-stable implant, to the definitely stable one. Finally, an additional study was performed on a different type of hip prosthesis, which has recently gained great interest and become fairly popular in some countries: the hip resurfacing prosthesis. The study was motivated by the following rationale: although bone-prosthesis micromotion is known to influence the stability of total hip replacement, its effect on the outcome of resurfacing implants has so far been investigated only clinically, not in-vitro. The work was thus aimed at verifying whether one of the intra-operative devices just validated could be applied to the measurement of micromotion in resurfacing implants. To do that, a preliminary study was performed to evaluate the extent of migration and the typical elastic movement of an epiphyseal prosthesis. An in-vitro procedure was developed to measure the micromotion of resurfacing implants. This included a set of in-vitro loading scenarios covering the range of directions spanned by the hip resultant force in the most typical motor tasks.
The applicability of the protocol was assessed on two different commercial designs and on different head sizes. The repeatability and reproducibility were excellent (comparable to the best previously published protocols for standard cemented hip stems). Results showed that the procedure is accurate enough to detect micromotions of the order of a few microns. The proposed protocol was thus completely validated. The results of the study demonstrated that the application of an intra-operative device to resurfacing implants is not necessary, as the typical micromotion associated with this type of prosthesis can be considered negligible and thus not critical for the stabilization process. In conclusion, four intra-operative tools were developed and fully validated during these three years of research activity. Clinical use was tested for one of the devices, which could be used right now by the surgeon to evaluate the degree of stability achieved through the press-fitting procedure. The tool adapted for use on the rasp was a good predictor of the stability of the stem, and could thus help the surgeon check whether the pre-operative planning was correct. The vibration-based device showed great accuracy and small dimensions, and thus has great potential to become an instrument appreciated by surgeons; it still needs clinical evaluation, and must be industrialized as well. The in-vitro tool worked very well, and can be applied to assess resurfacing implants pre-clinically.
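The core idea behind the vibration-based measurement (a stiffer bone-implant interface shifts the resonance of the stem-bone system upward) can be illustrated with a minimal sketch. The sampling rate, frequencies and pure-tone signals below are illustrative assumptions, not data from the thesis:

```python
import numpy as np

def resonance_peak(signal, fs):
    """Return the frequency (Hz) of the dominant spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Synthetic accelerometer traces: a well-fixed stem-bone system is
# stiffer, so its resonance sits at a higher frequency (values are
# illustrative placeholders).
fs = 10_000                              # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
loose = np.sin(2 * np.pi * 1200 * t)     # grossly loosened implant
stable = np.sin(2 * np.pi * 1800 * t)    # press-fitted implant

shift = resonance_peak(stable, fs) - resonance_peak(loose, fs)
print(f"resonance shift: {shift:.0f} Hz")  # larger shift -> stiffer fixation
```

In practice the excitation is broadband and the peak is tracked across press-fitting steps rather than compared between two idealized tones.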
Abstract:
The main problem of cone-beam computed tomography (CT) systems for industrial applications employing 450 kV X-ray tubes is the high amount of scattered radiation added to the primary radiation (signal). This stray radiation leads to a significant degradation of image quality. A better understanding of the scattering, and methods to reduce its effects, are therefore necessary to improve image quality. Several studies have been carried out in the medical field at lower energies, whereas studies in industrial CT, especially for energies up to 450 kV, are lacking. Moreover, the studies reported in the literature do not consider the scattered radiation generated by the CT system structure and the walls of the X-ray room (environmental scatter). In order to investigate the scattering in CT projections, a GEANT4-based Monte Carlo (MC) model was developed. The model, which has been validated against experimental data, has enabled the calculation of the scattering including the environmental scatter, the optimization of an anti-scatter grid suitable for the CT system, and the optimization of the hardware components of the CT system. The investigation of multiple scattering in the CT projections showed that its contribution can be 2.3 times that of the primary radiation for certain objects. The results on environmental scatter showed that it is the major component of the scattering for aluminum box objects of front size 70 x 70 mm2, and that it strongly depends on the thickness of the object and therefore on the projection. For that reason, its correction is one of the key factors for achieving high-quality images. The anti-scatter grid optimized by means of the developed MC model was found to reduce the scatter-to-primary ratio in the reconstructed images by 20%. The object and environmental scatter calculated by means of the simulation were used to improve the scatter correction algorithm, which could then be patented by Empa.
The results showed that the cupping effect in the corrected image is strongly reduced. The developed CT simulation is a powerful tool to optimize the design of the CT system and to evaluate the contribution of the scattered radiation to the image. Moreover, it has provided the basis for a new scatter correction approach with which it has been possible to achieve images with the same spatial resolution as state-of-the-art well-collimated fan-beam CT, with a factor-of-10 gain in reconstruction time. This result has a high economic impact in non-destructive testing and evaluation, and in reverse engineering.
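The role of a scatter estimate in removing the cupping bias can be sketched on a toy one-dimensional projection. The intensities, attenuation coefficient and constant scatter background below are assumptions for illustration only, not values from the MC model:

```python
import numpy as np

# Toy projection through a slab: primary follows Beer-Lambert, scatter is
# modeled as a smooth additive background (all values are placeholders).
I0 = 1000.0                                        # unattenuated intensity
mu = 0.015                                         # attenuation, mm^-1
thickness = np.linspace(60.0, 70.0, 5)             # path lengths, mm
primary = I0 * np.exp(-mu * thickness)
scatter = 120.0 * np.ones_like(primary)            # e.g. environmental scatter

spr = scatter / primary                            # scatter-to-primary ratio
measured = primary + scatter

# Subtracting the (simulated) scatter estimate before the log transform
# removes the bias that produces cupping in the reconstruction.
biased = -np.log(measured / I0)
corrected = -np.log((measured - scatter) / I0)
print(spr.max(), np.allclose(corrected, mu * thickness))
```

The biased line integrals systematically underestimate attenuation (the cupping artifact), while the scatter-subtracted ones recover the true values exactly in this noiseless toy case.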
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements of these waves to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at the global and regional scales are a fundamental tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. Global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, which defines the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines, often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution to overcome the shortcomings of Fourier analysis. The fundamental idea behind this analysis is to study a signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes containing multi-scale features, discontinuities and sharp spikes.
Wavelets are essentially used in two ways when applied to geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel to extract information about the process. These two types of application are the object of this work. First we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with some synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
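As a minimal illustration of the multi-scale representation discussed above, the following sketch implements one level of the Haar wavelet transform, the simplest wavelet basis; the thesis may of course use other wavelet families, and the test signal is invented:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform.

    Splits an even-length signal into a coarse approximation and a
    detail part: the simplest example of splitting a signal by scale.
    """
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    """Exact inverse of haar_dwt (the basis is orthonormal)."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

signal = np.array([4.0, 4.0, 4.0, 4.0, 1.0, 9.0, 1.0, 9.0])
a, d = haar_dwt(signal)
print(d)  # detail coefficients vanish where the signal is smooth
print(np.allclose(haar_idwt(a, d), signal))  # perfect reconstruction
```

The detail coefficients are zero on the smooth half of the signal and large where it oscillates, which is precisely the locality-in-scale property that makes wavelets attractive for tomographic parameterization.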
Abstract:
CHAPTER 1: FLUID-VISCOUS DAMPERS In this chapter fluid-viscous dampers are introduced. The first section focuses on the technical characteristics of these devices, their mechanical behavior and the latest evolution of the technology they employ. In the second section we report the definitions and guidelines for the design of these devices included in some international codes. In the third section the results of experimental tests carried out by several authors on the response of these devices to external forces are discussed; for this purpose we report some technical data sheets usually provided with the devices available on the international market, and we also show some analytical models proposed by various authors which are able to describe efficiently the physical behavior of fluid-viscous dampers. In the last section we present some cases of application of these devices to existing and newly built structures, as well as some cases in which these devices have proved useful for purposes other than the reduction of seismic actions on structures. CHAPTER 2: DESIGN METHODS PROPOSED IN LITERATURE In this chapter the most widespread design methods proposed in the literature for structures equipped with fluid-viscous dampers are introduced. In the first part the response of single-degree-of-freedom (sdf) systems to a harmonic external force is studied; in the last part the response to a random external force is discussed. In the first section the equations of motion of an elastic-linear sdf system equipped with a non-linear fluid-viscous damper undergoing a harmonic force are introduced. This differential problem is analytically quite complex and cannot be solved in closed form. Therefore some authors have proposed approximate solution methods. The most widespread methods are based on equivalence principles between a non-linear device and an equivalent linear one.
Operating in this way it is possible to define an equivalent damping ratio, and the problem becomes linear; the solution of the equivalent problem is well known. In the following section two linearization techniques proposed in the literature are described: the first is based on the equivalence of the energy dissipated by the two devices, the second on the equivalence of power consumption. We then compare these two techniques by studying the response of an sdf system undergoing a harmonic force. By introducing the equivalent damping ratio we can write the equation of motion of the non-linear differential problem in an implicit form, dividing as usual by the mass of the system. In this way we reduce the number of variables by introducing the natural frequency of the system. The equation of motion written in this form has two important properties: the response depends linearly on the amplitude of the external force, and it depends only on the ratio between the frequency of the external harmonic force and the natural frequency of the system, not on their individual values. In the last section all these considerations are extended to the case of a random external force. CHAPTER 3: DESIGN METHOD PROPOSED In this chapter the theoretical basis of the proposed design method is introduced. The need for a new design method for structures equipped with fluid-viscous dampers arises from the observation that the methods reported in the literature are always iterative, because the response affects some parameters included in the equation of motion (such as the equivalent damping ratio). In the first section the dimensionless parameter ε is introduced. This parameter has been obtained from the definition of the equivalent damping ratio. The implicit form of the equation of motion is then written by introducing the parameter ε instead of the equivalent damping ratio.
This new implicit equation of motion does not contain any term affected by the response, so that once ε is known the response can be evaluated directly. In the second section it is discussed how the parameter ε affects some characteristics of the response: drift, velocity and base shear. All the results described up to this point have been obtained while retaining the non-linearity of the damper behavior. In order to obtain a linear formulation of the problem, solvable with the well-known methods of structural dynamics, as done before for the iterative methods by introducing the equivalent damping ratio, it is shown how the equivalent damping ratio can be evaluated from the value of ε. Operating in this way, once the parameter ε is known, it is quite easy to estimate the equivalent damping ratio and proceed with a classic linear analysis. In the last section it is shown how the parameter ε can be taken as a reference for evaluating the convenience of using non-linear rather than linear dampers, on the basis of the type of external force and the characteristics of the system. CHAPTER 4: MULTI-DEGREE-OF-FREEDOM SYSTEMS In this chapter the design methods for an elastic-linear mdf system equipped with non-linear fluid-viscous dampers are introduced. It has already been shown that, in sdf systems, the response of the structure can be evaluated through the estimation of the equivalent damping ratio (ξsd), assuming elastic-linear behavior of the structure. We would like to mention that some adjustment coefficients, to be applied to the equivalent damping ratio in order to account for the actual (non-linear) behavior of the structure, have already been proposed in the literature; such coefficients are usually expressed in terms of ductility, but their treatment is beyond the aims of this thesis and we do not discuss them further.
The method usually proposed in the literature is based on energy equivalence: even though this procedure has a solid theoretical basis, it must necessarily include an iterative process, because the expression of the equivalent damping ratio contains a term of the response. This procedure was introduced primarily by Ramirez, Constantinou et al. in 2000; it is reported in the first section and is referred to as the "Iterative Method". Following the guidelines for sdf systems reported in the previous chapters, a procedure for the assessment of the parameter ε in the case of mdf systems is introduced. Operating in this way, the evaluation of the equivalent damping ratio (ξsd) can be done directly, without iterative processes. This procedure, referred to as the "Direct Method", is reported in the second section. In the third section the two methods are compared by studying four cases of two moment-resisting steel frames subjected to real accelerograms: the response of the system calculated with the two methods is compared with the numerical response obtained from the software SAP2000-NL (a CSI product). In the last section a procedure is introduced to create spectra of the equivalent damping ratio, as a function of the parameter ε and of the natural period of the system for a fixed value of the exponent α, starting from the elastic response spectra provided by any international code.
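The energy-equivalence linearization behind the Iterative Method can be sketched as follows. The expression for the energy dissipated per harmonic cycle by a damper F = C·sgn(v)·|v|^α follows the standard Ramirez-Constantinou formulation; the numerical values are illustrative. Note that the result depends on the response amplitude x0, which is exactly what makes the classical procedure iterative:

```python
import math

def equivalent_damping_ratio(C, alpha, m, omega_n, x0):
    """Equivalent viscous damping ratio of a nonlinear damper
    F = C * sign(v) * |v|**alpha, obtained by equating the energy
    dissipated per harmonic cycle of amplitude x0 at the natural
    frequency omega_n to that of a linear damper.
    """
    lam = (2.0 ** (2.0 + alpha)
           * math.gamma(1.0 + alpha / 2.0) ** 2
           / math.gamma(2.0 + alpha))
    k = m * omega_n ** 2
    E_damper = lam * C * omega_n ** alpha * x0 ** (1.0 + alpha)   # per cycle
    E_elastic = 0.5 * k * x0 ** 2                                 # strain energy
    return E_damper / (4.0 * math.pi * E_elastic)

# Sanity check: for alpha = 1 the formula collapses to the classical
# linear result xi = C / (2 m omega_n), independent of the amplitude x0.
m, omega_n, C = 1000.0, 2 * math.pi, 500.0
xi_nl = equivalent_damping_ratio(C, 1.0, m, omega_n, 0.05)
print(xi_nl, C / (2 * m * omega_n))
```

For α < 1 the ratio depends on x0, so the equivalent damping ratio must be updated as the estimated response changes, which is the iteration that the ε-based Direct Method of this thesis avoids.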
Abstract:
The first part of my thesis presents an overview of the different approaches used in the past two decades in the attempt to forecast epileptic seizures on the basis of intracranial and scalp EEG. Past research revealed some value of linear and nonlinear algorithms in detecting EEG features that change over the different phases of the epileptic cycle. However, their exact value for seizure prediction, in terms of sensitivity and specificity, is still discussed and has to be evaluated. In particular, the monitored EEG features may fluctuate with the vigilance state and lead to false alarms. Recently, such a dependency on vigilance states has been reported for some seizure prediction methods, suggesting reduced reliability. An additional factor limiting the application and validation of most seizure-prediction techniques is their computational load. For the first time, the reliability of permutation entropy (PE) in seizure prediction was verified on scalp EEG data, while controlling for its dependency on different vigilance states. PE was recently introduced as an extremely fast and robust complexity measure for chaotic time series, and is thus suitable for online application even in portable systems. The capability of PE to distinguish between preictal and interictal states was demonstrated using Receiver Operating Characteristic (ROC) analysis. Correlation analysis was used to assess the dependency of PE on vigilance states. Scalp EEG data from two right temporal lobe epilepsy (RTLE) patients and from one patient with right frontal lobe epilepsy were analysed. The last patient was included only in the correlation analysis, since no datasets including seizures were available for him. The ROC analysis showed good separability of interictal and preictal phases for both RTLE patients, suggesting that PE could be sensitive to EEG modifications, not visible on visual inspection, that might occur well in advance of the EEG and clinical onset of seizures.
However, the simultaneous assessment of changes in vigilance showed that: a) all seizures occurred in association with a transition of vigilance states; b) PE was sensitive in detecting different vigilance states, independently of seizure occurrence. Due to the limitations of the datasets, these results cannot rule out the capability of PE to detect preictal states. However, the good separability between preictal and interictal phases might depend exclusively on the coincidence of epileptic seizure onset with a transition from a state of low vigilance to a state of increased vigilance. The dependency of PE on vigilance state is an original finding, not previously reported in the literature, suggesting the possibility of classifying vigilance states by means of PE in an automatic and objective way. The second part of my thesis describes a novel behavioral task based on motor imagery skills, first introduced by Bruzzo et al. (2007), designed to study the mental simulation of biological and non-biological movement in paranoid schizophrenics (PS). Immediately after the presentation of a real movement, participants had to imagine or re-enact the very same movement. By key release and key press, respectively, participants indicated when they started and ended the mental simulation or the re-enactment, making it feasible to measure the duration of the simulated or re-enacted movements. The proportional error between the duration of the re-enacted/simulated movement and the template movement was compared between conditions, as well as between PS and healthy subjects. Results revealed a double dissociation between the mechanisms of mental simulation involved in biological and non-biological movement simulation: PS made large errors when simulating biological movements, while being more accurate than healthy subjects when simulating non-biological movements.
Healthy subjects showed the opposite pattern, making errors during the simulation of non-biological movements but being most accurate during the simulation of biological movements. However, the good timing precision during re-enactment of the movements in all conditions and in both groups of participants suggests that perception, memory and attention, as well as motor control processes, were not affected. Based upon a long history of literature reporting the existence of psychotic episodes in epileptic patients, a longitudinal study using a slightly modified behavioral paradigm was carried out with two RTLE patients, one patient with idiopathic generalized epilepsy and one patient with extratemporal lobe epilepsy. The results provide strong evidence for the possibility of predicting upcoming seizures in RTLE patients behaviorally. In the last part of the thesis, a behavioural strategy based on neurobiofeedback training was validated, aimed at voluntarily controlling seizures and reducing their frequency. Three epileptic patients were included in this study. The biofeedback was based on the monitoring of slow cortical potentials (SCPs) extracted online from scalp EEG. Patients were trained to produce positive shifts of SCPs. After a training phase, patients were monitored for 6 months in order to validate the ability of the learned strategy to reduce seizure frequency. Two of the three refractory epileptic patients recruited for this study showed improvements in self-management and a reduction of ictal episodes, even six months after the last training session.
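Permutation entropy itself is cheap to compute, which is what makes it attractive for online and portable use. A minimal sketch following the Bandt-Pompe definition (the order and delay values are typical defaults, not necessarily those of the study):

```python
import math
import random
from itertools import permutations

def permutation_entropy(series, order=3, delay=1):
    """Normalized permutation entropy (Bandt & Pompe) of a 1D series.

    Counts the ordinal patterns of `order` samples spaced by `delay`
    and returns their Shannon entropy scaled to [0, 1].
    """
    patterns = {p: 0 for p in permutations(range(order))}
    n = len(series) - (order - 1) * delay
    for i in range(n):
        window = series[i:i + order * delay:delay]
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        patterns[pattern] += 1
    probs = [c / n for c in patterns.values() if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))

random.seed(0)
noise = [random.random() for _ in range(5000)]
pe_trend = permutation_entropy(list(range(100)))  # fully ordered signal
pe_noise = permutation_entropy(noise)             # white noise
print(pe_trend, round(pe_noise, 3))
```

A monotone signal contains a single ordinal pattern and yields entropy 0, while white noise visits all patterns roughly equally and yields a value close to 1; EEG complexity changes are read off between these extremes.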
Abstract:
This work focuses on magnetohydrodynamic (MHD) mixed convection flow of electrically conducting fluids enclosed in simple 1D and 2D geometries in the steady periodic regime. In particular, Chapter one gives a short overview of the history of MHD, with reference to papers available in the literature, and lists some of its most common technological applications, whereas Chapter two deals with the analytical formulation of the MHD problem, starting from the fluid dynamic and energy equations and adding the effects of an externally imposed magnetic field using Ohm's law and the definition of the Lorentz force. Moreover, a description of the various kinds of boundary conditions is given, with particular emphasis on their practical realization. Chapters three, four and five describe the solution procedure for mixed convective flows with MHD effects. In all cases a uniform parallel magnetic field is assumed to be present in the whole fluid domain, transverse with respect to the velocity field. The steady periodic regime is analyzed, where the periodicity is induced by wall temperature boundary conditions varying in time with a sinusoidal law. Local balance equations of momentum, energy and charge are solved analytically and numerically, using as parameters either geometrical ratios or material properties. In particular, in Chapter three the solution method for mixed convective flow in a 1D vertical parallel channel with MHD effects is illustrated. The influence of a transverse magnetic field is studied in the steady periodic regime induced by an oscillating wall temperature. Analytical and numerical solutions are provided in terms of velocity and temperature profiles, wall friction factors and average heat fluxes for several values of the governing parameters. In Chapter four the 2D problem of mixed convective flow in a vertical round pipe with MHD effects is analyzed.
Again, a transverse magnetic field influences the steady periodic regime induced by the oscillating temperature of the wall. A numerical solution obtained using a finite element approach is presented, and velocity and temperature profiles, wall friction factors and average heat fluxes are derived for several values of the Hartmann and Prandtl numbers. In Chapter five the 2D problem of mixed convective flow in a vertical rectangular duct with MHD effects is discussed. As in the previous chapters, a transverse magnetic field influences the steady periodic regime induced by the oscillating temperature of the four walls. The numerical solution obtained using a finite element approach is presented, and a collection of results, including velocity and temperature profiles, wall friction factors and average heat fluxes, is provided for several values of, among other parameters, the duct aspect ratio. A comparison with analytical solutions is also provided, as proof of the validity of the numerical method. Chapter six concludes with some reflections on the MHD effects on mixed convection flow, in agreement with the experience and results gathered in the analyses presented in the previous chapters. In the appendices, special auxiliary functions and FORTRAN program listings are reported, to support the formulations used in the solution chapters.
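A classical closed-form example of how a transverse magnetic field reshapes a velocity profile is the Hartmann solution for fully developed flow in a plane channel. The sketch below is that textbook limit case, shown only to illustrate the role of the Hartmann number; it is not the steady-periodic mixed-convection solution of the thesis:

```python
import math

def hartmann_profile(y, Ha):
    """Velocity (normalized to the centerline value) of fully developed
    flow in a plane channel -1 <= y <= 1 under a transverse magnetic
    field of Hartmann number Ha. The profile flattens as Ha grows."""
    if Ha == 0.0:
        return 1.0 - y * y  # Poiseuille (hydrodynamic) limit
    return (math.cosh(Ha) - math.cosh(Ha * y)) / (math.cosh(Ha) - 1.0)

# Compare profiles at a few Hartmann numbers: the core velocity becomes
# nearly uniform and the boundary layers thin as Ha increases.
for Ha in (0.0, 2.0, 10.0):
    print(Ha, [round(hartmann_profile(y, Ha), 3) for y in (0.0, 0.5, 0.9)])
```

The Lorentz force opposes velocity differences across field lines, which is why the Ha = 10 profile is almost flat in the core while the Ha = 0 case remains parabolic.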
Abstract:
The study of the spatial and temporal distribution of planktonic foraminiferal assemblages, sampled in areas with different hydrographic regimes, has shown that many species can be diagnostic of the presence of different surface and subsurface water masses and of different nutrient regimes in oceanic waters. Part of this thesis is based on the study of the planktonic foraminiferal assemblages currently living in the Pacific sector of the Southern Ocean (Ross Sea and Polar Front Zone) and in the Mediterranean Sea (southern Tyrrhenian Sea). The aim of this study is to understand the factors (temperature, salinity, nutrients, etc.) that determine the present-day distribution of the different species, in order to assess their value as proxies useful for reconstructing the palaeoclimatic and palaeoceanographic scenarios that succeeded one another in these areas. The results document that the distribution of the different species, the number of individuals and the morphological variations of some taxa are correlated with the chemical-physical characteristics of the water column and with the availability of nutrients and chlorophyll. The second part of the thesis involved the analysis of stable oxygen isotopes and of the Mg/Ca ratio in shells of N. pachyderma (sin) collected from microzooplankton tows (to calibrate the palaeotemperature equation), from a box core and from a sediment core from the Polar Front Zone (southern Pacific Ocean), in order to reconstruct temperature variations over the last 13 ka and during the Mid-Pleistocene Revolution. The temperatures derived from the stable oxygen isotope values are consistent with the present-day temperatures documented in this area, and the temperature trend is comparable to those reported in the literature for climatic events such as the Younger Dryas and the mid-Holocene Optimum.
The Mg/Ca ratios measured with two different analytical techniques (laser ablation and solution analysis) were consistently much higher than the values reported in the literature for the same species. Laser ablation appears deficient in terms of sample cleaning, and this study shows that the two techniques are not comparable and cannot be used interchangeably on the same sample. As regards solution analysis, the cleaning protocol for the treatment of Antarctic samples was improved, yielding reliable values useful for palaeotemperature reconstructions. Nevertheless, the hypothesis remains plausible that in peculiar environments such as this one, with very low salinity and temperature, the incorporation of Mg into the shell is affected by these particular conditions and therefore does not follow the exponential relationship with temperature widely demonstrated at other latitudes.
Resumo:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMRs) collected over many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from the collected disease counts and the expected counts calculated from reference-population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature within both the classical and the Bayesian paradigms. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, without however abandoning the preliminary-study perspective that an analysis of SMR indicators is asked for. We implement control of the False Discovery Rate (FDR), a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. 
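The small-area instability of the SMR estimator described above can be illustrated in a few lines of code. This is a minimal sketch, not part of the thesis; the counts are invented for illustration:

```python
import math

def smr(observed, expected):
    """Standardized Mortality Ratio for one area: under the model
    O ~ Poisson(theta * E), the MLE of the relative risk theta is
    O / E, with approximate standard error sqrt(O) / E."""
    estimate = observed / expected
    std_err = math.sqrt(observed) / expected
    return estimate, std_err

# Two areas with the same SMR but very different expected counts:
print(smr(3, 2.0))      # small area: SMR 1.5, large standard error
print(smr(150, 100.0))  # large area: SMR 1.5, much smaller standard error
```

The identical point estimate with a far larger standard error in the small area is precisely what makes naive screening of SMR maps unreliable.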
The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a fully Bayesian hierarchical model for FDR estimation in testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (FDR-hat) can be calculated for any set of b_i relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i themselves. FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that FDR-hat does not exceed a prefixed value; we call these FDR-hat-based decision (or selection) rules. The sensitivity and specificity of such rules depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model has the interesting feature of still providing an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). 
A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios, whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true, and the risk level in the remaining areas. In summarizing the simulation results we always consider FDR estimation in sets constituted by all the b_i lower than a threshold t. We show graphs of FDR-hat and of the true FDR (known by simulation) plotted against the threshold t to assess FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between FDR-hat and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat-based decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat-based decision rules is generally low, but the specificity is high. In these scenarios the use of a selection rule based on FDR-hat = 0.05 or FDR-hat = 0.10 can be suggested. 
In cases where the number of true alternative hypotheses (the number of truly high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on FDR-hat = 0.15 gain power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of a decision rule based on FDR-hat = 0.05. In such scenarios, decision rules based on FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 cannot be suggested, because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
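As a minimal illustration (not the thesis implementation), the estimated-FDR selection rule can be sketched as follows, assuming the posterior null probabilities are already available from an MCMC fit of the hierarchical model; the numbers below are invented:

```python
def fdr_select(b, target_fdr):
    """Select as many areas as possible such that the estimated FDR,
    i.e. the average of the posterior null probabilities b[i] over the
    selected set, does not exceed target_fdr. Returns area indices,
    most suspect (lowest b) first."""
    order = sorted(range(len(b)), key=lambda i: b[i])
    selected, running_sum = [], 0.0
    for k, i in enumerate(order, start=1):
        running_sum += b[i]
        if running_sum / k <= target_fdr:
            selected = order[:k]   # largest admissible prefix so far
    return selected

# Posterior null probabilities for six hypothetical areas
b = [0.01, 0.03, 0.40, 0.02, 0.90, 0.08]
high_risk = fdr_select(b, target_fdr=0.05)  # -> areas 0, 3, 1, 5
```

Averaging the b_i over the rejected set is exactly what makes the rule interpretable: the threshold is the expected proportion of false discoveries among the areas declared at high risk.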
Resumo:
Leber's hereditary optic neuropathy (LHON) is a mitochondrial disease characterized by a rapid loss of central vision and optic atrophy, due to the selective degeneration of retinal ganglion cells. The age of onset is around 20, the degenerative process is fast, and the second eye usually becomes affected within weeks or months. Even though this pathology is well known and well characterized, open questions remain about its pathophysiology, such as the male prevalence, the incomplete penetrance and the tissue selectivity. This maternally inherited disease is caused by mutations in mitochondrially encoded genes of NADH:ubiquinone oxidoreductase (complex I) of the respiratory chain. Ninety percent of LHON cases are caused by one of the three common mitochondrial DNA mutations (11778/ND4, 14484/ND6 and 3460/ND1); the remaining 10% are caused by rare pathogenic mutations, reported in the literature in one or a few families. Moreover, there is also a small subset of patients reported with new putative pathogenic nucleotide changes, which await confirmation. We here clarify some molecular aspects of LHON, mainly the incomplete penetrance and the role of rare mtDNA mutations or variants in LHON expression, and attempt a possible therapeutic approach using the cybrid cell model. We generated novel structural models for the mitochondrially encoded complex I subunits, and a conservation analysis and pathogenicity prediction were carried out for the reported LHON mutations. This in-silico approach allowed us to locate LHON pathogenic mutations in defined and conserved protein domains, and it can be a useful tool in the analysis of novel mtDNA variants with an unclear pathogenic/functional role. Four rare LHON pathogenic mutations were identified, confirming that the ND1 and ND6 genes are mutational hot spots for LHON. All mutations had been previously described at least once, and we validated their pathogenic role, suggesting the need for their screening in LHON diagnostic protocols. 
Two novel mtDNA variants with a possible pathogenic role were also identified in two independent branches of a large pedigree. Functional studies are necessary to define their contribution to LHON in this family. It was also demonstrated that the combination of rare polymorphic mtDNA variants is relevant in determining the maternal recurrence of myoclonus in unrelated LHON pedigrees. Thus, we suggest that particular mtDNA backgrounds and/or the presence of specific rare mutations may increase the pathogenic potential of the primary LHON mutations, thereby giving rise to the extraocular clinical features characteristic of the LHON "plus" phenotype. We identified the first molecular parameter that clearly discriminates LHON-affected individuals from asymptomatic carriers: the mtDNA copy number. This provides a valuable starting point for future investigations on variable penetrance in LHON. However, the increased mtDNA content in LHON individuals was not correlated with the functional polymorphism G1444A of PGC-1 alpha, the master regulator of mitochondrial biogenesis, but may be due to the expression of genes involved in this signalling pathway, such as PGC-1 alpha/beta and Tfam. Future studies will be necessary to identify the biochemical effects of the rare pathogenic mutations and to validate the novel candidate mutations described here, in terms of the cellular bioenergetic characterization of these variants. Moreover, we were not able to induce mitochondrial biogenesis in cybrid cell lines using bezafibrate. However, other cell line models are available, such as fibroblasts harbouring LHON mutations, and other approaches can be used to trigger mitochondrial biogenesis.
Computer simulation of ordering and dynamics in liquid crystals in the bulk and close to the surface
Resumo:
The aim of this PhD thesis is to investigate the orientational and dynamical properties of liquid crystalline systems at the molecular level, using atomistic computer simulations, to reach a better understanding of material behaviour from a microscopic point of view. In perspective, this should make it possible to clarify the relation between micro- and macroscopic properties, with the objective of predicting or confirming experimental results on these systems. In this context, we developed four different lines of work in the thesis. The first concerns the orientational order and alignment mechanism of small rigid solutes dissolved in a nematic phase formed by the 4-pentyl-4'-cyanobiphenyl (5CB) nematic liquid crystal. The orientational distributions of the solutes were obtained with Molecular Dynamics (MD) simulations and compared with experimental data reported in the literature. We also verified the agreement between the order parameters and the dipolar couplings measured in NMR experiments. The MD-determined effective orientational potentials were compared with the predictions of the Maier-Saupe and surface tensor models. The second line concerns the development of a parametrization able to reproduce the phase transition properties of a prototype of the oligothiophene semiconductor family: sexithiophene (T6). T6 forms two crystalline polymorphs that have been studied extensively, and possesses liquid crystalline phases that are still not well characterized. From the simulations we detected a phase transition from crystal to liquid crystal at about 580 K, in agreement with available experiments, and in particular we found two LC phases, smectic and nematic. The crystal-smectic transition is associated with a relevant density variation and with strong conformational changes of T6: the molecules in the liquid crystal phase easily assume a bent shape, deviating from the planar structure typical of the crystal. 
The third line explores a new approach for calculating the viscosity of a nematic through a virtual experiment resembling the classical falling-sphere experiment. The falling sphere is replaced by a hydrogenated silicon nanoparticle of spherical shape suspended in 5CB, and gravity is replaced by a constant force applied to the nanoparticle in a selected direction. Once the nanoparticle reaches a constant velocity, the viscosity of the medium can be evaluated using Stokes' law. With this method we successfully reproduced the experimental viscosities and the viscosity anisotropy of the solvent 5CB. The last line deals with the study of the order induced on nematic molecules by a hydrogenated silicon surface. Gaining predictive power for the anchoring behaviour of liquid crystals at surfaces would be a very desirable capability, as many device-related properties depend on the molecular organization close to surfaces. Here we studied, by means of atomistic MD simulations, the flat interface between a hydrogenated (001) silicon surface and a sample of 5CB molecules. We found a planar anchoring of the first layers of 5CB, where the surface interactions dominate over the mesogen intermolecular interactions. We also analyzed the 5CB-vacuum interface, finding a homeotropic orientation of the nematic at this interface.
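The viscosity extraction in the virtual falling-sphere experiment is a direct application of Stokes' drag law, F = 6*pi*eta*R*v. The sketch below uses purely illustrative numbers, not values from the thesis:

```python
import math

def stokes_viscosity(force, radius, velocity):
    """Viscosity of the medium from Stokes' drag, F = 6*pi*eta*R*v,
    valid once the dragged sphere moves at constant (terminal) velocity.
    SI units: force in N, radius in m, velocity in m/s -> eta in Pa*s."""
    return force / (6.0 * math.pi * radius * velocity)

# Illustrative numbers only: a 2 nm radius nanoparticle pulled with a
# 10 pN force that settles at ~8.85 mm/s corresponds to a viscosity
# of about 0.03 Pa*s, the order of magnitude of 5CB's viscosity.
eta = stokes_viscosity(force=1e-11, radius=2e-9, velocity=8.85e-3)
```

Applying the same constant force along and across the nematic director, and comparing the two terminal velocities, is what resolves the viscosity anisotropy.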
Resumo:
In this thesis we present our work on generalising ideas, techniques and physical interpretations typical of integrable models to one of the most outstanding recent advances in theoretical physics: the AdS/CFT correspondences. We have undertaken the problem of testing this conjectured duality from various points of view, but with a clear starting point, integrability, and with a clear and ambitious task in mind: to study the finite-size effects in the energy spectrum of certain string solutions on one side and in the anomalous dimensions of the gauge theory on the other. Of course, the final goal would be the exact comparison between these two faces of the gauge/string duality. In a few words, the original part of this work consists in the application of well-known integrability technologies, in large part borrowed from the study of relativistic (1+1)-dimensional integrable quantum field theories, to the highly non-relativistic and much more complicated case of the theories involved in the recently conjectured AdS5/CFT4 and AdS4/CFT3 correspondences. In detail, exploiting the spin-chain nature of the dilatation operator of N = 4 Super-Yang-Mills theory, we concentrated our attention on one of the most important sectors, the SL(2) sector, which is also very interesting for the understanding of QCD, by formulating a new type of nonlinear integral equation (NLIE) based on a previously conjectured asymptotic Bethe Ansatz. The solutions of this Bethe Ansatz are characterised by the length L of the corresponding spin chain and by the number s of its excitations. An NLIE allows one, at least in principle, to make analytical and numerical calculations for arbitrary values of these parameters. The results have been rather exciting. In the important regime of high Lorentz spin, the NLIE reduces to a linear integral equation which governs the subleading order in s, o(s^0). 
This also holds in the regime L → ∞ with L/ln s finite (the long-operators case). This region of parameters has been particularly investigated in the literature, especially because of an intriguing limit onto the O(6) sigma model defined on the string side. One of the most powerful methods to keep the finite-size spectrum of an integrable relativistic theory under control is the so-called Thermodynamic Bethe Ansatz (TBA). We proposed a highly non-trivial generalisation of this technique to the non-relativistic case of AdS5/CFT4 and made the first steps towards determining its full spectrum (of energies on the AdS side, of anomalous dimensions on the CFT side) at any value of the coupling constant and of the size. At leading order in the size parameter, the calculation of the finite-size corrections is much simpler and does not require the TBA. It consists in deriving, for a non-relativistic case, a method first invented by Lüscher to compute the finite-size effects on the mass spectrum of relativistic theories. We thus formulated a new version of this approach, adapted to the case of the recently found classical string solutions on AdS4 × CP3, within the newly conjectured AdS4/CFT3 correspondence. Our results in part confirm the string and algebraic-curve calculations, and in part are completely new; these new results may be better understood through the rapidly evolving developments of this extremely exciting research field.
Resumo:
The aim of the thesis is to formulate a suitable Item Response Theory (IRT) based model to measure HRQoL (as a latent variable) using a mixed-response questionnaire and relaxing the hypothesis of a normally distributed latent variable. The new model is a combination of two models already presented in the literature: a latent trait model for mixed responses and an IRT model for a skew-normal latent variable. It is developed in a Bayesian framework; a Markov chain Monte Carlo procedure is used to generate samples from the posterior distribution of the parameters of interest. The proposed model is tested on a questionnaire composed of five discrete items and one continuous item to measure HRQoL in children, the EQ-5D-Y questionnaire, using a large sample of children collected in schools. In comparison with a model for only discrete responses and a model for mixed responses with a normal latent variable, the new model performs better in terms of deviance information criterion (DIC), chain convergence times and precision of the estimates.
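For reference, the DIC used for the model comparison combines the posterior mean deviance with an effective number of parameters (Spiegelhalter et al., 2002). A minimal sketch, with hypothetical deviance values standing in for an actual MCMC run:

```python
def dic(deviance_samples, deviance_at_posterior_mean):
    """Deviance Information Criterion: DIC = Dbar + pD, where Dbar is
    the posterior mean of the deviance over the MCMC samples and
    pD = Dbar - D(theta_bar) is the effective number of parameters.
    Lower DIC indicates a better fit/complexity trade-off."""
    d_bar = sum(deviance_samples) / len(deviance_samples)
    p_d = d_bar - deviance_at_posterior_mean
    return d_bar + p_d

# Hypothetical deviance values from an MCMC run
print(dic([10.0, 12.0, 14.0], 11.0))  # Dbar = 12, pD = 1, DIC = 13
```

Comparing DIC values across the three candidate models is how the abstract's "better performance in terms of DIC" claim is operationalised.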
Resumo:
Current trends in software development push the need to face a multiplicity of diverse activities and interaction styles characterizing complex and distributed application domains, in such a way that the resulting dynamics exhibit some degree of order, e.g. in terms of system evolution and desired equilibrium. Autonomous agents and Multiagent Systems are argued in the literature to be one of the most immediate approaches for tackling such challenges. Indeed, agent research seems to converge towards the definition of renewed abstraction tools aimed at better capturing the new demands of open systems. Besides agents, which are assumed to be autonomous entities pursuing a series of design objectives, Multiagent Systems introduce new notions as first-class entities, aimed above all at modelling institutional/organizational entities, in place for normative regulation, interaction and teamwork management, as well as environmental entities, in place as resources to further support and regulate agent work. The starting point of this thesis is the recognition that both organizations and environments can be rooted in a unifying perspective. Whereas recent research in agent systems seems to adopt a set of diverse approaches, each specifically addressing at most one of the aspects mentioned above, this work proposes a unifying approach in which both agents and their organizations can be straightforwardly situated in properly designed working environments. Along this line, this work pursues the reconciliation of environments with sociality, of social interaction with environment-based interaction, and of environmental resources with organizational functionalities, with the aim of smoothly integrating the various aspects of complex and situated organizations in a coherent programming approach. 
Rooted in the Agents and Artifacts (A&A) meta-model, recently introduced in the context of both agent-oriented software engineering and programming, the thesis promotes the notion of Embodied Organizations: computational infrastructures attaining a seamless integration between agents, organizations and environmental entities.
Resumo:
Pharmaceuticals are useful tools to prevent and treat human and animal diseases. Following administration, a significant fraction of pharmaceuticals is excreted unaltered in faeces and urine and may enter the aquatic ecosystem and agricultural soil through irrigation with recycled water, constituting a significant source of emerging contaminants in the environment. Understanding the major factors influencing their environmental fate is consequently needed to assess the risk, reduce contamination, and set up bioremediation technologies. The antiviral drug Tamiflu (oseltamivir carboxylate, OC) has received recent attention due to its potential use as a first-line defence against the H5N1 and H1N1 influenza viruses. Research has shown that OC is not removed during conventional wastewater treatments and thus has the potential to enter surface water bodies. A series of laboratory experiments investigated the fate and removal of OC in surface water systems in Italy and Japan and in a municipal wastewater treatment plant. A preliminary laboratory study investigated the persistence of the active antiviral drug in water samples from an irrigation canal in northern Italy (Canale Emiliano Romagnolo). After an initial rapid decrease, the OC concentration decreased slowly during the remaining incubation period; approximately 65% of the initial OC amount remained in the water at the end of the 36-day incubation. A negligible amount of OC was lost from both sterilized water and sterilized water/sediment samples, suggesting a significant role of microbial degradation. Stimulating microbial processes by the addition of sediments reduced OC persistence. The presence of OC (1.5 μg mL-1) did not significantly affect the metabolic potential of the water microbial population, as estimated by glyphosate and metolachlor mineralization. In contrast, OC caused an initial transient decrease in the size of the indigenous microbial population of the water samples. 
A second laboratory study focused on the basic processes governing the environmental fate of OC in surface water from two contrasting aquatic ecosystems of northern Italy, the River Po and the Venice Lagoon. The results of this study confirmed the potential of OC to persist in surface water. However, the addition of 5% sediments resulted in rapid OC degradation. The estimated half-life of OC in water/sediment from the River Po was 15 days. After three weeks of incubation at 20 °C, more than 8% of the 14C-OC evolved as 14CO2 from water/sediment samples of the River Po and the Venice Lagoon. OC was moderately retained on coarse sediments from the two sites. In water/sediment samples of the River Po and the Venice Lagoon treated with 14C-OC, more than 30% of the 14C-residues remained water-extractable after three weeks of incubation. The low affinity of OC for sediments suggests that the presence of sediments would not reduce its bioavailability to microbial degradation. Another series of laboratory experiments investigated the fate and removal of OC in two surface water ecosystems of Japan and in the municipal wastewater treatment plant of the city of Bologna, in northern Italy. The persistence of OC in surface water ranged from non-detectable degradation to a half-life of 53 days. After 40 days, less than 3% of the radiolabeled OC evolved as 14CO2. The presence of sediments (5%) led to a significant increase in the OC degradation and mineralization rates. A more intense mineralization was observed in samples from the wastewater treatment plant over a long incubation period (40 days): 76% and 37% of the initial radioactivity applied as 14C-OC was recovered as 14CO2 from samples of the biological tank and of the effluent water, respectively. Two bacterial strains growing on OC as the sole carbon source were isolated and used for its removal from synthetic medium and environmental samples, including surface water and wastewater. 
Inoculation of water and wastewater samples with the two OC-degrading strains showed that OC mineralization was significantly higher in both inoculated water and wastewater than in uninoculated controls. Denaturing gradient gel electrophoresis and quantitative PCR analysis showed that OC would not affect the microbial population of surface water and wastewater. The capacity of the ligninolytic fungus Phanerochaete chrysosporium to degrade a wide variety of environmentally persistent xenobiotics has been widely reported in the literature. In a series of laboratory experiments, the efficiency of a formulation using P. chrysosporium was evaluated for the removal of selected pharmaceuticals from wastewater samples. Addition of the fungus to samples from the wastewater treatment plant of Bologna significantly increased (P < 0.05) the removal of OC and of three antibiotics: erythromycin, sulfamethoxazole, and ciprofloxacin. Similar effects were also observed in effluent water. OC was the most persistent of the four pharmaceuticals: after 30 days of incubation, approximately twice as much OC was removed in bioremediated samples as in controls. The highest removal efficiency of the formulation was observed with the antibiotic ciprofloxacin. The studies also included environmental aspects of soil contamination with two emerging veterinary contaminants, doramectin and oxibendazole, which are common parasitic treatments on cattle farms.
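Degradation half-lives like those reported above follow from the usual assumption of first-order decay kinetics, C(t) = C0 * exp(-k*t), with t_half = ln(2)/k. A minimal sketch of estimating a half-life from time-series concentrations; the data points below are hypothetical, constructed to match a 15-day half-life:

```python
import math

def half_life_first_order(times, concentrations):
    """Half-life from a least-squares fit of ln(C) versus t, assuming
    first-order decay C(t) = C0 * exp(-k * t); then t_half = ln(2) / k."""
    logs = [math.log(c) for c in concentrations]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    return math.log(2) / -slope  # slope is -k for decaying data

# Hypothetical sampling points consistent with a 15-day half-life
times = [0, 5, 10, 20, 30]                        # days
conc = [100.0 * 0.5 ** (t / 15) for t in times]   # relative concentration
t_half = half_life_first_order(times, conc)       # -> 15.0 days
```

With real measurements the log-linear fit also gives a quick visual check of whether first-order kinetics is an adequate description of the decay.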
Resumo:
The aim of the research is to study the capacity for self-evaluation of 271 primary school students undergoing tests involving mathematics and formal reasoning. Subjects were asked to estimate the number of correct answers and subsequently to compare their performance with that of their peers. The results show that, for all subjects and all tests, there is a significant negative relationship between the real score and the self-evaluation indices. The analysis of the comparative assessments confirmed the results reported in the literature by Kruger and Dunning: poor performers tend to significantly overestimate their own performance, whilst top performers tend to underestimate it. This can be interpreted as a demonstration that the accuracy of comparative self-evaluations depends on a number of variables: cognitive and metacognitive factors and aspects associated with self-representation. To verify these aspects we also examined the bias in self-evaluation from an attributional perspective. Our conclusion is that cognitive and metacognitive processes work as if "submerged" in highly subjective representations, allowing dynamics related to safeguarding one's self-image to play a role.