810 results for Switzerland’s representation in literature
Abstract:
Based on the concepts of sustainability and knowledge management, this article seeks to identify points of contact between the two themes through an exploratory study of the existing literature. The first objective is to find, in the international literature, the largest possible number of papers jointly addressing knowledge management and sustainability. In these documents, the authors examined the kind of relationship existing between the two themes and the benefits it brings to organizations. From an ergonomic point of view, the second objective of this article is to analyze the role of the worker (whether at the strategic or the operational level) and his or her importance in this context. The results demonstrate that very little literature addresses the two themes together. The few papers found, however, show the many advantages of introducing sustainability policies supported by adequate knowledge management. Very little has been studied with regard to the role of workers, which could be interpreted as meaning that little importance is given to the proactive role they may play. On the other hand, there is high potential for future research in these areas, given the high level of consideration of workers in the knowledge management and sustainability literature, as well as in the literature on ergonomics and sociology.
Abstract:
Objective: To observe the behavior of the plotted vectors on the RXc graph (R, resistance, and Xc, reactance, both corrected for body height/length) through bioelectrical impedance vector analysis (BIVA), together with phase angle (PA) values, in stable premature infants, under the hypothesis that preterm infants present vector behavior on BIVA suggestive of less total body water and soft tissue than the reference data for term infants. Methods: Cross-sectional study including preterm neonates of both genders, in-patients admitted to an intermediate care unit at a tertiary care hospital. Data on delivery, diet and bioelectrical impedance (800 μA, 50 kHz) were collected. The graphs and vector analysis were produced with the BIVA software. Results: A total of 108 preterm infants were studied, separated according to age (< 7 days and >= 7 days). Most of the premature babies fell outside the normal range (above the 95% tolerance intervals) reported in the literature for term newborn infants, and the points tended to disperse toward the upper right quadrant of the RXc plane. The PA was 4.92 degrees (+/- 2.18) for newborns < 7 days and 4.34 degrees (+/- 2.37) for newborns >= 7 days. Conclusion: Premature infants behave similarly in terms of BIVA, and most of them have less absolute body water, presenting less fat-free mass and fat mass in absolute values, compared to term newborn infants.
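As a concrete illustration of the quantities discussed in this abstract, the sketch below computes the height-normalized BIVA coordinates and the phase angle from raw resistance and reactance; the numeric values are illustrative placeholders, not data from the study.

```python
import math

def biva_coordinates(resistance_ohm, reactance_ohm, height_m):
    """Height-normalized coordinates (ohm/m) for plotting a subject on the RXc graph."""
    return resistance_ohm / height_m, reactance_ohm / height_m

def phase_angle_deg(resistance_ohm, reactance_ohm):
    """Phase angle PA = arctan(Xc / R), expressed in degrees."""
    return math.degrees(math.atan2(reactance_ohm, resistance_ohm))

# Illustrative values only (not data from this study):
r_h, xc_h = biva_coordinates(600.0, 52.0, 0.45)
print(f"R/H = {r_h:.1f} ohm/m, Xc/H = {xc_h:.1f} ohm/m")
print(f"PA = {phase_angle_deg(600.0, 52.0):.2f} degrees")
```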
Abstract:
Treatment of rheumatoid arthritis with immunobiologics may present, as a rare complication, the development of inflammatory myopathy. To date, only seven cases of myositis induced by tumor necrosis factor inhibitors have been described in the literature. In this paper, we report the case of a 39-year-old patient with an eight-year history of rheumatoid arthritis in whom, because the disease was refractory to various immunosuppressive drugs, adalimumab was introduced; the patient subsequently evolved to dermatomyositis.
Abstract:
Fabry disease (FD) is an X-linked inborn error of glycosphingolipid catabolism that results from mutations in the alpha-galactosidase A (GLA) gene. In male individuals the diagnosis is usually made by evaluating the enzymatic activity, but in female carriers a diagnosis based only on enzyme assays is often inconclusive. In this work, we analyzed 568 individuals from 102 families with suspected FD. Overall, 51 families presented 38 alterations in the GLA gene, among which 19 had not been previously reported in the literature. The alterations included 17 missense mutations, 7 nonsense mutations, 7 deletions, 6 insertions and 1 splice-site alteration. Six alterations (R112C, R118C, R220X, R227X, R342Q and R356W) occurred at CpG dinucleotides. Five mutations not previously described in the literature (A156D, K237X, A292V, I317S, c.1177_1178insG) were correlated with low GLA enzyme activity and with predicted molecular damage. Of the 13 deletions and insertions, 7 occurred in exons 6 or 7 (54%) and 11 led to the formation of a stop codon. The present study highlights the detection of new genomic alterations in the GLA gene in the Brazilian population, facilitating the selection of patients for recombinant enzyme-replacement trials and offering the possibility of prenatal diagnosis. Journal of Human Genetics (2012) 57, 347-351; doi:10.1038/jhg.2012.32; published online 3 May 2012
Abstract:
Objective: To evaluate systemic blood pressure (BP) during daytime and nighttime in children with sleep breathing disorders (SBD) and to compare BP parameters in children diagnosed with obstructive sleep apnea syndrome (OSA) to those with primary snoring (PS). Methods: Children of both genders, aged 8 to 12 years, with symptoms of SBD underwent overnight polysomnography followed by 24 h ambulatory BP recording. Results: All subjects presented with a history of snoring 7 nights per week. Children with an apnea/hypopnea index >= 4 or an apnea index >= 1 presented mean BP of 93 +/- 7 mmHg (diurnal) and 85 +/- 9 mmHg (nocturnal), whereas children with an apnea/hypopnea index < 4 or an apnea index < 1 presented 90 +/- 7 mmHg and 77 +/- 2 mmHg, respectively. Eight of the fourteen children in the OSA group lost the physiologic nocturnal dipping of blood pressure; 57% of OSA children were thus considered non-dippers. Two children (16%) with primary snoring presented absence of nocturnal dipping. The odds of OSA children losing the physiologic blood pressure dipping were 6.66 times higher than those of patients in the PS group. Discussion: Our results indicate that children with sleep apnea syndrome exhibit higher 24 h blood pressure than those with primary snoring, in the form of a decreased degree of nocturnal dipping and increased levels of diastolic and mean blood pressure, in agreement with previous studies in the literature. OSA in children seems to be associated with the development of hypertension and other cardiovascular disease. (C) 2012 Elsevier Ireland Ltd. All rights reserved.
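The dipping classification used above follows the conventional definition, under which a nocturnal fall below roughly 10% of the daytime mean marks a non-dipper; a minimal sketch, with the 10% threshold assumed rather than quoted from the paper:

```python
def nocturnal_dip_percent(mean_day_bp, mean_night_bp):
    """Percentage fall of mean blood pressure from day to night."""
    return 100.0 * (mean_day_bp - mean_night_bp) / mean_day_bp

def is_non_dipper(mean_day_bp, mean_night_bp, threshold_percent=10.0):
    """Conventional rule: a nocturnal fall below ~10% marks a non-dipper (assumed threshold)."""
    return nocturnal_dip_percent(mean_day_bp, mean_night_bp) < threshold_percent

# Group means reported in the abstract, used here purely as illustration:
print(nocturnal_dip_percent(93, 85))  # OSA group: ~8.6% fall -> non-dipper
print(nocturnal_dip_percent(90, 77))  # PS group: ~14.4% fall -> preserved dipping
```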
Abstract:
Primary stability of stems in cementless total hip replacements is recognized to play a critical role in long-term survival, and thus in the success of the overall surgical procedure. In the literature, several studies have addressed this important issue, and different approaches have been explored to evaluate the extent of stability achieved during surgery. Some of these are in-vitro protocols, while other tools are conceived for the post-operative assessment of prosthesis migration relative to the host bone. The in-vitro protocols reported in the literature are not exportable to the operating room, although most of them show good overall accuracy. RSA, EBRA and radiographic analysis are currently used to check the healing process of the implanted femur at different follow-ups, evaluating implant migration and the occurrence of bone resorption or osteolysis at the interface. These methods are important for follow-up and clinical studies but do not assist the surgeon during implantation. At the time I started my Ph.D. study in Bioengineering, only one study had been undertaken to measure stability intra-operatively, and no follow-up had been presented describing further results obtained with that device. In this scenario, it was believed that an instrument able to measure intra-operatively the stability achieved by an implanted stem would consistently improve the rate of success. Such an instrument should be accurate and should give the surgeon, during implantation, a quick answer concerning the stability of the implanted stem. With this aim, an intra-operative device was designed, developed and validated. The device is meant to help the surgeon decide how much to press-fit the implant. It essentially consists of a torsional load cell, able to measure the torque applied by the surgeon to test primary stability; an angular sensor that measures the relative angular displacement between stem and femur; a rigid connector that enables connection of the device to the stem; and all the electronics for signal conditioning. The device was successfully validated in-vitro, showing good overall accuracy in discriminating stable from unstable implants, and repeatability tests showed that it was reliable. A calibration procedure was then performed in order to convert the angular readout into a linear displacement measurement, which is clinically relevant information that the surgeon can read quickly in real time. The second study reported in my thesis concerns the possibility of obtaining predictive information on the primary stability of a cementless stem by measuring the micromotion of the last rasp used by the surgeon to prepare the femoral canal. This information would be very useful to the surgeon, who could check, prior to implantation, whether the planned stem size can achieve a sufficient degree of primary stability under optimal press-fitting conditions. An intra-operative tool was developed to this aim. It was derived from the previously validated device, adapted for this specific purpose, and is able to measure the relative micromotion between the femur and the rasp when a torsional load is applied. An in-vitro protocol was developed and validated on both composite and cadaveric specimens. High correlation was observed between one of the parameters extracted from the acquisitions made on the rasp and the stability of the corresponding stem when optimally press-fitted by the surgeon.
After tuning the protocol in-vitro, as in a closed loop, verification was made on two hip patients, confirming the in-vitro results and highlighting the independence of the rasp indicator from the bone quality, anatomy and preservation conditions of the tested specimens, and from the sharpening of the rasp blades. The third study is related to an approach that has recently been explored in the orthopaedic community but was already in use in other scientific fields: the vibration analysis technique. This method has been successfully used to investigate the mechanical properties of bone, and its application to evaluate the extent of fixation of dental implants has been explored, even if its validity in this field is still under discussion. Several studies have recently been published on the stability assessment of hip implants by vibration analysis. The aim of the reported study was to develop and validate a prototype device based on the vibration analysis technique to measure the extent of implant stability intra-operatively. The expected advantages of a vibration-based device are easier clinical use, smaller dimensions and lower overall cost with respect to devices based on direct micromotion measurement. The prototype developed consists of a piezoelectric exciter connected to the stem and an accelerometer attached to the femur. Preliminary tests were performed on four composite femurs implanted with a conventional stem. The results showed that the input signal was repeatable and the output could be recorded accurately. The fourth study concerns the application of the vibration-based device to several cases, considering both composite and cadaveric specimens. Different degrees of bone quality were tested, as well as different femur anatomies, and several levels of press-fitting were considered. The aim of the study was to verify whether it is possible to discriminate between stable and quasi-stable implants, because this is the most challenging distinction for the surgeon in the operating room. Moreover, it was possible to validate the measurement protocol by comparing the results of the acquisitions made with the vibration-based tool to two reference measurements made by means of a validated technique and a validated device. The results highlighted that the parameter most sensitive to stability is the shift in resonance frequency of the stem-bone system, which showed high correlation with residual micromotion on all the tested specimens. Thus, it seems possible to discriminate between many levels of stability, from the grossly loosened implant, through the quasi-stable implant, to the definitely stable one. Finally, an additional study was performed on a different type of hip prosthesis, which has recently gained great interest and has become fairly popular in some countries in the last few years: the hip resurfacing prosthesis. The study was motivated by the following rationale: although bone-prosthesis micromotion is known to influence the stability of total hip replacements, its effect on the outcome of resurfacing implants had not yet been investigated in-vitro, but only clinically. The work was thus aimed at verifying whether one of the intra-operative devices just validated could be applied to measure micromotion in resurfacing implants.
To do that, a preliminary study was performed in order to evaluate the extent of migration and the typical elastic movement of an epiphyseal prosthesis. An in-vitro procedure was developed to measure micromotions of resurfacing implants. This included a set of in-vitro loading scenarios covering the range of directions spanned by the hip resultant forces in the most typical motor tasks. The applicability of the protocol was assessed on two different commercial designs and on different head sizes. The repeatability and reproducibility were excellent (comparable to the best previously published protocols for standard cemented hip stems). Results showed that the procedure is accurate enough to detect micromotions of the order of a few microns. The proposed protocol was thus completely validated. The results of the study demonstrated that the application of an intra-operative device to resurfacing implants is not necessary, as the typical micromovement associated with this type of prosthesis can be considered negligible and thus not critical for the stabilization process. Concluding, four intra-operative tools have been developed and fully validated during these three years of research activity. Use in the clinical setting was tested for one of the devices, which could be used right now by the surgeon to evaluate the degree of stability achieved through the press-fitting procedure. The tool adapted for use on the rasp was a good predictor of the stability of the stem, so it could be useful to the surgeon when checking whether the pre-operative planning was correct. The device based on the vibration technique showed great accuracy and small dimensions, and thus has great potential to become an instrument appreciated by the surgeon; it still needs clinical evaluation and must be industrialized as well. The in-vitro tool worked very well and can be applied for assessing resurfacing implants pre-clinically.
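A minimal sketch of the kind of resonance-frequency-shift analysis named in the fourth study: the dominant spectral peak of the stem-bone response is estimated from an accelerometer trace and compared between two press-fit states. Function names, windowing and sampling details are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def dominant_resonance_hz(accel_signal, fs_hz):
    """Frequency of the largest spectral peak of the measured response (DC bin excluded)."""
    windowed = accel_signal * np.hanning(len(accel_signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(accel_signal), d=1.0 / fs_hz)
    return freqs[np.argmax(spectrum[1:]) + 1]

def resonance_shift_hz(response_before, response_after, fs_hz):
    """Shift in stem-bone resonance between two press-fit states; larger shifts
    track larger changes in fixation (the parameter found most sensitive above)."""
    return dominant_resonance_hz(response_after, fs_hz) - dominant_resonance_hz(response_before, fs_hz)
```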
Abstract:
The main problem of cone beam computed tomography (CT) systems for industrial applications employing 450 kV X-ray tubes is the high amount of scattered radiation added to the primary radiation (signal). This stray radiation leads to a significant degradation of image quality, so a better understanding of the scattering and methods to reduce its effects are necessary to improve image quality. Several studies have been carried out in the medical field at lower energies, whereas studies in industrial CT, especially for energies up to 450 kV, are lacking. Moreover, the studies reported in the literature do not consider the scattered radiation generated by the CT system structure and the walls of the X-ray room (environmental scatter). In order to investigate the scattering in CT projections, a GEANT4-based Monte Carlo (MC) model was developed. The model, which has been validated against experimental data, has enabled the calculation of the scattering including the environmental scatter, the optimization of an anti-scatter grid suitable for the CT system, and the optimization of the hardware components of the CT system. The investigation of multiple scattering in the CT projections showed that its contribution is 2.3 times that of primary radiation for certain objects. The results on environmental scatter showed that it is the major component of the scattering for aluminum box objects with a front size of 70 x 70 mm2, and that it strongly depends on the thickness of the object and therefore on the projection; for that reason, its correction is one of the key factors for achieving high-quality images. The anti-scatter grid optimized by means of the developed MC model was found to reduce the scatter-to-primary ratio in the reconstructed images by 20%. The object and environmental scatter calculated by means of the simulation were used to improve the scatter correction algorithm, which could be patented by Empa. The results showed that the cupping effect in the corrected image is strongly reduced. The developed CT simulation is a powerful tool to optimize the design of the CT system and to evaluate the contribution of the scattered radiation to the image. Besides, it has offered a basis for a new scatter correction approach by which it has been possible to achieve images with the same spatial resolution as state-of-the-art, well-collimated fan-beam CT, with a gain in reconstruction time of a factor of 10. This result has a high economic impact in non-destructive testing and evaluation, and in reverse engineering.
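A minimal sketch of how MC-separated projections can be used for the quantities discussed above: the scatter-to-primary ratio per pixel and a first-order subtractive scatter correction. This is a generic illustration, not Empa's patented algorithm.

```python
import numpy as np

def scatter_to_primary_ratio(primary, scatter):
    """Per-pixel SPR from MC-separated primary and scatter projections."""
    return scatter / np.maximum(primary, 1e-12)

def subtract_scatter(measured, scatter_estimate):
    """First-order correction: remove the simulated scatter from the measured
    projection, clamping at zero to avoid negative intensities."""
    return np.maximum(measured - scatter_estimate, 0.0)
```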
Abstract:
CHAPTER 1: FLUID-VISCOUS DAMPERS. In this chapter fluid-viscous dampers are introduced. The first section focuses on the technical characteristics of these devices, their mechanical behavior and the latest evolution of the technology with which they are equipped. In the second section we report the definitions and guidelines for the design of these devices included in some international codes. In the third section the results of experimental tests carried out by several authors on the response of these devices to external forces are discussed; for this purpose we report some technical data sheets that are usually enclosed with the devices now available on the international market. In the same section we also show some analytic models, proposed by various authors, which are able to describe efficiently the physical behavior of fluid-viscous dampers. In the last section we present some cases of application of these devices to existing structures and to new constructions, as well as some cases in which these devices have proved useful for purposes other than the reduction of seismic actions on structures. CHAPTER 2: DESIGN METHODS PROPOSED IN THE LITERATURE. In this chapter the most widespread design methods proposed in the literature for structures equipped with fluid-viscous dampers are introduced. In the first part the response of sdf systems to a harmonic external force is studied; in the last part the response to a random external force is discussed. In the first section the equations of motion of an elastic-linear sdf system equipped with a non-linear fluid-viscous damper undergoing a harmonic force are introduced. This differential problem is analytically quite complex and cannot be solved in closed form, so some authors have proposed approximate solution methods. The most widespread methods are based on equivalence principles between the non-linear device and an equivalent linear one: operating in this way it is possible to define an equivalent damping ratio, the problem becomes linear, and the solution of the equivalent problem is well known. In the following section two linearization techniques proposed in the literature are described: the first is based on the equivalence of the energy dissipated by the two devices, and the second on the equivalence of power consumption. After that we compare these two techniques by studying the response of an sdf system undergoing a harmonic force. By introducing the equivalent damping ratio we can write the equation of motion of the non-linear differential problem in an implicit form, dividing, as usual, by the mass of the system. In this way we reduce the number of variables by introducing the natural frequency of the system. The equation of motion written in this form has two important properties: the response depends linearly on the amplitude of the external force, and it depends only on the ratio between the frequency of the external harmonic force and the natural frequency of the system, not on their individual values. In the last section all these considerations are extended to the case of a random external force. CHAPTER 3: PROPOSED DESIGN METHOD. In this chapter the theoretical basis of the proposed design method is introduced.
The need to propose a new design method for structures equipped with fluid-viscous dampers arises from the observation that the methods reported in the literature are always iterative, because the response affects some parameters included in the equation of motion (such as the equivalent damping ratio). In the first section the dimensionless parameter ε is introduced. This parameter is obtained from the definition of the equivalent damping ratio, and the implicit form of the equation of motion is rewritten in terms of ε instead of the equivalent damping ratio. This new implicit equation of motion has no terms affected by the response, so that once ε is known the response can be evaluated directly. In the second section it is discussed how the parameter ε affects some characteristics of the response: drift, velocity and base shear. All the results described up to this point are obtained while retaining the non-linearity of the damper behavior. In order to obtain a linear formulation of the problem, solvable with the well-known methods of structural dynamics (as done before for the iterative methods through the equivalent damping ratio), it is shown how the equivalent damping ratio can be evaluated once the value of ε is known. Operating in this way, once the parameter ε is known, it is quite easy to estimate the equivalent damping ratio and proceed with a classic linear analysis. In the last section it is shown how the parameter ε can be taken as a reference for evaluating the convenience of using non-linear dampers instead of linear ones, on the basis of the type of external force and the characteristics of the system. CHAPTER 4: MULTI-DEGREE-OF-FREEDOM SYSTEMS. In this chapter the design methods for an elastic-linear mdf system equipped with non-linear fluid-viscous dampers are introduced. It has already been shown that, in sdf systems, the response of the structure can be evaluated through the estimation of the equivalent damping ratio (ξsd), assuming elastic-linear behavior of the structure. We would like to mention that some adjustment coefficients, to be applied to the equivalent damping ratio in order to account for the actual (non-linear) behavior of the structure, have already been proposed in the literature; such coefficients are usually expressed in terms of ductility, but their treatment is beyond the aims of this thesis and we do not go into it further. The method usually proposed in the literature is based on energy equivalence: even though this procedure has a solid theoretical basis, it must necessarily include some iterative process, because the expression of the equivalent damping ratio contains a term of the response. This procedure was introduced primarily by Ramirez, Constantinou et al. in 2000; it is reported in the first section and referred to as the "Iterative Method". Following the guidelines on sdf systems reported in the previous chapters, a procedure for the assessment of the parameter ε in the case of mdf systems is introduced. Operating in this way, the evaluation of the equivalent damping ratio (ξsd) can be done directly, without implementing iterative processes. This procedure is referred to as the "Direct Method" and is reported in the second section.
In the third section the two methods are analyzed by studying 4 cases of two moment-resisting steel frames subjected to real accelerograms: the response of the system calculated using the two methods is compared with the numerical response obtained from the SAP2000-NL software, a CSI product. In the last section a procedure is introduced to create spectra of the equivalent damping ratio, as affected by the parameter ε and the natural period of the system for a fixed value of the exponent α, starting from the elastic response spectra provided by any international code.
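For reference, the energy-equivalence linearization discussed in Chapter 2 has a standard closed form for a damper force F = c_α·|v|^α·sgn(v) under harmonic motion of amplitude X and frequency ω; the sketch below follows that textbook result (the form used, e.g., by Ramirez, Constantinou et al.), and its dependence on the response amplitude X is exactly what makes the literature methods iterative, as noted above. This is a generic illustration, not the thesis' ε-based formulation.

```python
import math

def lambda_factor(alpha):
    """λ(α) = 4 · 2^α · Γ(1 + α/2)² / Γ(2 + α); λ(1) = π recovers the linear damper."""
    return 4.0 * 2.0**alpha * math.gamma(1.0 + alpha / 2.0)**2 / math.gamma(2.0 + alpha)

def equivalent_damping_ratio(c_alpha, alpha, mass, omega, amplitude):
    """ξ_eq from per-cycle energy equivalence for F = c_α |v|^α sgn(v)
    under harmonic motion x(t) = X sin(ωt)."""
    energy_per_cycle = lambda_factor(alpha) * c_alpha * omega**alpha * amplitude**(1.0 + alpha)
    strain_energy = 0.5 * mass * omega**2 * amplitude**2
    return energy_per_cycle / (4.0 * math.pi * strain_energy)
```

For α = 1 this reduces to ξ_eq = c/(2mω), the familiar linear result; for α ≠ 1 the amplitude does not cancel, hence the iteration that the Direct Method avoids.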
Abstract:
The first part of my thesis presents an overview of the different approaches used in the past two decades in the attempt to forecast epileptic seizures on the basis of intracranial and scalp EEG. Past research has revealed some value of linear and nonlinear algorithms in detecting EEG features that change over the different phases of the epileptic cycle. However, their exact value for seizure prediction, in terms of sensitivity and specificity, is still discussed and has yet to be evaluated. In particular, the monitored EEG features may fluctuate with the vigilance state and lead to false alarms. Recently, such a dependency on vigilance states has been reported for some seizure prediction methods, suggesting a reduced reliability. An additional factor limiting the application and validation of most seizure-prediction techniques is their computational load. For the first time, the reliability of permutation entropy (PE) was verified in seizure prediction on scalp EEG data, while simultaneously controlling for its dependency on different vigilance states. PE was recently introduced as an extremely fast and robust complexity measure for chaotic time series and is thus suitable for online application even in portable systems. The capability of PE to distinguish between preictal and interictal states was demonstrated using Receiver Operating Characteristic (ROC) analysis. Correlation analysis was used to assess the dependency of PE on vigilance states. Scalp EEG data from two right temporal lobe epilepsy (RTLE) patients and from one patient with right frontal lobe epilepsy were analysed. The last patient was included only in the correlation analysis, since no datasets including seizures were available for him. The ROC analysis showed a good separability of interictal and preictal phases for both RTLE patients, suggesting that PE could be sensitive to EEG modifications, not visible on visual inspection, that might occur well in advance of the EEG and clinical onset of seizures. However, the simultaneous assessment of the changes in vigilance showed that: a) all seizures occurred in association with a transition of vigilance states; b) PE was sensitive in detecting different vigilance states, independently of seizure occurrences. Due to the limitations of the datasets, these results cannot rule out the capability of PE to detect preictal states; however, the good separability between pre- and interictal phases might depend exclusively on the coincidence of epileptic seizure onset with a transition from a state of low vigilance to a state of increased vigilance. The dependency of PE on vigilance state is an original finding, not previously reported in the literature, suggesting the possibility of classifying vigilance states by means of PE in an automatic and objective way. The second part of my thesis provides the description of a novel behavioral task based on motor imagery skills, first introduced in (Bruzzo et al. 2007), designed to study mental simulation of biological and non-biological movement in paranoid schizophrenics (PS). Immediately after the presentation of a real movement, participants had to imagine or re-enact the very same movement. By key release and key press, respectively, participants indicated when they started and ended the mental simulation or the re-enactment, making it feasible to measure the duration of the simulated or re-enacted movements.
The proportional error between the duration of the re-enacted/simulated movement and the template movement was compared between conditions, as well as between PS and healthy subjects. Results revealed a double dissociation between the mechanisms of mental simulation involved in biological and non-biological movement simulation: PS made large errors when simulating biological movements, while being more accurate than healthy subjects when simulating non-biological movements; healthy subjects showed the opposite relationship, making errors during simulation of non-biological movements but being most accurate during simulation of biological movements. However, the good timing precision during re-enactment of the movements in all conditions and in both groups of participants suggests that perception, memory and attention, as well as motor control processes, were not affected. Based upon a long history of literature reporting the existence of psychotic episodes in epileptic patients, a longitudinal study, using a slightly modified behavioral paradigm, was carried out with two RTLE patients, one patient with idiopathic generalized epilepsy and one patient with extratemporal lobe epilepsy. Results provide strong evidence for the possibility of predicting upcoming seizures in RTLE patients behaviorally. In the last part of the thesis, a behavioural strategy based on neurobiofeedback training was validated, to voluntarily control seizures and reduce their frequency. Three epileptic patients were included in this study. The biofeedback was based on the monitoring of slow cortical potentials (SCPs) extracted online from scalp EEG. Patients were trained to produce positive shifts of SCPs. After a training phase, patients were monitored for 6 months in order to validate the ability of the learned strategy to reduce seizure frequency. Two of the three refractory epileptic patients recruited for this study showed improvements in self-management and a reduction of ictal episodes, even six months after the last training session.
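Since permutation entropy is central to the first part of the thesis, a minimal sketch of the Bandt-Pompe measure on a 1-D series may be useful; the order and delay values are illustrative choices, not the settings used in the thesis.

```python
import math
import random
from itertools import permutations

def permutation_entropy(series, order=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D series (0 = regular, 1 = random)."""
    counts = {p: 0 for p in permutations(range(order))}
    n = len(series) - (order - 1) * delay
    for i in range(n):
        window = [series[i + j * delay] for j in range(order)]
        pattern = tuple(sorted(range(order), key=window.__getitem__))  # ordinal pattern
        counts[pattern] += 1
    probs = [c / n for c in counts.values() if c > 0]
    return -sum(p * math.log(p) for p in probs) / math.log(math.factorial(order))

# Synthetic check (not EEG data): an irregular signal scores near 1,
# a regular oscillation scores lower.
print(permutation_entropy([random.random() for _ in range(1000)]))
print(permutation_entropy([math.sin(0.1 * i) for i in range(1000)]))
```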
Abstract:
This work focuses on magnetohydrodynamic (MHD) mixed convection flow of electrically conducting fluids enclosed in simple 1D and 2D geometries in the steady periodic regime. In particular, Chapter one gives a short overview of the history of MHD, with reference to papers available in the literature, and lists some of its most common technological applications, whereas Chapter two deals with the analytical formulation of the MHD problem, starting from the fluid dynamic and energy equations and adding the effects of an externally imposed magnetic field using Ohm's law and the definition of the Lorentz force. Moreover, a description of the various kinds of boundary conditions is given, with particular emphasis on their practical realization. Chapters three, four and five describe the solution procedure for mixed convective flows with MHD effects. In all cases a uniform parallel magnetic field is assumed to be present in the whole fluid domain, transverse with respect to the velocity field. The steady-periodic regime is analyzed, where the periodicity is induced by wall temperature boundary conditions varying in time with a sinusoidal law. Local balance equations of momentum, energy and charge are solved analytically and numerically, using as parameters either geometrical ratios or material properties. In particular, in Chapter three the solution method for mixed convective flow in a 1D vertical parallel channel with MHD effects is illustrated. The influence of a transverse magnetic field is studied in the steady periodic regime induced by an oscillating wall temperature, and analytical and numerical solutions are provided in terms of velocity and temperature profiles, wall friction factors and average heat fluxes for several values of the governing parameters. In Chapter four the 2D problem of mixed convective flow in a vertical round pipe with MHD effects is analyzed; again, a transverse magnetic field influences the steady periodic regime induced by the oscillating temperature of the wall. A numerical solution, obtained using a finite element approach, is presented, and velocity and temperature profiles, wall friction factors and average heat fluxes are derived for several values of the Hartmann and Prandtl numbers. In Chapter five the 2D problem of mixed convective flow in a vertical rectangular duct with MHD effects is discussed. As in the previous chapters, a transverse magnetic field influences the steady periodic regime induced by the oscillating wall temperature of the four walls. The numerical solution obtained using a finite element approach is presented, and a collection of results, including velocity and temperature profiles, wall friction factors and average heat fluxes, is provided for several values of, among other parameters, the duct aspect ratio. A comparison with analytical solutions is also provided as proof of the validity of the numerical method. Chapter six is the concluding chapter, where some reflections on the MHD effects on mixed convection flow are made, in agreement with the experience and the results gathered in the analyses presented in the previous chapters. In the appendices, special auxiliary functions and FORTRAN program listings are reported to support the formulations used in the solution chapters.
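For reference, a standard formulation consistent with the Chapter-two description (Boussinesq momentum balance with the Lorentz body force from Ohm's law, and the Hartmann number governing the strength of the magnetic term); the symbols below are the conventional ones, not taken verbatim from the thesis.

```latex
% Momentum balance with buoyancy and the Lorentz body force J x B,
% where Ohm's law gives J = \sigma (\mathbf{E} + \mathbf{u} \times \mathbf{B}):
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^2\mathbf{u}
    + \rho g \beta\,(T - T_0)\,\hat{\mathbf{e}}_z
    + \mathbf{J}\times\mathbf{B}
% For a uniform transverse field of magnitude B_0 (induced field neglected),
% the magnetic term scales with the Hartmann number:
\mathrm{Ha} = B_0\, L\, \sqrt{\sigma/\mu}
```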
Abstract:
The study of the spatial and temporal distribution of planktonic foraminiferal assemblages, sampled in zones with different hydrographic regimes, has shown that many species can be diagnostic of the presence of different surface and subsurface water masses and of different nutrient regimes in oceanic waters. Part of this thesis is based on the study of the planktonic foraminiferal assemblages currently living in the Pacific sector of the Southern Ocean (Ross Sea and Polar Front zone) and in the Mediterranean Sea (southern Tyrrhenian Sea). The objective of this study is to understand the factors (temperature, salinity, nutrients, etc.) that determine the present-day distribution of the different species, in order to assess their value as proxies useful for reconstructing the palaeoclimatic and palaeoceanographic scenarios that succeeded one another in these areas. The results document that the distribution of the different species, the number of individuals and the variations in the morphology of some taxa are correlated with the chemical-physical characteristics of the water column and with the availability of nutrients and chlorophyll. The second part of the thesis involved the analysis of stable oxygen isotopes and of the Mg/Ca ratio in shells of N. pachyderma (sin.) collected from micro-zooplankton tows (to calibrate the palaeotemperature equation), from a box core and from a sediment core from the Polar Front zone (southern Pacific Ocean), in order to reconstruct temperature variations over the last 13 ka and during the Mid-Pleistocene Revolution. The temperatures deduced from the stable oxygen isotope values are consistent with the present-day temperatures documented in this zone, and the temperature trend is comparable to those reported in the literature for climatic events such as the Younger Dryas and the mid-Holocene Optimum. The Mg/Ca ratios measured by two different analytical techniques (laser ablation and solution analysis) were always much higher than the values reported in the literature for the same species. Laser ablation appears deficient in terms of sample cleaning, and this study shows that the two techniques are not comparable and cannot be used interchangeably on the same sample. As regards solution analysis, the cleaning protocol for the treatment of Antarctic samples was improved, which made it possible to obtain reliable values, useful for palaeotemperature reconstructions. Nevertheless, the hypothesis remains plausible that in particular environments such as this one, with very low salinity and temperature, the incorporation of Mg into the shell is affected by these particular conditions and therefore does not follow the exponential relation with temperature widely demonstrated at other latitudes.
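The exponential Mg/Ca-temperature relation mentioned at the end of the abstract has the generic form Mg/Ca = B·exp(A·T); a minimal sketch of its inversion, with placeholder calibration constants (the thesis' species-specific calibration is not reproduced here):

```python
import math

# Generic exponential calibration Mg/Ca = B * exp(A * T).
# A and B below are hypothetical placeholders, NOT the species-specific
# calibration used in the thesis.
A, B = 0.09, 0.5

def temperature_from_mgca(mgca_mmol_mol):
    """Invert Mg/Ca = B * exp(A * T) for the calcification temperature T (°C)."""
    return math.log(mgca_mmol_mol / B) / A
```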
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies that avoid the bias carried by aggregated analyses. Starting from the collected disease counts and from expected disease counts calculated by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, without however leaving the preliminary-study perspective that an analysis of SMR indicators is asked to maintain. We implement control of the FDR, a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation in testing many null hypotheses of absence of risk. We use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, offers the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just by means of the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities p̂i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (FDR-hat) can be calculated for any set of p̂i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the p̂i's themselves. The FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the FDR-hat is not larger than a prefixed value; we call these FDR-hat based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate, over-estimation of the FDR causing a loss of power and under-estimation of the FDR producing a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true and the risk level in the latter areas. In summarizing the simulation results we always consider the FDR estimation in sets constituted by all areas whose p̂i is lower than a threshold t. We show graphs of the FDR-hat and of the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between FDR-hat and true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing level of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims; in such scenarios we have good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low, but specificity is high, and in such scenarios the use of a selection rule based on FDR-hat = 0.05 or FDR-hat = 0.10 can be suggested. In cases where the number of true alternative hypotheses (the number of truly high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on FDR-hat = 0.15 gain power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of a decision rule based on FDR-hat = 0.05; in such scenarios, decision rules based on FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 cannot be suggested, because the true FDR is actually much higher. As regards the relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
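A minimal sketch of the FDR-hat selection rule described above: areas are ranked by their posterior null probabilities p̂i, and the rejection set is grown as long as the running mean (the FDR-hat of the set) stays within the prefixed target. Array names and the target value are illustrative, not the thesis implementation.

```python
import numpy as np

def fdr_hat(posterior_null_probs, rejected_idx):
    """Estimated FDR of a rejection set: the mean posterior null probability over the set."""
    posterior_null_probs = np.asarray(posterior_null_probs)
    return float(np.mean(posterior_null_probs[rejected_idx]))

def select_high_risk(posterior_null_probs, fdr_target=0.05):
    """Grow the rejection set in order of increasing posterior null probability
    while the running FDR-hat stays within the prefixed target."""
    posterior_null_probs = np.asarray(posterior_null_probs)
    order = np.argsort(posterior_null_probs)
    running_mean = np.cumsum(posterior_null_probs[order]) / np.arange(1, len(order) + 1)
    k = int(np.searchsorted(running_mean, fdr_target, side="right"))
    return order[:k]  # indices of areas declared at high risk
```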
Abstract:
This study concerns the representation of space in Caribbean literature, both francophone and anglophone and, in particular, but not only, in Martinican literature, in the works of authors born on the island. The analysis focuses on the second half of the last century, a period in which the Martinican production of novels increased considerably and in which the representation and the role of space had a relevant place; the thesis thus explores the literary modalities of this representation. The work consists of 5 chapters, and the critical and methodological approaches are both analytical and comparative. The first chapter, "The Caribbean space: geography, history and society", presents the geographic context through an analysis of the major historical and political events that occurred in the Caribbean archipelago, in particular in the French Antilles, from the first colonization until the départementalisation. The first paragraph, "The colonized space: historical-political excursus", explores the history of the European colonization that marked forever the theatre of the relationship between Europe, Africa and the New World. This social situation set off a long and complex process of "re-appropriation and renegotiation of the space" (second paragraph), always the space of the Other, which concerns both Antillean society and the writers' universe. A series of questions is thus raised in the third paragraph, "Landscape and identity": what is the function of space in the process of identity construction? What are the literary forms and representations of space in the Caribbean context? Could writing be a tool for defining cultural identity, both individual and collective? The second chapter, "The literary representation of the Antillean space", is a methodological analysis of the notions of literary space and of the descriptive genre. The first paragraph, "The literary space of and in the novel", is an excursus through the theories of critics such as Blanchot, Bachelard, Genette and Greimas, and in particular the recent innovations of the 20th century; the second, "Space of the Antilles, space of the writing", is an attempt to apply these theories to the Antillean literary space. Finally, the last paragraph, "Signs on the page: the symbolic places of the Antillean novel landscape", presents an inventory of the most recurrent Antillean places (mornes, ravines, traces, cachots, En-ville, ...), symbols of history and of the past, described in literary works but according to new modalities of representation. The third chapter, the core of the thesis, "Re-drawing the map of the French Antilles", focuses the study of space representation on francophone literature, in particular on selected works of four Martinican writers: Roland Brival, Édouard Glissant, Patrick Chamoiseau and Raphaël Confiant. Through this section a spatial evolution emerges step by step, from the first to the second paragraph, whose titles are linked together: "The novel space evolution: from the forest of the morne... to the jungle of the ville". The virgin and uncontaminated space of the Antilles prior to the colonization, where the Indios lived in harmony with nature, finds a representation both in the works of Brival (Le sang du roucou, Le dernier des Aloukous) and in those of Glissant (Le Quatrième siècle, Ormerod). The arrival of the European colonizer brings a violent and sudden metamorphosis of the original space and landscape, together with the traditions and culture of the Caraïbes population.
These radical changes are visible in the works of Chamoiseau (Chronique des sept misères, Texaco, L'esclave vieil homme et le molosse, Livret des villes du deuxième monde, Un dimanche au cachot) and Confiant (Le Nègre et l'Amiral, Eau de Café, Ravines du devant-jour, Nègre marron), which explore the urban space of the creole En-ville. The fourth chapter represents the "2nd step: the anglophone novel space" in the exploration of the literary representation of space, through an analytical study of the works of three anglophone writers: the 19th-century author Lafcadio Hearn (A Midsummer Trip To the West Indies, Two Years in the French West Indies, Youma) and the contemporary authors Derek Walcott (Omeros, Map of the New World, What the Twilight says) and Edward Kamau Brathwaite (The Arrivants: A New World Trilogy). The anglophone voice of the Caribbean archipelago brings a very interesting contribution to the critical idea of a spatial evolution in the literary representation of space begun with the francophone production: "The spatial evolution goes on: from the Martinique Sketches of Hearn... to the modern bards of the Caribbean archipelago" is the linked title of the two paragraphs. The fifth chapter, "Extended look, space shared: the Caribbean archipelago", is a comparative analysis of the results achieved in the prior sections, through a dialogue between all the texts in the first paragraph, "Francophone and anglophone representations of space compared: differences and analogies". The last paragraph, instead, is an attempt to renegotiate the conventional notions of space and place, from a geographical and physical meaning to the new concept of "commonplace", not a synonym of prejudice but a "common place" of sharing and dialogue. The question posed in the last paragraph, "The 'commonplaces' of the physical and mental map of the Caribbean archipelago: toward a non-place?", contains the critical idea of the entire thesis.
Abstract:
Leber's hereditary optic neuropathy (LHON) is a mitochondrial disease characterized by a rapid loss of central vision and optic atrophy, due to the selective degeneration of retinal ganglion cells. The age of onset is around 20, the degenerative process is fast, and the second eye usually becomes affected within weeks or months. Even though this pathology is well known and has been well characterized, there are still open questions about its pathophysiology, such as the male prevalence, the incomplete penetrance and the tissue selectivity. This maternally inherited disease is caused by mutations in mitochondrially encoded genes of NADH:ubiquinone oxidoreductase (complex I) of the respiratory chain. Ninety percent of LHON cases are caused by one of the three common mitochondrial DNA mutations (11778/ND4, 14484/ND6 and 3460/ND1), and the remaining 10% are caused by rare pathogenic mutations, reported in the literature in one or a few families. Moreover, there is also a small subset of patients reported with new putative pathogenic nucleotide changes, which await confirmation. We here clarify some molecular aspects of LHON, mainly the incomplete penetrance and the role of rare mtDNA mutations or variants in LHON expression, and attempt a possible therapeutic approach using the cybrid cell model. We generated novel structural models for the mitochondrially encoded complex I subunits, and a conservation analysis and pathogenicity prediction were carried out for the reported LHON mutations. This in-silico approach allowed us to locate LHON pathogenic mutations in defined and conserved protein domains and can be a useful tool in the analysis of novel mtDNA variants with an unclear pathogenic/functional role. Four rare LHON pathogenic mutations were identified, confirming that the ND1 and ND6 genes are mutational hot spots for LHON. All these mutations had been described at least once before, and we validated their pathogenic role, suggesting the need for their screening in LHON diagnostic protocols. Two novel mtDNA variants with a possible pathogenic role were also identified in two independent branches of a large pedigree; functional studies are necessary to define their contribution to LHON in this family. It has also been demonstrated that the combination of rare polymorphic mtDNA variants is relevant in determining the maternal recurrence of myoclonus in unrelated LHON pedigrees. Thus, we suggest that particular mtDNA backgrounds and/or the presence of specific rare mutations may increase the pathogenic potential of the primary LHON mutations, thereby giving rise to the extraocular clinical features characteristic of the LHON "plus" phenotype. We identified the first molecular parameter that clearly discriminates LHON affected individuals from asymptomatic carriers: the mtDNA copy number. This provides a valuable mechanism for future investigations of the variable penetrance in LHON. However, the increased mtDNA content in LHON individuals was not correlated with the functional polymorphism G1444A of PGC-1 alpha, the master regulator of mitochondrial biogenesis, but may be due to the expression of genes involved in this signaling pathway, such as PGC-1 alpha/beta and Tfam. Future studies will be necessary to identify the biochemical effects of the rare pathogenic mutations and to validate the novel candidate mutations described here, in terms of cellular bioenergetic characterization of these variants. Moreover, we were not able to induce mitochondrial biogenesis in cybrid cell lines using bezafibrate.
However, other cell line models are available, such as fibroblasts harboring LHON mutations, and other approaches can be used to trigger mitochondrial biogenesis.
Abstract:
The research activity carried out during the PhD course was focused on the development of mathematical models of some cognitive processes and their validation by means of data present in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two different projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to stimuli in the external world. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, like perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, in order to succeed in grouping together the characteristics of the same object (binding problem) and in keeping segregated the properties belonging to different objects simultaneously present (segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory, postulated by Singer (2003), according to which: 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and in Chapter 1.2 we present two neural network models for object recognition based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt rules" (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of the neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words.
To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al, 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas and is involved in the control of orientation to external events (Wallace et al, 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al, 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES) but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al 2001). This phenomenon can be better understood through mathematical models: the use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B.
Rowland during the 6-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA) within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons in conditions of cortical activation and deactivation. The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional and with it deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and it provides a biologically plausible hypothesis about the underlying circuitry.
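A minimal sketch of the standard multisensory enhancement index used in the SC literature (percent gain of the cross-modal response over the most effective unisensory response), which also makes the principle of inverse effectiveness easy to read off; the firing-rate values are illustrative, not recorded data.

```python
def multisensory_enhancement(resp_cross_modal, resp_visual, resp_auditory):
    """Percent enhancement of the cross-modal response over the most
    effective unisensory response (the index used in SC studies)."""
    best_unisensory = max(resp_visual, resp_auditory)
    return 100.0 * (resp_cross_modal - best_unisensory) / best_unisensory

# Illustrative firing rates (impulses/trial):
print(multisensory_enhancement(30.0, 20.0, 15.0))  # strong inputs: +50%
print(multisensory_enhancement(9.0, 4.0, 3.0))     # weak inputs: +125% (inverse effectiveness)
```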