18 results for Coherent overall approach
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Current trends in software development must face a multiplicity of diverse activities and interaction styles characterizing complex and distributed application domains, while ensuring that the resulting dynamics exhibit some degree of order, i.e. in terms of system evolution and desired equilibrium. Autonomous agents and Multiagent Systems are argued in the literature to be among the most immediate approaches for describing this kind of challenge. Indeed, agent research seems to be converging towards the definition of renewed abstraction tools aimed at better capturing the new demands of open systems. Besides agents, which are assumed to be autonomous entities pursuing a series of design objectives, Multiagent Systems introduce new notions as first-class entities, aimed, above all, at modeling institutional/organizational entities, introduced for normative regulation, interaction and teamwork management, as well as environmental entities, introduced as resources to further support and regulate agent work. The starting point of this thesis is the recognition that both organizations and environments can be rooted in a unifying perspective. Whereas recent research in agent systems offers a set of diverse approaches, each specifically addressing at most one of the aspects mentioned above, this work proposes a unifying approach in which both agents and their organizations can be straightforwardly situated in properly designed working environments. Along this line, the work pursues the reconciliation of environments with sociality, of social interaction with environment-based interaction, and of environmental resources with organizational functionalities, with the aim of smoothly integrating the various aspects of complex and situated organizations in a coherent programming approach.
Rooted in the Agents and Artifacts (A&A) meta-model, recently introduced in the context of both agent-oriented software engineering and agent programming, the thesis promotes the notion of Embodied Organizations: computational infrastructures attaining a seamless integration of agents, organizations and environmental entities.
Abstract:
Myocardial perfusion quantification by means of Contrast-Enhanced Cardiac Magnetic Resonance images relies on time-consuming frame-by-frame manual tracing of regions of interest. In this Thesis, a novel automated technique for myocardial segmentation and non-rigid registration as a basis for perfusion quantification is presented. The proposed technique consists of three steps: reference frame selection, myocardial segmentation and non-rigid registration. In the first step, the reference frame in which both endo- and epicardial segmentation will be performed is chosen. Endocardial segmentation is achieved by means of a statistical region-based level-set technique followed by a curvature-based regularization motion. Epicardial segmentation is achieved by means of an edge-based level-set technique, again followed by a regularization motion. To account for the changes in position, size and shape of the myocardium throughout the sequence due to out-of-plane respiratory motion, a non-rigid registration algorithm is required. The proposed non-rigid registration scheme consists of a novel multiscale extension of the normalized cross-correlation algorithm in combination with level-set methods. The myocardium is then divided into standard segments. Contrast enhancement curves are computed by measuring the mean pixel intensity of each segment over time, and perfusion indices are extracted from each curve. The overall approach has been tested on synthetic and real datasets. For validation purposes, the sequences were manually traced by an experienced interpreter, and contrast enhancement curves as well as perfusion indices were computed. Comparisons between automatically extracted and manually obtained contours and enhancement curves showed high inter-technique agreement.
Comparisons of perfusion indices computed with both approaches against quantitative coronary angiography and visual interpretation demonstrated that the two techniques have similar diagnostic accuracy. In conclusion, the proposed technique allows fast, automated and accurate measurement of intra-myocardial contrast dynamics, and may thus address the strong clinical need for quantitative evaluation of myocardial perfusion.
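The curve-extraction step described above reduces, in essence, to averaging pixel intensities per segment per frame. A minimal Python sketch of that idea (array shapes, function names and the maximal-upslope index are illustrative assumptions, not the thesis' exact formulation):

```python
import numpy as np

def enhancement_curves(frames, segment_masks):
    """Mean pixel intensity of each myocardial segment over time.

    frames: array (T, H, W) of registered perfusion frames.
    segment_masks: array (S, H, W) of boolean masks, one per standard segment.
    Returns: array (S, T) of contrast-enhancement curves.
    """
    curves = np.empty((segment_masks.shape[0], frames.shape[0]))
    for s, mask in enumerate(segment_masks):
        # boolean mask selects the segment's pixels in every frame at once
        curves[s] = frames[:, mask].mean(axis=1)
    return curves

def upslope_index(curve, dt=1.0):
    """Maximal upslope of an enhancement curve, one common
    semi-quantitative perfusion index (illustrative choice)."""
    return np.max(np.diff(curve)) / dt
```

A perfusion index is then computed per segment from its curve, e.g. `upslope_index(curves[s])`.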
Abstract:
The coordination of tax systems is an original and characteristic issue of Community law. The thesis explores its consequences from several perspectives.
Abstract:
Nowadays, in developed countries, excessive food intake, in conjunction with decreased physical activity, has led to an increase in lifestyle-related diseases, such as obesity, cardiovascular diseases, type-2 diabetes, a range of cancer types and arthritis. The socio-economic importance of such lifestyle-related diseases has encouraged countries to increase their research efforts, and many projects have recently been initiated in research focusing on the relationship between food and health. Thanks to these efforts and to the growing availability of technologies, food companies are beginning to develop healthier food. The need for rapid and affordable methods to help the food industry in ingredient selection has stimulated the development of in vitro systems that simulate the physiological processes to which food components are submitted when administered in vivo. One of the most promising tools now available appears to be in vitro digestion, which aims at predicting, in a comparative way among analogous food products, the bioaccessibility of the nutrients of interest. A foodomics approach was chosen in this work to evaluate the modifications occurring during the in vitro digestion of selected protein-rich food products. Protein breakdown was measured via NMR spectroscopy, the only technique capable of observing, directly in the simulated gastric and duodenal fluids, the soluble oligo- and polypeptides released during the in vitro digestion process. The overall approach pioneered along this PhD work has been discussed and promoted in a large scientific community, with specialists networked under the INFOGEST COST Action, which recently released a harmonized protocol for in vitro digestion.
NMR spectroscopy, when used in tandem with in vitro digestion, gives rise to a new concept that provides an additional attribute for describing food quality: comparative digestibility, which measures the improvement in nutrient bioaccessibility.
Abstract:
This work is structured as follows. In Section 1 we discuss the clinical problem of heart failure. In particular, we present the phenomenon known as ventricular mechanical dyssynchrony: its impact on cardiac function, the therapy for its treatment and the methods for its quantification. Specifically, we describe the conductance catheter and its use for the measurement of dyssynchrony. At the end of Section 1, we propose a new set of indexes to quantify dyssynchrony, which are studied and validated thereafter. In Section 2 we describe the studies carried out in this work: we report the experimental protocols, and we present and discuss the results obtained. Finally, we report the overall conclusions drawn from this work and outline future work and possible clinical applications of our results. Ancillary studies carried out during this work, mainly to investigate several aspects of cardiac resynchronization therapy (CRT), are mentioned in the Appendix. -------- Ventricular mechanical dyssynchrony plays a regulating role already in normal physiology but is especially important in pathological conditions, such as hypertrophy, ischemia, infarction, or heart failure (Chapters 1, 2.). Several prospective randomized controlled trials supported the clinical efficacy and safety of cardiac resynchronization therapy (CRT) in patients with moderate or severe heart failure and ventricular dyssynchrony. CRT resynchronizes ventricular contraction by simultaneous pacing of both the left and right ventricle (biventricular pacing) (Chapter 1.). The conductance catheter method has been used extensively to assess global systolic and diastolic ventricular function and, more recently, the ability of this instrument to pick up multiple segmental volume signals has been used to quantify mechanical ventricular dyssynchrony.
Specifically, novel indexes based on volume signals acquired with the conductance catheter were introduced to quantify dyssynchrony (Chapters 3, 4.). The present work aimed to describe the characteristics of the conductance-volume signals, to investigate the performance of the indexes of ventricular dyssynchrony described in the literature, and to introduce and validate improved dyssynchrony indexes. Moreover, using the conductance catheter method and the new indexes, the clinical problem of ventricular pacing site optimization was addressed, and the measurement protocol to adopt for hemodynamic tests on cardiac pacing was investigated. In accordance with the aims of the work, in addition to the classical time-domain parameters, a new set of indexes has been extracted, based on a coherent averaging procedure and on spectral and cross-spectral analysis (Chapter 4.). Our analyses were carried out on patients with indications for electrophysiologic study or device implantation (Chapter 5.). For the first time, besides patients with heart failure, indexes of mechanical dyssynchrony based on the conductance catheter were extracted and studied in a population of patients with preserved ventricular function, providing information on the normal range of such values. By performing a frequency-domain analysis and by applying an optimized coherent averaging procedure (Chapter 6.a.), we were able to describe some characteristics of the conductance-volume signals (Chapter 6.b.). We unmasked the presence of considerable beat-to-beat variations in dyssynchrony that seemed more frequent in patients with ventricular dysfunction and appeared to play a role in discriminating patients. These non-recurrent mechanical ventricular non-uniformities are probably the expression of the substantial beat-to-beat hemodynamic variations often associated with heart failure and due to cardiopulmonary interaction and conduction disturbances.
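The coherent averaging idea used above can be sketched in Python: beats are aligned on their onsets, the ensemble average retains the recurrent (periodic) component, and the residual power quantifies the non-recurrent beat-to-beat variations. This is a minimal illustration with hypothetical function and variable names, not the thesis' optimized procedure:

```python
import numpy as np

def coherent_average(signal, beat_onsets, beat_len):
    """Average a segmental volume signal across beats aligned on their onsets.

    Synchronized components reinforce in the average, while non-recurrent
    beat-to-beat variations cancel out; their residual power can serve as
    an index of the non-periodic component of the signal.
    """
    beats = np.array([signal[i:i + beat_len]
                      for i in beat_onsets if i + beat_len <= len(signal)])
    avg = beats.mean(axis=0)                       # coherent (periodic) part
    residual_power = ((beats - avg) ** 2).mean()   # non-periodic part
    return avg, residual_power
```

Spectral and cross-spectral indexes would then be computed from `avg` and from the beat-to-beat residuals.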
We investigated how the coherent averaging procedure may affect or refine the conductance-based indexes; in addition, we proposed and tested a new set of indexes that quantify the non-periodic components of the volume signals. Using the new set of indexes, we studied the acute effects of CRT and of right ventricular pacing in patients with heart failure and in patients with preserved ventricular function. In the overall population we observed a correlation between the hemodynamic changes induced by pacing and the indexes of dyssynchrony, which may have practical implications for hemodynamic-guided device implantation. The optimal ventricular pacing site for patients with conventional indications for pacing remains controversial. The majority of these patients do not meet current clinical indications for CRT pacing. Thus, we carried out an analysis to compare the impact of several ventricular pacing sites on global and regional ventricular function and dyssynchrony (Chapter 6.c.). We observed that right ventricular pacing worsens cardiac function in patients with and without ventricular dysfunction unless the pacing site is optimized. CRT preserves left ventricular function in patients with normal ejection fraction and improves function in patients with poor ejection fraction despite no clinical indication for CRT. Moreover, the analysis of the results obtained using the new indexes of regional dyssynchrony suggests that the pacing site may influence overall global ventricular function depending on its relative effects on regional function and synchrony. Another clinical problem investigated in this work is the optimal right ventricular lead location for CRT (Chapter 6.d.).
As in the previous analysis, using novel parameters describing local synchrony and efficiency, we tested the hypothesis, and demonstrated, that biventricular pacing with alternative right ventricular pacing sites produces acute improvement of ventricular systolic function and improves mechanical synchrony compared to standard right ventricular pacing. Although no specific right ventricular location was shown to be superior during CRT, the right ventricular pacing site that produced the optimal acute hemodynamic response varied between patients. Acute hemodynamic effects of cardiac pacing are conventionally evaluated after stabilization periods, whose applied duration varies considerably across cardiac pacing studies. With an ad hoc protocol (Chapter 6.e.) and indexes of mechanical dyssynchrony derived from the conductance catheter, we demonstrated that the use of stabilization periods during the evaluation of cardiac pacing may mask early changes in systolic and diastolic intra-ventricular dyssynchrony. In fact, at the onset of ventricular pacing, the main changes in dyssynchrony and ventricular performance occur within a 10-s time span, initiated by the changes in ventricular mechanical dyssynchrony induced by aberrant conduction and followed by a partial or even complete recovery. It had already been demonstrated in normal animals that ventricular mechanical dyssynchrony may act as a physiologic modulator of cardiac performance together with heart rate, contractile state, preload and afterload. The present observation, which shows the compensatory mechanism of mechanical dyssynchrony, suggests that ventricular dyssynchrony may be regarded as an intrinsic cardiac property, with baseline dyssynchrony at an increased level in heart failure patients.
To provide an independent system for cardiac output estimation, in order to confirm the results obtained with the conductance volume method, we developed and validated a novel technique to apply the Modelflow method (a method that derives an aortic flow waveform from arterial pressure by simulation of a non-linear three-element aortic input impedance model; Wesseling et al., 1993) to the left ventricular pressure signal, instead of the arterial pressure used in the classical approach (Chapter 7.). The results confirmed that in patients without valve abnormalities undergoing conductance catheter evaluations, continuous monitoring of cardiac output using the intra-ventricular pressure signal is reliable. Thus, cardiac output can be monitored quantitatively and continuously with a simple and low-cost method. During this work, additional studies were carried out to investigate several areas of uncertainty concerning CRT. The results of these studies are briefly presented in the Appendix: the long-term survival of patients treated with CRT in clinical practice, the effects of CRT in patients with mild symptoms of heart failure and in very old patients, limited thoracotomy as a second-choice alternative to transvenous implant for CRT delivery, the evolution and prognostic significance of the diastolic filling pattern in CRT, the selection of candidates for CRT with echocardiographic criteria, and the prediction of response to the therapy.
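To illustrate the pressure-to-flow idea behind Modelflow: the actual method uses a non-linear, pressure-dependent aortic impedance, but a simplified linear three-element (windkessel) analogue already shows how a flow waveform can be recovered from a pressure signal. The sketch below is exactly that simplified analogue, with purely illustrative parameter values and units, not the Wesseling model itself:

```python
def flow_from_pressure(pressure, dt, Zc=0.05, Cw=1.5, Rp=1.0):
    """Derive a flow waveform from a pressure signal via a simplified
    linear three-element windkessel: characteristic impedance Zc,
    compliance Cw, peripheral resistance Rp (illustrative values).

    pressure: sequence of pressure samples; dt: sampling interval.
    Returns a list of flow samples; mean flow estimates cardiac output.
    """
    pw = pressure[0]          # windkessel (peripheral) pressure state
    flow = []
    for p in pressure:
        q = max((p - pw) / Zc, 0.0)       # valve: no backward flow
        pw += dt * (q - pw / Rp) / Cw     # forward-Euler state update
        flow.append(q)
    return flow
```

With a steady input pressure the model relaxes to the equilibrium where the inflow through Zc equals the outflow through Rp; with a real pulsatile pressure trace the output is a beat-by-beat flow waveform whose mean gives a continuous cardiac output estimate.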
Abstract:
The Assimilation in the Unstable Subspace (AUS) method was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi and Carrassi, to minimize analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by observations. In the AUS scheme the assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a Quasi-Geostrophic model and a high-dimensional, primitive equation ocean model; the experiments include both fixed and “adaptive” observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error, with reasonable computational costs for data assimilation compared, for example, to a prohibitively expensive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observation system simulation experiments in a perfect model setting, and with two types of model error as well: random and systematic. In the different configurations examined, and in a perfect model setting, AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study, we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect to standard AUS as well. In particular, it boosts the efficiency of tracking regime changes, at low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand.
In Numerical Weather Prediction models, tuning of parameters — and in particular an estimate of the model error covariance matrix — may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.
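The two ingredients named above, the Lorenz 1963 model and the breeding cycle that estimates the unstable direction used by AUS, can be sketched compactly. This is an illustrative fragment (function names, integrator and rescaling size are assumptions), not the AUS-BDAS implementation itself:

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 1963 convective model."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta step of the Lorenz 63 model."""
    k1 = lorenz63(state)
    k2 = lorenz63(state + 0.5 * dt * k1)
    k3 = lorenz63(state + 0.5 * dt * k2)
    k4 = lorenz63(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def bred_vector(control, pert0, dt, n_steps, size=1e-3):
    """One breeding cycle: evolve a control and a perturbed trajectory,
    then rescale their difference to a fixed size. The resulting bred
    vector estimates the dominant instability direction, along which the
    AUS analysis increment is confined."""
    pert = control + pert0
    for _ in range(n_steps):
        control = rk4_step(control, dt)
        pert = rk4_step(pert, dt)
    diff = pert - control
    return size * diff / np.linalg.norm(diff)
```

In an assimilation cycle, the analysis increment would be proportional to this bred vector, scaled by the observation-minus-forecast misfit projected onto it.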
Abstract:
The topics I came across during my period as a Ph.D. student are mainly two. The first concerns new organocatalytic protocols for Mannich-type reactions mediated by Cinchona alkaloid derivatives (Scheme I, left); the second regards the study of a new approach towards the enantioselective total synthesis of Aspirochlorine, a potent gliotoxin that recent studies indicate to be a highly selective and active agent against fungi (Scheme I, right). At the beginning of 2005 I had the chance to join the group of Prof. Alfredo Ricci at the Department of Organic Chemistry of the University of Bologna, starting my PhD studies. During the first period I studied a new homogeneous organocatalytic aza-Henry reaction using Cinchona alkaloid derivatives as chiral base catalysts, with good results. Soon after, we introduced a new protocol which allowed the in situ synthesis of N-carbamoyl imines, scarcely stable, moisture-sensitive compounds. For this purpose we used α-amido sulfones, bench-stable white crystalline solids, as imine precursors (Scheme II). In particular, using chiral phase-transfer catalysis, we were able to obtain the aza-Henry adducts with a broad range of substituents as the R-group and excellent results, unprecedented for Mannich-type transformations (Scheme II). With the optimised protocol in hand, we extended the methodology to other Mannich-type reactions. We applied the new method to the Mannich, Strecker and Pudovik (hydrophosphonylation of imines) reactions with very good results in terms of enantioselectivity and yield, broadening the usefulness of this novel protocol. The Mannich reaction was certainly the most extensively studied work in this thesis (Scheme III).
Initially we developed the reaction with α-amido sulfones as imine precursors and non-commercially available malonates, with excellent results in terms of yields and enantioselectivities.3 In this particular case a catalyst loading of only 1 mol% was sufficient, very low for organocatalytic processes. We then developed a new Mannich reaction using simpler malonates, such as dimethyl malonate.4 Under the new optimised conditions the reaction provided slightly lower enantioselectivities than the previous protocol, but the Mannich adducts were very versatile for obtaining β3-amino acids. Furthermore, we performed the first addition of cyclic β-ketoesters to α-amido sulfones, obtaining the corresponding products in good yield with high levels of diastereomeric and enantiomeric excess (Scheme III). Further studies concerned the Strecker reaction mediated by Cinchona alkaloid phase-transfer quaternary ammonium salt derivatives, using acetone cyanohydrin, a relatively harmless cyanide source (Scheme IV). The reaction proceeded very well, providing the corresponding α-amino nitriles in good yields and enantiomeric excesses. Finally, we developed two new complementary methodologies for the hydrophosphonylation of imines (Scheme V). Owing to the low stability of the products derived from aromatic imines, we performed the reactions under mild homogeneous basic conditions using quinine as a chiral base catalyst, giving the α-aryl-α-amido phosphonic acid esters as products (Scheme V, top).6 On the other hand, we performed the addition of dialkyl phosphites to aliphatic imines using chiral Cinchona alkaloid phase-transfer quaternary ammonium salt derivatives with our methodology based on α-amido sulfones (Scheme V, bottom). The results were good for both procedures, covering a broad range of α-amino phosphonic acid esters. During the second year of my Ph.D. studies, I spent six months in the group of Prof. Steven V.
Ley, at the Department of Chemistry of the University of Cambridge, in the United Kingdom. During this fruitful period I was involved in a project concerning the enantioselective synthesis of Aspirochlorine. We provided a new route for the synthesis of a key intermediate, reducing the number of steps and increasing the overall yield. We then introduced a new enantioselective spirocyclisation for the synthesis of a chiral building block for the completion of the synthesis (Scheme VI).
Abstract:
Customer satisfaction has traditionally been studied and measured regardless of the time elapsed since the purchase. Some studies have recently reopened the debate about the temporal pattern of satisfaction. This research aims to explain why “how you evaluate a service depends on when you evaluate it” on the basis of the theoretical framework proposed by Construal-Level Theory (CLT). Although an empirical investigation is still lacking, the literature does not deny that CLT can also be applied to past events. Moreover, some studies support the idea that satisfaction is a good predictor of future intentions, while others do not. On the basis of CLT, we argue that these inconsistent results are due to the different construal levels of the information pertaining to retrospective and prospective evaluations. Building on the Two-Factor Theory, we explain the persistence of certain attributes’ representations over time according to their relationship with overall performance. We present and discuss three experiments and one field study that were conducted a) to test the extensibility of CLT to past events, b) to disentangle memory and construal effects, c) to study the effect of different temporal perspectives on overall satisfaction judgements, and d) to investigate the temporal shift of the determinants of customer satisfaction as a function of temporal distance.
Abstract:
The treatment of Cerebral Palsy (CP) is considered the “core problem” of the whole field of pediatric rehabilitation. The primary role of this pathology can be ascribed to two main aspects. First of all, CP is the most frequent form of disability in childhood (one new case per 500 live births (1)); secondly, the functional recovery of the “spastic” child is, historically, the clinical field in which the majority of therapeutic methods and techniques (physiotherapy, orthotics, pharmacology, orthopedic surgery, neurosurgery) were first applied and tested. The currently accepted definition of CP – a group of disorders of the development of movement and posture causing activity limitation (2) – is the result of a recent update by the World Health Organization to the language of the International Classification of Functioning, Disability and Health, from the original proposal of Ingram – a persistent but not unchangeable disorder of posture and movement – dated 1955 (3). This definition considers CP a permanent ailment, i.e. a “fixed” condition, which can nevertheless be modified both functionally and structurally through the child's spontaneous evolution and the treatments carried out during childhood. The lesion that causes the palsy occurs in a structurally immature brain in the pre-, peri- or post-birth period (but only during the first months of life). The most frequent causes of CP are: prematurity, insufficient cerebral perfusion, arterial haemorrhage, venous infarction, hypoxia of various origins (for example from the ingestion of amniotic fluid), malnutrition, infection and maternal or fetal poisoning. Traumas and malformations must also be included among these causes. The lesion, whether focal or spread over the nervous system, impairs the functioning of the whole Central Nervous System (CNS).
As a consequence, it affects the construction of the adaptive functions (4), first of all posture control, locomotion and manipulation. The palsy itself does not vary over time; however, it assumes an unavoidable “evolutionary” character when, during growth, the child is required to meet new and different needs through the construction of new and different functions. It is essential to consider that, clinically, CP is not only a direct expression of the structural impairment, that is, of etiology, pathogenesis and lesion timing, but is mainly the manifestation of the path followed by the CNS to “re”-construct the adaptive functions “despite” the presence of the damage. “Palsy” is “the form of the function that is implemented by an individual whose CNS has been damaged in order to satisfy the demands coming from the environment” (4). Therefore, only general relations can be established between lesion site, nature and size on the one hand, and palsy and recovery processes on the other. It is quite common to observe that children with very similar neuroimaging can have very different clinical manifestations of CP and, on the other hand, that children with very similar motor behaviors can have completely different lesion histories. A very clear example of this is represented by the hemiplegic forms, which show bilateral hemispheric lesions in a high percentage of cases. The first section of this thesis is aimed at guiding the interpretation of CP. First of all, the issue of the detection of the palsy is treated from a historical viewpoint. Then, an extended analysis of the current, internationally accepted definition of CP is provided. The definition is outlined first in terms of a space dimension and then of a time dimension, highlighting where this definition is unacceptably lacking.
The last part of the first section further stresses the importance of shifting from the traditional concept of CP as a palsy of development (defect analysis) towards the notion of development of the palsy, i.e., as the product of the relationship that the individual nevertheless tries to dynamically build with the surrounding environment (resource semeiotics), starting and growing from a different availability of resources, needs, dreams, rights and duties (4). In the scientific and clinical community, no common classification system of CP has so far been universally accepted. Moreover, no standard operative method or technique has been acknowledged to effectively assess the different disabilities and impairments exhibited by children with CP. CP is still “an artificial concept, comprising several causes and clinical syndromes that have been grouped together for a convenience of management” (5). The lack of standard and common protocols able to effectively diagnose the palsy, and consequently to establish specific treatments and prognoses, is mainly due to the difficulty of elevating this field to a level based on scientific evidence. A solution aimed at overcoming the currently incomplete treatment of CP children is the systematic clinical adoption of objective tools able to measure motor defects and movement impairments. The widespread application of reliable instruments and techniques able to objectively evaluate both the form of the palsy (diagnosis) and the efficacy of the treatments provided (prognosis) constitutes a valuable means of validating care protocols, establishing the efficacy of classification systems, and assessing the validity of definitions. Since the ’80s, instruments specifically oriented to the analysis of human movement have been advantageously designed and applied in the context of CP with the aim of measuring motor deficits and, especially, gait deviations.
The gait analysis (GA) technique has been increasingly used over the years to assess, analyze, classify, and support the process of clinical decision making, allowing for a complete investigation of gait with increased temporal and spatial resolution. GA has provided a basis for improving the outcome of surgical and non-surgical treatments and for introducing a new modus operandi in the identification of defects and of functional adaptations to musculoskeletal disorders. Historically, the first laboratories set up for gait analysis developed their own protocols (sets of procedures for data collection and data reduction) independently, according to the performance of the technologies available at the time. In particular, stereophotogrammetric systems, mainly based on optoelectronic technology, soon became the gold standard for motion analysis, and have been successfully applied especially for scientific purposes. Nowadays optoelectronic systems have significantly improved their performance in terms of spatial and temporal resolution; however, many laboratories continue to use protocols designed around the technology available in the ’70s, which are now out of date. Furthermore, these protocols are not coherent either in their biomechanical models or in their data collection procedures. In spite of these differences, GA data are shared, exchanged and interpreted irrespective of the adopted protocol, without full awareness of the extent to which these protocols are compatible and comparable with each other. Following the extraordinary advances in computer science and electronics, new systems for GA, no longer based on optoelectronic technology, are now becoming available. These are the Inertial and Magnetic Measurement Systems (IMMSs), based on miniature MEMS (microelectromechanical systems) inertial sensor technology.
These systems are cost-effective, wearable and fully portable motion analysis systems; these features give IMMSs the potential to be used both outside specialized laboratories and to collect consecutive series of tens of gait cycles. The recognition and selection of the most representative gait cycle is then easier and more reliable, especially in CP children, considering their relevant gait cycle variability. The second section of this thesis is focused on GA. In particular, it is firstly aimed at examining the differences among the five most representative GA protocols, in order to assess the state of the art with respect to inter-protocol variability. The design of a new protocol is then proposed and presented, with the aim of performing gait analysis on CP children by means of an IMMS. The protocol, named ‘Outwalk’, contains original and innovative solutions oriented at obtaining joint kinematics with calibration procedures that are extremely comfortable for the patients. The results of a first in-vivo validation of Outwalk on healthy subjects are then provided. In particular, this study was carried out by comparing Outwalk, used in combination with an IMMS, with a reference protocol and an optoelectronic system. In order to set up a more accurate and precise comparison of the systems and the protocols, ad hoc methods were designed, and an original formulation of the statistical parameter known as the coefficient of multiple correlation was developed and effectively applied. On the basis of the experimental design proposed for the validation on healthy subjects, a first assessment of Outwalk, together with an IMMS, was also carried out on CP children. The third section of this thesis is dedicated to the treatment of walking in CP children. Commonly prescribed treatments addressing gait abnormalities in CP children include physical therapy, surgery (orthopedic and rhizotomy), and orthoses.
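For reference, the standard (Kadaba-style) coefficient of multiple correlation, the parameter of which the thesis develops an original reformulation not reproduced here, can be sketched as follows; the function name and array layout are illustrative assumptions:

```python
import numpy as np

def cmc(waveforms):
    """Standard coefficient of multiple correlation for G gait waveforms
    of T frames each (rows of `waveforms`); values near 1 indicate high
    inter-waveform (e.g. inter-protocol) similarity."""
    Y = np.asarray(waveforms, dtype=float)   # shape (G, T)
    G, T = Y.shape
    frame_mean = Y.mean(axis=0)              # mean curve across waveforms
    grand_mean = Y.mean()                    # overall mean value
    within = ((Y - frame_mean) ** 2).sum() / (G * (T - 1))
    total = ((Y - grand_mean) ** 2).sum() / (G * T - 1)
    return np.sqrt(1.0 - within / total)
```

Identical waveforms give a CMC of 1; note that for very dissimilar curves the ratio can exceed 1 and the square root becomes undefined, a known limitation of this formulation.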
The orthotic approach is conservative, being reversible, and is widespread in many therapeutic regimes. Orthoses are used to improve the gait of children with CP by preventing deformities, controlling joint position, and offering an effective lever for the ankle joint. They are also prescribed with the additional aims of increasing walking speed, improving stability, preventing stumbling, and decreasing muscular fatigue. Ankle-foot orthoses (AFOs) with a rigid ankle are primarily designed to prevent equinus and other foot deformities, with a positive effect also on more proximal joints. However, AFOs prevent the natural excursion of the tibio-tarsic joint during the second rocker, thus hampering the natural leaning progression of the whole body under the effect of inertia (6). A new modular (submalleolar) astragalus-calcanear orthosis, named OMAC, has recently been proposed with the intention of replacing the prescription of AFOs in those CP children exhibiting a flat and valgus-pronated foot. The aim of this section is thus to present the mechanical and technical features of the OMAC by means of an accurate description of the device. In particular, the full text of the deposited Italian patent is provided. A preliminary validation of OMAC with respect to AFO is also reported, as resulting from an experimental campaign on diplegic CP children over a three-month period, aimed at quantitatively assessing the benefit provided by the two orthoses on walking and at qualitatively evaluating the changes in quality of life and motor abilities. As already stated, CP is universally considered a persistent but not unchangeable disorder of posture and movement.
Conversely to this definition, some clinicians (4) have recently pointed out that movement disorders may be primarily caused by the presence of perceptive disorders, where perception is not merely the acquisition of sensory information but an active process aimed at guiding the execution of movements through the integration of sensory information properly representing the state of one’s body and of the environment. Children with perceptive impairments show an overall fear of moving and the onset of strongly unnatural walking schemes directly caused by the presence of perceptive system disorders. The fourth section of the thesis thus deals with accurately defining the perceptive impairment exhibited by diplegic CP children. A detailed description of the clinical signs revealing the presence of the perceptive impairment, and a classification scheme of the clinical aspects of perceptual disorders, is provided. In the end, a functional reaching test is proposed as an instrumental test able to disclose the perceptive impairment. References 1. Prevalence and characteristics of children with cerebral palsy in Europe. Dev Med Child Neurol. 2002 Sep;44(9):633-640. 2. Bax M, Goldstein M, Rosenbaum P, Leviton A, Paneth N, Dan B, et al. Proposed definition and classification of cerebral palsy, April 2005. Dev Med Child Neurol. 2005 Aug;47(8):571-576. 3. Ingram TT. A study of cerebral palsy in the childhood population of Edinburgh. Arch Dis Child. 1955 Apr;30(150):85-98. 4. Ferrari A, Cioni G. The spastic forms of cerebral palsy: a guide to the assessment of adaptive functions. Milan: Springer; 2009. 5. Olney SJ, Wright MJ. Cerebral palsy. In: Campbell S, et al. Physical Therapy for Children. 2nd ed. Philadelphia: Saunders; 2000:533-570. 6. Desloovere K, Molenaers G, Van Gestel L, Huenaerts C, Van Campenhout A, Callewaert B, et al. How can push-off be preserved during use of an ankle foot orthosis in children with hemiplegia? A prospective controlled study. Gait Posture. 2006 Oct;24(2):142-151.
Resumo:
The possibility of combining different functionalities in a single device is of great relevance for the further development of organic electronics in integrated components and circuitry. Organic light-emitting transistors (OLETs) have been demonstrated to combine in a single device the electrical switching functionality of a field-effect transistor and the capability of light generation. A novel strategy in OLET realization is the tri-layer vertical hetero-junction. This configuration is similar to the bi-layer one, except for the presence of a new middle layer between the two transport layers. This “recombination” layer presents high emission quantum efficiency and an OLED-like (Organic Light-Emitting Diode) vertical bulk mobility value. The key idea of the vertical tri-layer hetero-junction approach to realizing OLETs is that each layer has to be optimized according to its specific function (charge transport, energy transfer, radiative exciton recombination). Clearly, matching the overall device characteristics with the functional properties of the single materials composing the active region of the device is a great challenge that requires a deep investigation of the morphological, optical and electrical features of the system. As in the case of bi-layer-based OLETs, it is clear that the interfaces between the dielectric and the bottom transport layer and between the recombination and the top transport layer are crucial for guaranteeing good ambipolar field-effect electrical characteristics. Moreover, the interfaces between the bottom transport and the recombination layer and between the recombination and the top transport layer should provide favourable conditions for charge percolation into the recombination layer and the formation of excitons. An organic light-emitting transistor based on the tri-layer approach, with an external quantum efficiency outperforming the OLED state of the art, has recently been demonstrated [Capelli et al., Nat. Mater. 9 (2010) 496-503], widening the scientific and technological interest in this field of research.
Resumo:
Over the past few years, the switch towards renewable sources for energy production has come to be considered necessary for the future sustainability of the world environment. Hydrogen is one of the most promising energy vectors for the storage of low-density renewable sources such as wind, biomass and sun. The production of hydrogen by the steam-iron process could be one of the most versatile approaches for the employment of different reducing bio-based fuels. The steam-iron process is a two-step chemical looping reaction based (i) on the reduction of an iron-based oxide with an organic compound, followed by (ii) the reoxidation of the reduced solid material by water, which leads to the production of hydrogen. The overall reaction is the water-mediated oxidation of the organic fuel (as in gasification or reforming processes), but the inherent separation of the two semi-reactions allows the production of carbon-free hydrogen. In this thesis, a steam-iron cycle with methanol is proposed, and three different oxides with the generic formula AFe2O4 (A = Co, Ni, Fe) are compared in order to understand how chemical properties and structural differences affect the productivity of the overall process. The modifications occurring in the samples are investigated in depth through the analysis of the used materials. A specific study on the CoFe2O4-based process, using both classical and in-situ/ex-situ analysis, is reported, employing characterization techniques such as FTIR spectroscopy, TEM, XRD, XPS, BET, TPR and Mössbauer spectroscopy.
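The two-step cycle described above can be sketched with balanced equations, taking magnetite as an illustrative iron oxide; the thesis actually studies AFe2O4 spinels, and in practice the reduction of the solid may be only partial.

```latex
% Step (i): reduction of the oxide by methanol (illustrative, full reduction to Fe)
3\,\mathrm{Fe_3O_4} + 4\,\mathrm{CH_3OH} \longrightarrow 9\,\mathrm{Fe} + 4\,\mathrm{CO_2} + 8\,\mathrm{H_2O}
% Step (ii): reoxidation of the reduced solid by steam, releasing carbon-free hydrogen
9\,\mathrm{Fe} + 12\,\mathrm{H_2O} \longrightarrow 3\,\mathrm{Fe_3O_4} + 12\,\mathrm{H_2}
```

Because the carbon-containing products leave the reactor during step (i), the hydrogen evolved in step (ii) is inherently free of carbon oxides.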
Resumo:
Workaholism is defined as the combination of two underlying dimensions: working excessively and working compulsively. The present thesis aims at achieving the following purposes: 1) to test whether the interaction between environmental and personal antecedents may enhance workaholism; 2) to develop a questionnaire designed to assess overwork climate in the workplace; 3) to contrast focal employees’ and coworkers’ perceptions of employees’ workaholism and engagement. Concerning the first purpose, the interaction between overwork climate and person characteristics (achievement motivation, perfectionism, conscientiousness, self-efficacy) was explored in a sample of 333 Dutch employees. The results of moderated regression analyses showed that the interaction between overwork climate and person characteristics is related to workaholism. The second purpose was pursued through two interrelated studies. In Study 1 the Overwork Climate Scale (OWCS) was developed and tested using a principal component analysis (N = 395) and a confirmatory factor analysis (N = 396). Two overwork climate dimensions were distinguished: overwork endorsement and lacking overwork rewards. In Study 2 the total sample (N = 791) was used to explore the association of overwork climate with two types of working hard: work engagement and workaholism. Lacking overwork rewards was negatively associated with engagement, whereas overwork endorsement showed a positive association with workaholism. Concerning the third purpose, using a sample of 73 dyads composed of focal employees and their coworkers, a multitrait-multimethod matrix and a correlated trait-correlated method model, i.e. the CT-C(M–1) model, were examined. Our results showed considerable agreement between raters on focal employees' engagement and workaholism. In contrast, we observed a significant difference concerning the cognitive dimension of workaholism, working compulsively.
Moreover, we provided further evidence for the discriminant validity between engagement and workaholism. Overall, workaholism appears to be a negative work-related state that is best explained by adopting a multi-causal, multi-rater approach.
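The moderated regression logic underlying the first study, a product term between overwork climate and a person characteristic predicting workaholism, can be sketched on synthetic data. All variable names, coefficients and the noise level below are illustrative assumptions, not the study's actual measures or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 333  # sample size matching the study; the data here are synthetic

climate = rng.normal(size=n)        # perceived overwork climate (standardized)
perfectionism = rng.normal(size=n)  # one person characteristic (standardized)

# Simulated outcome: workaholism rises with both predictors and, crucially,
# with their interaction (the moderation effect of interest).
workaholism = (0.3 * climate + 0.2 * perfectionism
               + 0.25 * climate * perfectionism
               + rng.normal(scale=0.5, size=n))

# Moderated regression: the product term enters alongside the main effects.
X = np.column_stack([np.ones(n), climate, perfectionism, climate * perfectionism])
beta, *_ = np.linalg.lstsq(X, workaholism, rcond=None)
print(beta)  # beta[3] estimates the interaction (moderation) coefficient
```

A non-zero `beta[3]` indicates that the slope of climate on workaholism depends on the level of the person characteristic, which is exactly the hypothesis the moderated regression tests.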
Resumo:
Class I phosphatidylinositol 3-kinases (PI3Ks) are heterodimeric lipid kinases consisting of a regulatory subunit and one of four catalytic subunits (p110α, p110β, p110γ or p110δ); the p110γ and p110δ PI3Ks are highly enriched in leukocytes. In general, PI3Ks regulate a variety of cellular processes, including cell proliferation, survival and metabolism, by generating the second messenger phosphatidylinositol-3,4,5-trisphosphate (PtdIns(3,4,5)P3). Their activity is tightly regulated by the phosphatase and tensin homolog (PTEN) lipid phosphatase. PI3Ks are widely implicated in human cancers and in particular are upregulated in T-cell acute lymphoblastic leukemia (T-ALL), mainly due to loss of PTEN function. These observations lend compelling weight to the application of PI3K inhibitors in the therapy of T-ALL. At present, different compounds targeting single or multiple PI3K isoforms have entered clinical trials. The present research analyzed the therapeutic potential of the pan-PI3K inhibitor BKM120, an orally bioavailable 2,6-dimorpholino pyrimidine derivative that has entered clinical trials for solid tumors, on both T-ALL cell lines and patient samples. BKM120 treatment resulted in cell cycle arrest and apoptosis, being cytotoxic to a panel of T-ALL cell lines and patient T-lymphoblasts. Remarkably, BKM120 synergized with chemotherapeutic agents currently used for treating T-ALL patients. BKM120 efficacy was confirmed in in vivo studies using a subcutaneous xenotransplant model of human T-ALL. Because it is still unclear whether isoform-specific or pan inhibitors can achieve greater efficacy, further analyses were conducted to investigate the effects of PI3K inhibition, in order to elucidate the mechanisms responsible for the proliferative impairment of T-ALL.
Overall, these results indicate that BKM120 may be an effective treatment for T-ALLs with aberrant up-regulation of the PI3K signaling pathway, and they strongly support the clinical application of pan-class I PI3K inhibitors rather than single-isoform inhibitors in T-ALL treatment.
Resumo:
It is well known that ageing and cancer have common origins, due to internal and environmental stress, and share some common hallmarks such as genomic instability, epigenetic alteration, aberrant telomeres, inflammation and immune injury. Moreover, ageing is involved in a number of events responsible for carcinogenesis and cancer development at the molecular, cellular, and tissue levels. Ageing could represent a “blockbuster” market, because the target patient group potentially includes every person; at the same time, oncology has become the largest therapeutic area in the pharmaceutical industry in terms of the number of projects, clinical trials and research and development (R&D) spending, yet cancer remains one of the leading causes of mortality worldwide. The overall aim of the work presented in this thesis was the rational design of new compounds able to modulate the activity of relevant targets involved in cancer and ageing-related pathologies, namely the proteasome and immunoproteasome, sirtuins and interleukin 6. These three targets play different roles in human cells, but modulating their activity with small molecules could have beneficial effects on one or more ageing-related diseases and cancer. We identified new, moderately active and selective non-peptidic compounds able to inhibit the activity of both the standard proteasome and the immunoproteasome, as well as novel and selective scaffolds that bind and inhibit SIRT6 selectively and can be used to sensitize tumor cells to commonly used anticancer agents such as gemcitabine and olaparib. Moreover, our virtual screening approach also led to the discovery of new putative modulators of SIRT3 with interesting in-vitro and cellular activity. Although the selectivity and potency of the identified chemical scaffolds can be further improved, these compounds can be considered highly promising leads for the development of future therapeutics.
Resumo:
Forest models are tools for explaining and predicting the dynamics of forest ecosystems. They simulate forest behavior by integrating information on the underlying processes in trees, soil and atmosphere. Bayesian calibration is the application of probability theory to parameter estimation. It is a method, applicable to all models, that quantifies output uncertainty and identifies key parameters and variables. This study aims at applying Bayesian calibration to different types of forest models, to evaluate their performance and the uncertainties associated with them. In particular, we aimed at 1) applying a Bayesian framework to calibrate forest models and test their performance in different biomes and under different environmental conditions, 2) identifying and solving structure-related issues in simple models, and 3) identifying the advantages of the additional information made available when calibrating forest models with a Bayesian approach. In Chapter 2 we applied the Bayesian framework to calibrate the Prelued model on eight Italian eddy-covariance sites. The ability of Prelued to reproduce the estimated Gross Primary Productivity (GPP) was tested over contrasting natural vegetation types representing a wide range of climatic and environmental conditions. The issues related to Prelued's multiplicative structure were the main topic of Chapter 3: several different MCMC-based procedures were applied within a Bayesian framework to calibrate the model, and their performances were compared. A more complex model was applied in Chapter 4, which focuses on the application of the physiology-based model HYDRALL to the forest ecosystem of Lavarone (IT) in order to evaluate the importance of additional information in the calibration procedure and its impact on model performance, model uncertainties, and parameter estimation.
Overall, the Bayesian technique proved to be an excellent and versatile tool for calibrating forest models of different structure and complexity, on variables of different kinds and numbers, and with different numbers of parameters involved.
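The MCMC-based Bayesian calibration described above can be illustrated in miniature: a random-walk Metropolis sampler recovering one parameter of a toy saturating light-response model from noisy synthetic "GPP" observations. The model form, prior bounds, noise level and tuning constants are assumptions for illustration, not the actual Prelued or HYDRALL setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(p, light):
    # Toy saturating GPP response to light; p is the maximum assimilation rate
    return p * light / (light + 200.0)

# Synthetic observations generated with a "true" parameter of 8.0
light = np.linspace(50, 1000, 40)
obs = model(8.0, light) + rng.normal(scale=0.3, size=light.size)

def log_post(p):
    # Uniform prior on (0, 50) plus a Gaussian likelihood with known noise
    if not (0.0 < p < 50.0):
        return -np.inf
    resid = obs - model(p, light)
    return -0.5 * np.sum((resid / 0.3) ** 2)

# Random-walk Metropolis: propose, then accept with probability min(1, ratio)
chain, p = [], 2.0
lp = log_post(p)
for _ in range(5000):
    q = p + rng.normal(scale=0.2)
    lq = log_post(q)
    if np.log(rng.uniform()) < lq - lp:
        p, lp = q, lq
    chain.append(p)

posterior = np.array(chain[1000:])  # discard burn-in samples
print(posterior.mean(), posterior.std())
```

The posterior mean and spread directly provide the calibrated parameter value and its uncertainty, which is the output-uncertainty quantification the abstract attributes to the Bayesian approach.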