20 results for RECENT HUMAN-EVOLUTION
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Mollusk shells are often found in archeological sites, given their great preservation potential and high value as a multipurpose resource. They are often the only material available for radiocarbon dating, due to a lack of well-preserved bones in many archeological sites, especially for the key period of the Middle to Upper Paleolithic transition. However, radiocarbon dating of mollusk shells is often regarded as less reliable than that of bones, wood, or charcoal because of the various factors influencing their radiocarbon content (e.g., isotope fractionation, the marine reservoir effect, etc.). For the development of more accurate chronologies using shells, it is fundamental to keep improving the precision of the techniques applied, as has been done for other materials (wood and bones). Thus, improving the chemical pretreatment of mollusk shells may allow researchers to obtain more reliable radiocarbon determinations, allowing for the construction of new radiocarbon chronologies in archeological sites where this has so far not been possible. Furthermore, mollusk shells can provide information on the climatic and environmental conditions present during their growth. Using shells for paleoclimatic reconstruction adds evidence helpful for the interpretation of scenarios of human migration, adaptation, and behavior. Standard methods for both radiocarbon and stable isotope studies use the carbonate fraction of the shell. However, being biogenic structures, mollusk shells also contain a minor organic fraction. The shell organic matrix plays an important role in the formation of the calcium carbonate structure and is still not fully understood. This thesis explores the potential of using the shell organic matrix for radiocarbon dating and paleoenvironmental studies. The results of the work performed for this thesis represent a starting point for future research to build on and further develop the approach and methodology proposed here.
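For orientation on the quantities involved, the conventional radiocarbon age and the marine reservoir correction mentioned above are usually expressed as follows (a standard formulation, not taken from the thesis; the local offset ΔR is site-dependent and must be estimated independently):

```latex
% Conventional radiocarbon age (Stuiver & Polach convention, Libby mean life 8033 yr)
t_{^{14}\mathrm{C}} = -8033\,\ln\!\left(F^{14}\mathrm{C}\right) \quad [\mathrm{yr\ BP}]

% Apparent marine age of a shell: atmospheric age plus the regional reservoir age R(t)
% and a local deviation \Delta R from the global marine model
t_{\mathrm{shell}} \approx t_{\mathrm{atm}} + R(t) + \Delta R
```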
Abstract:
Although the ability to digest lactose generally declines after weaning in all mammals, in some human populations it also persists in adult individuals, a condition named lactase persistence (LP). Studies on the prevalence of the LP phenotype in worldwide human populations have shown that the frequency of this trait is highly variable in different ethnic groups, appearing to be positively correlated with the importance of milk in the diet. In particular, several single-nucleotide polymorphisms (SNPs) in the proximity of the LCT gene have been proved to be associated with LP. Nevertheless, few studies have so far analyzed the genetic variation underlying LP in a wide set of Eurasian populations and, especially, in the Italian one. In the present study, we thus typed 40 SNPs surrounding the LCT gene in more than 1,000 samples from the Italian and Arabian peninsulas to investigate patterns of LP-related genetic diversity in two regions which have played a pivotal role in recent human evolutionary history, according to their geographical position and historical/archaeological records. Our results underline a high and complex variability of the explored genomic region in both studied populations. In particular, a clear diversification of Northern Italian groups from the rest of the peninsula was observed, with the former being genetically more similar to Northern European populations than to Southern Italians. These observations are consistent with the known decreasing pattern of LP from Northern to Southern Italy and suggest the possibility of an independent evolution of LP-associated genotypes in Northern Italy. A similar scenario was observed in the Arabian peninsula, with Dhofari Arabs from Southern Oman and Yemenis clustering together with respect to Arabs from Northern Oman and the subgroup of Omanis of Asian origin, which appeared instead to be genetically closer to Europeans than to the rest of the Arab groups.
Abstract:
Animal models have been relevant to study the molecular mechanisms of cancer and to develop new antitumor agents. However, the considerable divergence between mouse and human evolution has made it difficult to translate the achievements gained in preclinical mouse-based studies. The generation of clinically relevant murine models requires their humanization, both through the creation of transgenic models and through the generation of humanized mice in which to engraft a functional human immune system, and to reproduce the physiological effects and molecular mechanisms of growth and metastasization of human tumors. In particular, the availability of genotypically stable immunodepressed mice able to accept tumor injection and allow human tumor growth and metastasization would be important to develop anti-tumor and anti-metastatic strategies. Recently, Rag2-/-;gammac-/- mice, double knockout for genes involved in lymphocyte differentiation, have been developed (CIEA, Central Institute for Experimental Animals, Kawasaki, Japan). Studies of human sarcoma metastasization in Rag2-/-;gammac-/- mice (lacking B, T and NK functionality) revealed their high metastatic efficiency and allowed the expression of human metastatic phenotypes not detectable in the conventionally used nude murine model. In vitro analyses to investigate the molecular mechanisms involved in the specific pattern of human sarcoma metastasization revealed the importance of liver-produced growth and motility factors, in particular the insulin-like growth factors (IGFs). The involvement of this growth factor family was then demonstrated in vivo through inhibition of the IGF signalling pathway. Due to the high growth and metastatic propensity of tumor cells, Rag2-/-;gammac-/- mice were used as a model to investigate the metastatic behavior of rhabdomyosarcoma cells engineered to improve their differentiation. It has recently been shown that this immunodeficient model can be reconstituted with a human immune system through the injection of human cord blood progenitor cells. The work illustrated in this thesis revealed that the injection of different human progenitor cells (CD34+ or CD133+) showed peculiar engraftment and differentiation abilities. Cell vaccination experiments were performed to investigate the functionality of the engrafted human immune system and the induction of specific human immune responses. Results from such experiments will make it possible to collect information about the human immune responses activated during cell vaccination and to define the best reconstitution and experimental conditions to create a humanized model in which to study, in a preclinical setting, immunological antitumor strategies.
Abstract:
The year 14,226 BP marks an important boundary in the current radiocarbon (14C) calibration curve: the high resolution and precision characterising the first part (0 – 14,226 BP) of the curve are due to the potential of tree-ring datasets, which directly provide the atmospheric 14C content at the time of tree-ring formation with high resolution. Resolution and precision systematically decrease going back in time, where only a few floating tree-ring chronologies alternate with other low-resolution records. The lack of resolution in the dating procedure before 14,226 years BP leads to significant issues in the interpretation and untangling of difficult questions of our past in the field of human evolution. Research on sub-fossil trees and the construction of new Glacial tree-ring chronologies can significantly improve radiocarbon dating in terms of temporal resolution and precision back to 55,000 years BP, helping to resolve open questions in human evolutionary history. In this thesis, the dendrochronological study, the radiocarbon dating and the extraction of environmental and climate information from sub-fossil trees found on the Portuguese foreshore, remnants of a Glacial lagoonal forest, are presented. The careful sampling, the dendrochronological measurements and cross-dating, the application of the most suitable cellulose extraction protocol and the most advanced technologies of the MICADAS system at ETH-Zurich led to the construction of a new 220-year-long tree-ring site chronology and to high-resolution, highly reliable radiocarbon ages with tight error ranges. At the moment, it is not possible to absolutely date this radiocarbon sequence by comparing the Δ14C of the trees with 10Be fluctuations from ice cores. For this reason, tree-growth analysis, comparisons with a living pine stand and forest-fire history reconstruction have made it possible to hypothesize site and climate characteristics useful to constrain the position in time of the obtained radiocarbon sequence.
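As a reference for the Δ14C values mentioned above, the quantity usually compared with ice-core 10Be records is the age-corrected atmospheric 14C activity (standard Stuiver & Polach notation, not a formula from the thesis; x is the calendar year of ring formation and λ is based on the true 14C mean life of 8267 yr):

```latex
% Age-corrected atmospheric 14C activity of a tree ring formed in calendar year x
\Delta^{14}\mathrm{C} = \left( F^{14}\mathrm{C}\, e^{\lambda (1950 - x)} - 1 \right) \times 1000\ \text{per mil},
\qquad \lambda = \frac{1}{8267\ \mathrm{yr}}
```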
Abstract:
Thanks to the Chandra and XMM–Newton surveys, the hard X-ray sky is now probed down to a flux limit where the bulk of the X-ray background is almost completely resolved into discrete sources, at least in the 2–8 keV band. Extensive programs of multiwavelength follow-up observations showed that the large majority of hard X-ray selected sources are identified with Active Galactic Nuclei (AGN) spanning a broad range of redshifts, luminosities and optical properties. A sizable fraction of relatively luminous X-ray sources hosting an active, presumably obscured, nucleus would not have been easily recognized as such on the basis of optical observations, because they are characterized by “peculiar” optical properties. In my PhD thesis, I focus on the nature of two classes of hard X-ray selected “elusive” sources: those characterized by high X-ray-to-optical flux ratios and red optical-to-near-infrared colors, a fraction of which are associated with Type 2 quasars, and the X-ray bright optically normal galaxies, also known as XBONGs. In order to characterize the properties of these classes of elusive AGN, the datasets of several deep and large-area surveys have been fully exploited. The first class of “elusive” sources is characterized by X-ray-to-optical flux ratios (X/O) significantly higher than what is generally observed from unobscured quasars and Seyfert galaxies. The properties of well-defined samples of high X/O sources detected at bright X-ray fluxes suggest that X/O selection is highly efficient in sampling high-redshift obscured quasars. At the limits of deep Chandra surveys (∼10⁻¹⁶ erg cm⁻² s⁻¹), high X/O sources are generally characterized by extremely faint optical magnitudes, hence their spectroscopic identification is hardly feasible even with the largest telescopes. In this framework, a detailed investigation of their X-ray properties may provide useful information on the nature of this important component of the X-ray source population. The X-ray data of the deepest X-ray observations ever performed, the Chandra deep fields, allow us to characterize the average X-ray properties of the high X/O population. The results of the spectral analysis clearly indicate that the high X/O sources represent the most obscured component of the X-ray background. Their spectra are harder (Γ ∼ 1) than those of any other class of sources in the deep fields and also than the XRB spectrum (Γ ≈ 1.4). In order to better understand AGN physics and evolution, a much better knowledge of the redshift, luminosity and spectral energy distributions (SEDs) of elusive AGN is of paramount importance. The recent COSMOS survey provides the necessary multiwavelength database to characterize the SEDs of a statistically robust sample of obscured sources. The combination of high X/O and red colors offers a powerful tool to select obscured luminous objects at high redshift. A large sample of X-ray emitting extremely red objects (R − K > 5) has been collected and their optical-infrared properties have been studied. In particular, using an appropriate SED-fitting procedure, the nuclear and host-galaxy components have been deconvolved over a large range of wavelengths, and optical nuclear extinctions, black hole masses and Eddington ratios have been estimated. It is important to remark that the combination of hard X-ray selection and extremely red colors is highly efficient in picking up highly obscured, luminous sources at high redshift.
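For context, the X-ray-to-optical flux ratio used in this kind of work is commonly defined following Maccacaro et al. (1988); the additive constant depends on the optical filter and X-ray band adopted, so the form below is indicative rather than the thesis's own calibration:

```latex
% X-ray-to-optical flux ratio; f_X in erg cm^-2 s^-1, R the optical magnitude.
% The constant C depends on the optical filter and X-ray band chosen (C = 5.37
% in the original V-band definition).
\mathrm{X/O} \equiv \log\!\left(\frac{f_X}{f_{\mathrm{opt}}}\right) = \log f_X + \frac{R}{2.5} + C
```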
Although the XBONGs do not represent a new source population, interest in the nature of these sources has been renewed after the discovery of several examples in recent Chandra and XMM–Newton surveys. Even though several possibilities have been proposed in the recent literature to explain why a relatively luminous (LX = 10⁴²–10⁴³ erg s⁻¹) hard X-ray source does not leave any significant signature of its presence in terms of optical emission lines, the very nature of XBONGs is still a subject of debate. Good-quality photometric near-infrared data (ISAAC/VLT) of 4 low-redshift XBONGs from the HELLAS2XMM survey have been used to search for the presence of the putative nucleus, applying the surface-brightness decomposition technique. In two out of the four sources, the presence of a weak nuclear component hosted by a bright galaxy has been revealed. The results indicate that moderate amounts of gas and dust, covering a large solid angle (possibly 4π) at the nuclear source, may explain the lack of optical emission lines. A weak nucleus not able to produce sufficient UV photons may provide an alternative or additional explanation. On the basis of an admittedly small sample, we conclude that XBONGs constitute a mixed bag rather than a new source population. When the presence of a nucleus is revealed, it turns out to be mildly absorbed and hosted by a bright galaxy.
Abstract:
In this Thesis, we investigate the cosmological co-evolution of supermassive black holes (BHs), Active Galactic Nuclei (AGN) and their hosting dark matter (DM) halos and galaxies, within the standard CDM scenario. We analyze analytic, semi-analytic and hybrid techniques and use the most recent observational data available to constrain the assumptions underlying our models. First, we focus on very simple analytic models where the assembly of BHs is directly related to the merger history of DM haloes. For this purpose, we implement the two original analytic models of Wyithe & Loeb 2002 and Wyithe & Loeb 2003, compare their predictions to the AGN luminosity function and clustering data, and discuss possible modifications to the models that improve the match to the observations. Then we study more sophisticated semi-analytic models in which, however, the baryonic physics is still neglected. Finally we improve the hybrid simulation of De Lucia & Blaizot 2007, adding new semi-analytical prescriptions to describe the BH mass accretion rate during each merger event and its conversion into radiation, and compare the derived BH scaling relations, fundamental plane and mass function, and the AGN luminosity function with observations. All our results support the following scenario:
• The cosmological co-evolution of BHs, AGN and galaxies can be well described within the CDM model.
• At redshifts z ≳ 1, the evolution history of the DM halo fully determines the overall properties of the BH and AGN populations. The AGN emission is triggered mainly by DM halo major mergers and, on average, AGN shine at their Eddington luminosity.
• At redshifts z ≲ 1, BH growth decouples from halo growth. Galaxy major mergers cannot constitute the only trigger of accretion episodes in this phase.
• When a static hot halo has formed around a galaxy, a fraction of the hot gas continuously accretes onto the central BH, causing a low-energy “radio” activity at the galactic centre, which prevents significant gas cooling, thus limiting the mass of the central galaxies and quenching star formation at late times.
• The cold gas fraction accreted by BHs at high redshifts seems to be larger than at low redshifts.
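Since the scenario above assumes that high-redshift AGN shine at their Eddington luminosity, the limiting luminosity referred to is the standard expression below (a general definition, not a result of the thesis):

```latex
% Eddington luminosity for a black hole of mass M_BH (electron-scattering opacity)
L_{\mathrm{Edd}} = \frac{4\pi G M_{\mathrm{BH}} m_p c}{\sigma_T}
\simeq 1.26 \times 10^{38} \left( \frac{M_{\mathrm{BH}}}{M_{\odot}} \right)\ \mathrm{erg\ s^{-1}}
```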
Abstract:
This thesis is focused on the metabolomic study of human cancer tissues by ex vivo High Resolution-Magic Angle Spinning (HR-MAS) nuclear magnetic resonance (NMR) spectroscopy. This relatively new technique allows for the acquisition of spectra directly on intact tissues (from biopsy or surgery), and it has become very important for integrated metabonomics studies. The objective is to identify metabolites that can be used as markers for the discrimination of the different types of cancer, for grading, and for the assessment of the evolution of the tumour. Furthermore, an attempt was made to recognize metabolites that, although involved in the metabolism of tumoral tissues at low concentration, can be important modulators of neoplastic proliferation. In addition, the NMR data were integrated with statistical techniques in order to obtain semi-quantitative information about the metabolite markers. In the case of gliomas, the NMR study was correlated with the gene expression of neoplastic tissues. Chapter 1 begins with a general description of a new “omics” field, metabolomics. The study of metabolism can contribute significantly to biomedical research and, ultimately, to clinical medical practice. This rapidly developing discipline involves the study of the metabolome: the total repertoire of small molecules present in cells, tissues, organs, and biological fluids. Metabolomic approaches are becoming increasingly popular in disease diagnosis and will play an important role in improving our understanding of cancer mechanisms. Chapter 2 addresses in more detail the basis of NMR spectroscopy, presenting the new HR-MAS NMR tool, which is gaining importance in the examination of tumour tissues and in the assessment of tumour grade. Some advanced chemometric methods were used in an attempt to enhance the interpretation and quantitative information of the HR-MAS NMR data, and are presented in Chapter 3. Chemometric methods seem to have a high potential in the study of human diseases, as they permit the extraction of new and relevant information from spectroscopic data, allowing a better interpretation of the results. Chapter 4 reports results obtained from HR-MAS NMR analyses performed on different brain tumours: medulloblastoma, meningiomas and gliomas. The medulloblastoma study is a case report of a primitive neuroectodermal tumor (PNET) localised in the cerebellar region by Magnetic Resonance Imaging (MRI) in a 3-year-old child. In vivo single-voxel 1H MRS shows high specificity in detecting the main metabolic alterations in the primitive cerebellar lesion, which consist of very high amounts of choline-containing compounds and very low levels of creatine derivatives and N-acetylaspartate. Ex vivo HR-MAS NMR, performed at 9.4 Tesla on the neoplastic specimen collected during surgery, allows the unambiguous identification of several metabolites, giving a more in-depth evaluation of the metabolic pattern of the lesion. The ex vivo HR-MAS NMR spectra show higher detail than that obtained in vivo. In addition, the spectroscopic data appear to correlate with some morphological features of the medulloblastoma. The present study shows that ex vivo HR-MAS 1H NMR is able to strongly improve the clinical potential of in vivo MRS and can be used in conjunction with in vivo spectroscopy for clinical purposes. Three histological subtypes of meningiomas (meningothelial, fibrous and oncocytic) were analysed by both in vivo and ex vivo MRS experiments.
The ex vivo HR-MAS investigations are very helpful for the assignment of the in vivo resonances of human meningiomas and for the validation of the quantification procedure of in vivo MR spectra. By using one- and two-dimensional experiments, several metabolites in different histological subtypes of meningiomas were identified. The spectroscopic data confirmed the presence of the typical metabolites of these benign neoplasms and, at the same time, that meningiomas with different morphological characteristics have different metabolic profiles, particularly regarding macromolecules and lipids. The profile of total choline metabolites (tCho) and the expression of the Kennedy pathway genes in biopsies of human gliomas were also investigated using HR-MAS NMR and microfluidic genomic cards. 1H HR-MAS spectra allowed the resolution and relative quantification by LCModel of the resonances from choline (Cho), phosphorylcholine (PC) and glycerophosphorylcholine (GPC), the three main components of the combined tCho peak observed in gliomas by in vivo 1H MRS spectroscopy. All glioma biopsies depicted an increase in tCho as calculated from the addition of the Cho, PC and GPC HR-MAS resonances. However, the increase derived consistently from augmented GPC in the low-grade gliomas or from increased PC content in the high-grade gliomas, respectively. This circumstance allowed the unambiguous discrimination of high- and low-grade gliomas by 1H HR-MAS, which could not be achieved by calculating the tCho/Cr ratio commonly used in in vivo 1H MR spectroscopy. The expression of the genes involved in choline metabolism was investigated in the same biopsies. The present findings offer a convenient procedure to accurately classify glioma grade using 1H HR-MAS, providing in addition the genetic background for the alterations of choline metabolism observed in high- and low-grade gliomas. Chapter 5 reports the study on human gastrointestinal tract (stomach and colon) neoplasms. The healthy human gastric mucosa, and the characteristics of the biochemical profile of human gastric adenocarcinoma in comparison with that of healthy gastric mucosa, were analyzed using ex vivo HR-MAS NMR. Healthy human mucosa is mainly characterized by the presence of small metabolites (more than 50 identified) and macromolecules. The adenocarcinoma spectra were dominated by the presence of signals due to triglycerides, which are usually very low in healthy gastric mucosa. The use of spin-echo experiments enabled us to detect some metabolites in the unhealthy tissues and to determine their variation with respect to the healthy ones. Then, the ex vivo HR-MAS NMR analysis was applied to human gastric tissue to obtain information on the molecular steps involved in gastric carcinogenesis. A microscopic investigation was also carried out in order to identify and locate the lipids in the cellular and extra-cellular environments. Correlations of the morphological changes detected by transmission (TEM) and scanning (SEM) electron microscopy with the metabolic profile of gastric mucosa in healthy, autoimmune atrophic gastritis (AAG), Helicobacter pylori-related gastritis and adenocarcinoma subjects were obtained. These ultrastructural studies of AAG and gastric adenocarcinoma revealed intra- and extracellular lipid accumulation associated with severe prenecrotic hypoxia and mitochondrial degeneration.
A deep insight into the metabolic profile of healthy and neoplastic human colon tissues was gained using ex vivo HR-MAS NMR spectroscopy in combination with multivariate methods: Principal Component Analysis (PCA) and Partial Least Squares Discriminant Analysis (PLS-DA). The NMR spectra of healthy tissues highlight different metabolic profiles with respect to those of neoplastic and microscopically normal colon specimens (the latter obtained at least 15 cm away from the adenocarcinoma). Furthermore, metabolic variations are detected not only between neoplastic tissues with different histological diagnoses, but also among those classified as identical by histological analysis. These findings suggest that the same subclass of colon carcinoma is characterized, to a certain degree, by metabolic heterogeneity. The statistical multivariate approach applied to the NMR data is crucial in order to find metabolic markers of the neoplastic state of colon tissues and to correctly classify the samples. Significantly different levels of choline-containing compounds, taurine and myo-inositol were observed. Chapter 6 deals with the metabolic profile of normal and tumoral renal human tissues obtained by ex vivo HR-MAS NMR. The spectra of human normal cortex and medulla show the presence of differently distributed osmolytes as markers of the physiological renal condition. The marked decrease or disappearance of these metabolites and a high lipid content (triglycerides and cholesteryl esters) are typical of clear cell renal carcinoma (RCC), while papillary RCC is characterized by the absence of lipids and very high amounts of taurine. This research is a contribution to the biochemical classification of renal neoplastic pathologies, especially for RCCs, which can be evaluated by in vivo MRS for clinical purposes. Moreover, these data help to gain a better knowledge of the molecular processes involved in the onset of renal carcinogenesis.
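As an illustration of the kind of multivariate workflow described above (PCA for exploration, PLS-DA for supervised discrimination of spectra), a minimal sketch in Python/scikit-learn is given below; the spectral matrix, class labels and number of components are placeholders, not values or code from the thesis:

```python
# Minimal sketch: PCA exploration and PLS-DA classification of binned NMR spectra.
# Assumes X is a (samples x spectral-bins) matrix and y a binary label vector
# (e.g. 0 = healthy, 1 = neoplastic); all names and settings are illustrative.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))          # placeholder spectra (40 samples, 200 bins)
y = np.repeat([0, 1], 20)               # placeholder class labels

X_scaled = StandardScaler().fit_transform(X)

# Unsupervised exploration: scores on the first two principal components
scores = PCA(n_components=2).fit_transform(X_scaled)

# PLS-DA: PLS regression against the class labels, with cross-validated prediction
pls = PLSRegression(n_components=2)
y_pred = cross_val_predict(pls, X_scaled, y, cv=5).ravel()
accuracy = np.mean((y_pred > 0.5).astype(int) == y)
print(f"PCA scores shape: {scores.shape}, CV accuracy: {accuracy:.2f}")
```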
Abstract:
The evolution of theories of the firm has developed along a path paved by an increasing awareness of the importance of organizational structure. This runs from the early “neoclassical” conceptualizations, which viewed the firm as a rational actor whose aim is to produce, given the inputs at its disposal and in accordance with technological or environmental constraints, the amount of output that maximizes revenue (see Boulding, 1942 for a past mid-century state-of-the-art discussion), to the knowledge-based theory of the firm (Nonaka & Takeuchi, 1995; Nonaka & Toyama, 2005), which recognizes in the firm a knowledge-creating entity with specific organizational capabilities (Teece, 1996; Teece & Pisano, 1998) that allow it to sustain competitive advantages. Tracing back a map of the evolution of the theory of the firm, taking into account the several perspectives adopted in the history of thought, would take the length of many books. Because of that, a more fruitful strategy is to circumscribe the focus of the description of the literature's evolution to one strand connected to a crucial question about the nature of the firm's behaviour and about the determinants of competitive advantages. In so doing I adopt a perspective that allows me to consider the organizational structure of the firm as an element according to which the different theories can be discriminated. The approach adopted starts by considering the drawbacks of the standard neoclassical theory of the firm. Discussing the most influential theoretical approaches, I end up with a close examination of the knowledge-based perspective of the firm. Within this perspective the firm is considered as a knowledge-creating entity that produces and manages knowledge (Nonaka, Toyama, & Nagata, 2000; Nonaka & Toyama, 2005). In a knowledge-intensive organization, knowledge is clearly embedded for the most part in the human capital of the individuals that compose such an organization. In a knowledge-based organization, the management, in order to cope with knowledge-intensive production, ought to develop and accumulate capabilities that shape the organizational forms in a way that relies on “cross-functional processes, extensive delayering and empowerment” (Foss 2005, p.12). This mechanism contributes to determining the absorptive capacity of the firm towards specific technologies and, in so doing, it also shapes the technological trajectories along which the firm moves. After having recognized the growing importance of the firm's organizational structure in the theoretical literature on the theory of the firm, the subsequent point of the analysis is to provide an overview of the changes that have occurred at the micro level in the firm's organization of production. Economic actors have to deal with challenges posed by processes of internationalisation and globalization, the increased and increasing competitive pressure of less developed countries on low value added production activities, changes in technologies, and increased environmental turbulence and volatility. As a consequence, it has been widely recognized that the main organizational models of production that fitted well in the 20th century are now partially inadequate, and processes aiming to reorganize production activities have become widespread across several economies in recent years.
Recently, the emergence of a “new” form of production organization has been proposed by scholars, practitioners and institutions alike: the most prominent characteristic of such a model is its recognition of the importance of employee commitment and involvement. As a consequence it is characterized by a strong accent on human resource management and on those practices that aim to widen the autonomy and responsibility of workers as well as to increase their commitment to the organization (Osterman, 1994; 2000; Lynch, 2007). This “model” of production organization is by many defined as the High Performance Work System (HPWS). Despite the increasing diffusion of workplace practices that may be inscribed within the concept of HPWS in western countries' companies, it is, to some extent, hazardous to speak about the emergence of a “new organizational paradigm”. A discussion of organizational changes and the diffusion of HPWP cannot abstract from a discussion of industrial relations systems, with a particular accent on employment relationships, because of their relevance, just as much as production organization, in determining two major outcomes of the firm: innovation and economic performance. The argument is treated starting from the issue of Social Dialogue at the macro level, in both a European and an Italian perspective. The model of interaction between the social partners has repercussions, at the micro level, on employment relationships, that is to say on the relations between union delegates and management or between workers and management. Finding economic and social policies capable of sustaining growth and employment within a knowledge-based scenario is likely to constitute the major challenge for the next generation of social pacts, which are the main outcomes of social dialogue. As Acocella and Leoni (2007) put forward, social pacts may constitute an instrument to trade wage moderation for high intensity in ICT, organizational and human capital investments. Empirical evidence, especially focused on the micro level, about the positive relation between economic growth and new organizational designs coupled with ICT adoption and non-adversarial industrial relations is growing. Partnership among the social partners may become an instrument to enhance firm competitiveness. The outcome of the discussion is the integration of organizational changes and industrial relations elements within a unified framework: the HPWS. Such a choice may help in disentangling the potential existence of complementarities between these two aspects of the firm's internal structure with respect to economic and innovative performance. The third chapter opens the more original part of the thesis. The data utilized in order to disentangle the relations between HPWS practices, innovation and economic performance refer to the manufacturing firms of the Reggio Emilia province with more than 50 employees. The data have been collected through face-to-face interviews with both management (199 respondents) and union representatives (181 respondents). Coupled with the cross-section datasets, a further data source is constituted by longitudinal balance sheets (1994-2004). Collecting reliable data that in turn provide reliable results always requires a great effort, to which uncertain results are connected.
Data at the micro level are often subject to a trade-off: the wider the geographical context to which the surveyed population belongs, the smaller the amount of information usually collected (low level of resolution); the narrower the focus on a specific geographical context, the larger the amount of information usually collected (high level of resolution). For the Italian case the evidence about the diffusion of HPWP and their effects on firm performance is still scanty and usually limited to local-level studies (Cristini, et al., 2003). The thesis is also devoted to the deepening of an argument of particular interest: the existence of complementarities between HPWS practices. It has been widely shown by empirical evidence that when HPWP are adopted in bundles they are more likely to impact firm performance than when adopted in isolation (Ichniowski, Prennushi, Shaw, 1997). Is this true also for the local production system of Reggio Emilia? The empirical analysis has the precise aim of providing evidence on the relations between the HPWS dimensions and the innovative and economic performance of the firm. As far as the first line of analysis is concerned, the fundamental role that innovation plays in the economy must be stressed (Geroski & Machin, 1993; Stoneman & Kwon 1994, 1996; OECD, 2005; EC, 2002). On this point the evidence ranges from traditional innovations, usually approximated by R&D investment expenditure or the number of patents, to the introduction and adoption of ICT in recent years (Brynjolfsson & Hitt, 2000). If innovation is important then it is critical to analyse its determinants. In this work it is hypothesised that organizational changes and firm-level industrial relations/employment relations aspects that can be put under the heading of HPWS influence the firm's propensity to innovate in product, process and quality. The general argument may go as follows: changes in production management and work organization reconfigure the absorptive capacity of the firm towards specific technologies and, in so doing, they shape the technological trajectories along which the firm moves; cooperative industrial relations may lead to smoother adoption of innovations, because they are not opposed by unions. From the first empirical chapter it emerges that the different types of innovation seem to respond in different ways to the HPWS variables. The underlying processes of product, process and quality innovation are likely to respond to different firm strategies and needs. Nevertheless, it is possible to extract some general results in terms of the HPWS factors that most influence innovative performance. The main three aspects are training coverage, employee involvement and the diffusion of bonuses. These variables show persistent and significant relations with all three innovation types. The same holds for the components that include such variables. In sum, aspects of the HPWS influence the firm's propensity to innovate. At the same time, quite neat (although not always strong) evidence of the presence of complementarities between HPWS practices emerges. In terms of the complementarity issue it can be said that some specific complementarities exist. Training activities, when adopted and managed in bundles, are related to the propensity to innovate. Having a sound skill base may be an element that enhances the firm's capacity to innovate.
It may enhance both the capacity to absorb exogenous innovation and the capacity to endogenously develop innovations. The presence and diffusion of bonuses and employee involvement also spur innovative propensity: the former because of their incentive nature, and the latter because direct worker participation may increase workers' commitment to the organization and thus their willingness to support and suggest innovations. The other line of analysis provides results on the relation between HPWS and the economic performance of the firm. There is a substantial body of international empirical studies on the relation between organizational changes and economic performance (Black & Lynch 2001; Zwick 2004; Janod & Saint-Martin 2004; Huselid 1995; Huselid & Becker 1996; Cappelli & Neumark 2001), while the works aiming to capture the relations between economic performance and unions or industrial relations aspects are quite scant (Addison & Belfield, 2001; Pencavel, 2003; Machin & Stewart, 1990; Addison, 2005). In the empirical analysis the integration of the two main areas of the HPWS represents a scarcely exploited approach in the panorama of both national and international empirical studies. As remarked by Addison, “although most analysis of workers representation and employee involvement/high performance work practices have been conducted in isolation – while sometimes including the other as controls – research is beginning to consider their interactions” (Addison, 2005, p.407). The analysis conducted exploiting temporal lags between the dependent variables and the covariates, a possibility given by merging the cross-section and panel data, provides evidence in favour of the existence of an impact of HPWS practices on the firm's economic performance, measured in different ways. Although robust evidence of complementarities among HPWS aspects with respect to performance does not seem to emerge, there is evidence of a general positive influence of the single practices. The results are quite sensitive to the time lags, leading us to hypothesize that time-varying heterogeneity is an important factor in determining the impact of organizational changes on economic performance. The implications of the analysis can be of help both to management and to local-level policy makers. Although the results are not simply extendible to other local production systems, it may be argued that for contexts similar to the Reggio Emilia province, characterized by the presence of small and medium enterprises organized in districts and by deep-rooted unionism with strong supporting institutions, the results and implications obtained here can also fit well. However, a hope for future research on the subject treated in the present work is that of collecting good-quality information over wider geographical areas, possibly at the national level, and repeated over time. Only in this way is it possible to cut the Gordian knot about the linkages between innovation, performance, high performance work practices and industrial relations.
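A complementarity test of the kind discussed here is often operationalized as an interaction term between practices in a performance regression; the sketch below (hypothetical variable names, synthetic data, statsmodels OLS) only illustrates that logic, not the thesis's actual specification:

```python
# Illustrative sketch: testing complementarity between two HPWS practices
# via an interaction term in a performance regression (synthetic data only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "training": rng.integers(0, 2, n),      # practice 1 (adopted = 1)
    "involvement": rng.integers(0, 2, n),   # practice 2 (adopted = 1)
    "size": rng.normal(100, 30, n),         # control: firm size
})
# Synthetic outcome built with a positive interaction (the "bundle" effect)
df["performance"] = (0.2 * df["training"] + 0.2 * df["involvement"]
                     + 0.4 * df["training"] * df["involvement"]
                     + 0.01 * df["size"] + rng.normal(0, 1, n))

# A positive, significant coefficient on training:involvement would be read
# as evidence of complementarity between the two practices.
model = smf.ols("performance ~ training * involvement + size", data=df).fit()
print(model.summary().tables[1])
```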
Abstract:
This study aims at analysing Brian O'Nolan's literary production in the light of a reconsideration of the role played by his two most famous pseudonyms, Flann O'Brien and Myles na Gopaleen, behind which he was active both as a novelist and as a journalist. We tried to establish a new kind of relationship between them and their empirical author, following recent cultural and scientific surveys in the fields of Humour Studies, Psychology, and Sociology: taking as a starting point the appreciation of the comic attitude in nature and in cultural history, we progressed through a short history of laughter and derision, followed by an overview of humour theories. After having established such a frame, we considered an integration of scientific studies in the field of laughter and humour as a base for our study scheme, in order to come to a definition of the comic author as a recognised, powerful and authoritative social figure who acts as a critic of conventions. The history of laughter and the comic we briefly summarized, based on the one related by the French scholar Georges Minois in his work (Minois 2004), has been taken into account in the view that the humorous attitude is one of man's characteristic traits, always present and witnessed throughout the ages, though subject in most cases to repression by culturally and politically conservative power. This sort of Super-Ego notwithstanding, or perhaps because of it, the comic impulse proved irreducible precisely in its influence on current cultural debates. Drawing mainly on Robert R. Provine's (Provine 2001), Fabio Ceccarelli's (Ceccarelli 1988), Arthur Koestler's (Koestler 1975) and Peter L. Berger's (Berger 1995) scientific essays on the actual occurrence of laughter and smiling in complex social situations, we underlined the considerable evidence of how the use of the comic, humour and wit (in a Freudian sense) can best be comprehended if seen as a common mental process designed for the improvement of knowledge, in which we traced a strict relation with the play-element the Dutch historian Huizinga highlighted in his famous essay, Homo Ludens (Huizinga 1955). We considered the comic and humour/wit as different sides of the same coin, and showed how the demonstrations scientists have provided on this particular subject are not conclusive, given that the mental processes still cannot be irrefutably shown to be separate as regards gradations in comic expression and reception: in fact, different outputs in expression might lead back to one and the same production process, following the general “Economy Rule” of evolution; man is the only animal who lies, meaning by this that one feeling is not necessarily biuniquely associated with one and the same outward display, so human expressions are not validation proofs of feelings. Considering societies, we found that in nature they are all organized in more or less the same way, that is, in élites who govern over a community which, in turn, recognizes them as legitimate delegates for that task; we inferred from this the epistemological possibility of the existence of an added ruling figure alongside the political and religious ones: this figure being the comic, who is the person in charge of expressing true feelings towards given subjects of contention.
Every community has one, and his very peculiar status is validated by the fact that his place is within the community, living in it and speaking to it, but at the same time he is outside it, in the sense that his action focuses mainly on shedding light on ideas and objects placed outside the boundaries of social convention: taboos, fears, sacred objects and finally culture are the favourite targets of the comic person's arrows. This is the reason for the word a(rche)typical as applied to the comic figure in society: atypical in a sense, because unconventional and disrespectful of traditions, critical and never at ease with unblinkered respect for canons; archetypical, because the “village fool”, buffoon, jester or anyone in any kind of society who plays such roles is an archetype in the Jungian sense, i.e. a personification of an irreducible side of human nature that everybody instinctively knows: a beginner of a tradition, the perfect type, what is most conventional of all and therefore the exact opposite of the atypical. There is an intrinsic necessity, we think, for such figures in societies, just like politicians and priests, who should play an elitist role in order to guide and rule not for their own benefit but for the good of the community. We are not naïve and do know that actual holders of power always tend to keep it indefinitely: the “social comic” as a role of power has nonetheless the distinctive feature of being the only job whose tension is not towards stability. It carries within itself the rewarding permission of contradiction, for the very reason exposed before: the comic must cast an eye both inside and outside society, so his vision may perforce be inconsistent, and this is made satisfying by the popularity it gives him amongst readers and audience. Finally, the difference between governors, priests and comic figures is the seriousness of the first two (fundamentally monologic) and the merry contradiction of the third (essentially dialogic). MPs, mayors, bishops and pastors should always console, comfort and soothe the popular mood in respect of public convention; the comic has the opposite task of provoking, urging and irritating, accomplishing at the same time a sort of control over the soothing powers of society, keepers of righteousness. In this view, the comic person assumes a paramount importance in counterbalancing the administration of power, whether in the form of acting in public places or in written pieces which can circulate for private reading. At this point our Irish writer Brian O'Nolan (1911-1966) comes into question, the real name that stood behind the more famous masks of Flann O'Brien, novelist, author of At Swim-Two-Birds (1939), The Hard Life (1961), The Dalkey Archive (1964) and, posthumously, The Third Policeman (1967); and of Myles na Gopaleen, journalist, keeper for more than 25 years of the Cruiskeen Lawn column in The Irish Times (1940-1966), and author of the famous book-parody in Irish An Béal Bocht (1941), later translated into English as The Poor Mouth (1973). Brian O'Nolan, a professional senior civil servant of the Republic, has never seen his authorship recognized in literary studies, since all of them concentrated on his alter egos Flann, Myles and some others he used for minor contributions. As far as we are concerned, we think this is the first study which places the real name in the title, in this way acknowledging in him a unity of intent that no one did before.
And this choice of title is not a mere mark of distinction for its own sake, but also a wilful sign of how his opus should now be reconsidered. In effect, the aim of this study is exactly that of demonstrating how the empirical author Brian O'Nolan was the real Deus in machina, the master of puppets who skilfully directed all of his identities in planned directions, so as to completely fulfil the role of the comic figure we explained before. Flann O'Brien and Myles na Gopaleen were personae and not persons, but the impression one gets from the critical studies on them is the exact opposite. Literary consideration, which came only after O'Nolan's death, began with Anne Clissmann's work, Flann O'Brien: A Critical Introduction to His Writings (Clissmann 1975), while the most recent book is Keith Donohue's The Irish Anatomist: A Study of Flann O'Brien (Donohue 2002); passing through M. Keith Booker's Flann O'Brien, Bakhtin and Menippean Satire (Booker 1995), Keith Hopper's Flann O'Brien: A Portrait of the Artist as a Young Post-Modernist (Hopper 1995) and Monique Gallagher's Flann O'Brien, Myles et les autres (Gallagher 1998). There have also been a couple of biographies, which incidentally somehow try to explain critical points of his literary production, while many critical studies do the same from the opposite side, trying to found critical points of view on the author's restless life and habits. At this stage, we attempted to merge into O'Nolan's corpus the journalistic articles he wrote, more than 4,200, amounting to roughly two million words over the column's 26-year run. To justify this, we appealed to several considerations about the figure O'Nolan used as a writer: Myles na Gopaleen (later simplified to na Gopaleen), who was the equivalent of the street artist or storyteller, speaking to his imaginary public and trying to involve it in his stories, quarrels and debates of all kinds. First of all, he relied much on language for the reactions he would obtain, playing on, and with, words so as to ironically unmask untrue relationships between words and things. Secondly, he pushed to the limit the convention of addressing spectators and listeners usually employed in live performance, stretching its role in written discourse to achieve a greater effect of involvement of readers. Lastly, he profited much from what we labelled his “specific weight”, i.e. the potential influence in society given by his recognised authority in certain matters, a position from which he could launch deeper attacks on conventional beliefs, so complying with the duty of the comic we hypothesised before: that of criticising society even under the threat of losing the benefits the post guarantees. That seemingly masochistic tendency has its rationale. Every representative has many privileges on the assumption that he, or she, has great responsibilities in administrating. The higher those responsibilities are, the higher is the reward, but also the more severe is the punishment for misdeeds done while in charge. But we all know that not everybody accepts the rules, and many try to use their power for their personal benefit and do not want to undergo the law's penalties. The comic, showing in this case more civic sense than others, helped very much in this by his lack of access to the use of public force, finds in the role of the scapegoat the right accomplishment of his task, accepting punishment when his breaking of conventions is too stark to be forgiven.
As Ceccarelli demonstrated, the role of the object of laughter (comic, ridicule) has its very own positive side: there is freedom of expression for the person, and at the same time integration in the society, even though at low levels. Thus the banishment of a “social” comic can never reach total extirpation from society, revealing how the scope of the comic lies on an entirely fictional layer, bearing no relation to facts, nor real consequences in terms of physical health. Myles na Gopaleen, mastering these three characteristics we postulated in the highest way, can be considered an author worth noting; and the oeuvre he wrote, the whole collection of Cruiskeen Lawn articles, is rightfully a novel because it respects the canons of the form, especially regarding the authorial figure and his relationship with the readers. In addition, his work can be studied even if we cannot conduct our research on the whole of it, a procedure justified exactly because of the resemblances to the real figure of the storyteller: its “chapters” (the daily articles) had a format that even the distracted reader could follow, even one who had not read each and every article before. So we can critically consider also a good part of them, as collected in the seven volumes published so far, with the addition of some others outside the collections, because completeness in this case is not at all a guarantee of better precision in the assessment; on the contrary, examination of the totality of articles might lead us to consider him as a person and not a persona. Once these points were clear, we proceeded further in considering tout court the works of Brian O'Nolan as the works of a single author, rather than complicating the references with many names which are none other than well-wrought sides of the same personality. By taking O'Nolan as the correct object of our research, empirical author of the works of the personae Flann O'Brien and Myles na Gopaleen, a clearer literary landscape emerges: the comic author Brian O'Nolan, self-conscious of his paramount role in society as both a guide and a scourge, in a word as an a(rche)typical, intentionally chose to differentiate his personalities so as to create different perspectives in different fields of knowledge by using, in addition, different means of communication: novels and journalism. We finally compared the newly assessed author Brian O'Nolan with other great Irish comic writers in English, such as James Joyce (the one everybody names as the master in the field), Samuel Beckett, and Jonathan Swift. This comparison showed once more how O'Nolan is in no way inferior to these authors who, greatly celebrated by critics, nonetheless failed to achieve the great public recognition O'Nolan received as Myles, awarded by the daily audience he reached and influenced with his Cruiskeen Lawn column. For this reason, we believe him to be representative of the comic figure's function as a social regulator and as a builder of solidarity, such as that Raymond Williams spoke of in his work (Williams 1982), with the aim of building a “culture in common” in mind. There is no way for a “culture in common” to be acquired if we do not accept the fact that even the most functional society rests on conventions, and in a world more and more “connected” we need someone to help everybody negotiate with different cultures and persons.
The comic gives us a worldly perspective which is at the same time comfortable and distressing, but in the end not as harmful as the one furnished by politicians could be: he lets us peep into parallel worlds without moving too far from our armchair and, as a consequence, is the one who does his best for the improvement of our understanding of things.
Abstract:
In recent years TNFRSF13B coding variants have been implicated by clinical genetics studies in Common Variable Immunodeficiency (CVID), the most common clinically relevant primary immunodeficiency in individuals of European ancestry, but their functional effects in relation to the development of the disease have not been entirely established. To examine the potential contribution of such variants to CVID, the more comprehensive perspective of an evolutionary approach was applied in this study, underlining the belief that evolutionary genetics methods can play a role in dissecting the origin, causes and diffusion of human diseases, representing a powerful tool also in human health research. For this purpose, the TNFRSF13B coding region was sequenced in 451 healthy individuals belonging to 26 worldwide populations, in addition to 96 control, 77 CVID and 38 Selective IgA Deficiency (IgAD) individuals from Italy, providing the first global picture of TNFRSF13B nucleotide diversity and haplotype structure and making it possible to suggest its evolutionary history. A slow rate of evolution, both within our species and in comparison with the chimpanzee, low levels of geographical structure in genetic diversity, and the absence of recent population-specific selective pressures were observed for the examined genomic region, suggesting that the geographical distribution of its variability is more plausibly related to its involvement also in innate immunity rather than in adaptive immunity only. This, together with the extremely subtle differences observed between disease and healthy samples, suggests that CVID might be more likely related to still unknown environmental and genetic factors, rather than to the nature of TNFRSF13B variants only.
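For reference, the nucleotide diversity summarized above is conventionally computed as the average number of pairwise differences per site among the sampled sequences (a standard population-genetics definition, not a figure from the thesis):

```latex
% Nucleotide diversity: average pairwise differences per site among n sequences
% of length L, where d_{ij} is the number of nucleotide differences between
% sequences i and j.
\pi = \frac{2}{n(n-1)\,L} \sum_{i<j} d_{ij}
```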
Abstract:
In the last decade interest in submarine instability has grown, driven by the increasing exploitation of natural resources (primarily hydrocarbons), the emplacement of bottom-lying structures (cables and pipelines) and the development of coastal areas, whose infrastructure increasingly protrudes into the sea. The great interest in this topic promoted a number of international projects such as: STEAM (Sediment Transport on European Atlantic Margins, 93-96), ENAM II (European North Atlantic Margin, 96-99), GITEC (Genesis and Impact of Tsunamis on the European Coast, 92-95), STRATAFORM (STRATA FORmation on Margins, 95-01), Seabed Slope Process in Deep Water Continental Margin (Northwest Gulf of Mexico, 96-04), COSTA (Continental slope Stability, 00-05), EUROMARGINS (Slope Stability on Europe's Passive Continental Margin), SPACOMA (04-07), EUROSTRATAFORM (European Margin Strata Formation), NGI's internal project SIP-8 (Offshore Geohazards), IGCP-511: Submarine Mass Movements and Their Consequences (05-09), and projects indirectly related to instability processes, such as TRANSFER (Tsunami Risk ANd Strategies For the European Region, 06-09) or NEAREST (integrated observations from NEAR shore sourcES of Tsunamis: towards an early warning system, 06-09). In Italy, apart from a national project realized within the activities of the National Group of Volcanology during the 2000-2003 framework, “Conoscenza delle parti sommerse dei vulcani italiani e valutazione del potenziale rischio vulcanico”, the study of submarine mass movements was underestimated until the occurrence of the landslide-tsunami events that affected Stromboli on December 30, 2002. This event made the Italian institutions and the scientific community more aware of the hazard related to submarine landslides, mainly in light of the growing anthropization of coastal sectors, which increases the vulnerability of these areas to the consequences of such processes. In this regard, two important national projects have recently been funded in order to study coastal instabilities (PRIN 24, 06-08) and to map the main submarine hazard features on the continental shelves and upper slopes around most of the Italian coast (MaGIC Project). The study carried out in this Thesis is addressed to the understanding of these processes, with particular reference to Stromboli's submerged flanks. The latter represent a natural laboratory in this regard, as several kinds of instability phenomena are present on the submerged flanks, affecting about 90% of the entire submerged area and often (strongly) influencing the morphological evolution of the subaerial slopes, as witnessed by the event that occurred on 30 December 2002. Furthermore, each phenomenon is characterized by different pre-failure, failure and post-failure mechanisms, ranging from rock-falls, to turbidity currents, up to catastrophic sector collapses. The Thesis is divided into three introductory chapters, comprising a brief review of submarine instability phenomena and the related hazard (Chapter 1), a bird's-eye view of the methodologies and available dataset (Chapter 2) and a short introduction to the evolution and morpho-structural setting of the Stromboli edifice (Chapter 3). The latter seems to play a major role in the development of large-scale sector collapses at Stromboli, as they occurred perpendicular to the orientation of the main volcanic rift axis (oriented in a NE-SW direction).
The characterization of these events and their relationships with the subsequent erosive-depositional processes is the main focus of chap. 4 (Offshore evidence of large-scale lateral collapses on the eastern flank of Stromboli, Italy, due to structurally-controlled, bilateral flank instability) and chap. 5 (Lateral collapses and active sedimentary processes on the North-western flank of Stromboli Volcano), which consist of articles accepted for publication in an international journal (Marine Geology). Moreover, these studies highlight the hazard related to such catastrophic events; several calamities (with more than 40,000 casualties in the last two centuries alone) have in fact been the direct or indirect result of landslides affecting volcanic flanks, as observed at Oshima-Oshima (1741) and Unzen Volcano (1792) in Japan (Satake & Kato, 2001; Brantley & Scott, 1993), Krakatau (1883) in Indonesia (Self & Rampino, 1981), Ritter Island (1888) and Sissano in Papua New Guinea (Ward & Day, 2003; Johnson, 1987; Tappin et al., 2001), and Mt St. Augustine (1883) in Alaska (Beget & Kienle, 1992). Flank landslides are also recognized as the most important and efficient mass-wasting process on volcanoes, contributing to the development of the edifices by widening their base and to the growth of a volcaniclastic apron at the foot of the volcano; a number of small- and medium-scale erosive processes are also responsible for the carving of the Stromboli submarine flanks and for the transport of debris towards the deeper areas. The characterization of the features associated with these processes is the main focus of chap. 6; it is also important to highlight that some small-scale events are able to damage coastal areas, as witnessed by the recent events of Gioia Tauro (1978), Nice (1979) and Stromboli (2002). The hazard potential related to these phenomena is in fact very high, as they commonly occur at a higher frequency than large-scale collapses and are therefore more significant on human timescales. In the last chapter (chap. 7), a brief review and discussion of the instability processes identified on the Stromboli submerged flanks is presented; they are also compared with analogous processes recognized in other submerged areas in order to shed light on the main factors involved in their development. Finally, some applications of multibeam data to the assessment of the hazard related to these phenomena are also discussed.
Resumo:
Age-related physiological changes in the gastrointestinal tract, as well as modifications in lifestyle, nutritional behaviour and functionality of the host immune system, inevitably affect the gut microbiota. The study presented here focuses on the application and comparison of two different microarray approaches for the characterization of the human gut microbiota, the HITChip and the HTF-Microb.Array, with particular attention to the effects of the aging process on the composition of this ecosystem. Using the Human Intestinal Tract Chip (HITChip), recently developed at Wageningen University, The Netherlands, we explored the age-related changes of the gut microbiota across the whole adult lifespan, from young adults, through the elderly, to centenarians. We observed that the microbial composition and diversity of the gut ecosystem of young adults and seventy-year-olds are highly similar but differ significantly from those of centenarians. After 100 years of symbiotic association with the human host, the microbiota is characterized by a rearrangement of the Firmicutes population and an enrichment in facultative anaerobes. The presence of such a compromised microbiota in centenarians is associated with an increased inflammatory status, also known as inflamm-aging, as determined by a range of peripheral blood inflammatory markers. In parallel, we undertook the development of our own phylogenetic microarray with a lower number of targets, aimed at describing the structure of the human gut microbiota at a high taxonomic level. The resulting chip, called the High Taxonomic level Fingerprinting Microbiota Array (HTF-Microb.Array), is based on Ligase Detection Reaction (LDR) technology, which allowed us to develop a fast and sensitive tool for fingerprinting the human gut microbiota in terms of the presence/absence of its principal groups. The validation on artificial DNA mixes, as well as a pilot study involving eight healthy young adults, demonstrated that the HTF-Microb.Array can successfully characterize the human gut microbiota, yielding results broadly consistent with the most recent characterizations. Conversely, the evaluation of the relative abundance of the target groups on the basis of the relative fluorescence intensity of the probe responses still presents some hindrances, as demonstrated by comparing the HTF-Microb.Array and HITChip high-taxonomic-level fingerprints of the same centenarians.
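As a purely illustrative sketch of the kind of presence/absence fingerprint described above (not the actual HTF-Microb.Array analysis pipeline), probe signals can be called "present" when their fluorescence exceeds a background-derived threshold. All probe names, intensities and the thresholding rule below are assumptions made only for this example.

```python
import statistics

def call_presence(probe_signals, blank_signals, n_sd=3.0):
    """Call each probe present/absent against a blank-based threshold.

    probe_signals: dict mapping probe name -> fluorescence intensity
    blank_signals: list of intensities from negative-control spots
    The threshold (blank mean + n_sd standard deviations) is a common
    heuristic, used here only as an illustration.
    """
    threshold = statistics.mean(blank_signals) + n_sd * statistics.stdev(blank_signals)
    return {probe: signal > threshold for probe, signal in probe_signals.items()}

# Hypothetical intensities for a few high-level bacterial groups.
signals = {"Bacteroidetes": 1850.0, "Clostridium_clusterIV": 920.0, "Proteobacteria": 280.0}
blanks = [250.0, 270.0, 240.0, 260.0]
print(call_presence(signals, blanks))
```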
Resumo:
This PhD thesis discusses the rationale for the design and use of synthetic oligosaccharides for the development of glycoconjugate vaccines and the role of physicochemical methods in the characterization of these vaccines. The study concerns two infectious diseases that represent a serious problem for national healthcare programs: human immunodeficiency virus (HIV) and Group A Streptococcus (GAS) infections. Both pathogens possess distinctive carbohydrate structures that have been described as suitable targets for vaccine design. The Group A Streptococcus cell-membrane polysaccharide (GAS-PS) is an attractive vaccine antigen candidate because of its conserved, constant expression pattern and its ability to confer immunoprotection in a relevant mouse model. Analysis of the immunogenic response within at-risk populations suggests an inverse correlation between high anti-GAS-PS antibody titres and GAS infection cases. Recent studies show that a chemically synthesized core polysaccharide-based antigen may represent an antigenic structural determinant of the large polysaccharide. Based on GAS-PS structural analysis, the study evaluates the potential of a synthetic design approach to GAS vaccine development and compares the efficiency of synthetic antigens with that of the long polysaccharide isolated from GAS. Synthetic GAS-PS structural analogues were specifically designed and generated to explore the impact of antigen length and terminal residue composition. For the HIV-1 glycoantigens, the dense glycan shield on the surface of the envelope protein gp120 was chosen as a target. This shield masks conserved protein epitopes and facilitates virus spread via binding to glycan receptors on susceptible host cells. The broadly neutralizing monoclonal antibody 2G12 binds a cluster of high-mannose oligosaccharides on the gp120 subunit of the HIV-1 Env protein, and this oligomannose epitope has been a subject of synthetic vaccine development. The clustered nature of the 2G12 epitope suggested that multivalent antigen presentation was important for developing a carbohydrate-based vaccine candidate. I describe the development of neoglycoconjugates displaying clustered HIV-1-related oligomannose carbohydrates and their immunogenic properties.
Resumo:
Healthcare, Human-Computer Interfaces (HCI), security and biometrics are the most promising application scenarios directly involved in the evolution of Body Area Networks (BANs). Both wearable devices and sensors directly integrated into garments envision a world in which each of us is supervised by an invisible assistant monitoring our health and daily-life activities. New opportunities are enabled by improvements in sensor miniaturization and in the transmission efficiency of wireless protocols, which have made it possible to integrate high computational power into independent, energy-autonomous, small-form-factor devices. The purposes of the applications are various: (I) data collection for off-line knowledge discovery; (II) notification to users of their activities or of dangerous situations; (III) biofeedback rehabilitation; (IV) remote alarm activation in case the subject needs assistance; (V) the introduction of a more natural interaction with the surrounding computerized environment; (VI) user identification by physiological or behavioural characteristics. Telemedicine and mHealth [1] are two of the leading concepts directly related to healthcare. The capability to wear unobtrusive devices supports users' autonomy: a new sense of freedom is offered to the user, not only as psychological support but as a real safety improvement. Furthermore, the medical community aims to introduce new devices to innovate patient treatment, in particular by extending ambulatory analysis to real-life scenarios through continuous acquisition. The wide diffusion of emerging portable wellness equipment has extended the usability of wearable devices to fitness and training, by monitoring user performance on the task at hand. The learning of the correct execution techniques related to work, sport or music can be supported by an electronic trainer furnishing adequate aid. HCIs made real the concepts of Ubiquitous Computing, Pervasive Computing and Calm Technology introduced from 1988 onwards by Mark Weiser and John Seely Brown. These concepts promote the creation of pervasive environments that enhance the human experience: context-aware, adaptive and proactive environments serve and help people by becoming sensitive and reactive to their presence, since electronics is ubiquitous and deployed everywhere. In this thesis we pay attention to the integration of all the aspects involved in the development of a BAN. Starting from the choice of sensors, we design the node, configure the radio network, implement real-time data analysis and provide feedback to the user. We present algorithms to be implemented in a wearable assistant for posture and gait analysis and to provide assistance under different walking conditions, preventing falls. Our aim, expressed by the intention to contribute to the development of non-proprietary solutions, drove us to integrate commercial and standard solutions in our devices. We used sensors available on the market and avoided designing specialized sensors in ASIC technologies; we employed standard radio protocols and open-source projects wherever feasible. The specific contributions of the PhD research activities are presented and discussed in the following. • We have designed and built several wireless sensor nodes providing both sensing and actuation capabilities, focusing on flexibility, small form factor and low power consumption. The key idea was to develop a simple and general-purpose architecture for the rapid analysis, prototyping and deployment of BAN solutions.
Two different sensing units are integrated: kinematic (3D accelerometer and 3D gyroscope) and kinetic (foot-floor contact pressure forces). Two kinds of feedback were implemented: audio and vibrotactile. • Since the system built is a suitable platform for testing and measuring the features and constraints of a sensor network (radio communication, network protocols, power consumption and autonomy), we compared Bluetooth and ZigBee performance in terms of throughput and energy efficiency; field tests evaluated their usability in the fall-detection scenario. • To prove the flexibility of the designed architecture, we implemented a wearable system for human posture rehabilitation. The application was developed in conjunction with biomedical engineers, who provided the audio algorithms used to give the user biofeedback about his/her stability. • We explored off-line gait analysis of the collected data, developing an algorithm to detect foot inclination in the sagittal plane during walking (an illustrative sketch of this kind of processing follows this abstract). • In collaboration with the Wearable Lab – ETH, Zurich, we developed an algorithm to monitor the user under several walking conditions in which the user carries a load. The remainder of the thesis is organized as follows. Chapter I gives an overview of Body Area Networks (BANs), illustrating the relevant features of this technology and the key challenges still open, and concludes with a short list of real solutions and prototypes proposed by academic research and manufacturers. The domain of posture and gait analysis, the methodologies, and the technologies used to provide real-time feedback on detected events are illustrated in Chapter II. Chapters III and IV, respectively, present the BANs developed to detect falls and to monitor gait, taking advantage of two inertial measurement units and baropodometric insoles. Chapter V reports an audio-biofeedback system to improve balance based on information about the user's centre of mass. A walking assistant based on a KNN classifier to detect walking alterations during load carriage is described in Chapter VI.
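The abstract does not reproduce the thesis algorithms; as a hedged illustration of how a sagittal-plane inclination can be estimated from a foot- or shank-mounted inertial unit, the sketch below fuses gyroscope and accelerometer readings with a textbook complementary filter. Sensor axes, sampling rate, the filter coefficient and all sample values are assumptions made only for this example, not the algorithm developed in the thesis.

```python
import math

def estimate_pitch(acc_samples, gyro_samples, dt=0.01, alpha=0.98):
    """Estimate sagittal-plane (pitch) inclination from IMU data.

    acc_samples : list of (ax, ay, az) accelerometer readings in g
    gyro_samples: list of pitch-rate readings in deg/s
    dt          : sampling interval in seconds (100 Hz assumed here)
    alpha       : complementary-filter weight on the gyro integration

    A standard complementary filter, shown only to illustrate the kind
    of processing involved in inclination estimation.
    """
    pitch = 0.0
    angles = []
    for (ax, ay, az), gyro_rate in zip(acc_samples, gyro_samples):
        # Gravity-based pitch estimate (valid when total acceleration ~ 1 g).
        acc_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
        # Blend integrated gyro rate (short-term) with accelerometer (long-term).
        pitch = alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * acc_pitch
        angles.append(pitch)
    return angles

# Hypothetical readings for a few samples of a foot-flat-to-toe-off transition.
acc = [(0.00, 0.0, 1.00), (0.05, 0.0, 1.00), (0.17, 0.0, 0.98)]
gyro = [0.0, 12.0, 30.0]
print(estimate_pitch(acc, gyro))
```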
Resumo:
The treatment of Cerebral Palsy (CP) is considered the "core problem" of the whole field of pediatric rehabilitation. The reason why this pathology has such a primary role can be ascribed to two main aspects: first, CP is the most frequent form of disability in childhood (one new case per 500 live births, (1)); second, the functional recovery of the "spastic" child is, historically, the clinical field in which the majority of therapeutic methods and techniques (physiotherapy, orthotics, pharmacology, orthopedic surgery, neurosurgery) were first applied and tested. The currently accepted definition of CP – a group of disorders of the development of movement and posture causing activity limitation (2) – is the result of a recent update by the World Health Organization to the language of the International Classification of Functioning, Disability and Health, from the original proposal of Ingram – a persistent but not unchangeable disorder of posture and movement – dated 1955 (3). This definition considers CP as a permanent ailment, i.e. a "fixed" condition, which however can be modified both functionally and structurally through the child's spontaneous evolution and the treatments carried out during childhood. The lesion that causes the palsy occurs in a structurally immature brain in the pre-, peri- or post-natal period (but only during the first months of life). The most frequent causes of CP are: prematurity, insufficient cerebral perfusion, arterial haemorrhage, venous infarction, hypoxia of various origins (for example, from the aspiration of amniotic fluid), malnutrition, infection and maternal or fetal poisoning. In addition to these causes, traumas and malformations have to be included. The lesion, whether focal or spread over the nervous system, impairs the functioning of the Central Nervous System (CNS) as a whole. As a consequence, it affects the construction of the adaptive functions (4), first of all posture control, locomotion and manipulation. The palsy itself does not vary over time; however, it assumes an unavoidable "evolutionary" character when, during growth, the child is required to meet new and different needs through the construction of new and different functions. It is essential to consider that, clinically, CP is not only a direct expression of the structural impairment, that is of etiology, pathogenesis and lesion timing, but is mainly the manifestation of the path followed by the CNS to "re"-construct the adaptive functions "despite" the presence of the damage. "Palsy" is "the form of the function that is implemented by an individual whose CNS has been damaged in order to satisfy the demands coming from the environment" (4). Therefore it is only possible to establish general relations between lesion site, nature and size on the one hand, and palsy and recovery processes on the other. It is quite common to observe that children with very similar neuroimaging can have very different clinical manifestations of CP and, on the other hand, children with very similar motor behaviours can have completely different lesion histories. A very clear example of this is represented by the hemiplegic forms, which show bilateral hemispheric lesions in a high percentage of cases. The first section of this thesis is aimed at guiding the interpretation of CP. First of all, the issue of the detection of the palsy is treated from a historical viewpoint; subsequently, an extended analysis of the current, internationally accepted definition of CP is provided.
The definition is then examined in terms of a space dimension and then of a time dimension, and it is highlighted where this definition is unacceptably lacking. The last part of the first section further stresses the importance of shifting from the traditional concept of CP as a palsy of development (defect analysis) towards the notion of the development of the palsy, i.e., as the product of the relationship that the individual nevertheless tries to build dynamically with the surrounding environment (resource semeiotics), starting and growing from a different availability of resources, needs, dreams, rights and duties (4). In the scientific and clinical community no common classification system of CP has so far been universally accepted. Moreover, no standard operative method or technique has been acknowledged to effectively assess the different disabilities and impairments exhibited by children with CP. CP is still "an artificial concept, comprising several causes and clinical syndromes that have been grouped together for a convenience of management" (5). The lack of standard, common protocols able to effectively diagnose the palsy, and consequently to establish specific treatments and prognoses, is mainly due to the difficulty of elevating this field to a level based on scientific evidence. A solution aimed at overcoming the currently incomplete treatment of children with CP is represented by the systematic clinical adoption of objective tools able to measure motor defects and movement impairments. A widespread application of reliable instruments and techniques able to objectively evaluate both the form of the palsy (diagnosis) and the efficacy of the treatments provided (prognosis) constitutes a valuable method to validate care protocols, establish the efficacy of classification systems and assess the validity of definitions. Since the 1980s, instruments specifically oriented to the analysis of human movement have been advantageously designed and applied in the context of CP with the aim of measuring motor deficits and, especially, gait deviations. The gait analysis (GA) technique has been increasingly used over the years to assess, analyze, classify and support the process of clinical decision making, allowing for a complete investigation of gait with increased temporal and spatial resolution. GA has provided a basis for improving the outcome of surgical and non-surgical treatments and for introducing a new modus operandi in the identification of defects and functional adaptations to musculoskeletal disorders. Historically, the first laboratories set up for gait analysis developed their own protocols (sets of procedures for data collection and data reduction) independently, according to the performance of the technologies available at the time. In particular, stereophotogrammetric systems, mainly based on optoelectronic technology, soon became a gold standard for motion analysis and have been successfully applied especially for scientific purposes. Nowadays optoelectronic systems have significantly improved their performance in terms of spatial and temporal resolution; however, many laboratories continue to use protocols designed around the technology available in the 1970s, which is now out of date. Furthermore, these protocols are not consistent either in their biomechanical models or in their data-collection procedures (a minimal example of the kind of data reduction such protocols perform is sketched below).
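To give a concrete, though greatly simplified, idea of the "data reduction" mentioned above, the sketch below computes a sagittal knee flexion angle from three marker positions. This is not the biomechanical model of any of the protocols examined in the thesis; the marker names, coordinates and planar simplification are assumptions made only for this illustration.

```python
import numpy as np

def knee_flexion_angle(hip, knee, ankle):
    """Knee flexion as the angle between thigh and shank segment vectors.

    hip, knee, ankle: 3D marker positions (metres) for one frame.
    Full extension returns 0 degrees; flexion increases the value.
    A simplified illustration, not a full protocol's joint model.
    """
    thigh = np.asarray(hip) - np.asarray(knee)
    shank = np.asarray(ankle) - np.asarray(knee)
    cos_angle = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    # The segment-to-segment angle is 180 deg at full extension; report its supplement.
    return 180.0 - np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Hypothetical marker positions for a single mid-stance frame.
print(knee_flexion_angle(hip=[0.10, 0.0, 0.95], knee=[0.12, 0.0, 0.50], ankle=[0.10, 0.0, 0.08]))
```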
In spite of these differences, GA data are shared, exchanged and interpreted irrespective of the adopted protocol, without full awareness of the extent to which these protocols are compatible and comparable with each other. Following the extraordinary advances in computer science and electronics, new systems for GA, no longer based on optoelectronic technology, are now becoming available. They are the Inertial and Magnetic Measurement Systems (IMMSs), based on miniature MEMS (microelectromechanical systems) inertial sensor technology. These systems are cost-effective, wearable and fully portable motion analysis systems; these features give IMMSs the potential to be used both outside specialized laboratories and to collect consecutive series of tens of gait cycles. The recognition and selection of the most representative gait cycle is then easier and more reliable, especially in children with CP, considering their relevant gait-cycle variability. The second section of this thesis is focused on GA. In particular, it is firstly aimed at examining the differences among the five most representative GA protocols in order to assess the state of the art with respect to inter-protocol variability. The design of a new protocol is then proposed and presented with the aim of performing gait analysis on children with CP by means of an IMMS. The protocol, named ‘Outwalk’, contains original and innovative solutions oriented at obtaining joint kinematics with calibration procedures that are extremely comfortable for the patients. The results of a first in-vivo validation of Outwalk on healthy subjects are then provided. In particular, this study was carried out by comparing Outwalk, used in combination with an IMMS, with a reference protocol and an optoelectronic system. In order to set up a more accurate and precise comparison of the systems and protocols, ad hoc methods were designed and an original formulation of the coefficient of multiple correlation, a statistical parameter whose conventional form is recalled at the end of this abstract, was developed and effectively applied. On the basis of the experimental design proposed for the validation on healthy subjects, a first assessment of Outwalk, together with an IMMS, was also carried out on children with CP. The third section of this thesis is dedicated to the treatment of walking in children with CP. Commonly prescribed treatments addressing gait abnormalities in children with CP include physical therapy, surgery (orthopedic and rhizotomy), and orthoses. The orthotic approach is conservative, being reversible, and is widespread in many therapeutic regimes. Orthoses are used to improve the gait of children with CP by preventing deformities, controlling joint position, and offering an effective lever for the ankle joint. Orthoses are prescribed with the additional aims of increasing walking speed, improving stability, preventing stumbling, and decreasing muscular fatigue. The ankle-foot orthosis (AFO), with a rigid ankle, is primarily designed to prevent equinus and other foot deformities, with a positive effect also on more proximal joints. However, AFOs prevent the natural excursion of the tibio-tarsic joint during the second rocker, hence hampering the natural forward progression of the whole body under the effect of inertia (6). A new modular (submalleolar) astragalus-calcanear orthosis, named OMAC, has recently been proposed with the intention of replacing AFOs in those children with CP exhibiting a flat, valgus-pronated foot.
The aim of this section is thus to present the mechanical and technical features of the OMAC by means of an accurate description of the device; in particular, the full text of the deposited Italian patent is provided. A preliminary validation of the OMAC with respect to the AFO is also reported, as resulting from an experimental campaign on diplegic children with CP over a three-month period, aimed at quantitatively assessing the benefit provided by the two orthoses on walking and at qualitatively evaluating the changes in quality of life and motor abilities. As already stated, CP is universally considered a persistent but not unchangeable disorder of posture and movement. In contrast with this definition, some clinicians (4) have recently pointed out that movement disorders may be primarily caused by the presence of perceptive disorders, where perception is not merely the acquisition of sensory information, but an active process aimed at guiding the execution of movements through the integration of sensory information properly representing the state of one's body and of the environment. Children with perceptive impairments show an overall fear of moving and the onset of strongly unnatural walking schemes directly caused by the presence of perceptive system disorders. The fourth section of the thesis thus deals with accurately defining the perceptive impairment exhibited by diplegic children with CP. A detailed description of the clinical signs revealing the presence of the perceptive impairment and a classification scheme of the clinical aspects of perceptual disorders are provided. Finally, a functional reaching test is proposed as an instrumental test able to disclose the perceptive impairment.
References
1. Prevalence and characteristics of children with cerebral palsy in Europe. Dev Med Child Neurol. 2002 Sep;44(9):633-640.
2. Bax M, Goldstein M, Rosenbaum P, Leviton A, Paneth N, Dan B, et al. Proposed definition and classification of cerebral palsy, April 2005. Dev Med Child Neurol. 2005 Aug;47(8):571-576.
3. Ingram TT. A study of cerebral palsy in the childhood population of Edinburgh. Arch Dis Child. 1955 Apr;30(150):85-98.
4. Ferrari A, Cioni G. The spastic forms of cerebral palsy: a guide to the assessment of adaptive functions. Milan: Springer; 2009.
5. Olney SJ, Wright MJ. Cerebral palsy. In: Campbell S, et al., editors. Physical Therapy for Children. 2nd ed. Philadelphia: Saunders; 2000. p. 533-570.
6. Desloovere K, Molenaers G, Van Gestel L, Huenaerts C, Van Campenhout A, Callewaert B, et al. How can push-off be preserved during use of an ankle foot orthosis in children with hemiplegia? A prospective controlled study. Gait Posture. 2006 Oct;24(2):142-151.
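For readers unfamiliar with the coefficient of multiple correlation mentioned above in connection with the Outwalk validation, its conventional formulation for comparing G gait waveforms of F frames each (as commonly used in gait-analysis repeatability studies, e.g. following Kadaba and colleagues) is recalled below; this is not the original formulation developed in the thesis, which is not reproduced here.

\[
\mathrm{CMC} \;=\; \sqrt{\,1-\frac{\displaystyle\sum_{g=1}^{G}\sum_{f=1}^{F}\bigl(Y_{gf}-\bar{Y}_{f}\bigr)^{2}\big/\bigl[F\,(G-1)\bigr]}{\displaystyle\sum_{g=1}^{G}\sum_{f=1}^{F}\bigl(Y_{gf}-\bar{Y}\bigr)^{2}\big/\bigl(GF-1\bigr)}\,}
\]

where \(Y_{gf}\) is the value of waveform \(g\) at frame \(f\), \(\bar{Y}_{f}\) is the mean over the waveforms at frame \(f\), and \(\bar{Y}\) is the grand mean. Values close to 1 indicate waveforms that are highly similar across repetitions or protocols.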