892 results for EXPLOITING MULTICOMMUTATION


Relevance:

10.00%

Publisher:

Abstract:

The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly accompanied by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework that is capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems by exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended, first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migration, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges; one of these was timekeeping. In this regard, a further contribution is a novel data structure, called the addressable binary heap (ABH). The ABH, conceptually a pointer-based implementation of a binary heap, shows very interesting average- and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
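
The abstract does not detail the ABH; as a hedged illustration of the general idea it alludes to (a pointer-based binary heap keeping timer expirations ordered so the earliest deadline is always at the root), the Python sketch below maintains such a heap with explicit parent/child links. The node layout, field names, and operations are illustrative assumptions, not the thesis's data structure.

```python
# Illustrative sketch only: a pointer-based binary min-heap ordering timer
# expiration times, loosely inspired by the "addressable binary heap" idea
# described in the abstract. Names and structure are assumptions.

class TimerNode:
    __slots__ = ("expiry", "callback", "parent", "left", "right")

    def __init__(self, expiry, callback):
        self.expiry = expiry          # absolute expiration time
        self.callback = callback      # action to run when the timer fires
        self.parent = self.left = self.right = None


class TimerHeap:
    def __init__(self):
        self.root = None
        self.size = 0

    def _node_at(self, index):
        """Follow the bits of `index` (1-based, level order) from the root."""
        node = self.root
        for bit in bin(index)[3:]:    # drop '0b1'; remaining bits give the path
            node = node.right if bit == "1" else node.left
        return node

    def insert(self, expiry, callback):
        node = TimerNode(expiry, callback)
        self.size += 1
        if self.size == 1:
            self.root = node
            return
        parent = self._node_at(self.size // 2)     # parent of the new last slot
        if self.size % 2 == 0:
            parent.left = node
        else:
            parent.right = node
        node.parent = parent
        # Sift up by swapping payloads so earlier deadlines bubble to the root.
        while node.parent and node.expiry < node.parent.expiry:
            node.expiry, node.parent.expiry = node.parent.expiry, node.expiry
            node.callback, node.parent.callback = node.parent.callback, node.callback
            node = node.parent

    def next_deadline(self):
        """Earliest expiration time, i.e. when the next timer event is due."""
        return self.root.expiry if self.root else None
```

In a tick-less setting, next_deadline() is the value the timekeeping code would program into the hardware timer instead of relying on a periodic tick.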

Relevance:

10.00%

Publisher:

Abstract:

This thesis investigates context-aware wireless networks, capable of adapting their behavior to the context and the application thanks to their ability to combine communication, sensing, and localization. Problems of signal demodulation, parameter estimation, and localization are addressed by exploiting analytical methods, simulations, and experimentation, for the derivation of fundamental limits, the performance characterization of the proposed schemes, and their experimental validation. Ultrawide-bandwidth (UWB) signals are considered in certain cases, and non-coherent receivers, which allow the multipath channel diversity to be exploited without adopting complex architectures, are investigated. Closed-form expressions for the achievable bit error probability of the novel proposed architectures are derived. The problem of time delay estimation (TDE), which enables network localization through ranging measurements, is addressed from a theoretical point of view. New fundamental bounds on TDE are derived for the case in which the received signal is partially known or unknown at the receiver side, as often occurs due to propagation or to the adoption of low-complexity estimators. Practical estimators, such as energy-based estimators, are revisited and their performance is compared with the new bounds. The localization issue is addressed experimentally through the characterization of cooperative networks. Practical algorithms able to improve the accuracy in non-line-of-sight (NLOS) channel conditions are evaluated on measured data. With the purpose of enhancing localization coverage in NLOS conditions, non-regenerative relaying techniques for localization are introduced and ad hoc position estimators are devised. An example of a context-aware network is given by the study of a UWB-RFID system for detecting and locating semi-passive tags. In particular, an in-depth investigation of low-complexity receivers capable of dealing with multi-tag interference, synchronization mismatches, and clock drift is presented. Finally, theoretical bounds on the localization accuracy of this and other passive localization networks (e.g., radar) are derived, also accounting for different configurations such as monostatic and multistatic networks.
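
The abstract mentions energy-based estimators among the practical TDE schemes; as a hedged, minimal illustration of that family, the numpy sketch below bins the received signal into energy windows and takes the first window exceeding a noise-derived threshold as the time-of-arrival estimate. The pulse shape, window length, threshold rule, and sampling rate are illustrative assumptions, not the receivers analysed in the thesis.

```python
# Minimal sketch of an energy-based time-of-arrival estimator; waveform,
# window length and threshold rule are illustrative assumptions.
import numpy as np

def energy_toa(received, fs, win=4e-9, threshold_factor=5.0):
    """Arrival time = start of the first energy bin above a noise-derived threshold."""
    n_win = max(1, int(win * fs))                     # samples per energy bin
    n_bins = len(received) // n_win
    bins = received[:n_bins * n_win].reshape(n_bins, n_win)
    energy = np.sum(bins ** 2, axis=1)                # energy in each bin
    noise_floor = np.median(energy)                   # robust noise estimate
    hits = np.where(energy > threshold_factor * noise_floor)[0]
    if hits.size == 0:
        return None                                   # no detection
    return hits[0] * n_win / fs                       # seconds from record start

# Toy usage: a noisy pulse delayed by 50 ns, sampled at 2 GHz.
fs = 2e9
t = np.arange(0, 400e-9, 1 / fs)
rng = np.random.default_rng(0)
pulse = np.exp(-((t - 50e-9) / 2e-9) ** 2)            # Gaussian "pulse"
received = pulse + 0.05 * rng.standard_normal(t.size)
# Prints an estimate close to the 50 ns true delay; energy detectors lock
# onto the leading edge, so a small early bias is expected.
print(energy_toa(received, fs))
```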

Relevance:

10.00%

Publisher:

Abstract:

This thesis addresses the issue of generating texts in the style of an existing author that also satisfy structural constraints imposed by the genre of the text. Although Markov processes are known to be suitable for representing style, they are difficult to control so as to satisfy non-local properties, such as structural constraints, that require long-distance modeling. The framework of Constrained Markov Processes makes it possible to generate texts that are consistent with a corpus while being controllable in terms of rhyme and meter. Constrained Markov processes reformulate Markov processes in the context of constraint satisfaction. The thesis describes how to represent stylistic and structural properties as constraints in this framework and how this approach can be used for the generation of lyrics in the style of 60 different authors. An evaluation of the described method is provided by comparing it to both pure Markov and pure constraint-based approaches. Finally, the thesis describes the implementation of an augmented text editor, called Perec. Perec is intended to enhance creativity by helping the user write lyrics and poetry, exploiting the techniques presented so far.
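
The thesis's Constrained Markov Processes compile the constraints directly into the transition model; as a much simpler flavour of combining a Markov model of style with a structural constraint, the hedged sketch below trains a toy bigram model and imposes a crude rhyme constraint on the last word by filtering candidate continuations and resampling. Corpus, rhyme test, and sampling strategy are illustrative assumptions, not the method evaluated in the thesis.

```python
# Toy sketch: a bigram Markov generator whose last word is constrained to
# "rhyme" (crudely, by shared suffix) with a target word, using filtering
# and resampling. The thesis's Constrained Markov Processes instead compile
# such constraints into the model itself; everything here is illustrative.
import random
from collections import defaultdict

corpus = ("the night is long and the road is cold "
          "the song of the road is old and bold").split()

# Estimate bigram transitions from the toy corpus.
transitions = defaultdict(list)
for prev_word, word in zip(corpus, corpus[1:]):
    transitions[prev_word].append(word)

def rhymes(a, b):
    return a != b and a[-2:] == b[-2:]          # crude 2-letter suffix "rhyme"

def generate_line(length, rhyme_with, seed_word="the", tries=500):
    rng = random.Random(1)
    for _ in range(tries):                      # resample until constraints hold
        line = [seed_word]
        for position in range(1, length):
            candidates = transitions.get(line[-1])
            if not candidates:
                break
            if position == length - 1:          # unary constraint on last word
                candidates = [w for w in candidates if rhymes(w, rhyme_with)]
                if not candidates:
                    break
            line.append(rng.choice(candidates))
        if len(line) == length:
            return " ".join(line)
    return None

# Prints a 6-word line whose last word shares the "ld" suffix with "gold".
print(generate_line(6, rhyme_with="gold"))
```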

Relevance:

10.00%

Publisher:

Abstract:

Multiprocessor Systems-on-Chip (MPSoCs) are the core of today's and next-generation computing platforms. Their relevance in the global market continuously increases, as they play an important role both in everyday products (e.g., smartphones, tablets, laptops, cars) and in strategic market sectors such as aviation, defense, robotics, and medicine. Despite the remarkable performance improvements of recent years, processor manufacturers have had to deal with issues, commonly called “walls”, that have hindered processor development. After the famous “Power Wall”, which limited the maximum frequency of a single core and marked the birth of the modern multiprocessor system-on-chip, the “Thermal Wall” and the “Utilization Wall” are now the key limiters of performance improvements. The former concerns the damaging effects of high on-chip temperature caused by the dissipation of large power densities, whereas the latter refers to the impossibility of fully exploiting the computing power of the processor due to limitations on power and temperature budgets. In this thesis we face these challenges by developing efficient and reliable solutions able to maximize performance while keeping the maximum temperature below a fixed critical threshold and saving energy. This has been made possible by exploiting the Model Predictive Control (MPC) paradigm, which solves an optimization problem subject to constraints in order to find the optimal control decisions for the future interval. A fully distributed MPC-based thermal controller, with far lower complexity than a centralized one, has been developed. Control feasibility, together with properties useful for simplifying the control design, has been proved by studying a partial differential equation thermal model. Finally, the controller has been efficiently integrated into more complex control schemes able to minimize energy consumption and deal with mixed-criticality tasks.
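
The thesis's controller is a distributed MPC over a prediction horizon; as a hedged toy of the underlying idea, the sketch below uses a first-order linear thermal model and, at each step, applies the largest power whose one-step temperature prediction stays below the critical threshold. The model coefficients, ambient temperature, and power range are made-up illustrative numbers.

```python
# Toy sketch of thermal-capping control: with a first-order linear model
# T[k+1] = a*T[k] + b*p[k] + c*T_amb, pick the largest power (standing in
# for performance) whose one-step prediction stays below the threshold.
# The real controller in the thesis is a distributed MPC over a longer
# horizon; all numbers below are made up for illustration.

def power_cap(T_now, T_crit, p_max, a=0.95, b=0.8, c=0.05, T_amb=45.0):
    """Largest admissible power for the next step (clamped to [0, p_max])."""
    headroom = T_crit - (a * T_now + c * T_amb)   # margin left by passive dynamics
    p_allowed = headroom / b                      # power that exactly hits T_crit
    return max(0.0, min(p_max, p_allowed))

def step(T_now, p, a=0.95, b=0.8, c=0.05, T_amb=45.0):
    return a * T_now + b * p + c * T_amb          # simulate the thermal model

# Simulate a core that always requests full power but is throttled when needed.
T, T_crit, p_max = 50.0, 80.0, 10.0
for k in range(8):
    p = power_cap(T, T_crit, p_max)
    T = step(T, p)
    print(f"step {k}: applied power {p:4.1f} W, temperature {T:5.1f} degC")
```

Running the loop, the core initially gets full power and is progressively throttled so that the temperature settles at, and never exceeds, the 80 degC threshold.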

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents several techniques designed to drive a swarm of robots in an a priori unknown environment, moving the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two different theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS); the first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. These theories, each from its own point of view, exploit the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm have been exploited with the aim of overcoming and minimizing difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps keep the environmental information detected by each single agent up to date across the swarm. Swarm Intelligence has been applied to the presented technique through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph Theory has been applied by exploiting Consensus and the agreement protocol, with the aim of maintaining the units in a desired and controlled formation. This approach has been followed in order to preserve the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
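
As a hedged illustration of the PSO building block mentioned above (not the combined PSO/Consensus navigation scheme of the thesis), the sketch below runs a textbook particle swarm on a 2-D cost given by the distance to a goal plus a penalty around a single obstacle; the swarm size, gains, and environment are illustrative assumptions.

```python
# Minimal Particle Swarm Optimization sketch on a 2-D navigation-like cost:
# distance to a goal plus a penalty around one obstacle. Parameters are
# textbook defaults; this illustrates only the PSO building block.
import numpy as np

rng = np.random.default_rng(42)
goal = np.array([8.0, 8.0])
obstacle, obstacle_radius = np.array([4.0, 4.0]), 1.5

def cost(p):
    penalty = 50.0 if np.linalg.norm(p - obstacle) < obstacle_radius else 0.0
    return np.linalg.norm(p - goal) + penalty

n, w, c1, c2 = 20, 0.7, 1.5, 1.5                     # swarm size and PSO gains
pos = rng.uniform(0, 2, size=(n, 2))                 # start near the origin
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(100):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best position found:", gbest)                 # expected to approach the goal
```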

Relevance:

10.00%

Publisher:

Abstract:

Geographical Indications (GIs) play an important role in economic growth and rural territorial development when a given product quality, reputation, or other characteristic of the product is essentially attributable to its geographical origin. This research examined the possibility of valorising the Brazilian region called Vale do Paraiba Fluminense, nicknamed “Vale do Café”, and of highlighting the potential of coffee as a quality product, sustainable from an environmental and social point of view: a genuine cultural heritage that can become a valuable economic resource for the territory. In the first phase of the investigation, desk and field research based on bibliographic sources was carried out; in the second phase, the FAO Participatory Methodology was applied, through online questionnaires, to identify the link between the area of origin and the local product and its development potential based on local resources. In the qualitative analysis, representatives of the different stakeholder categories were interviewed to enrich the picture of the region's historical context. Finally, in the quantitative part, questionnaires were administered to coffee consumers in the territory. The research concludes that the territory could reintroduce a historic coffee, symbol of the wealth and decline of the region, as an element of local economic potential, exploiting the intangible heritage of the historic farms, re-embedding the product in local memory, bringing the population closer to its history and, above all, raising its awareness of the value of the geographical name “Vale do Paraiba Fluminense” or “Vale do Café”, linked to the history of the region and to the coffee product that the work proposes to relaunch for the benefit of the territory, re-localizing the geographical name.

Relevance:

10.00%

Publisher:

Abstract:

Apple proliferation (AP) disease is the most important graft-transmissible and vector-borne disease of apple in Europe. ‘Candidatus Phytoplasma mali’ (Ca. P. mali) is the causal agent of AP. Apple (Malus x domestica) and other Malus species are the only known woody hosts. In European apple orchards, the cultivars are mainly grafted on one rootstock, M. x domestica cv. M9. M9, like all other M. x domestica cultivars, is susceptible to ‘Ca. P. mali’. Resistance to AP was found in the wild genotype Malus sieboldii (MS) and in MS-derived hybrids, but these are characterised by poor agronomic value. The breeding of a new rootstock carrying both the resistance and the agronomic traits was the major aim of the project of which this work is a part. The objective was to shed light on the unknown resistance mechanism. The plant-phytoplasma interaction was studied by analysing differences between the ‘Ca. P. mali’-resistant and -susceptible genotypes, related either to constitutively expressed genes or to genes induced during infection. The cDNA-Amplified Fragment Length Polymorphism (cDNA-AFLP) technique was employed in both approaches. Differences related to constitutively expressed genes were identified between two ‘Ca. P. mali’-resistant hybrid genotypes (4551 and H0909) and the ‘Ca. P. mali’-susceptible M9. 232 cDNA-AFLP bands present in the two resistant genotypes but absent in the susceptible one were isolated, but several different products associated with each band were found. Therefore, two different macroarray hybridisation experiments were performed with the cDNA-AFLP fragments, yielding 40 sequences encoding genes of unknown function or genes covering a wide array of functions, including plant defence. In the second approach, the identification and analysis of the induced genes were carried out by exploiting an in vitro system in which healthy and ‘Ca. P. mali’-infected micropropagated plants were maintained under controlled conditions. Infection trials using in vitro grafting of ‘Ca. P. mali’ showed that the resistance phenotype could be reproduced in this system. In addition, ex vitro plants were generated as an independent control for the genes differentially expressed in the in vitro plants. The cDNA-AFLP analysis in in vitro plants yielded 63 bands characterised by over-expression in the infected state of both the H0909 and MS genotypes. The major part (37%) of the associated sequences showed homology with products of unknown function. The other genes were involved in plant defence, energy transport/oxidative stress response, protein metabolism and cellular growth. Real-time qPCR analysis was employed to validate the differential expression of the genes identified in the cDNA-AFLP analysis. Since no internal controls were available for the study of gene expression in Malus, an analysis of housekeeping genes was performed. The most stably expressed genes were elongation factor-1 α (EF1) and the eukaryotic translation initiation factor 4-A (eIF4A). Twelve out of 20 genes investigated through qPCR were significantly differentially expressed in at least one genotype, either in in vitro plants or in ex vitro plants. Overall, about 20% of the genes confirmed their cDNA-AFLP expression pattern in M. sieboldii or H0909. In contrast, 30% of the genes showed down-regulation or were not differentially expressed, and for the remaining 50% a contrasting behaviour was observed.
The qPCR data could be interpreted as follows: the phytoplasma infection unbalances photosynthetic activity and photorespiration, down-regulating genes involved in photosynthesis and in the electron transfer chain. As a result, and in contrast to the M. x domestica genotypes, an up-regulation of genes of the general response against pathogens was found in MS. These genes are involved in the H2O2 pathway and in the production of secondary metabolites, leading to the hypothesis that a response based on the accumulation of H2O2 in MS underlies its resistance. This resembles the phenomenon known as “recovery”, in which spontaneous remission of the symptoms is observed in old susceptible plants; recovery, however, occurs stochastically, whereas the resistance of MS is an inducible but stable feature. As an additional product of this work, three cDNA-AFLP-derived markers were developed which showed independent distribution among the seedlings of two breeding progenies and were associated with a genomic region characteristic of MS. These markers will contribute to the development of molecular markers for the resistance as well as to the mapping of the resistance on the Malus genome.
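
The abstract reports that EF1 and eIF4A were the most stably expressed reference genes used for the qPCR validation. As a hedged illustration of how relative expression is commonly computed from Ct values with multiple reference genes, the sketch below applies the 2^-ddCt method normalised to the geometric mean of the two references; the method choice, the 100% amplification-efficiency assumption, and all Ct numbers are illustrative, not the thesis's actual analysis.

```python
# Illustrative sketch of relative-expression calculation from qPCR Ct values,
# normalising to the geometric mean of the two reference genes named in the
# abstract (EF1, eIF4A). The 2^-ddCt method, the 100% efficiency assumption
# and the Ct numbers below are assumptions for illustration only.
from statistics import geometric_mean

def relative_expression(ct_target, ct_refs, ct_target_ctrl, ct_refs_ctrl):
    """2^-ddCt of a target gene, sample vs. control, multi-reference normalised."""
    d_ct_sample = ct_target - geometric_mean(ct_refs)             # dCt, infected sample
    d_ct_control = ct_target_ctrl - geometric_mean(ct_refs_ctrl)  # dCt, healthy control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values for one defence-related gene in infected vs. healthy plants.
fold_change = relative_expression(
    ct_target=24.1, ct_refs=[20.3, 21.0],            # infected: target, EF1, eIF4A
    ct_target_ctrl=26.4, ct_refs_ctrl=[20.5, 21.2],  # healthy control
)
print(f"fold change (infected vs. healthy): {fold_change:.2f}")  # > 1 means up-regulation
```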

Relevance:

10.00%

Publisher:

Abstract:

Supernovae are among the most energetic events occurring in the universe and are so far the only verified extrasolar source of neutrinos. As the explosion mechanism is still not well understood, recording a burst of neutrinos from such a stellar explosion would be an important benchmark for particle physics as well as for core collapse models. The neutrino telescope IceCube is located at the geographic South Pole and monitors the Antarctic glacier for Cherenkov photons. Even though it was conceived for the detection of high-energy neutrinos, it is capable of identifying a burst of low-energy neutrinos ejected from a supernova in the Milky Way by exploiting the low photomultiplier noise in the Antarctic ice and extracting a collective rate increase. A signal Monte Carlo specifically developed for water Cherenkov telescopes is presented. With its help, we investigate how well IceCube can distinguish between core collapse models and oscillation scenarios. In the second part, nine years of data taken with the IceCube precursor AMANDA are analyzed. Intensive data cleaning methods are presented along with a background simulation. From the result, an upper limit on the expected occurrence of supernovae within the Milky Way is determined.
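
As a hedged toy of the “collective rate increase” idea, the sketch below sums the counts of many photomultiplier modules in a sliding time window and quotes the excess over the expected noise in units of its Poisson standard deviation; the module count, noise rate, binning, and injected burst are made-up numbers, not IceCube's actual supernova trigger.

```python
# Toy sketch of a collective-rate-increase search: sum the counts of many
# photomultiplier modules in a sliding time window and quote the excess over
# the expected noise in Poisson standard deviations. All numbers are made up.
import numpy as np

rng = np.random.default_rng(7)
n_modules, noise_rate_hz, bin_s, n_bins = 5000, 500.0, 0.002, 5000

mu_per_bin = n_modules * noise_rate_hz * bin_s        # expected summed counts per bin
counts = rng.poisson(mu_per_bin, size=n_bins).astype(float)
counts[2500:2750] += 0.01 * mu_per_bin                # inject a 1% excess lasting 0.5 s

window = 250                                          # 0.5 s sliding window (in bins)
summed = np.convolve(counts, np.ones(window), mode="valid")   # counts per window
expected = window * mu_per_bin
significance = (summed - expected) / np.sqrt(expected)

i = int(np.argmax(significance))
print(f"max excess: {significance[i]:.1f} sigma at t = {i * bin_s:.2f} s")
```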

Relevance:

10.00%

Publisher:

Abstract:

Advanced optical biosensor platforms exploiting long-range surface plasmons (LRSPs) and a responsive N-isopropylacrylamide (NIPAAm) hydrogel binding matrix were developed for the detection of protein and bacterial pathogen analytes. LRSPs are optical waves that originate from the coupling of surface plasmons on the opposite sides of a thin metallic film embedded between two dielectrics with similar refractive indices. LRSPs exhibit orders-of-magnitude lower damping and a more extended field profile compared to regular surface plasmons (SPs). Their excitation is accompanied by a narrow resonance and provides stronger enhancement of the electromagnetic field intensity, which can advance the sensitivity of surface plasmon resonance (SPR) and surface plasmon-enhanced fluorescence spectroscopy (SPFS) biosensors. Firstly, we investigated thin gold layers deposited on a fluoropolymer surface for the excitation of LRSPs. The study indicates that the morphological, optical, and electrical properties of the gold film can be changed by the surface energy of the fluoropolymer and affect the performance of an SPFS biosensor. A photo-crosslinkable NIPAAm hydrogel was grafted to the sensor surface in order to serve as a binding matrix. It was modified with bio-recognition elements (BREs) via amine coupling chemistry and offered the advantages of large binding capacity, stimuli-responsive properties, and good biocompatibility. Through experimental observations supported by numerical simulations describing diffusion mass transfer and affinity binding of target molecules in the hydrogel, the hydrogel binding matrix thickness, the concentration of BREs, and the profile of the probing evanescent field were optimized. Hydrogels with thicknesses up to a micrometer were shown to support an additional hydrogel optical waveguide (HOW) mode, which was employed for probing affinity binding events in the gel by means of refractometric and fluorescence measurements. These schemes allow limits of detection (LODs) at the picomolar and femtomolar levels, respectively, to be reached. Besides the hydrogel-based experiments for the detection of molecular analytes, long-range surface plasmon-enhanced fluorescence spectroscopy (LRSP-FS) was employed for the detection of bacterial pathogens. The influence of the bacteria capture efficiency on the surface and of the probing field profile on the sensor response was investigated. The potential of LRSP-FS with an extended evanescent field is demonstrated for the detection of pathogenic E. coli O157:H7 in sandwich immunoassays. An LOD as low as 6 cfu mL-1 with a detection time of 40 minutes was achieved.
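
The affinity-binding part of such simulations is often described, in its simplest form, by a 1:1 Langmuir kinetic model; the hedged sketch below integrates dG/dt = kon*c*(Gmax - G) - koff*G through an association phase followed by a buffer rinse. The rate constants, concentration, and time base are illustrative assumptions, and the diffusion mass-transfer step discussed in the abstract is deliberately omitted.

```python
# Sketch of a 1:1 Langmuir affinity-binding model commonly used for sensor
# surfaces: dG/dt = kon*c*(Gmax - G) - koff*G. Rate constants, concentration
# and times are illustrative; diffusion mass transfer is omitted on purpose.
import numpy as np

kon, koff = 1e5, 1e-3        # association (1/(M*s)) and dissociation (1/s) rates
Gmax = 1.0                   # binding capacity (arbitrary surface-coverage units)
c = 1e-8                     # analyte concentration during association (M)

dt, t_assoc, t_total = 1.0, 600.0, 1200.0
times = np.arange(0.0, t_total, dt)
G = np.zeros_like(times)

for i in range(1, len(times)):
    conc = c if times[i] < t_assoc else 0.0      # buffer rinse after 600 s
    dGdt = kon * conc * (Gmax - G[i - 1]) - koff * G[i - 1]
    G[i] = G[i - 1] + dt * dGdt                  # explicit Euler step

print(f"coverage at end of association: {G[int(t_assoc / dt) - 1]:.3f}")
print(f"coverage at end of dissociation: {G[-1]:.3f}")
```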

Relevance:

10.00%

Publisher:

Abstract:

Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmer's responsibility to explicitly manage memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages have been proposed that work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budgets are constrained. This dissertation explores the applicability of the shared-memory paradigm to modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de-facto standard for shared-memory programming. In the first part, the costs of algorithms for synchronization and data partitioning are analyzed, and the algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. In the second part of the thesis, the focus is on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders-of-magnitude gains in speedup and energy efficiency compared to the “pure software” version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability, and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, and a template for a scalable architecture that integrates the HWPUs through the shared-memory system is shown. Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform; the OpenMP front-end is extended to interact with them.
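
As a hedged illustration of the data-partitioning task an OpenMP-style runtime performs, the sketch below splits N loop iterations as evenly as possible among the cores of a cluster, in the spirit of a static schedule; it is written in Python purely for illustration and is not the thesis's runtime, which targets embedded C code.

```python
# Sketch of static data partitioning in the spirit of an OpenMP static
# schedule: split N loop iterations as evenly as possible among cores,
# giving the first (N mod cores) cores one extra iteration. Python is used
# only for illustration; the thesis's runtime targets embedded C code.

def static_chunks(n_iterations, n_cores):
    """Return per-core (start, end) half-open iteration ranges."""
    base, extra = divmod(n_iterations, n_cores)
    chunks, start = [], 0
    for core in range(n_cores):
        size = base + (1 if core < extra else 0)   # spread the remainder
        chunks.append((start, start + size))
        start += size
    return chunks

# Example: 100 iterations on a 16-core cluster.
for core, (lo, hi) in enumerate(static_chunks(100, 16)):
    print(f"core {core:2d}: iterations [{lo:3d}, {hi:3d})  ({hi - lo} iterations)")
```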

Relevance:

10.00%

Publisher:

Abstract:

The diameters of traditional dish concentrators can reach several tens of meters, and the construction of monolithic mirrors is difficult at these scales: cheap flat reflecting facets mounted on a common frame generally reproduce a paraboloidal surface. When a standard imaging mirror is coupled with a dense PV array, problems arise since the focused solar image is intrinsically circular. Moreover, the corresponding irradiance distribution is bell-shaped, in contrast with the requirement of having all the cells under the same illumination. Mismatch losses occur when interconnected cells experience different conditions, in particular in series connections. In this PhD thesis, we aim at solving these issues with a multidisciplinary approach, exploiting optical concepts and applications developed specifically for astronomical use, where the improvement of image quality is a very important issue. The strategy we propose is to boost the spot uniformity by acting only on the primary reflector, avoiding the segmentation of the big mirror into numerous smaller elements that need to be accurately mounted and aligned. In the proposed method, the shape of the mirror is analytically described by Zernike polynomials and its optimization is obtained numerically, yielding non-imaging optics able to produce a quasi-square spot, spatially uniform and with a prescribed concentration level. The freeform primary optics leads to a substantial gain in efficiency without secondary optics, and only simple electrical schemes are required for the receiver. The concept has been investigated theoretically by modeling an example of a dense-array CPV application, including the development of non-optical aspects such as the design of the detector and of the supporting mechanics. For the proposed method and the specific CPV system described, a patent application has been filed in Italy with the number TO2014A000016. The patent has been developed thanks to the collaboration between the University of Bologna and INAF (National Institute for Astrophysics).
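
To give a flavour of describing a mirror surface with Zernike polynomials, the hedged sketch below evaluates a few standard low-order terms on the unit disk and sums them with arbitrary coefficients; the term selection and coefficients are illustrative assumptions, not the optimized freeform prescription of the thesis.

```python
# Flavour of the mirror-shape description used in the abstract: a surface
# written as a sum of low-order Zernike terms over the unit disk. Terms and
# coefficients are arbitrary illustrative choices, not the thesis's freeform.
import numpy as np

def zernike_surface(rho, theta, coeffs):
    """Sum of a few unnormalised Zernike terms; coeffs keys pick the terms."""
    terms = {
        "defocus":   2 * rho**2 - 1,                         # Z(n=2, m=0)
        "astig_0":   rho**2 * np.cos(2 * theta),             # Z(n=2, m=2)
        "coma_x":    (3 * rho**3 - 2 * rho) * np.cos(theta), # Z(n=3, m=1)
        "spherical": 6 * rho**4 - 6 * rho**2 + 1,            # Z(n=4, m=0)
    }
    surface = np.zeros_like(rho)
    for name, c in coeffs.items():
        surface += c * terms[name]
    return surface

# Evaluate an example surface on a polar grid covering the unit disk.
rho, theta = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 2 * np.pi, 181))
sag = zernike_surface(rho, theta, {"defocus": 1.0, "astig_0": 0.2, "spherical": -0.05})
print("peak-to-valley of the example surface:", sag.max() - sag.min())
```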

Relevance:

10.00%

Publisher:

Abstract:

Rupture of the cranial cruciate ligament (CrCL) is one of the most commonly encountered orthopaedic conditions in dogs. Following CrCL rupture, continuous cranial displacement of the tibia occurs, which results in an osteoarthritic process. Surgical treatment is the best therapeutic option, and extra-articular techniques exploiting the isometric points of the stifle are widely used procedures. This thesis aims to validate the use of a new computer-assisted navigation system for kinematic evaluation during CrCL reconstruction in the dog and, above all, to study and compare the behaviour and efficacy of TightRope (TR) reconstruction using two different pairs of isometric points. Two analyses were carried out in parallel. In the first, TR surgery was performed on 18 clinical cases, using the femoral isometric point (F2) and two different tibial isometric points (T2 or T3), with post-operative follow-up at 1, 3, and 6 months; at each follow-up an orthopaedic examination, radiographic examinations, and a clinical evaluation and owner-satisfaction questionnaire were carried out. In the ex vivo study, tests were performed on 14 anatomical specimens using a computerized navigation system for data acquisition. The joint was evaluated at different stages: intact CrCL; ruptured CrCL; after TR reconstruction at F2-T2 tensioned to 22 N, 44 N, and 99 N; and after TR reconstruction at F2-T3 tensioned to 22 N, 44 N, and 99 N. At each stage, five evaluation tests were performed: drawer test, tibial compression test (TCT), internal/external rotation, flexion/extension, and varus/valgus. The aim of the study is to compare the isometric points of the stifle and to analyse the efficacy of the TR technique under the two different isometry conditions (F2-T2 and F2-T3).

Relevance:

10.00%

Publisher:

Abstract:

This thesis collects the outcomes of a Ph.D. course in Telecommunications Engineering and is focused on enabling techniques for Spread Spectrum (SS) navigation and communication satellite systems. It provides innovations in both interference management and code synchronization techniques. These two aspects are critical for modern navigation and communication systems and constitute the common denominator of the work. The thesis is organized in two parts: the first deals with interference management. We have proposed a novel technique for enhancing the sensitivity level of an advanced interference detection and localization system operating in the Global Navigation Satellite System (GNSS) bands, which allows the identification of interfering signals received with power even lower than that of the GNSS signals. Moreover, we have introduced an effective cancellation technique for signals transmitted by jammers that exploits their repetitive characteristics and strongly reduces the interference level at the receiver. The second part deals with code synchronization. More in detail, we have designed the code synchronization circuit for a Telemetry, Tracking and Control system operating during the Launch and Early Orbit Phase; the proposed solution copes with the very large frequency uncertainty and dynamics characterizing this scenario, and performs the estimation of the code epoch, of the carrier frequency, and of the carrier frequency variation rate. Furthermore, considering a generic pair of circuits performing code acquisition, we have proposed a comprehensive framework for the design and analysis of the optimal cooperation procedure, which minimizes the time required to accomplish synchronization. The study is particularly interesting since it enables a reduction of the code acquisition time without increasing the computational complexity. Finally, considering a network of collaborating navigation receivers, we have proposed an innovative cooperative code acquisition scheme, which exploits the code epoch information shared between neighboring nodes according to the Peer-to-Peer paradigm.
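
As a hedged toy of the code acquisition problem addressed in the second part, the sketch below performs a full search over code phase by circularly correlating the received samples with a known spreading sequence (via the FFT) and picking the shift with the largest correlation; the random binary code, noise level, and single-shot decision are illustrative assumptions, and the sketch ignores the frequency uncertainty and cooperation aspects the thesis actually tackles.

```python
# Toy sketch of code acquisition by full search over code phase: circularly
# correlate the received samples with the known spreading code and declare
# the shift with the largest correlation as the code epoch. The random
# binary code, noise level and single-shot decision are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
N = 1023                                     # code length (chips)
code = rng.choice([-1.0, 1.0], size=N)       # stand-in for a PN spreading code

true_epoch = 317
received = np.roll(code, true_epoch) + 0.8 * rng.standard_normal(N)

# Correlate against all cyclic shifts at once via the FFT (circular correlation).
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real
estimated_epoch = int(np.argmax(corr))

print(f"true epoch: {true_epoch}, estimated epoch: {estimated_epoch}")
```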

Relevance:

10.00%

Publisher:

Abstract:

The study presented here concerns LASER welding applications characterized by non-conventional aspects and is organized into three main strands. In the first, the possibility of performing fusion welding, with a continuous-wave LASER, on Aluminum Foam Sandwich panels and on aluminum-foam-filled tubes was evaluated. The study highlighted numerous operating guidelines concerning the issues involved in welding the outer skins of the components and demonstrated the feasibility of an integrated LASER joining approach (welding followed by a post-weld heat treatment) for producing the complete joint of foam-filled tubular parts, with restoration of the cellular structure at the joint interface. The second strand concerns the application of a very low power LASER source, operating in the short-pulse regime, to the welding of high-carbon steel. The study showed that this type of source, usually applied to ablation and marking operations, can also be applied to the welding of sub-millimetre thicknesses. In this phase, the role of the process parameters on the joint geometry was highlighted and the feasibility window of the process was defined. The study was completed by investigating the possibility of applying a post-weld LASER treatment to soften any hardened zones. In the last strand, the work focused on the use of high power density sources (60 MW/cm^2) for the deep-penetration welding of structural steels. The experimental activity and the analysis of the results were conducted using Design of Experiments techniques to evaluate the precise role of all process parameters, and numerous considerations concerning the formation of hot cracks were put forward.

Relevance:

10.00%

Publisher:

Abstract:

Neurodevelopment of preterm children has become an outcome of major interest since the improvement in survival due to advances in neonatal care. Many studies have focused on the relationships between prenatal characteristics and neurodevelopmental outcome in order to identify higher-risk subgroups of preterm children. The aim of this study is to analyze growth and development trajectories and relate them to each other, in order to investigate their association. 346 children born at the S.Orsola Hospital in Bologna from 01/01/2005 to 30/06/2011 with a birth weight of <1500 grams were followed up in a longitudinal study at different intervals from 3 to 24 months of corrected age. During follow-up visits, the main biometric characteristics of the preterm children were measured and the Griffiths Mental Development Scale was administered to assess neurodevelopment. Latent Curve Models were developed to estimate the trajectories of length and of neurodevelopment, both separately and combined in a single model, and to assess the influence of clinical and socio-economic variables. The neurodevelopment trajectory declined stepwise over time, while the length trajectory showed a steep increase until 12 months and was flat afterwards. Higher initial values of length were correlated with higher initial values of neurodevelopment and predicted a steeper decline in neurodevelopment. SGA preterm children and those from families with higher socio-economic status had a less declining neurodevelopment slope, while being born to a migrant mother proved negative for neurodevelopment through the mediating effect of being taller at 3 months. A longer stay in the NICU (used as a proxy of morbidity) was predictive of lower initial neurodevelopment levels. At 24 months, neurodevelopment is more similar among preterm children and is more accurately evaluated. The association between neurodevelopment and physiological growth may provide further insight into the determinants of preterm outcomes. Sound statistical methods, exploiting all the information collected in a longitudinal study, may be more appropriate for the analysis.
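
Latent Curve Models are usually fitted jointly with structural-equation or mixed-effects software; as a rough, hedged illustration of the underlying idea only (each child has an individual intercept and slope whose population distribution is of interest), the sketch below simulates follow-up scores and uses a simple two-stage shortcut: a per-child least-squares line, then summary statistics of the estimated intercepts and slopes. All numbers are simulated, and this two-stage shortcut is not the LCM estimation used in the study.

```python
# Rough numpy-only illustration of the idea behind a latent growth curve:
# each child has an individual intercept and slope over follow-up visits,
# and the analysis targets the population distribution of those parameters.
# Data are simulated; the two-stage shortcut below (per-child OLS, then
# summaries) is not the Latent Curve Model estimation used in the study.
import numpy as np

rng = np.random.default_rng(0)
ages = np.array([3.0, 6.0, 9.0, 12.0, 18.0, 24.0])   # corrected age (months)
n_children = 200

# Simulated "development scores": random intercepts and slightly negative slopes.
intercepts = rng.normal(100.0, 8.0, n_children)
slopes = rng.normal(-0.5, 0.4, n_children)
scores = (intercepts[:, None] + slopes[:, None] * ages
          + rng.normal(0.0, 3.0, (n_children, len(ages))))

# Stage 1: per-child least-squares line (intercept and slope).
X = np.column_stack([np.ones_like(ages), ages])
coefs, *_ = np.linalg.lstsq(X, scores.T, rcond=None)   # shape (2, n_children)

# Stage 2: summarize the estimated individual trajectories.
est_intercepts, est_slopes = coefs
print(f"mean intercept {est_intercepts.mean():6.1f}  (SD {est_intercepts.std():.1f})")
print(f"mean slope     {est_slopes.mean():6.2f}  (SD {est_slopes.std():.2f})")
print("intercept-slope correlation:",
      round(np.corrcoef(est_intercepts, est_slopes)[0, 1], 2))
```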