14 results for CONNECTIVITIES HOC PROCEDURES

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 30.00%

Abstract:

In this thesis, new advances in the development of spectroscopy-based methods for the characterization of heritage materials have been achieved. Concerning FTIR spectroscopy, new approaches aimed at exploiting the near- and far-IR regions for the characterization of inorganic and organic materials have been tested. Paint cross-sections have been analysed by FTIR spectroscopy in the NIR range, and an “ad hoc” chemometric approach has been developed for the elaboration of hyperspectral maps. Moreover, a new method for the characterization of calcite, based on the use of grinding curves, has been set up in both the MIR and FIR regions. Indeed, calcite is a material widely applied in cultural heritage, and this spectroscopic approach is an efficient and rapid tool to distinguish between different calcite samples. Different enhanced vibrational techniques for the characterisation of dyed fibres have been tested. First, a SEIRA (Surface Enhanced Infra-Red Absorption) protocol was optimised, allowing the analysis of colorant micro-extracts thanks to the enhancement produced by the addition of gold nanoparticles. These preliminary studies led to the identification of a new enhanced FTIR method, named ATR/RAIRS, which reaches lower detection limits. Regarding Raman microscopy, the research followed two lines, which share the aim of avoiding the use of colloidal solutions. AgI-based supports, obtained by deposition on gold-coated glass slides, have been developed and tested by spotting colorant solutions. A SERS spectrum can be obtained thanks to the photoreduction that the laser may induce on the silver salt. Moreover, these supports can be used for the TLC separation of a mixture of colorants, and analyses by means of both Raman/SERS and ATR-RAIRS can be carried out successfully. Finally, a photoreduction method for the “on fiber” analysis of colorants, without the need for any extraction, has been optimised.
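The chemometric elaboration of hyperspectral maps mentioned above is not detailed in the abstract; as an illustration only, one common way to turn a hyperspectral FTIR cube into score maps is a principal component analysis of the per-pixel spectra. The sketch below (synthetic data, not the thesis's actual algorithm) shows the idea:

```python
import numpy as np

def pca_maps(cube, n_components=3):
    """Reduce a hyperspectral cube (rows x cols x wavenumbers) to
    per-pixel PC score maps via an SVD on mean-centred spectra.
    Illustrative sketch; the thesis's ad hoc approach may differ."""
    rows, cols, bands = cube.shape
    X = cube.reshape(rows * cols, bands)
    Xc = X - X.mean(axis=0)                          # centre each band
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]  # PC scores per pixel
    return scores.reshape(rows, cols, n_components)

# Synthetic 4x4 map with 10 spectral points per pixel
rng = np.random.default_rng(0)
cube = rng.random((4, 4, 10))
maps = pca_maps(cube, 2)
print(maps.shape)  # (4, 4, 2)
```

Each output channel can then be rendered as a false-colour map highlighting spectrally distinct regions of the cross-section.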

Relevance: 20.00%

Abstract:

The increasing diffusion of wireless-enabled portable devices is pushing toward the design of novel service scenarios, promoting temporary and opportunistic interactions in infrastructure-less environments. Mobile Ad Hoc Networks (MANETs) are the general model of these highly dynamic networks, which can be specialized, depending on the application case, into more specific and refined models such as Vehicular Ad Hoc Networks and Wireless Sensor Networks. Two deployment cases are of increasing relevance: resource diffusion among users equipped with portable devices, such as laptops, smart phones or PDAs, in crowded areas (termed dense MANETs), and the dissemination/indexing of monitoring information collected in Vehicular Sensor Networks. The extreme dynamicity of these scenarios calls for novel distributed protocols and services facilitating application development. To this aim we have designed middleware solutions supporting these challenging tasks. REDMAN manages, retrieves, and disseminates replicas of software resources in dense MANETs; it implements novel lightweight protocols to maintain a desired replication degree despite participant mobility, and to efficiently perform resource retrieval. REDMAN exploits the high-density assumption to achieve scalability and limited network overhead. Sensed-data gathering and distributed indexing in Vehicular Networks raise similar issues: we propose a specific middleware support, called MobEyes, exploiting node mobility to opportunistically diffuse data summaries among neighbouring vehicles. MobEyes creates a low-cost opportunistic distributed index to query the distributed storage and to determine the location of needed information. Extensive validation and testing of REDMAN and MobEyes prove the effectiveness of our original solutions in limiting communication overhead while maintaining the required accuracy of replication degree and indexing completeness, and demonstrate the feasibility of the middleware approach.
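The core invariant REDMAN maintains, a target replication degree despite churn, can be conveyed with a deliberately simplified toy: a manager that replicates to new holders or drops copies until the target is met. This is an illustrative sketch of the invariant only, not REDMAN's actual lightweight distributed protocol:

```python
import random

def adjust_replicas(holders, nodes, target):
    """Toy sketch: keep the set of replica holders at a target degree
    by replicating to random non-holders or dropping surplus copies.
    (Illustrative only; REDMAN achieves this with distributed,
    lightweight protocols rather than a central manager.)"""
    holders = set(holders)
    while len(holders) < target:
        candidates = [n for n in nodes if n not in holders]
        if not candidates:
            break
        holders.add(random.choice(candidates))   # replicate
    while len(holders) > target:
        holders.remove(next(iter(holders)))      # drop a surplus copy
    return holders

random.seed(1)
nodes = list(range(20))
holders = adjust_replicas({3, 7}, nodes, target=5)
print(len(holders))  # 5
```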

Relevance: 20.00%

Abstract:

Unlike traditional wireless networks, characterized by the presence of last-mile, static and reliable infrastructures, Mobile Ad Hoc Networks (MANETs) are dynamically formed by collections of mobile and static terminals that exchange data by enabling each other's communication. Supporting multi-hop communication in a MANET is a challenging research area because it requires cooperation between different protocol layers (MAC, routing, transport). In particular, MAC and routing protocols can be considered mutually cooperative layers. When a route is established, the exposed- and hidden-terminal problems at the MAC layer may decrease end-to-end performance proportionally to the length of each route. Conversely, contention at the MAC layer may cause a routing protocol to respond by initiating new route queries and routing-table updates. Multi-hop communication may also benefit from the presence of pseudo-centralized virtual infrastructures obtained by grouping nodes into clusters. Clustering structures may facilitate the spatial reuse of resources, increasing system capacity; at the same time, the clustering hierarchy may be used to coordinate transmission events inside the network and to support intra-cluster routing schemes. Again, MAC and clustering protocols can be considered mutually cooperative layers: the clustering scheme can support MAC-layer coordination among nodes, shifting the distributed MAC paradigm towards a pseudo-centralized one. On the other hand, the system benefits of the clustering scheme can be emphasized by the pseudo-centralized MAC layer through support for differentiated access priorities and controlled contention. In this thesis, we propose cross-layer solutions involving the joint design of MAC, clustering and routing protocols in MANETs. As the main contribution, we study and analyze the integration of MAC and clustering schemes to support multi-hop communication in large-scale ad hoc networks.
A novel clustering protocol, named Availability Clustering (AC), is defined under general node-heterogeneity assumptions in terms of connectivity, available energy and relative mobility. On this basis, we design and analyze a distributed and adaptive MAC protocol, named Differentiated Distributed Coordination Function (DDCF), whose focus is to implement adaptive access differentiation based on the node roles assigned by the upper-layer clustering scheme. We extensively simulate the proposed clustering scheme, showing its effectiveness in dominating the network dynamics under several stressing mobility models and different mobility rates. Based on these results, we propose a possible application of the cross-layer MAC+clustering scheme to support the fast propagation of alert messages in a vehicular environment. At the same time, we investigate the integration of MAC and routing protocols in large-scale multi-hop ad hoc networks. A novel multipath routing scheme is proposed by extending the AOMDV protocol with a novel load-balancing approach that concurrently distributes the traffic among the multiple paths. We also study the composition effect of an IEEE 802.11-based enhanced MAC forwarding mechanism called Fast Forward (FF), used to reduce the effects of self-contention among frames at the MAC layer. The protocol framework is modelled and extensively simulated for a large set of metrics and scenarios. For both schemes, the simulation results reveal the benefits of the cross-layer MAC+routing and MAC+clustering approaches over single-layer solutions.
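The abstract does not specify how its load-balancing extension weights the multiple AOMDV paths; purely as a hypothetical illustration of the general idea, traffic can be split across precomputed paths with weights inversely proportional to hop count, so shorter paths carry proportionally more packets:

```python
def split_traffic(paths, packets):
    """Toy load balancer: distribute a packet budget over multiple
    precomputed paths with weights inversely proportional to path
    length (hops). A hypothetical sketch, not the thesis's actual
    AOMDV extension."""
    weights = [1.0 / len(p) for p in paths]
    total = sum(weights)
    shares = [round(packets * w / total) for w in weights]
    shares[0] += packets - sum(shares)  # absorb rounding drift
    return shares

paths = [["A", "B", "D"], ["A", "C", "E", "D"], ["A", "F", "D"]]
print(split_traffic(paths, 100))
```

The two 3-hop paths receive larger shares than the 4-hop path, while the total always sums to the packet budget.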

Relevance: 20.00%

Abstract:

Large-scale wireless ad hoc networks of computers, sensors, PDAs, etc. (i.e. nodes) are revolutionizing connectivity and leading to a paradigm shift from centralized systems to highly distributed and dynamic environments. An example of ad hoc networks are sensor networks, which are usually composed of small units able to sense elementary data and transmit them to a sink, where they are subsequently processed by an external machine. Recent improvements in the memory and computational power of sensors, together with the reduction of energy consumption, are rapidly changing the potential of such systems, moving the attention towards data-centric sensor networks. A plethora of routing and data-management algorithms have been proposed for network path discovery, ranging from broadcasting/flooding-based approaches to those using global positioning systems (GPS). We studied WGrid, a novel decentralized infrastructure that organizes wireless devices in an ad hoc manner, where each node has one or more virtual coordinates through which both message routing and data management occur without relying on either flooding/broadcasting operations or GPS. The resulting ad hoc network does not suffer from the dead-end problem, which occurs in geographic routing when a node is unable to locate a neighbour closer to the destination than itself. WGrid allows multidimensional data management, since the nodes' virtual coordinates can act as a distributed database without requiring any special implementation or reorganization. Any kind of data (both single- and multidimensional) can be distributed, stored and managed. We show how a location service can be easily implemented so that any search is reduced to a simple query, as for any other data type. WGrid has then been extended by adopting a replication methodology; we call the resulting algorithm WRGrid.
Just like WGrid, WRGrid acts as a distributed database without requiring any special implementation or reorganization, and any kind of data can be distributed, stored and managed. We have evaluated the benefits of replication on data management, finding from experimental results that it can halve the average number of hops in the network. The direct consequences of this are a significant improvement in energy consumption and workload balancing among sensors (number of messages routed by each node). Finally, thanks to the replicas, whose number can be arbitrarily chosen, the resulting sensor network can withstand sensor disconnections/connections due to sensor failures without data loss. Another extension of WGrid is W*Grid, which strongly improves network recovery from link and/or device failures that may happen due to crashes, battery exhaustion of devices, or temporary obstacles. W*Grid guarantees, by construction, at least two disjoint paths between each pair of nodes. This implies that recovery in W*Grid occurs without broadcast transmissions, guaranteeing robustness while drastically reducing energy consumption. An extensive number of simulations shows the efficiency, robustness and traffic load of the resulting networks under several scenarios of device density and number of coordinates. Performance has also been compared with existing algorithms in order to validate the results.
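Routing on tree-derived virtual coordinates can be sketched with a simple greedy rule: forward to the neighbour whose coordinate string shares the longest common prefix with the destination's. This is a hypothetical illustration inspired by the idea of string-like virtual coordinates, not WGrid's actual coordinate assignment or routing rules:

```python
def common_prefix(a, b):
    """Length of the common prefix of two coordinate strings."""
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

def next_hop(neighbour_coords, dest):
    """Greedy forwarding on string-valued virtual coordinates:
    pick the neighbour sharing the longest common prefix with the
    destination. Hypothetical sketch, not WGrid's actual algorithm."""
    return max(neighbour_coords, key=lambda c: common_prefix(c, dest))

neighbours = ["010", "0110", "10"]
print(next_hop(neighbours, "0111"))  # "0110"
```

Because the coordinates encode positions in a tree rather than physical geometry, such forwarding always makes progress toward the destination and cannot reach a geographic dead end.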

Relevance: 20.00%

Abstract:

Today, third-generation networks are consolidated realities, and user expectations of new applications and services are becoming higher and higher. Therefore, new systems and technologies are necessary to meet market needs and user requirements. This has driven the development of fourth-generation networks. “Wireless networks for the fourth generation” is the expression used to describe the next step in wireless communications. There is no formal definition of what these fourth-generation networks are; however, we can say that next-generation networks will be based on the coexistence of heterogeneous networks, on integration with existing radio access networks (e.g. GPRS, UMTS, WiFi) and, in particular, on new emerging architectures that are gaining more and more relevance, such as Wireless Ad Hoc and Sensor Networks (WASNs). Thanks to their characteristics, fourth-generation wireless systems will be able to offer custom-made solutions and applications personalized according to user requirements; they will offer all types of services at an affordable cost, and solutions characterized by flexibility, scalability and reconfigurability. This PhD work has focused on WASNs: self-configuring, infrastructure-less networks in which devices must automatically form the network in the initial phase and maintain it through reconfiguration procedures (if node mobility, energy drain, etc. cause disconnections). The main part of the PhD activity has focused on an analytical study of connectivity models for wireless ad hoc and sensor networks; nevertheless, a small part of the work was experimental. Both the theoretical and the experimental activities share a common aim: the performance evaluation of WASNs.
Concerning the theoretical analysis, the objective of the connectivity studies has been the evaluation of models for interference estimation, since interference is the most important cause of performance degradation in WASNs. As a consequence, it is very important to find an accurate model that allows its investigation, and the aim has been to obtain a model as realistic and general as possible, in particular for the evaluation of the interference coming from bounded interfering areas (e.g. a WiFi hot spot, or a research laboratory with wireless coverage). On the other hand, the experimental activity has led to throughput and Packet Error Rate measurements on a real IEEE 802.15.4 Wireless Sensor Network.

Relevance: 20.00%

Abstract:

Introduction: The “eversion” technique for carotid endarterectomy (e-CEA), which involves the transection of the internal carotid artery at the carotid bulb and its eversion over the atherosclerotic plaque, has been associated with an increased risk of postoperative hypertension, possibly due to direct iatrogenic damage to the carotid sinus fibers. The aim of this study is to assess the long-term effect of e-CEA on arterial baroreflex and peripheral chemoreflex function in humans. Methods: A retrospective review was conducted on a prospectively compiled computerized database of 3128 CEAs performed on 2617 patients at our Center between January 2001 and March 2006. During this period, a total of 292 patients who had bilateral carotid stenosis ≥70% at the time of the first admission underwent staged bilateral CEAs. Of these, 93 patients had staged bilateral e-CEAs, 126 staged bilateral s-CEAs, and 73 had different procedures on each carotid. CEAs were performed with either the eversion or the standard technique, with routine Dacron patching in all cases. The study inclusion criteria were bilateral CEA with the same technique on both sides and an uneventful postoperative course after both procedures. We decided to enroll patients submitted to bilateral e-CEA to eliminate the background noise from contralateral carotid sinus fibers. Exclusion criteria were: age >70 years, diabetes mellitus, chronic pulmonary disease, symptomatic ischemic cardiac disease or medical therapy with β-blockers, cardiac arrhythmia, permanent neurologic deficits or an abnormal preoperative cerebral CT scan, carotid restenosis, and previous neck or chest surgery or irradiation. Young and age-matched healthy subjects were also recruited as controls. Patients were assessed by the 4 standard cardiovascular reflex tests: lying-to-standing, orthostatic hypotension, deep breathing, and the Valsalva maneuver.
Indirect autonomic parameters were assessed with a non-invasive approach based on spectral analysis of EKG RR-interval, systolic arterial pressure, and respiration variability, performed with ad hoc software. From the analysis of these parameters the software provides estimates of spontaneous baroreflex sensitivity (BRS). The ventilatory response to hypoxia was assessed in patients and controls by means of classic rebreathing tests. Results: A total of 29 patients (16 males, age 62.4±8.0 years) were enrolled. Overall, 13 patients had undergone bilateral e-CEA (44.8%) and 16 bilateral s-CEA (55.2%), with a mean interval between the procedures of 62±56 days. No patient showed signs or symptoms of autonomic dysfunction, including labile hypertension, tachycardia, palpitations, headache, inappropriate diaphoresis, pallor or flushing. The results of standard cardiovascular autonomic tests showed no evidence of autonomic dysfunction in any of the enrolled patients. At spectral analysis, a residual baroreflex performance was shown in both patient groups, though reduced, as expected, compared to young controls. Notably, baroreflex function was better maintained after e-CEA than after standard CEA (BRS at rest: young controls 19.93 ± 2.45 msec/mmHg; age-matched controls 7.75 ± 1.24; e-CEA 13.85 ± 5.14; s-CEA 4.93 ± 1.15; ANOVA P=0.001. BRS on standing: young controls 7.83 ± 0.66; age-matched controls 3.71 ± 0.35; e-CEA 7.04 ± 1.99; s-CEA 3.57 ± 1.20; ANOVA P=0.001). In all subjects, ventilation (VE) and oximetry data fitted a linear regression model with r values > 0.8. One-way analysis of variance showed a significantly steeper ΔVE/ΔSaO2 slope in controls compared with both patient groups, which did not differ from each other (-1.37 ± 0.33 versus -0.33 ± 0.08 and -0.29 ± 0.13 l/min/%SaO2, p<0.05, Fig.). Similar results were observed for ΔVE/ΔPetO2 (-0.20 ± 0.1 versus -0.01 ± 0.0 and -0.07 ± 0.02 l/min/mmHg, p<0.05).
A regression model using treatment, age, baseline FiCO2 and minimum SaO2 achieved showed only treatment as a significant factor in explaining the variance in minute ventilation (R2 = 25%). Conclusions: Overall, we demonstrated that bilateral e-CEA does not imply carotid sinus denervation. As a result of some expected degree of iatrogenic damage, baroreflex performance was lower than that of controls; interestingly, though, it appeared better maintained after e-CEA than after s-CEA. This may be related to changes in the elastic properties of the carotid sinus vascular wall, as the patch is more rigid than the endarterectomized carotid wall that remains in e-CEA. These data confirm the safety of CEA irrespective of the surgical technique, and have relevant clinical implications for the assessment of the frequent hemodynamic disturbances associated with carotid angioplasty and stenting.
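The study's ad hoc spectral software is not described in detail; a standard spectral formulation of spontaneous BRS, the alpha index, takes the square root of the ratio between the low-frequency powers of RR-interval and systolic pressure variability. The sketch below illustrates that standard formula only, with invented numbers, and may differ from the software actually used:

```python
import math

def brs_alpha(p_rr_lf, p_sap_lf):
    """Alpha-index estimate of spontaneous baroreflex sensitivity
    (msec/mmHg): sqrt of the ratio between the low-frequency spectral
    powers of RR-interval (msec^2) and systolic arterial pressure
    (mmHg^2) variability. Standard formulation; hypothetical inputs."""
    return math.sqrt(p_rr_lf / p_sap_lf)

# Hypothetical LF powers: 900 msec^2 (RR) and 9 mmHg^2 (SAP)
print(round(brs_alpha(900.0, 9.0), 1))  # 10.0 msec/mmHg
```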

Relevance: 20.00%

Abstract:

Since the birth of the European Union in 1957, the development of a single market through the integration of national freight transport networks has been one of the most important points on the European Union agenda. Increasingly congested motorways, rising oil prices and concerns about the environment and climate change require the optimization of transport systems and transport processes. The best solution appears to be intermodal transport, in which the most efficient transport options are used for the different legs of a journey. This thesis examines the problem of defining innovative strategies and procedures for the sustainable development of intermodal freight transport in Europe. In particular, the roles of maritime transport and railway transport in the intermodal chain are examined in depth, as these modes are recognized to be environmentally friendly and energy efficient. Maritime transport is the only mode that has kept pace with the fast growth in road transport, but its full exploitation must be promoted by involving short sea shipping as an integrated service in the intermodal door-to-door supply chain and by improving port accessibility. The role of Motorways of the Sea services as part of the Trans-European Transport Network is taken into account: a picture of European policy and the state of the art of the Italian Motorways of the Sea system are reported. Afterwards, the focus shifts from line problems to node problems: the role of intermodal railway terminals in the transport chain is discussed. In particular, the last-mile process is taken into account, as it is crucial in order to exploit the full capacity of an intermodal terminal. The differences between the present last-mile planning models of Bologna Interporto and Verona Quadrante Europa are described and discussed. Finally, a new approach to railway intermodal terminal planning and management is introduced by describing the case of "Terminal Gate" at Verona Quadrante Europa.
Some proposals to favour the integrated management of "Terminal Gate" and the allocation of its capacity are also drawn up.

Relevance: 20.00%

Abstract:

The construction and use of multimedia corpora has long been advocated in the literature as one of the expected future application fields of Corpus Linguistics. This research project represents a pioneering attempt to apply a data-driven methodology to the study of the field of AVT, similarly to what has been done in the last few decades in the macro-field of Translation Studies. The research was based on the experience of Forlixt 1, the Forlì Corpus of Screen Translation, developed at the University of Bologna’s Department of Interdisciplinary Studies in Translation, Languages and Culture. In order to quantify strategies of linguistic transfer of an AV product, we need to take into consideration not only the linguistic aspect of such a product but all the meaning-making resources deployed in the filmic text. Given that one major benefit of Forlixt 1 is the combination of audiovisual and textual data, the corpus allows the user to access primary data for scientific investigation, rather than relying on pre-processed material such as traditional annotated transcriptions. Based on this rationale, the first chapter of the thesis sets out to illustrate the state of the art of research in the disciplinary fields involved. The primary objective was to underline the main repercussions on multimedia texts resulting from the interaction of a double support, audio and video, and, accordingly, on the procedures, means and methods adopted in their translation. Drawing on previous research in semiotics and film studies, the relevant codes at work in the visual and acoustic channels were outlined. Subsequently, we concentrated on the analysis of the verbal component and on the peculiar characteristics of filmic orality as opposed to spontaneous dialogic production. In the second part, an overview of the main AVT modalities is presented (dubbing, voice-over, interlinguistic and intralinguistic subtitling, audio description, etc.)
in order to define the different technologies, processes and professional qualifications that this umbrella term presently includes. The second chapter focuses diachronically on the contribution of various theories to the application of Corpus Linguistics’ methods and tools to the field of Translation Studies (i.e. Descriptive Translation Studies, Polysystem Theory). In particular, we discuss how the use of corpora can help reduce the gap between qualitative and quantitative approaches. Subsequently, we review the tools traditionally employed by Corpus Linguistics for the construction of traditional “written language” corpora, to assess whether and how they can be adapted to meet the needs of multimedia corpora. In particular, we review existing speech and spoken corpora, as well as multimedia corpora specifically designed to investigate translation. The third chapter reviews the main development steps of Forlixt 1, from both a technical (IT design principles, data query functions) and a methodological point of view, laying down extensive scientific foundations for the annotation methods adopted, which presently encompass categories of a pragmatic, sociolinguistic, linguacultural and semiotic nature. Finally, we describe the main query tools (free search, guided search, advanced search and combined search) and the main intended uses of the database from a pedagogical perspective. The fourth chapter lists the specific compilation criteria retained, as well as statistics on the two sub-corpora, presenting data broken down by language pair (French-Italian and German-Italian) and genre (cinema comedies, television soap operas and crime series). Next, we concentrate on the discussion of the results obtained from the analysis of summary tables reporting the frequency of the categories applied to the French-Italian sub-corpus.
The detailed observation of the distribution of categories identified in the original and dubbed corpora allowed us to empirically confirm some of the theories put forward in the literature, notably concerning the nature of the filmic text, the dubbing process and the features of Italian dubbed language. This was possible by looking into some of the most problematic aspects, such as the rendering of sociolinguistic variation. The corpus equally allowed us to consider so far neglected aspects, such as pragmatic, prosodic, kinetic, facial and semiotic elements, and their combination. At the end of this first exploration, some specific observations concerning possible macro-translation trends were made for each type of sub-genre considered (cinematic and TV genres). On the grounds of this first quantitative investigation, the fifth chapter sets out to examine the data further by applying ad hoc models of analysis. Given the virtually infinite number of combinations of the categories adopted, and of the latter with searchable textual units, three qualitative and quantitative methods were designed, each concentrating on a particular translation dimension of the filmic text. The first was the cultural dimension, which focused specifically on the rendering of selected cultural references and on the investigation of recurrent translation choices and strategies, justified on the basis of the occurrence of specific clusters of categories. The second analysis was conducted on the linguistic dimension, exploring the occurrence of phrasal verbs in the Italian dubbed corpus and ascertaining the influence of possible semiotic traits, such as gestures and facial expressions, on the adoption of related translation strategies. Finally, the main aim of the third study was to verify whether, under which circumstances, and through which modality graphic and iconic elements were translated into Italian from an original corpus of both German and French films.
After reviewing the main translation techniques at work, an exhaustive account of the possible causes of their non-translation is equally provided. By way of conclusion, the discussion of the results obtained from the distribution of annotation categories in the French-Italian corpus, as well as the application of specific models of analysis, allowed us to underline the possible advantages and drawbacks of adopting a corpus-based approach to AVT studies. Even though possible updates and improvements are proposed in order to help solve some of the problems identified, it is argued that the added value of Forlixt 1 lies ultimately in having created a valuable instrument that makes it possible to carry out empirically sound contrastive studies that may be usefully replicated on different language pairs and several types of multimedia texts. Furthermore, multimedia corpora can also play a crucial role in L2 and translation teaching, two disciplines in which their use still lacks systematic investigation.

Relevance: 20.00%

Abstract:

The treatment of Cerebral Palsy (CP) is considered the “core problem” of the whole field of pediatric rehabilitation. The primary role of this pathology can be ascribed to two main aspects. First of all, CP is the most frequent form of disability in childhood (one new case per 500 live births (1)); secondly, the functional recovery of the “spastic” child is, historically, the clinical field in which the majority of therapeutic methods and techniques (physiotherapy, orthotics, pharmacology, orthopedic surgery, neurosurgery) were first applied and tested. The currently accepted definition of CP – a group of disorders of the development of movement and posture causing activity limitation (2) – is the result of a recent update by the World Health Organization to the language of the International Classification of Functioning, Disability and Health, starting from Ingram’s original 1955 proposal – a persistent but not unchangeable disorder of posture and movement (3). This definition considers CP a permanent ailment, i.e. a “fixed” condition, which can nevertheless be modified both functionally and structurally by the child’s spontaneous evolution and by the treatments carried out during childhood. The lesion that causes the palsy occurs in a structurally immature brain in the pre-, peri- or post-birth period (but only during the first months of life). The most frequent causes of CP are: prematurity, insufficient cerebral perfusion, arterial haemorrhage, venous infarction, hypoxia of various origins (for example from the ingestion of amniotic fluid), malnutrition, infection and maternal or fetal poisoning. Traumas and malformations must also be included among these causes. The lesion, whether focal or spread over the nervous system, impairs the functioning of the whole Central Nervous System (CNS).
As a consequence, it affects the construction of the adaptive functions (4), first of all posture control, locomotion and manipulation. The palsy itself does not vary over time; however, it assumes an unavoidable “evolutionary” character when, during growth, the child is required to meet new and different needs through the construction of new and different functions. It is essential to consider that, clinically, CP is not only a direct expression of structural impairment, that is, of etiology, pathogenesis and lesion timing, but is mainly the manifestation of the path followed by the CNS to “re”-construct the adaptive functions “despite” the presence of the damage. “Palsy” is “the form of the function that is implemented by an individual whose CNS has been damaged in order to satisfy the demands coming from the environment” (4). Therefore it is only possible to establish general relations between lesion site, nature and size on the one hand, and palsy and recovery processes on the other. It is quite common to observe that children with very similar neuroimaging can have very different clinical manifestations of CP and, conversely, that children with very similar motor behaviors can have completely different lesion histories. A very clear example of this is represented by the hemiplegic forms, which show bilateral hemispheric lesions in a high percentage of cases. The first section of this thesis is aimed at guiding the interpretation of CP. First, the issue of the detection of the palsy is treated from a historical viewpoint. Then an extended analysis of the current definition of CP, as internationally accepted, is provided. The definition is outlined first in terms of a space dimension and then of a time dimension, highlighting where it is unacceptably lacking.
The last part of the first section further stresses the importance of shifting from the traditional concept of CP as a palsy of development (defect analysis) towards the notion of development of palsy, i.e., as the product of the relationship that the individual nevertheless tries to dynamically build with the surrounding environment (resource semeiotics), starting and growing from a different availability of resources, needs, dreams, rights and duties (4). In the scientific and clinical community, no common classification system of CP has so far been universally accepted. Besides, no standard operative method or technique has been acknowledged to effectively assess the different disabilities and impairments exhibited by children with CP. CP is still “an artificial concept, comprising several causes and clinical syndromes that have been grouped together for a convenience of management” (5). The lack of standard, common protocols able to effectively diagnose the palsy and, as a consequence, to establish specific treatments and prognoses is mainly due to the difficulty of elevating this field to a level based on scientific evidence. A solution aimed at overcoming the currently incomplete treatment of children with CP is the systematic clinical adoption of objective tools able to measure motor defects and movement impairments. The widespread application of reliable instruments and techniques able to objectively evaluate both the form of the palsy (diagnosis) and the efficacy of the treatments provided (prognosis) constitutes a valuable way to validate care protocols, establish the efficacy of classification systems and assess the validity of definitions. Since the ’80s, instruments specifically oriented to the analysis of human movement have been advantageously designed and applied in the context of CP with the aim of measuring motor deficits and, especially, gait deviations.
Gait analysis (GA) has been increasingly used over the years to assess, analyze, classify and support the process of clinical decision making, allowing for a complete investigation of gait with increased temporal and spatial resolution. GA has provided a basis for improving the outcome of surgical and non-surgical treatments and for introducing a new modus operandi in the identification of defects and functional adaptations to musculoskeletal disorders. Historically, the first laboratories set up for gait analysis developed their own protocols (sets of procedures for data collection and data reduction) independently, according to the performance of the technologies available at the time. In particular, stereophotogrammetric systems, mainly based on optoelectronic technology, soon became a gold standard for motion analysis and have been successfully applied especially for scientific purposes. Nowadays optoelectronic systems have significantly improved their performance in terms of spatial and temporal resolution; however, many laboratories continue to use protocols designed around the technology available in the ’70s and now out of date. Furthermore, these protocols are consistent neither in their biomechanical models nor in their data collection procedures. In spite of these differences, GA data are shared, exchanged and interpreted irrespective of the adopted protocol, without full awareness of the extent to which these protocols are compatible and comparable with each other. Following the extraordinary advances in computer science and electronics, new systems for GA, no longer based on optoelectronic technology, are now becoming available. These are the Inertial and Magnetic Measurement Systems (IMMSs), based on miniature MEMS (microelectromechanical systems) inertial sensor technology.
These systems are cost-effective, wearable and fully portable motion analysis systems; these features give IMMSs the potential to be used outside specialized laboratories and to consecutively collect series of tens of gait cycles. The recognition and selection of the most representative gait cycle is then easier and more reliable, especially in CP children, considering their relevant gait cycle variability. The second section of this thesis is focused on GA. In particular, it first examines the differences among the five most representative GA protocols in order to assess the state of the art with respect to inter-protocol variability. The design of a new protocol is then proposed and presented, with the aim of performing gait analysis on CP children by means of an IMMS. The protocol, named ‘Outwalk’, contains original and innovative solutions oriented at obtaining joint kinematics with calibration procedures that are extremely comfortable for the patients. The results of a first in-vivo validation of Outwalk on healthy subjects are then provided. In particular, this study was carried out by comparing Outwalk, used in combination with an IMMS, with a reference protocol and an optoelectronic system. In order to perform a more accurate and precise comparison of the systems and the protocols, ad hoc methods were designed and an original formulation of the statistical parameter known as the coefficient of multiple correlation (CMC) was developed and effectively applied. On the basis of the experimental design proposed for the validation on healthy subjects, a first assessment of Outwalk, together with an IMMS, was also carried out on CP children. The third section of this thesis is dedicated to the treatment of walking in CP children. Commonly prescribed treatments addressing gait abnormalities in CP children include physical therapy, surgery (orthopedic and rhizotomy), and orthoses.
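The coefficient of multiple correlation underlying the inter-protocol and inter-system comparisons described above quantifies how similar a set of time-normalised gait waveforms are to each other. The thesis develops an original formulation that is not reproduced here; the classical Kadaba-style version it builds on can be sketched as follows (function name and array layout are illustrative):

```python
import numpy as np

def cmc(waveforms):
    """Coefficient of multiple correlation for a set of time-normalised
    gait waveforms, shape (n_cycles, n_frames): 1 means the curves are
    identical, values near 0 mean they share little common shape."""
    Y = np.asarray(waveforms, dtype=float)
    G, T = Y.shape
    frame_mean = Y.mean(axis=0)   # mean curve across cycles, per frame
    grand_mean = Y.mean()         # overall mean
    # within-cycle deviation from the mean curve vs total deviation
    within = ((Y - frame_mean) ** 2).sum() / (G * (T - 1))
    total = ((Y - grand_mean) ** 2).sum() / (G * T - 1)
    return np.sqrt(1.0 - within / total)
```

Two identical curves give a CMC of exactly 1; small shape differences between cycles lower it towards 0 (and, as is well known for this statistic, very dissimilar flat curves can make the radicand negative).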
The orthotic approach is conservative, being reversible, and widespread in many therapeutic regimes. Orthoses are used to improve the gait of children with CP by preventing deformities, controlling joint position and offering an effective lever for the ankle joint. Orthoses are also prescribed with the aims of increasing walking speed, improving stability, preventing stumbling and decreasing muscular fatigue. Ankle-foot orthoses (AFOs) with a rigid ankle are primarily designed to prevent equinus and other foot deformities, with a positive effect also on more proximal joints. However, AFOs prevent the natural excursion of the tibio-tarsic joint during the second rocker, hence hampering the natural leaning progression of the whole body under the effect of inertia (6). A new modular (submalleolar) astragalus-calcanear orthosis, named OMAC, has recently been proposed with the intention of replacing the prescription of AFOs in those CP children exhibiting a flat and valgus-pronated foot. The aim of this section is thus to present the mechanical and technical features of the OMAC by means of an accurate description of the device. In particular, the full text of the deposited Italian patent is provided. A preliminary validation of OMAC with respect to AFO is also reported, as resulted from an experimental campaign on diplegic CP children over a three-month period, aimed at quantitatively assessing the benefit provided by the two orthoses on walking and at qualitatively evaluating the changes in quality of life and motor abilities. As already stated, CP is universally considered a persistent but not unchangeable disorder of posture and movement.
Conversely to this definition, some clinicians (4) have recently pointed out that movement disorders may be primarily caused by the presence of perceptive disorders, where perception is not merely the acquisition of sensory information, but an active process aimed at guiding the execution of movements through the integration of sensory information properly representing the state of one’s body and of the environment. Children with perceptive impairments show an overall fear of moving and the onset of strongly unnatural walking schemes directly caused by the presence of perceptive system disorders. The fourth section of the thesis thus deals with accurately defining the perceptive impairment exhibited by diplegic CP children. A detailed description of the clinical signs revealing the presence of the perceptive impairment, and a classification scheme of the clinical aspects of perceptual disorders, is provided. Finally, a functional reaching test is proposed as an instrumental test able to disclose the perceptive impairment. References 1. Prevalence and characteristics of children with cerebral palsy in Europe. Dev Med Child Neurol. 2002 Sep;44(9):633-640. 2. Bax M, Goldstein M, Rosenbaum P, Leviton A, Paneth N, Dan B, et al. Proposed definition and classification of cerebral palsy, April 2005. Dev Med Child Neurol. 2005 Aug;47(8):571-576. 3. Ingram TT. A study of cerebral palsy in the childhood population of Edinburgh. Arch Dis Child. 1955 Apr;30(150):85-98. 4. Ferrari A, Cioni G. The spastic forms of cerebral palsy: a guide to the assessment of adaptive functions. Milan: Springer; 2009. 5. Olney SJ, Wright MJ. Cerebral palsy. In: Campbell S, et al. Physical Therapy for Children. 2nd ed. Philadelphia: Saunders; 2000. p. 533-570. 6. Desloovere K, Molenaers G, Van Gestel L, Huenaerts C, Van Campenhout A, Callewaert B, et al. How can push-off be preserved during use of an ankle foot orthosis in children with hemiplegia? A prospective controlled study. Gait Posture.
2006 Oct;24(2):142-151.

Relevância:

20.00%

Publicador:

Resumo:

The Gaia space mission is a major project for the European astronomical community. As challenging as it is, the processing and analysis of the huge data flow incoming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Because of both the different observing sites involved and the huge number of frames expected (≃100000), it is essential to maintain the maximum homogeneity in data quality, acquisition and treatment; particular care has to be taken to test the capabilities of each telescope/instrument combination (through the “instrument familiarization plan”) and to devise methods to keep under control, and where necessary to correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few % with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves.
In this context I defined and tested a semi-automated pipeline which allows for the pre-reduction of SPSS imaging data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality control criteria are included in the pipeline at various levels, from pre-reduction, to aperture photometry, to light-curve production and analysis.
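The core measurement such a pipeline performs on each frame is fixed-aperture photometry: sum the counts inside a circular aperture around the star and subtract the sky background estimated in a surrounding annulus. A minimal sketch of that operation is shown below; the function and parameter names are hypothetical, not those of the actual pipeline:

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap, r_in, r_out):
    """Fixed-aperture photometry on a 2-D image: total counts inside a
    circular aperture of radius r_ap centred on (x0, y0), minus the sky
    level estimated as the median in the annulus [r_in, r_out]."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)          # distance of each pixel from the star
    ap_mask = r <= r_ap
    sky_mask = (r >= r_in) & (r <= r_out)
    sky_level = np.median(image[sky_mask])  # robust per-pixel background
    return image[ap_mask].sum() - sky_level * ap_mask.sum()
```

A real pipeline would add, on top of this, the quality-control stages mentioned above (bad-pixel rejection, centroiding, aperture-size checks) before the catalogue is written out.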

Relevância:

20.00%

Publicador:

Resumo:

The main objective of this research is an in-depth study of the institution of criminal rehabilitation within the Italian legislative framework, with reference to the Bologna context, and within Belgian legislation; it also aims to analyse the offender-victim interaction, with particular attention to the compensation awarded to the victim of the offence and to the figure of the victim, first within the penal system and then within the specific procedure leading to the rehabilitation of the convicted person. The starting point of the research work was a thorough bibliographic survey of the topics addressed, in order to cover a substantial part of the existing Italian and Belgian literature on the subject. The following phase of the research consisted in gathering information on the field of study to be investigated, i.e. rehabilitation, along an empirical line. Accordingly, as far as the Italian situation is concerned, the case files concerning rehabilitation held in the archives of the Tribunale di Sorveglianza of Bologna (2004-2009) were analysed by means of an ad hoc data-collection grid; the Belgian situation was instead studied by retrieving data on the réhabilitation pénale from the “Service Public Fédéral Justice - Bureau Permanent Statistiques et Mesure de la charge de travail (BPSM)” (2008-2009), both at the national level and for the five Courts of Appeal. Furthermore, in order to obtain an additional empirical viewpoint on the institution of criminal rehabilitation, semi-structured interviews were conducted with the President of the Tribunale di Sorveglianza, Dr. Francesco Maisto, and with the Deputy Prosecutor General of Liège, Mr. Nicolas Banneux.
Indeed, the professional experience and the particular role held by these “privileged observers”, experts in rehabilitation and particularly sensitive to criminological and victimological issues, place them in direct contact with the institution and the procedure of rehabilitation, giving them a deep command of the subject of this research.

Relevância:

20.00%

Publicador:

Resumo:

In the design and production of high-performance aluminium cast components, it is essential to be able to predict the presence and amount of defects attributable to incorrect design and to specific process conditions. Among the most common defects of an aluminium casting, pores with sizes of tens or hundreds of μm, known as microporosity, have an extremely negative impact on the mechanical properties, both static and fatigue. In this work, after a suitable literature review, experimental equipment and procedures were designed and set up to allow the production of material with differentiated defect content and microstructure, starting from known and accurately measurable process conditions that reproduced the variability found in the actual production of cast components. All the experimental design activities were supported by casting-process simulation software, which in turn benefited from ad hoc experimental calibrations and validations. The experimental apparatus proved effective in producing material with differentiated microstructure and defect content, in a robust and repeatable manner. Using the experimental results obtained, a numerical model for the prediction of shrinkage and gas porosity, currently considered state of the art and already implemented in some commercial casting-simulation codes, was validated. Once compared, the numerical and experimental results showed good accuracy of the numerical model in predicting the defects, both in terms of orders of magnitude and of porosity gradients in the castings.
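The two agreement criteria used above, orders of magnitude and spatial gradients of porosity, can be made concrete with a toy comparison of a measured and a simulated porosity profile along a casting. This is purely an illustration with hypothetical names, not the validated commercial model:

```python
import numpy as np

def porosity_agreement(measured, predicted):
    """Compare measured and simulated porosity profiles (1-D arrays of
    porosity fractions sampled along a casting). Returns:
      - mean order-of-magnitude error, mean |log10(predicted/measured)|
      - Pearson correlation of the spatial gradients of the two profiles."""
    m = np.asarray(measured, dtype=float)
    p = np.asarray(predicted, dtype=float)
    eps = 1e-12  # guard against log10(0)
    oom_error = np.mean(np.abs(np.log10((p + eps) / (m + eps))))
    grad_corr = np.corrcoef(np.gradient(m), np.gradient(p))[0, 1]
    return oom_error, grad_corr
```

A model that overpredicts porosity everywhere by a constant factor of two would score an order-of-magnitude error of log10(2) ≈ 0.30 while still reproducing the gradients perfectly (correlation 1), which is exactly the distinction drawn in the abstract.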

Relevância:

20.00%

Publicador:

Resumo:

The main aim of my PhD project was the design and synthesis of new pyrrolidine organocatalysts. New effective ferrocenyl pyrrolidine catalysts, active in benchmark organocatalytic reactions, have been developed. The ferrocenyl moiety, in combination with simple ethyl chains, is capable of fixing the enamine conformation, addressing the approach trajectory of the nucleophile in the reaction. The results obtained represent an interesting proof of concept, showing for the first time the remarkable effectiveness of the ferrocenyl moiety in providing enantioselectivity through conformational selection. This approach could be viably employed in the rational design of ligands for metal catalysis or of organocatalysts. Other hindered secondary amines have been prepared by alkylation of acyclic chiral nitroderivatives with alcohols in a highly diastereoselective fashion, giving access to functionalized, useful organocatalytic chiral pyrrolidines. A family of new pyrrolidines bearing stereogenic centers and functional groups is readily accessible by this methodology. The second purpose of the project was to study in depth the reactivity of stabilized carbocations in new metal-free and organocatalytic reactions. By taking advantage of the results of the kinetic studies described by Mayr, a simple and effective procedure for the direct formylation of aryltrifluoroborate salts has been developed. The coupling of a range of aryl- and heteroaryl-trifluoroborate salts with 1,3-benzodithiolylium tetrafluoroborate has been accomplished in moderate to good yields. Finally, a simple and general methodology for the enamine-mediated enantioselective α-alkylation of α-substituted aldehydes with 1,3-benzodithiolylium tetrafluoroborate has been reported. The introduction of the benzodithiole moiety permits the installation of different functional groups thanks to its chameleonic behaviour.

Relevância:

20.00%

Publicador:

Resumo:

The research activity focused on investigating the correlation between the degree of purity, in terms of chemical dopants, of organic small-molecule semiconductors and their electrical and optoelectronic performance once introduced as active materials in devices. The first step of the work addressed the variation in electrical performance of two commercial organic semiconductors after processing by thermal sublimation. In particular, the p-type 2,2′′′-dihexyl-2,2′:5′,2′′:5′′,2′′′-quaterthiophene (DH4T) semiconductor and the n-type 2,2′′′-perfluoro-dihexyl-2,2′:5′,2′′:5′′,2′′′-quaterthiophene (DFH4T) semiconductor underwent several sublimation cycles, with a consequent improvement of the electrical performance in terms of charge mobility and threshold voltage, highlighting the benefits brought by this treatment to the electrical properties of these semiconductors in OFET devices through the removal of residual impurities. The second step consisted in providing a metal-free synthesis of DH4T, which was successfully prepared without organometallic reagents or catalysts in collaboration with Dr. Manuela Melucci from the ISOF-CNR Institute in Bologna. The experimental work then demonstrated that such compounds are responsible for the electrical degradation, by intentionally doping the semiconductor obtained by the metal-free method with tetrakis(triphenylphosphine)palladium(0) (Pd(PPh3)4) and tributyltin chloride (Bu3SnCl), as well as with an organic impurity, 5-hexyl-2,2':5',2''-terthiophene (HexT3), in different concentrations (1, 5 and 10% w/w). After completing the entire evaluation loop, from fabricating OFET devices by vacuum sublimation from the intentionally doped batches to the final electrical characterization in inert-atmosphere conditions, commercial DH4T, metal-free DH4T and intentionally doped DH4T were systematically compared.
Indeed, the fabrication of OFETs based on doped DH4T clearly pointed out that vacuum sublimation is still an effective and efficient purification method for crude semiconductors, and also a reliable way to fabricate high-performing devices.
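The charge mobility and threshold voltage used above to compare the batches are conventionally extracted from the saturation-regime transfer curve, ID = (W·Ci·μ / 2L)·(VG − VT)², by fitting sqrt(|ID|) against VG. A minimal sketch of this standard extraction follows; the function name, units and sign convention are illustrative, not taken from the thesis:

```python
import numpy as np

def extract_mu_vt(vg, id_sat, width, length, c_i):
    """Saturation-regime OFET parameter extraction from a transfer curve:
        ID = (width * c_i * mu / (2 * length)) * (VG - VT)**2
    A linear fit of sqrt(|ID|) vs VG yields slope s and intercept b,
    from which VT = -b/s and mu = 2 * length * s**2 / (width * c_i).
    Shown with the n-type sign convention (VG > VT); for a p-type
    material such as DH4T the gate voltages are swept negative."""
    slope, intercept = np.polyfit(vg, np.sqrt(np.abs(id_sat)), 1)
    vt = -intercept / slope
    mu = 2.0 * length * slope ** 2 / (width * c_i)
    return mu, vt
```

Applied to the transfer curves of each batch (commercial, metal-free, intentionally doped), the extracted μ and VT provide exactly the per-batch figures of merit compared in the study.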