953 results for test-process features


Relevance: 30.00%

Abstract:

The treatment of Cerebral Palsy (CP) is considered the “core problem” of the whole field of pediatric rehabilitation. This pathology plays such a central role for two main reasons. First, CP is the most frequent form of disability in childhood (one new case per 500 live births (1)); second, the functional recovery of the “spastic” child is, historically, the clinical field in which most therapeutic methods and techniques (physiotherapeutic, orthotic, pharmacologic, orthopedic-surgical, neurosurgical) were first applied and tested. The currently accepted definition of CP – a group of disorders of the development of movement and posture causing activity limitation (2) – is the result of a recent update, in the language of the World Health Organization's International Classification of Functioning, Disability and Health, of the original proposal by Ingram – a persistent but not unchangeable disorder of posture and movement – dated 1955 (3). This definition considers CP a permanent ailment, i.e. a “fixed” condition, which can nevertheless be modified both functionally and structurally by the child's spontaneous evolution and by the treatments carried out during childhood. The lesion that causes the palsy occurs in a structurally immature brain in the pre-, peri- or post-natal period (but only during the first months of life). The most frequent causes of CP are prematurity, insufficient cerebral perfusion, arterial haemorrhage, venous infarction, hypoxia of various origins (for example from the aspiration of amniotic fluid), malnutrition, infection, and maternal or fetal poisoning; traumas and malformations must be added to these causes. The lesion, whether focal or diffuse over the nervous system, impairs the functioning of the Central Nervous System (CNS) as a whole and consequently affects the construction of the adaptive functions (4), first of all postural control, locomotion and manipulation. The palsy itself does not vary over time; however, it unavoidably takes on an “evolutionary” character, since during growth the child is required to meet new and different needs through the construction of new and different functions. It is essential to consider that, clinically, CP is not only a direct expression of the structural impairment (that is, of etiology, pathogenesis and lesion timing) but mainly the manifestation of the path followed by the CNS to “re”-construct the adaptive functions “despite” the presence of the damage. “Palsy” is “the form of the function that is implemented by an individual whose CNS has been damaged in order to satisfy the demands coming from the environment” (4). Therefore only general relations can be established between the site, nature and size of the lesion on the one hand, and the palsy and recovery processes on the other. It is quite common to observe that children with very similar neuroimaging findings show very different clinical manifestations of CP while, conversely, children with very similar motor behaviors have completely different lesion histories. A very clear example is given by the hemiplegic forms, which show bilateral hemispheric lesions in a high percentage of cases. The first section of this thesis aims at guiding the interpretation of CP. First, the issue of the detection of the palsy is treated from a historical viewpoint; an extended analysis of the current, internationally accepted definition of CP is then provided.
The definition is then examined first along a spatial dimension and then along a temporal dimension, highlighting where it is unacceptably lacking. The last part of the first section further stresses the importance of shifting from the traditional concept of CP as a palsy of development (defect analysis) towards the notion of the development of the palsy, i.e. as the product of the relationship that the individual nonetheless tries to build dynamically with the surrounding environment (resource semeiotics), starting and growing from a different availability of resources, needs, dreams, rights and duties (4). No classification system of CP has so far been universally accepted by the scientific and clinical community; likewise, no standard method or technique has been acknowledged as able to effectively assess the different disabilities and impairments exhibited by children with CP. CP is still “an artificial concept, comprising several causes and clinical syndromes that have been grouped together for a convenience of management” (5). The lack of standard, shared protocols able to effectively diagnose the palsy, and consequently to establish specific treatments and prognoses, is mainly due to the difficulty of raising this field to a level based on scientific evidence. One way to overcome the currently incomplete treatment of CP children is the systematic clinical adoption of objective tools able to measure motor defects and movement impairments. The widespread application of reliable instruments and techniques able to objectively evaluate both the form of the palsy (diagnosis) and the efficacy of the treatments provided (prognosis) constitutes a valuable means of validating care protocols, establishing the efficacy of classification systems and assessing the validity of definitions. Since the 1980s, instruments specifically designed for the analysis of human movement have been profitably applied in the context of CP with the aim of measuring motor deficits and, especially, gait deviations. The gait analysis (GA) technique has been increasingly used over the years to assess, analyze and classify gait and to support the clinical decision-making process, allowing a complete investigation of gait with increased temporal and spatial resolution. GA has provided a basis for improving the outcome of surgical and non-surgical treatments and for introducing a new modus operandi in the identification of defects and of functional adaptations to musculoskeletal disorders. Historically, the first gait analysis laboratories each developed their own protocol (the set of procedures for data collection and data reduction) independently, according to the performance of the technology available at the time. In particular, stereophotogrammetric systems, mainly based on optoelectronic technology, soon became the gold standard for motion analysis and have been successfully applied especially for scientific purposes. Optoelectronic systems have since improved their performance significantly in terms of spatial and temporal resolution; nevertheless, many laboratories continue to use protocols designed around the technology available in the 1970s and now outdated. Furthermore, these protocols are mutually inconsistent both in their biomechanical models and in their data-collection procedures.
In spite of these differences, GA data are shared, exchanged and interpreted irrespective of the adopted protocol, without full awareness of the extent to which these protocols are compatible and comparable with each other. Following the extraordinary advances in computer science and electronics, new systems for GA that are no longer based on optoelectronic technology are now becoming available: the Inertial and Magnetic Measurement Systems (IMMSs), based on miniature MEMS (microelectromechanical systems) inertial sensor technology. These systems are cost-effective, wearable and fully portable; these features give IMMSs the potential to be used outside specialized laboratories and to consecutively collect series of tens of gait cycles. The recognition and selection of the most representative gait cycle then becomes easier and more reliable, especially in CP children, given their considerable gait-cycle variability. The second section of this thesis focuses on GA. It first examines the differences among the five most representative GA protocols, in order to assess the state of the art with respect to inter-protocol variability. The design of a new protocol is then proposed and presented, with the aim of performing gait analysis on CP children by means of an IMMS. The protocol, named ‘Outwalk’, contains original and innovative solutions aimed at obtaining joint kinematics with calibration procedures that are extremely comfortable for the patients. The results of a first in-vivo validation of Outwalk on healthy subjects are then provided. In particular, this study was carried out by comparing Outwalk, used in combination with an IMMS, against a reference protocol used with an optoelectronic system. In order to allow a more accurate and precise comparison of the systems and protocols, ad hoc methods were designed, and an original formulation of the coefficient of multiple correlation (CMC; its classical form is sketched below) was developed and effectively applied. On the basis of the experimental design proposed for the validation on healthy subjects, a first assessment of Outwalk, together with an IMMS, was also carried out on CP children. The third section of this thesis is dedicated to the treatment of walking in CP children. Treatments commonly prescribed to address gait abnormalities in CP children include physical therapy, surgery (orthopedic and rhizotomy) and orthoses. The orthotic approach is conservative, being reversible, and is widespread in many therapeutic regimes. Orthoses are used to improve the gait of children with CP by preventing deformities, controlling joint position and offering an effective lever for the ankle joint. They are also prescribed with the aims of increasing walking speed, improving stability, preventing stumbling and decreasing muscular fatigue. Ankle-foot orthoses (AFOs) with a rigid ankle are primarily designed to prevent equinus and other foot deformities, with a positive effect also on the more proximal joints. However, AFOs prevent the natural excursion of the tibio-tarsic joint during the second rocker, hence hampering the natural forward progression of the whole body under the effect of inertia (6). A new modular (submalleolar) astragalus-calcanear orthosis, named OMAC, has recently been proposed with the intention of replacing AFOs in those CP children who exhibit a flat, valgus-pronated foot.
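The thesis develops an original formulation of the CMC; for reference, the following is a minimal sketch of the classical formulation (Kadaba et al., J Orthop Res 1989) for comparing repeated gait waveforms. The synthetic knee-flexion curves are purely illustrative.

```python
import numpy as np

def cmc(waveforms):
    """Coefficient of multiple correlation for M waveforms over T frames
    (rows = waveforms, columns = frames), in its classical form."""
    Y = np.asarray(waveforms, dtype=float)
    M, T = Y.shape
    frame_mean = Y.mean(axis=0)   # mean curve across the M waveforms
    grand_mean = Y.mean()         # overall mean value
    # Dispersion of the curves around the mean curve vs. around the grand mean
    within = ((Y - frame_mean) ** 2).sum() / (T * (M - 1))
    total = ((Y - grand_mean) ** 2).sum() / (M * T - 1)
    return np.sqrt(1.0 - within / total)

# Example: three noisy repetitions of a knee-flexion-like curve
t = np.linspace(0.0, 1.0, 101)
base = 60.0 * np.sin(np.pi * t) ** 2
trials = np.vstack([base + np.random.normal(0.0, 2.0, t.size) for _ in range(3)])
print(f"CMC = {cmc(trials):.3f}")  # close to 1 for highly similar curves
```

Values close to 1 indicate waveforms that are similar in shape and offset; the inter-protocol comparisons described above required the thesis's original variant of this parameter.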
The aim of this section is thus to present the mechanical and technical features of the OMAC by means of an accurate description of the device; in particular, the full text of the deposited Italian patent is provided. A preliminary validation of the OMAC with respect to the AFO is also reported, resulting from a three-month experimental campaign on diplegic CP children aimed at quantitatively assessing the benefit provided by the two orthoses during walking and at qualitatively evaluating the changes in quality of life and motor abilities. As already stated, CP is universally considered a persistent but not unchangeable disorder of posture and movement. In contrast with this definition, some clinicians (4) have recently pointed out that movement disorders may be primarily caused by the presence of perceptive disorders, where perception is not merely the acquisition of sensory information but an active process aimed at guiding the execution of movements through the integration of sensory information properly representing the state of one's body and of the environment. Children with perceptive impairments show an overall fear of moving and the onset of strongly unnatural walking schemes directly caused by the perceptive system disorders. The fourth section of the thesis thus deals with accurately defining the perceptive impairment exhibited by diplegic CP children. A detailed description of the clinical signs revealing the presence of the perceptive impairment, together with a classification scheme of the clinical aspects of perceptual disorders, is provided. Finally, a functional reaching test is proposed as an instrumental test able to disclose the perceptive impairment.

References

1. Prevalence and characteristics of children with cerebral palsy in Europe. Dev Med Child Neurol. 2002 Sep;44(9):633-640.
2. Bax M, Goldstein M, Rosenbaum P, Leviton A, Paneth N, Dan B, et al. Proposed definition and classification of cerebral palsy, April 2005. Dev Med Child Neurol. 2005 Aug;47(8):571-576.
3. Ingram TT. A study of cerebral palsy in the childhood population of Edinburgh. Arch Dis Child. 1955 Apr;30(150):85-98.
4. Ferrari A, Cioni G. The spastic forms of cerebral palsy: a guide to the assessment of adaptive functions. Milan: Springer; 2009.
5. Olney SJ, Wright MJ. Cerebral palsy. In: Campbell S, et al., editors. Physical Therapy for Children. 2nd ed. Philadelphia: Saunders; 2000. p. 533-570.
6. Desloovere K, Molenaers G, Van Gestel L, Huenaerts C, Van Campenhout A, Callewaert B, et al. How can push-off be preserved during use of an ankle foot orthosis in children with hemiplegia? A prospective controlled study. Gait Posture. 2006 Oct;24(2):142-151.

Relevance: 30.00%

Abstract:

The purpose of this Thesis is to develop a robust and powerful method to classify galaxies from large surveys, in order to establish and confirm the connections between the principal observational parameters of galaxies (spectral features, colours, morphological indices) and to help unveil the evolution of these parameters from $z \sim 1$ to the local Universe. Within the framework of the zCOSMOS-bright survey, and making use of its large database of objects ($\sim 10\,000$ galaxies in the redshift range $0 < z \lesssim 1.2$) and of its high reliability in the determination of redshifts and spectral properties, we first adopt and extend the \emph{classification cube method}, as developed by Mignoli et al. (2009), exploiting the bimodal properties of galaxies (spectral, photometric and morphological) separately and then combining the three subclassifications. We use this classification method as a test for a newly devised statistical classification based on Principal Component Analysis and the Unsupervised Fuzzy Partition clustering method (PCA+UFP), which is able to define the galaxy population by exploiting its natural global bimodality, considering up to 8 different properties simultaneously. The PCA+UFP analysis is a very powerful and robust tool to probe the nature and the evolution of galaxies in a survey. It allows the classification of galaxies to be defined with smaller uncertainties and adds the flexibility to be adapted to different parameters: being a fuzzy classification, it avoids the problems of a hard classification such as the classification cube presented in the first part of this work. The PCA+UFP method can be easily applied to different datasets: it does not rely on the nature of the data, and for this reason it can be successfully employed with other observables (magnitudes, colours) or derived properties (masses, luminosities, SFRs, etc.). The agreement between the two cluster definitions is very high: ``early''- and ``late''-type galaxies are well defined by the spectral, photometric and morphological properties, both when these are considered separately and the classifications then combined (classification cube) and when they are treated as a whole (PCA+UFP cluster analysis). Differences arise in the definition of outliers: the classification cube is much more sensitive to single measurement errors or misclassifications in one property than the PCA+UFP cluster analysis, in which errors are ``averaged out'' during the process. This method allowed us to observe the \emph{downsizing} effect taking place in the PC spaces: the migration from the blue cloud towards the red clump happens at higher redshift for galaxies of larger mass. The determination of the transition mass $M_{\mathrm{cross}}$ is in good agreement with other values in the literature.
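To illustrate the PCA+UFP idea, the sketch below projects standardized galaxy properties onto their leading principal components and then soft-clusters them. A basic fuzzy c-means loop stands in for the UFP algorithm, and the random data are purely illustrative.

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project mean-centered data onto its leading principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Basic fuzzy c-means: returns an (N x c) matrix of soft memberships."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # random soft partition
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # membership-weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = (1.0 / d ** p) / (1.0 / d ** p).sum(axis=1, keepdims=True)
    return U

# Illustrative data: 1000 "galaxies" with 8 standardized properties
X = np.random.default_rng(1).normal(size=(1000, 8))
U = fuzzy_cmeans(pca_project(X, 2), c=2)  # soft early/late-type memberships
labels = U.argmax(axis=1)                  # harden only when a label is needed
```

The soft memberships are what make outliers less harmful than in a hard, cube-style classification: a single bad measurement shifts a membership degree rather than flipping a class.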

Relevance: 30.00%

Abstract:

The present Thesis studies three alternative solvent groups as sustainable replacements for traditional organic solvents. Several aspects of fluorinated solvents, supercritical fluids and ionic liquids have been analysed with a critical approach, and their effective “greenness” has been evaluated from the points of view of synthesis, properties and applications. In particular, attention has been focused on environmental and human-health issues, evaluating eco-toxicity, toxicity and persistence, to underline that applicability and sustainability are of equal importance. The “green” features of fluorous solvents and supercritical fluids are fairly well established; in particular, supercritical carbon dioxide (scCO2) is probably the “greenest” solvent among the alternative solvent systems developed in recent years, since it combines numerous advantages from the point of view of both industrial/technological applications and eco-compatibility. In this Thesis the analysis of these two classes of alternative solvents has been focused mainly on their applicability rather than on the evaluation of their environmental impact. Specifically, they have been evaluated as alternative media for non-aqueous biocatalysis. For this purpose, hydrophobic ion pairing (HIP), which solubilises enzymes in apolar solvents through ion pairing between the protein and a surfactant, has been investigated as an effective enzyme-derivatisation technique to improve catalytic activity under homogeneous conditions in non-conventional media. The results showed that the enzyme-surfactant complex was much more active than the native form of the enzyme both in fluorous solvents and in supercritical carbon dioxide. Ionic liquids, especially imidazolium salts, were proposed some years ago as “fully green” alternative solvents; this epithet, however, does not take into account several “brown” aspects, such as their synthesis from petrochemical starting materials, their considerable eco-toxicity, toxicity and resistance to biodegradation, and the difficulty of clearly outlining applications in which ionic liquids are really more advantageous than traditional solvents. For all of these reasons, the critical analysis of ionic liquids in this Thesis has been focused on three main topics: i) alternative syntheses, introducing structural moieties that could reduce the toxicity of the best-known liquid salts and using starting materials from renewable resources; ii) the evaluation of their environmental impact through eco-toxicological tests (Daphnia magna and Vibrio fischeri acute toxicity tests, and algal growth inhibition), toxicity tests (MTT test, AChE inhibition and LDH release tests) and the fate and rate of aerobic biodegradation in soil and water; iii) the demonstration of their effectiveness as reaction media in organo-catalysis and as extractive solvents in the recovery of vegetable oil from terrestrial and aquatic biomass. The results of the eco-toxicity tests with Daphnia magna, Vibrio fischeri and algae, and of the toxicity assays using cultured cell lines, clearly indicate that the difference in toxicity between alkyl and oxygenated cations lies in differences of polarity, according to the general trend of decreasing toxicity with decreasing lipophilicity. Regardless of the biological approach, in fact, all the results agree, showing a lower toxicity for compounds with oxygenated side chains than for those having purely alkyl side chains.
These findings indicate that an appropriate choice of cation and anion structures is important not only to design ILs with improved and suitable chemico-physical properties but also to obtain safer and more eco-friendly ILs. Moreover, there is a clear indication that the composition of the abiotic environment has to be taken into account when the toxicity of ILs is analysed in various biological test systems because, for example, the data reported in the Thesis indicate a significant influence of salinity variations on algal toxicity. The aerobic biodegradation in soil of four imidazolium ionic liquids, two alkylated and two oxygenated, was evaluated for the first time. The alkyl ionic liquids were shown to be biodegradable over the six-month test period, whereas no significant mineralisation was observed for the oxygenated derivatives. A different result was observed in the aerobic biodegradation of alkylated and oxygenated pyridinium ionic liquids in water, where all the ionic liquids were almost completely degraded after 10 days, independently of the number of oxygen atoms in the side chain of the cation. The synthesis of new ionic liquids from renewable feedstocks was developed through the preparation of furan-based ion pairs from furfural. The new ammonium salts were synthesised in very good yields and purity and with wide versatility, combining low melting points with high decomposition temperatures and reduced viscosities. Regarding possible applications as surfactants and biocides, furan-based salts could be a valuable alternative to benzyltributylammonium salts and benzalkonium chloride, which are produced from non-renewable resources. A new procedure for the allylation of ketones and aldehydes with tetraallyltin in ionic liquids was also developed. The reaction afforded high yields both in sulfonate-containing ILs and, upon addition of a small amount of sulfonic acid, in ILs without sulfonate. The reaction showed a peculiar chemoselectivity, favouring aliphatic substrates over aromatic ketones, and good stereoselectivity in the allylation of levoglucosenone. Finally, the IL-based systems could be easily and successfully recycled, making the described procedure environmentally benign. The potential role of switchable-polarity solvents as a green technology for the extraction of vegetable oil from terrestrial and aquatic biomass has also been investigated. The extraction efficiency for terrestrial biomass rich in triacylglycerols, such as soybean flakes and sunflower seeds, was comparable to that of traditional organic solvents, the yields of recovered vegetable oil being very similar. Switchable-polarity solvents were also exploited for the first time in the extraction of hydrocarbons from the microalga Botryococcus braunii, demonstrating the efficiency of the process both on dried microalgal biomass and directly on the aqueous growth medium. The switchable-polarity solvents exhibited better extraction efficiency than conventional solvents with both dried and liquid samples. This is an important result, considering that the harvesting and dewatering of algal biomass have a large impact on the overall costs and energy balance.

Relevance: 30.00%

Abstract:

Heat treatment of steels is a process of fundamental importance in tailoring the properties of a material to the desired application; a model able to describe such a process would make it possible to predict the microstructure obtained from the treatment and the consequent mechanical properties of the material. During a heat treatment, a steel can undergo two different kinds of phase transition (p.t.): diffusive (second-order p.t.) and displacive (first-order p.t.). In this thesis an attempt is made to describe both within a thermodynamically consistent framework: a phase-field, diffuse-interface model accounting for the coupling between thermal, chemical and mechanical effects is developed, and a way to overcome the difficulties arising from the treatment of the non-local effects (gradient terms) is proposed. The governing equations are the balance of linear momentum, the Cahn-Hilliard equation and the balance of internal energy. The model is completed with a suitable description of the free energy, from which the constitutive relations are drawn. The equations are then cast in variational form, and different numerical techniques are used to deal with the principal features of the model: time dependency, non-linearity and the presence of high-order spatial derivatives. Simulations are performed using DOLFIN, a C++ library for the automated solution of partial differential equations by means of the finite element method; results are shown for different test cases. The analysis is restricted to a two-dimensional setting, which is simpler than a three-dimensional one but still meaningful.
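For reference, the standard form of the Cahn-Hilliard equation (which the thesis couples with the momentum and energy balances) reads, assuming a mobility $M$, a bulk free-energy density $f(c)$ and a gradient-energy coefficient $\lambda$:

$$\frac{\partial c}{\partial t} = \nabla \cdot \big( M \, \nabla \mu \big), \qquad \mu = \frac{\partial f}{\partial c} - \lambda \, \nabla^{2} c ,$$

where $c$ is the phase (concentration) field and $\mu$ the chemical potential. The $\lambda\,\nabla^{2} c$ term is precisely the non-local (gradient) contribution mentioned above, and it is what raises the spatial order of the problem, which is why high-order spatial derivatives are listed among the model's principal numerical difficulties.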

Relevance: 30.00%

Abstract:

This PhD Thesis is part of a long-term, wide-ranging research project, carried out by the Osservatorio Astronomico di Bologna (INAF-OABO), whose primary goal is the comprehension and reconstruction of the formation mechanisms of galaxies and of their evolutionary history. There is now substantial evidence, from both the theoretical and the observational point of view, in favor of the hypothesis that the halo of our Galaxy has been, at least partially, built up by the progressive accretion of small fragments, similar in nature to the present-day dwarf galaxies of the Local Group. In this context, the photometric and spectroscopic study of the systems that populate the halo of our Galaxy (dwarf spheroidal galaxies, tidal streams, massive globular clusters, etc.) makes it possible to uncover not only the origin and behaviour of these systems but also the structure of the Galactic halo, together with its formation history. In fact, studying the populations of these objects, as well as their chemical compositions, ages, metallicities and velocity dispersions, not only improves our understanding of the mechanisms that govern Galactic formation but also provides a valid indirect test of the cosmological model itself. Specifically, in this Thesis we provide a complete characterization of the tidal Stream of the Sagittarius dwarf spheroidal galaxy, which is the most striking example of the process of tidal disruption and accretion of a dwarf satellite into our Galaxy. Using Red Clump stars extracted from the catalogue of the Sloan Digital Sky Survey (SDSS), we obtained estimates of the distance, the depth along the line of sight and the number density for each detected portion of the Stream (and, more generally, for each structure detected along our lines of sight). Moreover, we compared the relative number (i.e. the ratio) of Blue Horizontal Branch stars and Red Clump stars (the two features trace populations of different age and metallicity) in the main body of the galaxy and in the Stream, in order to verify the presence of an age/metallicity gradient along the Stream. We also report the detection of a population of Red Clump stars probably associated with the recently discovered Bootes III stellar system. Finally, we present the results of a survey of radial velocities over a wide region, extending from r ~ 10' out to r ~ 80', within the massive star cluster Omega Centauri. The survey was performed with FLAMES@VLT to study the velocity dispersion profile in the outer regions of this stellar system. All the results presented in this Thesis have already been published in refereed journals.

Relevance: 30.00%

Abstract:

This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of the aspects analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are discussed first. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. The limitations of the standard approaches for large events emerge in this chapter; the difficulties are related to estimating the magnitude in real time from the first few seconds of recorded signal. An evolutionary strategy for real-time magnitude estimation is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis a larger number of earthquakes is analyzed, including small, moderate and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation of the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested: the first is a threshold-based method that uses traditional seismic data; the second is an innovative approach using continuous GPS data. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
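The thesis's calibrated thresholds are not spelled out in this summary; purely to illustrate the threshold-based idea, the sketch below flags a station as lying in the potentially damaged zone when the peak ground displacement measured in an early window exceeds an alarm level. The window length, threshold value and synthetic record are assumptions, not the method's actual parameters.

```python
import numpy as np

# Illustrative values only: not the thesis's calibrated parameters
WINDOW_S = 3.0          # seconds of signal used after the P-wave arrival
PD_THRESHOLD_CM = 0.5   # peak-displacement alarm level

def potentially_damaged(displacement_cm, fs, p_arrival_s):
    """True if peak |displacement| within the early window exceeds the alarm
    level; displacement_cm is a 1-D record, fs the sampling rate in Hz."""
    i0 = int(p_arrival_s * fs)
    i1 = i0 + int(WINDOW_S * fs)
    peak = np.abs(displacement_cm[i0:i1]).max()
    return peak >= PD_THRESHOLD_CM

# Synthetic displacement record sampled at 100 Hz, with motion starting near 10 s
fs = 100.0
t = np.arange(0.0, 30.0, 1.0 / fs)
rec = 0.8 * np.exp(-0.5 * (t - 11.0) ** 2) * np.sin(2.0 * np.pi * t)  # cm
print(potentially_damaged(rec, fs, p_arrival_s=10.0))  # True for this record
```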

Relevance: 30.00%

Abstract:

The characterization of contaminated sediments is a complex problem. This work set out to define a characterization methodology that takes into account both the characteristics of the contamination, with analyses aimed at determining the total content of contaminants, and the mobility of the pollutants themselves. An adequate characterization strategy can be applied to the evaluation of remediation treatments; to this end, the soil-washing treatment was evaluated by investigating the characteristics of the dredged sediments and of the materials leaving the process, the sands and the fine fraction, and by comparing the characteristics of the output sand with those of sands commonly used for various applications. It was considered necessary to investigate compatibility from the chemical, grain-size and morphological points of view. To investigate mobility, the leaching tests defined at both the international and the Italian (UNI) level were applied, and the technologies needed to carry out the leaching tests effectively were developed, automating the management of the pH-stat test UNI CEN 14997. This was necessary because of the difficulty of managing the test manually, given a schedule that an operator can hardly sustain. Redox conditions influence the mobility of pollutants; in particular, air ageing of anoxic sediments causes appreciable changes in the oxidation state of some components, increasing their mobility. This is therefore an aspect to consider when identifying adequate storage and disposal conditions, and an experimental campaign was carried out for this purpose.
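As an illustration of the kind of automation involved, the sketch below implements a bare-bones pH-stat control loop. Here read_ph and dose_acid_ml are hypothetical drivers for the probe and the dosing pump, and the setpoint, tolerance, dosing step and sampling period are illustrative assumptions rather than the UNI CEN 14997 specifications.

```python
import time

# Illustrative control parameters (hypothetical, not the standard's values)
SETPOINT, TOL, STEP_ML, PERIOD_S = 4.0, 0.05, 0.1, 10.0

def run_ph_stat(read_ph, dose_acid_ml, duration_s=48 * 3600):
    """Hold the suspension pH at SETPOINT by dosing small acid increments,
    logging cumulative acid consumption over the test duration."""
    consumed, t0, log = 0.0, time.time(), []
    while time.time() - t0 < duration_s:
        ph = read_ph()                   # hypothetical probe driver
        if ph > SETPOINT + TOL:          # pH drifted up: add a small acid dose
            dose_acid_ml(STEP_ML)        # hypothetical pump driver
            consumed += STEP_ML
        log.append((time.time() - t0, ph, consumed))
        time.sleep(PERIOD_S)             # sampling period
    return log                           # (elapsed s, pH, cumulative ml acid)
```

The returned log of cumulative acid consumption versus time is exactly the quantity a pH-stat leaching test tracks, which is why manual operation is so demanding over multi-day schedules.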

Relevance: 30.00%

Abstract:

Chlorinated solvents have been the most ubiquitous organic contaminants found in groundwater over the last five decades. They generally reach groundwater as Dense Non-Aqueous Phase Liquids (DNAPLs). This phase can migrate through aquifers, and also through aquitards, in ways that aqueous contaminants cannot. The complex phase partitioning that chlorinated-solvent DNAPLs can undergo (into the dissolved, vapor or sorbed phases), as well as their transformations (e.g. degradation), depend on the physico-chemical properties of the contaminants themselves and on the features of the hydrogeological system. The main goal of the thesis is to provide new knowledge for future investigations of sites contaminated by DNAPLs in alluvial settings, proposing innovative investigative approaches and emphasizing some of the key issues and main criticalities of this kind of contaminant in such settings. To achieve this goal, the hydrogeologic setting below the city of Ferrara (Po plain, northern Italy), which is affected by scattered contamination by chlorinated solvents, has been investigated at different scales (regional and site-specific), from both an intrinsic (groundwater flow systems) and a specific (chlorinated-solvent DNAPL behavior) point of view. Detailed investigations were carried out in particular in one selected test site, known as the “Caretti site”, where high-resolution vertical profiles of different kinds of data were collected by means of multilevel monitoring systems and other innovative sampling and analytical techniques. This made it possible to achieve a deep geological and hydrogeological knowledge of the system and to reconstruct in detail the architecture of the contamination in relation to the features of the hosting porous medium. The results achieved in this thesis are useful not only at the local scale, e.g. to interpret the origin of contamination at other sites in the Ferrara area, but also more generally, to guide future remediation and protection actions in similar hydrogeologic settings.

Relevance: 30.00%

Abstract:

Maleic anhydride is an important chemical intermediate, mainly produced by the selective oxidation of n-butane, an industrial process catalyzed by vanadyl pyrophosphate-based materials, (VO)2P2O7. The first topic was investigated in collaboration with a company specialized in the production of organic anhydrides (Polynt SpA), with the aim of improving the performance of the process for the selective oxidation of n-butane to maleic anhydride, comparing the behavior of an industrial vanadyl pyrophosphate catalyst when utilized either in the industrial plant or in a lab-scale reactor. The study focused on how the catalyst characteristics and reactivity are affected by the reaction conditions and on how the addition of a dopant can enhance the catalytic performance. Moreover, the ageing of the catalyst was studied, in order to correlate the deactivation process with the modifications occurring in the catalyst. The second topic was developed within the Seventh Framework Programme (FP7) European project “EuroBioRef”. The study focused on a new route for the synthesis of maleic anhydride starting from an alternative reactant produced by the fermentation of biomass: “bio-1-butanol”. In this field, the different possible catalytic configurations were investigated: the process was divided into two main reactions, the dehydration of 1-butanol to butenes and the selective oxidation of butenes to maleic anhydride. The features needed to catalyze the two steps were analyzed, and different materials were proposed as catalysts, namely Keggin-type polyoxometalates, VOPO4·2H2O and (VO)2P2O7. The reactivity of 1-butanol was tested under different conditions, in order to optimize the performance and to understand the nature of the interaction between the alcohol and the catalyst surface. Then, the key intermediates in the oxidehydration of 1-butanol to maleic anhydride were studied, with the aim of elucidating the possible reaction mechanism. Lastly, the reactivity of chemically sourced 1-butanol was compared with that of different types of bio-butanol produced by biomass fermentation.

Relevance: 30.00%

Abstract:

In many industries, for example the automotive industry, digital mock-ups are used to verify the design and the function of a product on a virtual prototype. One application is the verification of the safety clearances of individual components, the so-called clearance analysis. For selected components, engineers determine whether, in their rest position as well as during a motion, they maintain a prescribed safety distance to the surrounding components. If components fall below the safety distance, their shape or position must be changed. For this it is important to know exactly which regions of the components violate the safety distance.

In this work we present a solution for the real-time computation of all regions of two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g. triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call it the set of all tolerance-violating primitives. We present a comprehensive solution, which can be divided into the following three major topics.

In the first part of this work we study algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches to triangle-triangle tolerance tests and show that dedicated tolerance tests are considerably faster than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for the computation of all tolerance-violating primitives, our dual-space approach proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure composed of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is particularly important to account for the required safety distance in the design of the data structures and of the query algorithms. We present solutions that quickly determine the set of primitive pairs that must be tested. In addition, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. Our benchmarks show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects consisting of many hundreds of thousands of primitives each.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the previously used uniform grids. We call this data structure Shrubs. Previous approaches to the memory optimization of uniform grids mostly rely on hashing methods, but these do not reduce the memory consumption of the cell contents. In our application, neighboring cells often have similar contents.
Our approach is able to losslessly compress the memory footprint of the cell contents of a uniform grid, exploiting the redundancy among cell contents, down to one fifth of the original size, and to decompress it at runtime.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we present applications to several path-planning problems.
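To make the safety-distance idea concrete, here is a minimal sketch of the conservative filtering step that such pipelines rely on (it is not the dual-space test developed in this work): two triangles cannot violate the safety distance if their bounding boxes, inflated by that distance, are disjoint.

```python
import numpy as np

def aabb(tri):
    """Axis-aligned bounding box of a triangle given as a 3x3 array."""
    t = np.asarray(tri, dtype=float)
    return t.min(axis=0), t.max(axis=0)

def may_violate_tolerance(tri_a, tri_b, safety_dist):
    """Conservative quick test: False guarantees the pair keeps the safety
    distance; True means an exact triangle-triangle tolerance test is still
    needed. Broad-phase filter only, not the dual-space narrow-phase test."""
    lo_a, hi_a = aabb(tri_a)
    lo_b, hi_b = aabb(tri_b)
    # If the boxes, inflated by safety_dist, are disjoint along any axis,
    # the minimum distance between the triangles exceeds safety_dist.
    return bool(np.all(lo_a - safety_dist <= hi_b) and
                np.all(lo_b - safety_dist <= hi_a))

# Example: two unit triangles 1.0 apart, checked against a 0.5 safety distance
t1 = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
t2 = [(2, 0, 0), (3, 0, 0), (2, 1, 0)]
print(may_violate_tolerance(t1, t2, 0.5))   # False: the pair can be skipped
```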

Relevance: 30.00%

Abstract:

Analyzing and modeling the relationships between the structure of chemical compounds, their physico-chemical properties, and their biological or toxic effects in chemical datasets is a challenging task for scientific researchers in the field of cheminformatics. (Q)SAR model validation is therefore essential to ensure the predictivity of a model on unseen compounds. Proper validation is also one of the requirements of regulatory authorities for approving the use of such models in real-world scenarios as an alternative testing method. However, the question of how to validate a (Q)SAR model is still under discussion. In this work, we empirically compare k-fold cross-validation with external test set validation. The introduced workflow makes it possible to apply the built and validated models to large amounts of unseen data and to compare the performance of the different validation approaches. Our experimental results indicate that cross-validation produces (Q)SAR models with higher predictivity than external test set validation and reduces the variance of the results. Statistical validation is important to evaluate the performance of (Q)SAR models, but it does not support the user in better understanding the properties of the model or the underlying correlations. We present the 3D molecular viewer CheS-Mapper (Chemical Space Mapper), which arranges compounds in 3D space such that their spatial proximity reflects their similarity. The user can indirectly steer the similarity measure by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. Comprehensive functionalities, including clustering, alignment of compounds according to their 3D structure, and feature highlighting, help the chemist to better understand patterns and regularities and to relate the observations to established scientific knowledge. Even though visualization tools for analyzing (Q)SAR information in small-molecule datasets exist, integrated visualization methods that allow for the investigation of model validation results are still lacking. We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. New functionalities in CheS-Mapper 2.0 facilitate the analysis of (Q)SAR information and allow the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. Our approach reveals whether the endpoint is modeled too specifically or too generically and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org.
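A minimal sketch of the two validation schemes being compared, using scikit-learn; the random forest, the descriptor matrix and the activity labels are illustrative placeholders, not the models or datasets used in this work.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Hypothetical dataset: rows = compounds, columns = chemical descriptors
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)

# k-fold cross-validation: every compound is predicted once while held out
cv_scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

# External test set validation: a single random hold-out split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ext_score = model.fit(X_tr, y_tr).score(X_te, y_te)
print(f"External test set accuracy: {ext_score:.3f}")
```

Cross-validation uses every compound for testing exactly once across the folds, which is one intuition for the reduced variance reported above, whereas a single external split leaves the estimate hostage to which compounds happen to land in the test set.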

Relevance: 30.00%

Abstract:

BACKGROUND: Only a few standardized apraxia scales are available, and they do not cover all domains and semantic features of gesture production. The objective of the present study was therefore to evaluate the reliability and validity of a newly developed test of upper limb apraxia (TULIA), which is comprehensive yet short to administer. METHODS: The TULIA consists of 48 items covering the imitation and pantomime domains of non-symbolic (meaningless), intransitive (communicative) and transitive (tool-related) gestures, corresponding to 6 subtests. A 6-point scoring method (0-5) was used (score range 0-240). Performance was assessed by blinded raters based on videos of 133 stroke patients, 84 with left hemisphere damage (LHD) and 49 with right hemisphere damage (RHD), as well as 50 healthy subjects (HS). RESULTS: The clinimetric findings demonstrated mostly good to excellent internal consistency and inter- and intra-rater (test-retest) reliability, both at the level of the six subtests and at the individual item level. Criterion validity was evaluated by confirming hypotheses based on the literature. Construct validity was demonstrated by a high correlation (r = 0.82) with the De Renzi test. CONCLUSION: These results show that the TULIA is both a reliable and a valid test to systematically assess gesture production. The test can be easily applied and is therefore useful for both research purposes and clinical practice.

Relevance: 30.00%

Abstract:

Background: Visuoperceptual deficits in dementia are common and can reduce quality of life. Testing of visuoperceptual function is often confounded by impairments in other cognitive domains and by motor dysfunction. We aimed to develop, pilot, and test a novel visuocognitive prototype test battery that addresses these issues and is suitable for both clinical and functional imaging use. Methods: We recruited 23 participants (14 with dementia, 6 of whom had extrapyramidal motor features, and 9 age-matched controls). The novel Newcastle visual perception prototype battery (NEVIP-B-Prototype) included angle, color, face, motion and form perception tasks and an adapted response system, and it allows for individualized task difficulties. Participants were tested outside and inside a 3T functional magnetic resonance imaging (fMRI) scanner, and the fMRI data were analyzed using SPM8. Results: All participants successfully completed the task both inside and outside the scanner. The fMRI analysis showed activation regions corresponding well to the regional specializations of the visual association cortex. In both groups, there was significant activity in the ventral occipito-temporal region in the face and color tasks, whereas the motion task activated the V5 region. In the control group, the angle task activated the occipito-parietal cortex. Patients and controls showed similar levels of activation, except on the angle task, for which occipito-parietal activation was lower in patients than in controls. Conclusion: Distinct visuoperceptual functions can be tested in patients with dementia and extrapyramidal motor features when tests use individualized thresholds, adapted tasks, and specialized response systems.

Relevance: 30.00%

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current part. Transient and steady-state data from a turbocharged multi-cylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and on differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and of the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how many data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed (a generic lag-alignment sketch is given below). The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while EGR maldistribution has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
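The study's actual processing methods are not detailed in this summary; as a generic illustration of delay compensation, the sketch below estimates a sensor lag by cross-correlation against a reference signal and shifts the lagged record back into alignment. The step signals and the 0.25 s lag are synthetic assumptions.

```python
import numpy as np

def estimate_lag(reference, delayed, fs):
    """Estimate the delay (s) of `delayed` relative to `reference` via
    cross-correlation; a generic stand-in for the study's processing."""
    r = reference - reference.mean()
    d = delayed - delayed.mean()
    xcorr = np.correlate(d, r, mode="full")
    lag_samples = int(xcorr.argmax() - (len(r) - 1))
    return lag_samples / fs

def align(delayed, lag_samples):
    """Shift a lagged signal back by `lag_samples` to line up with the reference."""
    return np.roll(delayed, -lag_samples)

# Example: a measured response lagging a fuel-command step by 0.25 s
fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
cmd = (t > 2.0).astype(float)                     # reference input step
meas = (t > 2.25).astype(float) \
       + np.random.default_rng(0).normal(0.0, 0.01, t.size)  # lagged + noisy
lag = estimate_lag(cmd, meas, fs)
print(f"estimated lag: {lag:.2f} s")              # approximately 0.25 s
```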

Relevance: 30.00%

Abstract:

ASTM A529 carbon-manganese steel angle specimens were joined by flash butt welding, and the effects of varying process parameter settings on the resulting welds were investigated. The weld metal and heat-affected zones were examined and tested using tensile testing, ultrasonic scanning, Rockwell hardness testing, optical microscopy, and scanning electron microscopy with energy-dispersive spectroscopy, in order to quantify the effect of the process variables on weld quality. Statistical analysis of the experimental tensile and ultrasonic scanning data highlighted the sensitivity of weld strength, and of the presence of weld-zone inclusions and interfacial defects, to the process factors of upset current, flashing time, and upset dimension. Subsequent microstructural analysis revealed various phases within the weld and heat-affected zone, including acicular ferrite, Widmanstätten or side-plate ferrite, and grain-boundary ferrite. Inspection of the fracture surfaces of multiple tensile specimens with scanning electron microscopy revealed evidence of brittle cleavage fracture within the weld zone for certain factor combinations. Test results also indicated that hardness was increased in the weld zone for all specimens, which can be attributed to the extensive deformation of the upset operation. The significance of weld process factor levels for microstructure, fracture characteristics, and weld-zone strength was analyzed, and the relationships between significant flash welding process variables and weld-quality metrics, as applied to ASTM A529 Grade 50 steel angle, were formalized in empirical process models.