918 results for maximum contrast analysis


Relevance: 30.00%

Publisher:

Abstract:

We investigated the seasonal patterns of Amazonian forest photosynthetic activity, and the effects thereon of variations in climate and land use, by integrating data from a network of ground-based eddy flux towers in Brazil established as part of the ‘Large-Scale Biosphere Atmosphere Experiment in Amazonia’ project. We found that the degree of water limitation, as indicated by the seasonality of the ratio of sensible to latent heat flux (Bowen ratio), predicts seasonal patterns of photosynthesis. In equatorial Amazonian forests (5° N–5° S), water limitation is absent, and photosynthetic fluxes (or gross ecosystem productivity, GEP) exhibit high or increasing levels of photosynthetic activity as the dry season progresses, likely a consequence of allocation to growth of new leaves. In contrast, forests along the southern flank of the Amazon, pastures converted from forest, and mixed forest-grass savanna exhibit dry-season declines in GEP, consistent with increasing degrees of water limitation. Although previous work showed that tropical ecosystem evapotranspiration (ET) is driven by incoming radiation, the GEP observations reported here surprisingly show no, or negative, relationships with photosynthetically active radiation (PAR). Instead, GEP fluxes largely followed the phenology of canopy photosynthetic capacity (Pc), with deviations from this primary pattern driven by variations in PAR. Estimates of leaf flush at three
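The water-limitation index used above, the Bowen ratio, is simply the ratio of sensible to latent heat flux. A minimal sketch of how it might be computed from flux-tower averages follows; the two site values are invented for illustration, not LBA data:

```python
# Bowen ratio B = H / LE: sensible heat flux over latent heat flux.
# Higher values indicate stronger water limitation; well-watered forests
# typically stay well below 1. All numbers below are illustrative only.

def bowen_ratio(sensible_w_m2, latent_w_m2):
    """Return the Bowen ratio from flux densities in W/m^2."""
    if latent_w_m2 == 0:
        raise ValueError("latent heat flux must be non-zero")
    return sensible_w_m2 / latent_w_m2

# Hypothetical dry-season monthly means for two contrasting sites (W/m^2).
equatorial = bowen_ratio(sensible_w_m2=60.0, latent_w_m2=120.0)
southern = bowen_ratio(sensible_w_m2=110.0, latent_w_m2=80.0)

print(f"equatorial forest: B = {equatorial:.2f}")
print(f"southern-flank forest: B = {southern:.2f}")
```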


The most typical maximum tests for measuring leg muscle performance are the one-repetition maximum leg press test (1RMleg) and the isokinetic knee extension/flexion (IKEF) test. Nevertheless, their inter-correlations have not been well documented, particularly with respect to values predicted from one test to the other. This correlational and regression analysis study involved 30 healthy young males aged 18–24 years, who performed both tests. Pearson's product-moment correlation between 1RMleg and IKEF ranged from 0.20 to 0.69, and the most accurate prediction was that of 1RMleg (R² = 0.71). The study showed correlations between 1RMleg and IKEF even though these tests differ (isotonic vs. isokinetic), and provided further support for the cross-prediction of 1RMleg and IKEF by simple and multiple linear regression analysis.
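The cross-prediction described above amounts to fitting a regression of one test's score on the other and reporting r and R². A generic sketch with synthetic data (the values and coefficients are invented, not the study's measurements):

```python
import numpy as np

# Hypothetical sketch of cross-prediction between tests: fit a simple
# linear regression predicting 1RM leg press (kg) from isokinetic knee
# extension peak torque (Nm). All data below are synthetic.
rng = np.random.default_rng(0)
ikef = rng.uniform(150, 300, size=30)              # peak torque, Nm
one_rm = 0.9 * ikef + 40 + rng.normal(0, 10, 30)   # synthetic 1RMleg, kg

slope, intercept = np.polyfit(ikef, one_rm, 1)
pred = slope * ikef + intercept

# Coefficient of determination R^2 of the fit.
ss_res = np.sum((one_rm - pred) ** 2)
ss_tot = np.sum((one_rm - one_rm.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

r = np.corrcoef(ikef, one_rm)[0, 1]                # Pearson's r
print(f"r = {r:.2f}, R^2 = {r2:.2f}, 1RM ≈ {slope:.2f}*IKEF + {intercept:.1f}")
```

For simple linear regression with an intercept, R² equals the square of Pearson's r, which is a quick internal consistency check on the fit.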


In martial arts there are several ways to perform the turning kick; following the different martial arts and learning models, many types of kicks take shape. Mawashi geri is the karate turning kick. At present there are two models of mawashi geri: one derives from traditional karate (OLD), while the newer one (NEW) conforms to the change in the rules of the W.K.F. (World Karate Federation) that occurred in 2000 (Macan J. et al. 2006). In this study we focus on the differences between these two models of the karate mawashi geri jodan. The purpose of this study is to analyse kinematic and kinetic parameters of mawashi geri jodan. The timing of the striking- and supporting-leg actions was also evaluated. A Vicon 460 IR system with 6 cameras at a sample frequency of 200 Hz was used. 37 reflective markers were placed on the skin of the subjects following the “Plug-in Gait total body” model. The participants performed five repetitions of mawashi geri jodan at maximum speed with their dominant leg against a ball suspended in front of them at ear height. Fourteen skilled karate practitioners (mean level 1.7 dan black belt; age 20.9±4.8 yrs; height 171.4±7.3 cm; weight 60.9±10.2 kg) were split into two groups by hierarchical cluster analysis according to their technical characteristics. By means of the Mann–Whitney U test (SPSS package), the differences between the two groups were assessed in the preparatory and execution phases. Kicking-knee angle at start, and kicking-hip and knee angles at take-off, differed between the two groups (p < 0.05). Striking-hip flexion during the spin of the supporting foot differed between the two groups (p < 0.05). Peak angular velocity of hip flexion also differed between the two groups (p < 0.05). The groups also showed differences in the timing of the supporting-foot spin: the OLD group spun the supporting foot at 30% of the trial, whereas the NEW group started spinning at 44% of the trial.
The OLD group showed a greater supporting-foot spin than the NEW group (OLD 110° vs. NEW 82°). Abduction values did not differ between the two groups. At the hit, a combined hip abduction of 120° was measured for the entire sample. Striking-knee extension occurred in every subject after the kicking-hip flexion, confirming the proximal-to-distal action of the striking leg (Sørensen H. 1996). In contrast with Pearson J.N. 1997 and Landeo R. 2007, the peak velocity of the striking foot is not useful for describing kick performance because it is affected by stature. The two groups differ both in the preparatory phase and in the execution phase: the body is set in a different manner already before the take-off of the kicking foot, and the groups differ in the timing of the supporting-foot action. Trainers should pay attention to the starting posture and to the abduction capacities of the athletes.
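The between-group comparison above rests on the Mann–Whitney U test. A minimal SciPy sketch follows; the spin-onset times are invented around the reported group tendencies (OLD ≈30%, NEW ≈44% of the trial), not the study's measurements:

```python
from scipy.stats import mannwhitneyu

# Hypothetical supporting-foot spin onset times (% of trial) for the two
# clusters; values are invented for illustration, not the actual data.
old_group = [28, 31, 29, 33, 30, 27, 32]
new_group = [45, 42, 46, 43, 44, 41, 47]

# Two-sided Mann-Whitney U test: do the two distributions differ?
stat, p = mannwhitneyu(old_group, new_group, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")   # small p: the groups differ
```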


In territories where food production is scattered among many small/medium-sized or even household farms, a large amount of heterogeneous residues is produced every year, since farmers usually carry out different activities on their properties. The amount and composition of farm residues therefore change widely during the year, according to the particular production process being carried out. Coupling high-efficiency micro-cogeneration energy units with easy-to-handle biomass conversion equipment, suitable for treating different materials, would provide many important advantages to farmers and to the community as well; the increase in feedstock flexibility of gasification units is therefore now seen as a further paramount step towards their wide adoption in rural areas and as a real necessity for their utilization at small scale. Two main research topics were considered of primary concern for this purpose, and they are therefore discussed in this work: the investigation of the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work is divided into two main parts. The first focuses on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (temperature profile, above all), in order to point out the main differences which prevent the use of the same conversion unit for different materials.
For this purpose, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification as the particular technology examined. An attempt was made to relate the differences in syngas production and working conditions (process temperatures, above all) among the considered fuels to some biomass properties, such as elemental composition and ash and water contents. The novelty of this analytical approach was the use of kinetic-constant ratios to determine the oxygen distribution among the different oxidation reactions (regarding volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; through these assumptions the energy and mass balances involved in the process algorithm were also linked together. Moreover, the main advantage of this analytical tool is the ease with which the input data corresponding to particular biomass materials can be inserted into the model, so that a rapid evaluation of their thermo-chemical conversion properties can be obtained, based mainly on their chemical composition. Good agreement of the model results with literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on the analysis of the fundamental thermo-physical and thermo-chemical mechanisms which are assumed to regulate the main solid conversion steps involved in the gasification process.
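The water-gas shift equilibrium assumed in the gasification zone can be sketched numerically: given an inlet composition and a temperature-dependent equilibrium constant, solve for the extent of reaction. The Kp correlation below is a common illustrative Moe-type fit and the feed is invented; neither is the thesis's actual model:

```python
import math

# Sketch: solve the water-gas shift equilibrium CO + H2O <-> CO2 + H2
# for the extent of reaction x at a given temperature, by bisection.

def k_wgs(T_kelvin):
    """Illustrative equilibrium constant correlation (Moe-type fit)."""
    return math.exp(4577.8 / T_kelvin - 4.33)

def wgs_extent(co, h2o, co2, h2, T, tol=1e-10):
    """Extent x such that (co2+x)(h2+x)/((co-x)(h2o-x)) = K(T)."""
    K = k_wgs(T)
    # The reaction quotient grows monotonically with x, so bisection works.
    lo, hi = max(-co2, -h2) + 1e-12, min(co, h2o) - 1e-12
    while hi - lo > tol:
        x = 0.5 * (lo + hi)
        q = (co2 + x) * (h2 + x) / ((co - x) * (h2o - x))
        if q < K:
            lo = x
        else:
            hi = x
    return 0.5 * (lo + hi)

# Equimolar CO/H2O feed at ~1057 K, where K is close to 1: roughly half
# of the CO is shifted to CO2 and H2.
x = wgs_extent(co=1.0, h2o=1.0, co2=0.0, h2=0.0, T=1057.0)
print(f"extent of reaction: {x:.3f} mol")
```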
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated to the kinetic rates (for pyrolysis and char gasification only) and to the heat- and mass-transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, and therefore temperature is the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is achieved almost entirely through radiative heat transfer from the hot walls of the reactor to the bed of material. For pyrolysis, instead, working temperature, particle size and the very nature of the biomass (through its own pyrolysis heat) all have comparable weight on the process development, so that the corresponding time can depend on any one of these factors according to the particular fuel being gasified and the particular conditions established inside the gasifier. The same analysis also led to the estimation of the reaction-zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units could finally be made. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit seems suitable for more than one biomass species.
Nevertheless, since the reactor diameters turned out to be quite similar for all the examined materials, a single unit could be designed for all of them by adopting the largest diameter and combining the maximum heights of each reaction zone, as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Moreover, by arranging air-injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified at any given time. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would be suitable for the complete development of solid conversion in each case, without noticeably changing the fluid-dynamic behaviour of the unit or the air/biomass ratio. The second part of this work deals with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially if multi-fuel gasifiers are assumed to be used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out.
Differently from other research efforts in the same field, the main aim is to define general arrangements for gas cleaning lines able to remove several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding respectively to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements (paths), following technical constraints determined mainly from the performance analysis of the cleaning units themselves and from the likely synergic effects of contaminants on the correct working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues to be addressed in designing the paths was the removal of tars from the gas stream, to prevent filter plugging and/or clogging of the line pipes. For this purpose, a catalytic tar cracking unit was envisaged as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, and a consequently significant air consumption for this operation, were calculated in all cases.
Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which seem unavoidable in gas cleaning line design, high-temperature gas cleaning lines could not be achieved for the two larger plant sizes. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial-oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the strong increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of a set of operational parameters, among which total pressure drop, total energy loss, number of units and secondary materials consumption. On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable.
Finally, as an estimate of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter obtained on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
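The comparison of alternative cleaning-line paths on operational parameters can be sketched as a simple weighted scoring exercise; the path names, parameter values and weights below are all invented placeholders, not thesis data:

```python
# Hypothetical sketch: rank candidate gas cleaning line paths by a
# weighted combination of operational parameters (lower is better).
# Names, values and weights are invented placeholders.
paths = {
    "dry line A": {"dp_mbar": 45, "energy_kw": 3.2, "units": 5, "water_kg_h": 0.0},
    "dry line B": {"dp_mbar": 60, "energy_kw": 2.8, "units": 6, "water_kg_h": 0.0},
    "wet scrubber": {"dp_mbar": 38, "energy_kw": 4.1, "units": 4, "water_kg_h": 25.0},
}
weights = {"dp_mbar": 0.02, "energy_kw": 0.5, "units": 0.3, "water_kg_h": 0.1}

def score(params):
    """Lower score = preferable path under these (arbitrary) weights."""
    return sum(weights[k] * v for k, v in params.items())

ranked = sorted(paths, key=lambda name: score(paths[name]))
for name in ranked:
    print(f"{name}: score = {score(paths[name]):.2f}")
```

With these illustrative numbers the water-consuming wet-scrubber path ranks last, mirroring the qualitative conclusion above that dry methods were preferable.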


This thesis introduces new processing techniques for the computer-aided interpretation of ultrasound images, with the purpose of supporting medical diagnosis. In terms of practical application, the goal of this work is the improvement of current prostate biopsy protocols by providing physicians with a visual map, overlaid on ultrasound images, marking regions potentially affected by disease. As far as analysis techniques are concerned, the main contribution of this work to the state of the art is the introduction of deconvolution as a pre-processing step in the standard ultrasonic tissue characterization procedure, to improve the diagnostic significance of ultrasonic features. This thesis also includes some innovations in ultrasound modeling, in particular the employment of a continuous-time autoregressive moving-average (CARMA) model for ultrasound signals, a new maximum-likelihood CARMA estimator based on exponential splines, and the definition of CARMA parameters as new ultrasonic features able to capture scatterer concentration. Finally, concerning the clinical usefulness of the developed techniques, the main contribution of this research is showing, through a study based on medical ground truth, that a reduction in the number of sampled cores in standard prostate biopsy is possible while preserving the diagnostic power of the current clinical protocol.


In this thesis two major topics inherent in medical ultrasound imaging are addressed: deconvolution and segmentation. For the first, a deconvolution algorithm is described that allows statistically consistent maximum a posteriori estimates of the tissue reflectivity to be restored. These estimates are shown to provide a reliable source of information for achieving an accurate characterization of biological tissues through the ultrasound echo. The second topic involves the definition of a semi-automatic algorithm for myocardium segmentation in 2D echocardiographic images. The results show that the proposed method can reduce inter- and intra-observer variability in myocardial contour delineation and is feasible and accurate even on clinical data.


3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and thus avoiding the soft-tissue artefacts that limit the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is still required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, slowed down their translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated through preliminary in-silico studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, (f) user errors. The effect of each criticality was quantified and verified with a preliminary in-vivo study on the elbow joint. The dominant source of error was identified as the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process.
To solve this problem, two different approaches were followed. To enlarge the convergence basin around the optimal pose, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated in a series of in-silico studies and validated in-vitro through a phantom-based comparison with a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies, while mono-planar analysis may be sufficient for clinical applications in which analysis time and cost are an issue. A further reduction of user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed, which halved the analysis time by delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semi-automatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied in a first in-vivo methodological study on foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
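The idea of replacing a user-supplied starting pose with an unsupervised global search can be illustrated with SciPy's differential evolution on a toy 6-DOF registration cost. This generic stand-in, with an invented cost function and known optimum, is not the memetic algorithm of the thesis:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for global pose estimation: find the 6-DOF pose
# (3 translations, 3 rotations) minimising a synthetic registration
# cost, without any user-provided starting pose.
true_pose = np.array([2.0, -1.5, 8.0, 3.0, -2.0, 1.0])

def cost(pose):
    """Synthetic cost: quadratic bowl with cosine ripples in curvature."""
    d = pose - true_pose
    return np.sum(d**2) + 2.0 * np.sum(1 - np.cos(d))

# Search bounds stand in for the convergence domain: translations (mm)
# and rotations (deg). No initial guess is supplied by the "user".
bounds = [(-20, 20)] * 3 + [(-15, 15)] * 3
result = differential_evolution(cost, bounds, seed=0, tol=1e-8)
print("estimated pose:", np.round(result.x, 3))
```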


The Gaia space mission is a major project for the European astronomical community. As challenging as it is, the processing and analysis of the huge data flow incoming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Because of both the different observing sites involved and the huge number of frames expected (≃100,000), it is essential to maintain maximum homogeneity in data quality, acquisition and treatment, and particular care has to be taken to test the capabilities of each telescope/instrument combination (through the “instrument familiarization plan”) and to devise methods to keep under control, and where possible correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few per cent with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves.
In this context I defined and tested a semi-automated pipeline which allows the pre-reduction of SPSS imaging data and the production of aperture-photometry catalogues ready to be used for further analysis. A series of semi-automated quality-control criteria are included in the pipeline at various levels, from pre-reduction, to aperture photometry, to light-curve production and analysis.
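One common quality-control step in light-curve production, outlier rejection by iterative sigma clipping, can be sketched with NumPy. The magnitudes below are synthetic and the clipping thresholds are arbitrary choices, not the pipeline's actual criteria:

```python
import numpy as np

# Iterative sigma clipping of a light curve: flag points deviating more
# than k standard deviations from the median of the surviving points.
def sigma_clip(mags, k=3.0, iters=5):
    good = np.ones(mags.size, dtype=bool)
    for _ in range(iters):
        med, std = np.median(mags[good]), np.std(mags[good])
        new_good = np.abs(mags - med) < k * std
        if np.array_equal(new_good, good):
            break   # converged: no point changed status
        good = new_good
    return good

rng = np.random.default_rng(2)
mags = 14.2 + rng.normal(0, 0.01, 200)    # synthetic stable standard star
mags[[17, 60, 140]] += [0.3, -0.4, 0.5]   # simulated cosmic rays / bad frames

good = sigma_clip(mags)
print(f"kept {good.sum()} of {mags.size} points")
```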


Phylogeography is a recent field of biological research that links phylogenetics to biogeography by deciphering the imprint that evolutionary history has left on the genetic structure of extant populations. During the cold phases of the successive ice ages, which have drastically shaped species’ distributions since the Pliocene, populations of numerous species were isolated in refugia, where many of them evolved into distinct genetic lineages. My dissertation deals with the phylogeography of the Woodland Ringlet (Erebia medusa [Denis and Schiffermüller] 1775) in Central and Eastern Europe. This Palaearctic butterfly species is currently distributed from central France and south-eastern Belgium over large parts of Central Europe and southern Siberia to the Pacific. It is absent from those parts of Europe with Mediterranean, oceanic and boreal climates. It was assumed to be a Siberian faunal element with a rather homogeneous population structure in Central Europe, owing to its postglacial expansion out of a single eastern refugium. An existing evolutionary scenario for the Woodland Ringlet in Central and Eastern Europe is based on nuclear data (allozymes). To test whether this is corroborated by organelle evolutionary history, I sequenced two mitochondrial markers (part of cytochrome oxidase subunit I and the control region) for populations sampled over the same area. Phylogeography largely relies on the construction of networks of uniparentally inherited haplotypes, which are compared with the geographic haplotype distribution using recently developed methods such as nested clade phylogeographic analysis (NCPA). Several ring-shaped ambiguities (loops) emerged from both haplotype networks in E. medusa. They can be attributed to recombination and homoplasy. Such loops usually prevent the straightforward extraction of the phylogeographic signal contained in a gene tree.
I developed several new approaches to extract phylogeographic information in the presence of loops, considering either homoplasy or recombination. This allowed me to deduce a consistent evolutionary history for the species from the mitochondrial data, and it also lends plausibility to the occurrence of recombination in E. medusa mitochondria. Despite the fact that the control region is assumed to lack resolving power in other species, I found considerable genetic variation in this marker in E. medusa, which makes it a useful tool for phylogeographic studies. In combination with the allozyme data, the mitochondrial genome supports the following phylogeographic scenario for E. medusa in Europe: (i) a first vicariance, due to the onset of the Würm glaciation, led to the formation of several major lineages and is mirrored in the NCPA by restricted gene flow; (ii) later on, further vicariance led to the formation of two sub-lineages in the Western lineage and two sub-lineages in the Eastern lineage during the Last Glacial Maximum or Older Dryas; additionally, the NCPA supports a restriction of gene flow with isolation by distance; (iii) finally, vicariance resulted in two secondary sub-lineages in the area of Germany and, possibly, two other secondary sub-lineages in the Czech Republic. The last postglacial warming was accompanied by strong range expansions in most of the genetic lineages. The scenario expected for a presumably Siberian faunal element such as E. medusa is a continuous loss of genetic diversity during postglacial westward expansion. Hence, the pattern found in this thesis contradicts a typical Siberian origin of E. medusa. In contrast, it corroborates the importance of multiple extra-Mediterranean refugia for the European fauna, as has recently been assumed for other continental species.


An extensive sample (2%) of private vehicles in Italy is equipped with a GPS device that periodically measures their position and dynamical state for insurance purposes. Access to this type of data makes it possible to develop theoretical and practical applications of great interest: the real-time reconstruction of the traffic state in a given region, the development of accurate models of vehicle dynamics, and the study of the cognitive dynamics of drivers. For these applications to be possible, we first need the ability to reconstruct the paths taken by vehicles on the road network from the raw GPS data. In fact, these data are affected by positioning errors and are often widely spaced (~2 km apart). For these reasons, the task of path identification is not straightforward. This thesis describes the approach we followed to reliably identify vehicle paths from this kind of low-sampling-rate data. The problem of matching data points to roads is solved with a Bayesian maximum-likelihood approach, while the identification of the path taken between two consecutive GPS measurements is performed with a purpose-built optimal routing algorithm based on the A* algorithm. The procedure was applied to an off-line urban data sample and proved to be robust and accurate. Future developments will extend the procedure to real-time execution and nation-wide coverage.
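The routing step between consecutive GPS fixes relies on A*. A minimal textbook implementation on a toy road graph follows; the graph, node coordinates and edge lengths are invented for illustration:

```python
import heapq
import math

# Minimal A* shortest-path search on a toy road graph. Node coordinates
# feed the straight-line heuristic; all values are invented.
coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1), "E": (2, 2)}
graph = {  # node -> list of (neighbour, edge length)
    "A": [("B", 1.0), ("C", 1.6)],
    "B": [("C", 1.0), ("D", 1.5)],
    "C": [("D", 1.0)],
    "D": [("E", 1.0)],
    "E": [],
}

def astar(start, goal):
    def h(n):  # admissible heuristic: straight-line distance to the goal
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x1 - x2, y1 - y2)

    frontier = [(h(start), 0.0, start, [start])]   # (f, g, node, path)
    best = {}                                      # cheapest g seen per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if best.get(node, float("inf")) <= g:
            continue
        best[node] = g
        for nxt, w in graph[node]:
            heapq.heappush(frontier, (g + w + h(nxt), g + w, nxt, path + [nxt]))
    return None, float("inf")

path, length = astar("A", "E")
print(path, length)   # shortest A->E route and its length
```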


In the present work we perform an econometric analysis of the Tribal art market. To this aim, we use a unique and original database that includes information on Tribal art auctions worldwide from 1998 to 2011. In the literature, art prices are modelled through the hedonic regression model, a classic fixed-effects model. The main drawback of the hedonic approach is the large number of parameters, since, in general, art data include many categorical variables. In this work, we propose a multilevel model for the analysis of Tribal art prices that takes into account the influence of time on artwork prices. In fact, it is natural to assume that time exerts an influence over the price dynamics in various ways. Nevertheless, since the set of objects changes at every auction date, we do not have repeated measurements of the same items over time. Hence, the dataset does not constitute a proper panel; rather, it has a two-level structure in which items, the level-1 units, are grouped in time points, the level-2 units. The main theoretical contribution is the extension of classical multilevel models to cope with the case described above. In particular, we introduce a model with time-dependent random effects at the second level. We propose a novel specification of the model, derive the maximum likelihood estimators and implement them through the EM algorithm. We test the finite-sample properties of the estimators and the validity of our own R code by means of a simulation study. Finally, we show that the new model considerably improves the fit of the Tribal art data with respect to both the hedonic regression model and the classic multilevel model.
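The hedonic baseline mentioned above regresses (log) price on item characteristics through dummy variables, which is where the proliferation of parameters comes from. A generic OLS sketch with one synthetic categorical characteristic (categories, coefficients and data are invented placeholders, not the Tribal art database):

```python
import numpy as np

# Generic hedonic-regression sketch: log price regressed on a dummy-coded
# categorical characteristic via ordinary least squares.
rng = np.random.default_rng(3)
n = 300
material = rng.integers(0, 3, n)          # 3 hypothetical categories
true_effect = np.array([0.0, 0.5, 1.2])   # invented log-price premia

log_price = 6.0 + true_effect[material] + rng.normal(0, 0.2, n)

# Design matrix: intercept + dummies for categories 1 and 2 (0 = baseline).
X = np.column_stack([np.ones(n), material == 1, material == 2]).astype(float)
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

print("estimated [intercept, premium_1, premium_2]:", np.round(beta, 2))
```

With many categorical variables, each adds one dummy column per non-baseline level, which is exactly the parameter-count drawback the multilevel approach is meant to mitigate.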

Relevância:

30.00%

Publicador:

Resumo:

The characteristics of aphasic speech in various languages have been the core of numerous studies, but Arabic in general, and Palestinian Arabic in particular, is still a virgin field in this respect. However, it is of vital importance to have a clear picture of the specific aspects of Palestinian Arabic that might be affected in the speech of aphasics, in order to establish screening, diagnosis and therapy programs based on a clinical linguistic database. Hence, the central questions of this study are: what are the main neurolinguistic features of Palestinian aphasics’ speech at the phonetic-acoustic level, and to what extent do the results resemble those obtained from other languages? In general, this study is a survey of the most prominent features of Palestinian Broca’s aphasics’ speech. The main acoustic parameters of vowels and consonants are analysed, such as vowel duration, formant frequency, Voice Onset Time (VOT), intensity and frication duration. The deviant patterns among the Broca’s aphasics are displayed and compared with those of normal speakers. The nature of the deficit, whether phonetic or phonological, is also discussed. Moreover, the coarticulatory characteristics and some prosodic patterns of Broca’s aphasics are addressed. Samples were collected from six Broca’s aphasics from the same local region. The acoustic analysis, conducted on a range of consonant and vowel parameters, revealed differences between the speech patterns of Broca’s aphasics and normal speakers. For example, impairments in the voicing contrast between voiced and voiceless stops were found in Broca’s aphasics. This impairment was not found for the fricatives produced by the Palestinian Broca’s aphasics, a finding that deviates from data obtained for aphasic speech in other languages. The Palestinian Broca’s aphasics displayed particular problems with the emphatic sounds. They also exhibited deviant coarticulation patterns, another feature inconsistent with data obtained from studies of other languages. However, several other findings are in accordance with those reported for various other languages, such as impairments in VOT. The results support the suggestion that speech production deficits in Broca’s aphasics are related not to phoneme selection but to articulatory implementation, and that some speech output impairments are related to timing and planning deficits.

Relevância:

30.00%

Publicador:

Resumo:

Targeted control and execution of organic solid-state reactions is enabled, among other things, by precise knowledge of packing effects. In this work, the combined use of single-crystal X-ray analysis and high-resolution solid-state NMR on selected examples provided a deeper understanding of, and insight into, the reaction mechanisms of organic solid-state reactions at the molecular level. In the topotactic [2+2] photodimerization of cinnamic acid, intermediates were isolated and structurally characterized. In particular, static deuteron and 13C-CPMAS NMR spectra unambiguously revealed dynamic hydrogen bonds that transiently break the centrosymmetry of the reaction product. Further evidence was subsequently obtained by high-temperature X-ray analysis, resolving the apparent contradiction between the NMR and X-ray results. Esterification of the cinnamic acid removes these hydrogen bonds and thus preserves the centrosymmetry of the photodimer. Furthermore, approaches to structure control in solids based on molecular recognition of the hydroxyl-pyridine (OH-N) heterosynthon in co-crystals are described; in particular, the stability of the synthon was established even in the presence of functional groups capable of competitive hydrogen bonding. By extending this approach, the molecular specificity of the hydroxyl-pyridine (OH-N) heterosynthon under simultaneous co-crystallization with several components was successfully demonstrated. Using the co-crystallization of trans-1,2-bis(4-pyridyl)ethylene (bpe) with resorcinol (res) in the presence of trans-1,2-bis(4-pyridyl)ethane (bpet) as an example, intermediates of the solid-state reactions and novel polymorphs were isolated, and the reaction pathway was fully elucidated by X-ray analysis. It was shown that the resorcinol template can be removed from the target compounds. In addition, a rare, non-ideal single-crystal-to-single-crystal rearrangement of trans-1,2-bis(4-pyridyl)ethylene (bpe) with resorcinol (res) was accomplished. In all cases, the questions concerning the structure and dynamics of the investigated compounds could be answered unambiguously and comprehensively only through the combined use of X-ray analysis and NMR spectroscopy at comparable temperatures.

Relevância:

30.00%

Publicador:

Resumo:

Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, there are still difficulties in accurately quantifying perfusion parameters due, for example, to the algorithms employed, the mathematical model, the patient’s weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented in order to investigate the causes of variability of this technique. First, analyses were performed to assess the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values. Results were obtained by implementing algorithms based on the “maximum slope method” and on a dual-input one-compartment model. Statistical analysis of simulated data demonstrated that the two methods are not interchangeable; however, the slope method is always applicable in a clinical context. The variability related to TAC processing in the application of the slope method was then analyzed. Comparison with manual selection made it possible to identify the best automatic algorithm for computing BFa. The consistency of a Standardized Perfusion Value (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of the perfusion maps was analyzed. The ROI approach and the map approach provide related values of BFa, which means that the pixel-by-pixel algorithm gives reliable quantitative results; also in the pixel-by-pixel approach the slope method gives better results. In conclusion, the development of new automatic algorithms for a consistent computation of BFa, together with the definition of a simplified technique to compute the SPV parameter, represents an improvement in the field of liver perfusion CT analysis.
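The maximum slope method mentioned above estimates perfusion as the peak rate of tissue enhancement divided by the peak arterial enhancement. A minimal sketch on synthetic curves (all curve shapes and values are illustrative assumptions, not the thesis's data or code):

```python
import numpy as np

def bfa_max_slope(t, tac_tissue, tac_artery):
    """Arterial blood flow by the maximum slope method:
    BFa = max d/dt[tissue TAC] / peak of the arterial TAC."""
    slope = np.gradient(tac_tissue, t)       # enhancement rate, HU/s
    return slope.max() / tac_artery.max()    # (HU/s) / HU = 1/s

# Synthetic example: tissue enhancing at 4 HU/s, arterial peak of 200 HU.
t = np.linspace(0.0, 30.0, 301)                     # seconds
tissue = np.clip(4.0 * (t - 5.0), 0.0, 60.0)        # ramp starting at 5 s
artery = 200.0 * np.exp(-((t - 10.0) ** 2) / 8.0)   # bolus-shaped arterial curve
bfa = bfa_max_slope(t, tissue, artery)              # approx. 4 / 200 = 0.02 1/s
```

In clinical practice the result is rescaled to mL/min per 100 mL of tissue, and the tissue slope is taken before venous outflow begins, which is the condition under which the maximum slope model is valid.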

Relevância:

30.00%

Publicador:

Resumo:

The quest for universal memory is driving the rapid development of memories with superior all-round capabilities: non-volatility, high speed, high endurance and low power. The memory subsystem accounts for a significant share of the cost and power budget of a computer system, and current DRAM-based main memory systems are starting to hit their power and cost limits. To resolve this issue, the industry is improving existing technologies such as Flash and exploring new ones. Among these new technologies is Phase Change Memory (PCM), which overcomes some of the shortcomings of Flash, such as durability and scalability. This alternative non-volatile memory technology, which uses the resistance contrast in phase-change materials, offers higher density than DRAM and can help to increase the main memory capacity of future systems while remaining within cost and power constraints. Chalcogenide materials can suitably be exploited for manufacturing phase-change memory devices. Charge transport in the amorphous chalcogenide GST used for memory devices is modeled using two contributions: hopping of trapped electrons and motion of band electrons in extended states. Crystalline GST exhibits an almost Ohmic I(V) curve. In contrast, amorphous GST shows a high resistance at low biases, while above a threshold voltage a transition takes place from a highly resistive to a conductive state, characterized by a negative differential-resistance behavior. A clear and complete understanding of the threshold behavior of the amorphous phase is fundamental for exploiting such materials in the fabrication of innovative non-volatile memories. The type of feedback that produces the snapback phenomenon is described as a filamentation in energy, controlled by electron–electron interactions between trapped electrons and band electrons. The model thus derived is implemented within a state-of-the-art simulator. An analytical version of the model is also derived; it is useful for discussing the snapback behavior and the scaling properties of the device.
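The resistance contrast between the two phases can be illustrated numerically. This is not the thesis's transport model (which couples trapped and band electrons); it is only a common textbook-style approximation, with made-up parameter values, of the two branches described above: a trap-limited, exponentially rising subthreshold current for amorphous GST versus a nearly Ohmic crystalline branch.

```python
import numpy as np

# Illustrative only: hypothetical fit parameters, not measured GST values.
I0, V0 = 1e-9, 0.1     # A, V -- subthreshold conduction parameters (assumed)
R_cryst = 1e3          # ohm -- crystalline resistance (assumed)

V = np.linspace(0.0, 1.0, 101)
I_amorphous = I0 * np.sinh(V / V0)   # high resistance at low bias, steep rise
I_crystal = V / R_cryst              # almost Ohmic branch

# Contrast at low bias (V = 0.1 V): the amorphous branch is orders of
# magnitude more resistive -- the property PCM exploits to store a bit.
ratio = I_crystal[10] / I_amorphous[10]
```

The snapback itself, i.e. the switch of the amorphous branch to a conductive state with negative differential resistance above the threshold voltage, requires the feedback mechanism the thesis models and is not captured by this simple monotonic expression.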