9 results for Denominator neglect
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
As long as the incidence of stroke continues to grow, patients with large right hemisphere lesions suffering from hemispatial neglect will require neuropsychological evaluation and rehabilitation. The inability to process information, especially that coming from the left side, accompanied by the magnetic orientation to the ipsilesional side, represents a real challenge for rehabilitation. This dissertation is concerned with crucial aspects of the clinical neuropsychological practice of hemispatial neglect. In studying the convergence of the visual and behavioural test batteries in the assessment of neglect, nine of the seventeen patients who completed both the conventional subtests of the Behavioural Inattention Test and the Catherine Bergego Scale assessments showed a similar severity of neglect and thus good convergence between the two tests. However, patients with neglect and hemianopia had poorer scores in the line bisection test and displayed stronger neglect in behaviour than patients with pure neglect. The second study examined whether arm activation, modified from Constraint Induced Movement Therapy, could be applied as neglect rehabilitation alone, without any visual training. Twelve acute or subacute patients were randomized into two rehabilitation groups: arm activation training or traditional voluntary visual scanning training. Neglect was ameliorated significantly or almost significantly in both training groups, with the effect maintained for at least six months. In studying the reflections of hemispatial neglect on visual memory, the associations between the severity of neglect and visual memory performance were explored. The performances of acute and subacute patients with hemispatial neglect were compared with those of matched healthy control subjects. As hypothesized, encoding from the left side and immediate recall of visual material were significantly compromised in patients with neglect. Another mechanism by which neglect affects visual memory processes is observed in delayed visual reproduction. Delayed recall demands that the individual make a match aided by a cue, or it requires a search for relevant material in long-term memory storage. In the case of representational neglect, the search may succeed, but the left side of the recollected memory still fails to open. Visual and auditory evoked potentials were measured in 21 patients with hemispatial neglect. Stimuli coming from the left or right were processed differently in both sensory modalities in acute and subacute patients as compared with chronic patients. The differences equalized during the course of recovery. Recovery from hemispatial neglect was strongly associated with early rehabilitation and with the severity of neglect. Extinction was common in patients with neglect and did not ameliorate with the recovery of neglect. The presence of the pusher symptom hampered amelioration of visual neglect in acute and subacute stroke patients, whereas depression did not have any significant effect in the early phases after the stroke. However, depression had an unfavourable effect on recovery in the chronic phase. In conclusion, the combination of neglect and hemianopia may explain part of the residual behavioural neglect that is no longer evident in visual testing. Further research is needed in order to determine which specific rehabilitation procedures would be most beneficial for patients suffering from the combination of neglect and hemianopia.
Arm activation should be included in the rehabilitation programs of neglect; this is a useful technique for patients who need bedside treatment in the acute phase. With respect to the deficit in visual memory associated with neglect, the possible mechanisms of the lateralized deficit in delayed recall need to be further examined and clarified. Intensive treatment induced recovery in both severe and moderate visual neglect long after the first two to three months after the stroke.
Abstract:
Stable isotope fractionation analysis of contaminants is a promising method for assessing biodegradation of contaminants in natural systems. However, standard procedures for determining stable isotope fractionation factors have so far neglected the influence of pollutant bioavailability on stable isotope fractionation. On a microscale, bioavailability may vary due to the spatio-temporal variability of local contaminant concentrations, the limited effective diffusivities of the contaminants, and cell densities; thus, the pollutant supply might not meet the intrinsic degradation capacity of the microorganisms. The aim of this study was to demonstrate the effect of bioavailability on the apparent stable isotope fractionation, using a multiphase laboratory setup. The data show that the apparent isotope fractionation factors observed during biodegradation processes depend on the amount of biomass and/or the rate of toluene mass transfer from a second phase to the aqueous phase. They indicate that physico-chemical processes need to be taken into account when stable isotope fractionation analysis is used for the quantification of environmental contaminant degradation.
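For reference, in compound-specific isotope analysis the fractionation factor referred to above is commonly obtained from closed-system degradation data via the Rayleigh equation; the form below is a standard textbook expression, not taken from this abstract.

```latex
% Rayleigh equation: R_t/R_0 is the isotope ratio of the residual substrate
% relative to its initial value, f = C_t/C_0 the remaining fraction of the
% contaminant, alpha the fractionation factor, and
% epsilon = (alpha - 1) * 1000 the enrichment factor in per mil.
\frac{R_t}{R_0} = \left(\frac{C_t}{C_0}\right)^{\alpha - 1}
\qquad \Longleftrightarrow \qquad
\ln\frac{R_t}{R_0} = \frac{\varepsilon}{1000}\,\ln\frac{C_t}{C_0}
```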
Abstract:
During the last few years, the discussion on the marginal social costs of transportation has been active. Applying the externalities as a tool to control transport would fulfil the polluter pays principle and simultaneously create a fair control method between the transport modes. This report presents the results of two calculation algorithms developed to estimate the marginal social costs based on the externalities of air pollution. The first algorithm calculates future scenarios of sea transport traffic externalities in the Gulf of Finland until 2015. The second algorithm calculates the externalities of Russian passenger car transit traffic via Finland by taking into account both sea and road transport. The algorithm estimates the ship-originated emissions of carbon dioxide (CO2), nitrogen oxides (NOx), sulphur oxides (SOx) and particulates (PM), and the externalities for each year from 2007 to 2015. The total NOx emissions in the Gulf of Finland from the six ship types were almost 75.7 kilotons (Table 5.2) in 2007. The ship types are: passenger (including cruisers and ROPAX vessels), tanker, general cargo, Ro-Ro, container and bulk vessels. Due to the increase in traffic, the estimate for NOx emissions in 2015 is 112 kilotons. The NOx emission estimate for the whole of Baltic Sea shipping is 370 kilotons in 2006 (Stipa et al., 2007). The total marginal social costs due to ship-originated CO2, NOx, SOx and PM emissions in the Gulf of Finland were calculated at almost 175 million Euros in 2007. The costs will increase to nearly 214 million Euros in 2015 due to the traffic growth. The major part of the externalities is due to CO2 emissions. If we neglect the CO2 emissions by subtracting the CO2 externalities from the results, we get total externalities of 57 million Euros in 2007. After eight years (2015), the externalities would be 28 % lower, at 41 million Euros (Table 8.1). This is the result of the regulation reducing the sulphur content of marine fuels. The majority of the new car transit goes through Finland to Russia due to the lack of port capacity in Russia. The number of cars was 339 620 vehicles (Statistics of Finnish Customs 2008) in 2005. The externalities are calculated for the transportation of passenger vehicles as follows: by ship to a Finnish port and, after that, by truck to the Russian border checkpoint. The externalities are between 2 and 3 million Euros (year 2000 cost level) for each route. The ports included in the calculations are Hamina, Hanko, Kotka and Turku. With Euro 3 standard trucks, the port of Hanko would be the best choice for transporting the vehicles. This is because of the lower emissions of newer trucks and the shorter ship transport distance. If the trucks are more polluting Euro 1 level trucks, the port of Kotka would be the best choice. This indicates that truck emissions have a considerable effect on the externalities and that the transportation of light cargo, such as passenger cars, by ship produces considerably high emission externalities. The emission externalities approach offers a new insight into valuing the multiple traffic modes. However, the calculation of the marginal social costs based on the air emission externalities should not be regarded as a ready-made calculation system. The system is clearly in need of some improvement, but it can already be considered a potential tool for political decision making.
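As an illustration of the kind of calculation algorithm the report describes, the sketch below sums pollutant emissions multiplied by unit damage costs; the pollutant list follows the abstract, but all unit costs and emission tonnages in the example are placeholder assumptions, not figures from the report.

```python
# Illustrative sketch only: annual externality = sum over pollutants of
# (emitted tonnes * unit damage cost). The unit costs and the example
# emission tonnages are made-up placeholders, not values from the report.
UNIT_COST_EUR_PER_TONNE = {"CO2": 20.0, "NOx": 4_000.0, "SOx": 6_000.0, "PM": 15_000.0}

def externalities_meur(emissions_tonnes, include_co2=True):
    """Annual emission externalities in million euros."""
    total = 0.0
    for pollutant, tonnes in emissions_tonnes.items():
        if pollutant == "CO2" and not include_co2:
            continue  # mirrors the report's comparison with CO2 excluded
        total += tonnes * UNIT_COST_EUR_PER_TONNE[pollutant]
    return total / 1e6

example_year = {"CO2": 1.0e6, "NOx": 50_000.0, "SOx": 10_000.0, "PM": 1_000.0}
print(externalities_meur(example_year, include_co2=False))
# The year-to-year comparison in the report is the relative change of such
# totals, e.g. (41 - 57) / 57 is roughly -28 % for the non-CO2 externalities.
```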
Abstract:
Currently, numerous high-throughput technologies are available for the study of human carcinomas. In the literature, many variations of these techniques have been described. The common denominator of these methodologies is the large amount of data obtained in a single experiment, in a short time period, and at a fairly low cost. However, several problems and limitations have also been described for these methods. The purpose of this study was to test the applicability of two selected high-throughput methods, cDNA and tissue microarrays (TMA), in cancer research. Two common human malignancies, breast and colorectal cancer, were used as examples. This thesis aims to present some practical considerations that need to be addressed when applying these techniques. cDNA microarrays were applied to screen aberrant gene expression in breast and colon cancers. Immunohistochemistry was used to validate the results and to evaluate the association of selected novel tumour markers with the outcome of the patients. The type of histological material used in immunohistochemistry was evaluated, especially considering the applicability of whole tissue sections and different types of TMAs. Special attention was paid to the methodological details of the cDNA microarray and TMA experiments. In conclusion, many potential tumour markers were identified in the cDNA microarray analyses. Immunohistochemistry could be applied to validate the observed gene expression changes of selected markers and to associate their expression changes with patient outcome. In the current experiments, both TMAs and whole tissue sections could be used for this purpose. This study showed for the first time that securin and p120 catenin protein expression predict breast cancer outcome and that the immunopositivity of carbonic anhydrase IX is associated with the outcome of rectal cancer. The predictive value of these proteins was statistically evident also in multivariate analyses, with up to a 13.1-fold risk of cancer-specific death in a specific subgroup of patients.
Abstract:
The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators, etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines. Efficient dynamic simulation is the basic requirement for real-time simulation. In the real-time simulation of fluid power circuits there exist numerical problems due to the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems cause noise in the results, which in many cases leads the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. Numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for solving stiff systems. The second is to decrease the model stiffness itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions of the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits using explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied. These are the pressure drop approaching zero in the turbulent orifice model and the volume approaching zero in the equation of pressure build-up. These are the critical areas for which alternative methods for modelling and numerical simulation are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent area. The flow becomes laminar as the pressure drop over the orifice approaches zero only in rare situations, e.g. when a valve is closed, when an actuator is driven against an end stopper, or when an external force makes the actuator switch its direction during operation. This means that, in terms of accuracy, a description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur when the pressure drop comes close to zero, since the first derivative of the flow with respect to the pressure drop approaches infinity as the pressure drop approaches zero. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitely small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed, using a cubic spline function to describe the flow in the laminar and transition areas. The parameters of the cubic spline function are selected such that its first derivative is equal to the first derivative of the purely turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits, a trade-off exists between accuracy and calculation speed; this trade-off is investigated for the two-regime orifice flow model, of which a hedged sketch is given below.
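A minimal sketch of such a two-regime orifice model, assuming an odd cubic through the origin below the transition pressure; the discharge coefficient, orifice area, fluid density, and transition pressure are illustrative assumptions, and the exact parameterization used in the thesis may differ.

```python
import math

def orifice_flow(dp, Cq=0.6, A=1e-5, rho=870.0, dp_tr=2e5):
    """Two-regime orifice model: turbulent square-root law for |dp| >= dp_tr,
    odd cubic through the origin below it, matching the value and first
    derivative of the turbulent branch at the transition pressure dp_tr.
    All parameter values here are illustrative assumptions."""
    K = Cq * A * math.sqrt(2.0 / rho)          # turbulent flow gain
    if abs(dp) >= dp_tr:
        return math.copysign(K * math.sqrt(abs(dp)), dp)
    # Cubic s(dp) = a*dp + c*dp**3 with s(dp_tr) = K*sqrt(dp_tr) and
    # s'(dp_tr) = K / (2*sqrt(dp_tr)); the slope at dp = 0 stays finite.
    a = 5.0 * K / (4.0 * math.sqrt(dp_tr))
    c = -K / (4.0 * dp_tr ** 2.5)
    return a * dp + c * dp ** 3
```

A usage example would simply evaluate orifice_flow over a range of pressure drops crossing zero; unlike the purely turbulent law, the derivative remains bounded at dp = 0.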
Especially inside many types of valves, as well as between them, there exist very small volumes. The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation. Particularly in real-time simulation, these numerical problems are a great weakness. The system stiffness approaches infinity as the fluid volume approaches zero. If fixed-step explicit algorithms for solving ordinary differential equations (ODEs) are used, system stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure created in a separate cascade loop by numerical integration; a minimal sketch of this idea is given after this paragraph. The hydraulic capacitance V/Be of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by the small volumes are completely avoided. In addition, the method is freely applicable regardless of the integration routine used. The advantage of both of the above-mentioned methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled. In this modelling method, most of the needed component information can be taken from the manufacturer’s nominal graphs. This thesis introduces the methods and shows several numerical examples to demonstrate how the proposed methods improve the dynamic simulation of various hydraulic circuits.
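A minimal sketch of the cascade-loop idea for a small volume, assuming a simple relaxation towards the pressure at which the net flow into the node vanishes; the gain, iteration count, and the linearised conductances in the usage example are placeholders, not values from the thesis.

```python
def pseudo_dynamic_pressure(net_flow, p_init=0.0, gain=2.5e10, n_iter=40):
    """Steady-state pressure of a very small volume, solved in an inner
    cascade loop instead of integrating dp/dt = (Be/V) * Q_net, which is
    stiff when V approaches zero. The loop relaxes the pressure towards the
    point where the net flow into the node is zero; gain must be small
    enough (gain * total conductance < 2) for the loop to converge.
    All numerical values here are illustrative assumptions."""
    p = p_init
    for _ in range(n_iter):
        p += gain * net_flow(p)
    return p

# Usage example: a tiny node between two linearised restrictions, connected
# to a 10 MPa supply and to the tank (0 Pa). The steady-state node pressure
# is 5 MPa by symmetry.
g_in = g_out = 1.0e-11                          # conductances [m^3/(s*Pa)]
q_net = lambda p: g_in * (10.0e6 - p) - g_out * (p - 0.0)
p_node = pseudo_dynamic_pressure(q_net)
print(f"node pressure: {p_node / 1e6:.2f} MPa")
```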
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Greenhouse gases emitted from energy production and transportation are dramatically changing the climate of planet Earth. As a consequence, global warming is affecting the living conditions of numerous plant and animal species, including our own. The development of sustainable and renewable liquid fuels is therefore an essential global challenge in combating climate change. In the past decades many technologies have been developed as alternatives to the currently used petroleum fuels, such as bioethanol and biodiesel. However, even with gradually increasing production, the market penetration of these first-generation biofuels is still relatively small compared to fossil fuels. Researchers realized long ago that there is a need for advanced biofuels with improved physical and chemical properties compared to bioethanol, and with biomass raw materials that do not compete with food production. Several target molecules have been identified as potential fuel candidates, such as alkanes, fatty acids, long carbon-chain alcohols and isoprenoids. The current study focuses on the biosynthesis of butanol and propane as possible biofuels. The scope of this research was to investigate novel heterologous metabolic pathways and to identify bottlenecks for alcohol and alkane generation using Escherichia coli as a model host microorganism. The first theme of the work studied the pathways generating butyraldehyde, the common denominator of butanol and propane biosynthesis. Two ways of generating butyraldehyde were described, one via the bacterial fatty acid elongation machinery and the other via partial overexpression of the acetone-butanol-ethanol fermentation pathway found in Clostridium acetobutylicum. The second theme of the experimental work studied the reduction of butyraldehyde to butanol catalysed by various bacterial aldehyde-reductase enzymes, whereas the final part of the work investigated the in vivo kinetics of the cyanobacterial aldehyde deformylating oxygenase (ADO) for the generation of hydrocarbons. The results showed that the novel butanol pathway, based on fatty acid biosynthesis and consisting of an acyl-ACP thioesterase and a carboxylic acid reductase, is tolerant to oxygen, making it an efficient alternative to the previous Clostridial pathways. It was also shown that butanol can be produced from acetyl-CoA using acetoacetyl-CoA synthase (NphT7) or acetyl-CoA acetyltransferase (AtoB) enzymes. The study also demonstrated, for the first time, that bacterial biosynthesis of propane is possible. The efficiency of the system is clearly limited by the poor kinetic properties of the ADO enzyme, and for proper function in vivo, the catalytic machinery requires a coupled electron relay system.
Abstract:
Jani E. Kettunen's doctoral dissertation Visual neglect and orienting bias in right hemisphere stroke patients with and without thrombolysis (University of Tampere, 2013).
Abstract:
Our surrounding landscape is in a constantly dynamic state, but recently the rate of changes and their effects on the environment have increased considerably. In terms of the impact on nature, this development has not been entirely positive; rather, it has caused a decline in valuable species, habitats, and general biodiversity. Despite recognition of the problem and its high importance, plans and actions for how to stop this detrimental development are largely lacking. This partly originates from a lack of genuine will, but is also due to difficulties in detecting many valuable landscape components and their consequent neglect. To support knowledge extraction, various digital environmental data sources may be of substantial help, but only if all the relevant background factors are known and the data are processed in a suitable way. This dissertation concentrates on detecting ecologically valuable landscape components by using geospatial data sources, and applies this knowledge to support spatial planning and management activities. In other words, the focus is on observing regionally valuable species, habitats, and biotopes with GIS and remote sensing data, using suitable methods for their analysis. Primary emphasis is given to the hemiboreal vegetation zone and the drastic decline in its semi-natural grasslands, which were created by a long trajectory of traditional grazing and management activities. However, the applied perspective is largely methodological and allows for the application of the obtained results in various contexts. Models based on statistical dependencies and correlations of multiple variables, which are able to extract desired properties from a large mass of initial data, are emphasised in the dissertation. In addition, the included papers combine several data sets from different sources and dates, with the aim of detecting a wider range of environmental characteristics as well as pointing out their temporal dynamics. The results of the dissertation emphasise the multidimensionality and dynamics of landscapes, which need to be understood in order to recognise their ecologically valuable components. This requires not only knowledge about the emergence of these components and an understanding of the data used, but also a focus on the minute details that are able to indicate the existence of fragmented and partly overlapping landscape targets. It also points to the fact that most of the existing classifications are, as such, too generalised to provide all the required details, although they can be utilised at various steps along a longer processing chain. The dissertation also emphasises the importance of landscape history as a factor that both creates and preserves ecological values, and which sets an essential standpoint for understanding the present landscape characteristics. The obtained results are significant both in terms of preserving semi-natural grasslands and in terms of general methodological development, giving support to a science-based framework for evaluating ecological values and guiding spatial planning.