42 results for Primary Drivers

in Helda - Digital Repository of the University of Helsinki


Relevance:

20.00%

Publisher:

Abstract:

Based on a one-year ethnographic study of a primary school in Finland with specialised classes in Finnish and English (referred to as bilingual classes by research participants), this research traces patterns of how nationed, raced, classed and gendered differences are produced and gain meaning in school. I examine several aspects of these differences: the ways the teachers and parents make sense of school and of school choice; the repertoires of self put forward by teachers, parents and pupils of the bilingual classes; and the institutional and classroom practices in Sunny Lane School (pseudonym). My purpose is to examine how the construction of differentness is related to the policy of school choice. I approach this question from a knowledge problematic, and explore connections and disjunctions between the interpretations of teachers and those of parents, as well as between what teachers and parents expressed or said and the practices they engaged in. My data consists of fieldnotes generated through a one-year period of ethnographic study in Sunny Lane School, and of ethnographic interviews with teachers and parents, primarily of the bilingual classes. This data focuses on the initial stages of the bilingual classes, which included the application and testing processes for these classes, and on Grades 1–3. In my analysis, I pursue poststructural feminist theorisations on questions of knowledge, power and subjectivity, which foreground an understanding of the constitutive force of discourse and the performative, partial, and relational nature of knowledge. I begin by situating my ethnographic field in relation to wider developments, namely, the emergence of school choice and the rhetoric of curricular reform and language education in Finland.
I move on from there to ask how teachers discuss the introduction of these specialised classes, then trace pupils' paths to these classes, their parents' goals related to school choice, teachers' constructions of the pupils and parents of bilingual classes, and how these shape the ways in which school and classroom practices unfold. School choice, I argue, functioned as a spatial practice, defining who belongs in school and demarcating the position of teachers, parents and pupils in school. Notions of classed and ethnicised differences entered the ways teachers and parents made sense of school choice. Teachers idealised school in terms of social cohesiveness and constructed social cohesion as a task for school to perform. The hopes parents expressed were connected to ensuring their children's futurity, to their perceptions of the advantages of fluency in English, but also to the differences they believed to exist between the social milieus of different schools. Ideals such as open-mindedness and cosmopolitanism were also articulated by parents, and these ideals assumed different content for ethnic majority and minority parents. Teachers discussed the introduction of bilingual classes as a means to ensure the school's future, and emphasised bilingual classes as fitting into the rubric of Finnish comprehensive schooling which, they maintained, is committed to equality. Parents were expected to accommodate their views and adopt the position of the responsible, supportive parent that was suggested to them by teachers. Teachers assumed a posture of appreciating different cultures, while maintaining Finnishness as common ground in school. Discussion of pupils' knowledge and experience of other countries took place often in bilingual classes, and various cultural theme events were organised on occasion. In school, pupils are taught to identify themselves in terms of cultural belonging.
The rhetoric promoted by teachers was one of inclusiveness, which was also applied to the task of qualifying pupils for bilingual classes, that is, determining which pupils can belong. Bilingual classes were idealised by ethnic majority teachers and parents as taking a neutral, impartial posture toward difference, and the relationship of school choice to classed advantage, for example, was something teachers, as well as parents, preferred not to discuss. Pupils were addressed by teachers during lessons in ways that assumed self-responsibility and diligence, and they assumed the discursive category of good, competent pupils made available to them. While this allowed them to position themselves favourably in school, their participation in a bilingual class was marked by the pressure to succeed in school.

Relevance:

20.00%

Publisher:

Abstract:

In the future, the number of disabled drivers requiring a special evaluation of their driving ability will increase due to the ageing population, as well as the progress of adaptive technology. This places pressure on the development of the driving evaluation system. Despite quite intensive research, there is still no consensus concerning what a driver evaluation factually involves (methodology), which measures should be included in an evaluation (methods), and how an evaluation should be carried out (practice). In order to find answers to these questions we carried out empirical studies, and simultaneously elaborated a conceptual model of driving and driving evaluation. The findings of the empirical studies can be condensed into the following points: 1) Driving ability, as defined by the on-road driving test, is associated with different laboratory measures depending on the study group. Faults in the laboratory tests predicted faults in the on-road driving test in the novice group, whereas slowness in the laboratory predicted driving faults in the group of experienced drivers. 2) The Parkinson study clearly showed that even an experienced clinician cannot reliably evaluate a disabled person's driving ability without collaboration with other specialists. 3) The main finding of the stroke study was that the use of a multidisciplinary team as a source of information harmonises the specialists' evaluations. 4) The patient studies demonstrated that disabled persons themselves, as well as their spouses, are as a rule not reliable evaluators. 5) From the safety point of view, perceptible operations with the control devices are not crucial; rather, the correct mental actions which the driver carries out with the help of the control devices are of greatest importance.
6) Personality factors, including higher-order needs and motives, attitudes and a degree of self-awareness, particularly a sense of illness, are decisive when evaluating a disabled person's driving ability. Personality is also the main source of resources for compensating for lower-order physical deficiencies and restrictions. From the work with the conceptual model we drew the following methodological conclusions: First, the driver has to be considered as a holistic subject of the activity: a multilevel, hierarchically organised system of an organism, a temperament, an individuality, and a personality, where the personality is the leading subsystem from the standpoint of safety. Second, driving, as a human form of sociopractical activity, is also a hierarchically organised dynamic system. Third, an evaluation of driving ability is a question of matching these two hierarchically organised structures: a subject of an activity and the activity proper. Fourth, an evaluation has to be person-centred, not disease-, function- or method-centred. On the basis of our study, a multidisciplinary team (practitioner, driving school teacher, psychologist, occupational therapist) is recommended for demanding driver evaluations. Primary in driver evaluation is a coherent conceptual model, while the concrete methods of evaluation may vary. However, the on-road test must always be performed if possible.

Relevance:

20.00%

Publisher:

Abstract:

Drug Analysis without Primary Reference Standards: Application of LC-TOFMS and LC-CLND to Biofluids and Seized Material

Primary reference standards for new drugs, metabolites, designer drugs or rare substances may not be obtainable within a reasonable period of time, or their availability may be hindered by extensive administrative requirements. Standards are usually costly and may have a limited shelf life. Finally, many compounds are not commercially available, and some not at all. A new approach within forensic and clinical drug analysis involves substance identification based on accurate mass measurement by liquid chromatography coupled with time-of-flight mass spectrometry (LC-TOFMS), and quantification by LC coupled with chemiluminescence nitrogen detection (LC-CLND), which possesses an equimolar response to nitrogen. Formula-based identification relies on the fact that the accurate mass of an ion from a chemical compound corresponds to the elemental composition of that compound. Single-calibrant nitrogen-based quantification is feasible with a nitrogen-specific detector, since approximately 90% of drugs contain nitrogen. A method was developed for toxicological drug screening in 1 ml urine samples by LC-TOFMS. A large target database of exact monoisotopic masses was constructed, representing the elemental formulae of reference drugs and their metabolites. Identification was based on matching the sample component's measured parameters with those in the database, including accurate mass and retention time, if available. In addition, an algorithm for isotopic pattern match (SigmaFit) was applied. Differences in ion abundance in urine extracts did not affect the mass accuracy or the SigmaFit values. For routine screening practice, a mass tolerance of 10 ppm and a SigmaFit tolerance of 0.03 were established. Seized street drug samples were analysed instantly by LC-TOFMS and LC-CLND, using a dilute-and-shoot approach.
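The formula-based identification step can be sketched as follows: compute a theoretical [M+H]+ mass from an elemental composition and accept database hits whose measured m/z falls within a ppm window. The monoisotopic masses below are standard values, but the two-entry database and the measured m/z are invented for illustration; this is a toy sketch of the screening logic, not the software used in the study.

```python
# Monoisotopic masses of the common elements (standard values)
MONOISOTOPIC = {"C": 12.0, "H": 1.00782503, "N": 14.00307401, "O": 15.99491462}
PROTON = 1.00727646  # mass added for the [M+H]+ ion

def monoisotopic_mass(formula):
    """Exact mass from an elemental composition, e.g. {'C': 8, 'H': 10, ...}."""
    return sum(MONOISOTOPIC[el] * n for el, n in formula.items())

def ppm_error(measured, theoretical):
    return (measured - theoretical) / theoretical * 1e6

# Hypothetical two-entry target database: name -> elemental composition
database = {
    "caffeine": {"C": 8, "H": 10, "N": 4, "O": 2},
    "tramadol": {"C": 16, "H": 25, "N": 1, "O": 2},
}

def screen(measured_mz, tolerance_ppm=10.0):
    """Return database entries whose [M+H]+ mass matches within the ppm window."""
    hits = []
    for name, formula in database.items():
        theoretical = monoisotopic_mass(formula) + PROTON
        err = ppm_error(measured_mz, theoretical)
        if abs(err) <= tolerance_ppm:
            hits.append((name, round(err, 2)))
    return hits

print(screen(195.0877))  # an m/z close to protonated caffeine
```

A real screening method additionally checks retention time and the isotopic pattern (SigmaFit), which a mass-only match cannot capture.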
In the quantitative analysis of amphetamine, heroin and cocaine findings, the mean relative difference between the results of LC-CLND and the reference methods was only 11%. In blood specimens, liquid-liquid extraction recoveries for basic lipophilic drugs were first established and the validity of the generic extraction recovery-corrected single-calibrant LC-CLND was then verified with proficiency test samples. The mean accuracy was 24% and 17% for plasma and whole blood samples, respectively, all results falling within the confidence range of the reference concentrations. Further, metabolic ratios for the opioid drug tramadol were determined in a pharmacogenetic study setting. Extraction recovery estimation, based on model compounds with similar physicochemical characteristics, produced clinically feasible results without reference standards.

Relevance:

20.00%

Publisher:

Abstract:

Primary pulmonary hypertension (PPH), or idiopathic pulmonary arterial hypertension (IPAH) according to the recent classification, is a rare, progressive disease of the pulmonary vasculature leading to pulmonary hypertension and right heart failure. Most cases are sporadic, but in about 6% of cases the disease is familial (FPPH). In 2000, two different groups identified the gene predisposing to PPH. This gene, bone morphogenetic protein receptor type 2 (BMPR2), encodes a subunit of the transforming growth factor β (TGF-β) receptor complex. There is a genetic connection between PPH and hereditary hemorrhagic telangiectasia (HHT), a bleeding disorder characterized by local telangiectasias and sometimes pulmonary hypertension. In HHT, mutations are found in ALK1 (activin receptor-like kinase 1) and endoglin, other members of the TGF-β signaling pathway. In this study we identified all of the Finnish PPH patients for the years 1986-1999 using the hospital discharge registries of the Finnish university hospitals. During this period we found a total of 59 confirmed PPH patients: 55 sporadic and 4 familial, representing 3 different families. In 1999 the prevalence of PPH was 5.8 per million, and the annual incidence varied between 0.2 and 1.3 per million. Among the 28 PPH patients studied, heterozygous BMPR2 mutations were found in 12% (3/26) of sporadic patients and in 33% of the PPH families (1/3). All the mutations found were different. Large deletions of BMPR2 were excluded by single-strand conformational polymorphism analysis. As a candidate gene approach, we also studied ALK1, endoglin, bone morphogenetic protein receptor type IA (BMPR1A or ALK3), mothers against decapentaplegic homolog 4 (SMAD4) and the serotonin transporter gene (SLC6A4) using single-strand conformational polymorphism (SSCP) analysis and direct sequencing. Among the patients and family members studied, we found two mutations in ALK1 in two unrelated samples.
We also identified all the HHT patients treated at the Department of Otorhinolaryngology at Helsinki University Central Hospital between 1990 and 2005, and 8 of the patients were studied for endoglin and ALK1 mutations using direct sequencing. A total of seven mutations were found, and all the mutations were different. The absence of a founder mutation in the Finnish population in both PPH and HHT was somewhat surprising. This suggests that the mutations of BMPR2, ALK1 and endoglin are quite young, and that older mutations have been lost due to repeated genetic bottlenecks and/or negative selection. Genes other than BMPR2 may also be involved in the pathogenesis of PPH. As no founder mutations were found in PPH or HHT, no simple genetic test is available for diagnostics.

Relevance:

20.00%

Publisher:

Abstract:

Glaucoma is the second leading cause of blindness worldwide. It is a group of optic neuropathies characterized by progressive optic nerve degeneration, excavation of the optic disc due to apoptosis of retinal ganglion cells, and corresponding visual field defects. Open angle glaucoma (OAG) is a subtype of glaucoma, classified according to the age of onset into juvenile- and adult-onset forms with a cut-off point of 40 years of age. The prevalence of OAG is 1-2% of the population over 40 years and increases with age. During the last decade, several candidate loci and three candidate genes for OAG, myocilin (MYOC), optineurin (OPTN) and WD40-repeat 36 (WDR36), have been identified. Exfoliation syndrome (XFS), age, elevated intraocular pressure and genetic predisposition are known risk factors for OAG. XFS is characterized by the accumulation of grayish scales of fibrillogranular extracellular material in the anterior segment of the eye. XFS is overall the most common identifiable cause of glaucoma (exfoliation glaucoma, XFG). In the past year, three single nucleotide polymorphisms (SNPs) in the lysyl oxidase-like 1 (LOXL1) gene have been associated with XFS and XFG in several populations. This thesis describes the first molecular genetic studies of OAG and XFS/XFG in the Finnish population. The role of the MYOC and OPTN genes and fourteen candidate loci was investigated in eight Finnish glaucoma families. Both candidate genes and all the loci were excluded in the families, further confirming the heterogeneous nature of OAG. To investigate the genetic basis of glaucoma in a large Finnish family with juvenile and adult onset OAG, we analysed the MYOC gene in family members. A glaucoma-associated mutation (Thr377Met) segregating with the disease in the family was identified in the MYOC gene. This finding has great significance for the family and encourages investigation of the MYOC gene in other Finnish OAG families as well.
In order to identify genetic susceptibility loci for XFS, we carried out a genome-wide scan in an extended Finnish XFS family. This scan produced a promising candidate locus on chromosomal region 18q12.1-21.33 and several additional putative susceptibility loci for XFS. The locus on chromosome 18 provides a solid starting point for the fine-scale mapping studies that are needed to identify the variants conferring susceptibility to XFS in the region. A case-control and family-based association study and a family-based linkage study were performed to evaluate whether SNPs in the LOXL1 gene confer risk of XFS, XFG or POAG in Finnish patients. A significant association between the LOXL1 gene SNPs and XFS and XFG was confirmed in the Finnish population. However, no association was detected with POAG. Other genetic and environmental factors are probably also involved in the pathogenesis of XFS and XFG.

Relevance:

20.00%

Publisher:

Abstract:

Environmentally benign and economical methods for the preparation of industrially important hydroxy acids and diacids were developed. The carboxylic acids, used in polyesters, alkyd resins, and polyamides, were obtained by the oxidation of the corresponding alcohols with hydrogen peroxide or air, catalyzed by sodium tungstate or supported noble metals. These oxidations were carried out using water as the solvent. The alcohols are also a useful alternative to the conventional reactants, hydroxyaldehydes and cycloalkanes. The oxidation of 2,2-disubstituted propane-1,3-diols with hydrogen peroxide catalyzed by sodium tungstate afforded 2,2-disubstituted 3-hydroxypropanoic acids and 1,1-disubstituted ethane-1,2-diols as products. A computational study of the Baeyer-Villiger rearrangement of the intermediate 2,2-disubstituted 3-hydroxypropanals gave in-depth data on the mechanism of the reaction. Linear primary diols with a chain length of at least six carbons were easily oxidized with hydrogen peroxide to linear dicarboxylic acids, catalyzed by sodium tungstate. The Pt/C catalyzed air oxidation of 2,2-disubstituted propane-1,3-diols and linear primary diols afforded the highest yield of the corresponding hydroxy acids, while the Pt, Bi/C catalyzed oxidation of the diols afforded the highest yield of the corresponding diacids. The mechanism of the promoted oxidation was best described by the ensemble effect and by the formation of a complex between the hydroxy and carboxy groups of the hydroxy acids and bismuth atoms. The Pt, Bi/C catalyzed air oxidation of 2-substituted 2-hydroxymethylpropane-1,3-diols gave 2-substituted malonic acids by decarboxylation of the corresponding triacids. Activated carbon was the best support and bismuth the most efficient promoter in the air oxidation of 2,2-dialkylpropane-1,3-diols to diacids. In oxidations carried out in organic solvents, barium sulfate could be a valuable alternative to activated carbon as a non-flammable support.
In the Pt/C catalyzed air oxidation of 2,2-disubstituted propane-1,3-diols to 2,2-disubstituted 3-hydroxypropanoic acids, the small size of the 2-substituents enhanced the rate of the oxidation. When the potential of the platinum catalyst was not controlled, the highest yield of the diacids in the Pt, Bi/C catalyzed air oxidation of 2,2-dialkylpropane-1,3-diols was obtained in the mass-transfer-limited regime. The most favorable pH of the reaction mixture for the promoted oxidation was 10. A reaction temperature of 40 °C prevented the decarboxylation of the diacids.

Relevance:

20.00%

Publisher:

Abstract:

Determination of the environmental factors controlling earth surface processes and landform patterns is one of the central themes in physical geography. However, the identification of the main drivers of geomorphological phenomena is often challenging. Novel spatial analysis and modelling methods could provide new insights into process-environment relationships. The objective of this research was to map and quantitatively analyse the occurrence of cryogenic phenomena in subarctic Finland. More precisely, utilising a grid-based approach, the distribution and abundance of periglacial landforms were modelled to identify important landscape-scale environmental factors. The study was performed using a comprehensive empirical data set of periglacial landforms from an area of 600 km² at a 25-ha resolution. The statistical methods utilised were generalised linear modelling (GLM) and hierarchical partitioning (HP). GLMs were used to produce distribution and abundance models, and HP to reveal independently the most likely causal variables. The GLM models were assessed utilising statistical evaluation measures, prediction maps, field observations and the results of the HP analyses. A total of 40 different landform types and subtypes were identified. Topographical, soil property and vegetation variables were the primary correlates of the occurrence and cover of active periglacial landforms on the landscape scale. In the model evaluation, most of the GLMs were shown to be robust, although the explanatory power, the prediction ability and the selected explanatory variables varied between the models. The great potential of combining a spatial grid system, terrain data and novel statistical techniques to map the occurrence of periglacial landforms was demonstrated in this study.
GLM proved to be a useful modelling framework for testing the shapes of the response functions and the significance of the environmental variables, and the HP method helped in drawing better inferences about the important factors behind earth surface processes. Hence, the numerical approach presented in this study can be a useful addition to the current range of techniques available to researchers to map and monitor different geographical phenomena.
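As an illustration of the grid-based modelling idea, the sketch below fits a binomial GLM (logistic regression) to synthetic presence/absence data with a single hypothetical "slope" predictor. The data-generating coefficients and the plain gradient-ascent fit are illustrative assumptions, not the models or variables of the study.

```python
import math
import random

random.seed(0)

def simulate(n=400, b0=-2.0, b1=0.8):
    """Synthetic grid cells: (slope value, landform present? 0/1).

    Presence probability follows a logistic response to slope;
    the coefficients are invented for illustration."""
    data = []
    for _ in range(n):
        slope = random.uniform(0.0, 10.0)
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * slope)))
        data.append((slope, 1 if random.random() < p else 0))
    return data

def fit_logistic(data, lr=0.05, epochs=2000):
    """Estimate intercept and coefficient by gradient ascent
    on the binomial log-likelihood."""
    b0 = b1 = 0.0
    n = len(data)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

data = simulate()
b0_hat, b1_hat = fit_logistic(data)
print(f"intercept {b0_hat:.2f}, slope coefficient {b1_hat:.2f}")
```

In practice one would use a statistics package with several terrain, soil and vegetation predictors, but the fitted positive slope coefficient shows the kind of response-shape evidence a GLM provides.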

Relevance:

20.00%

Publisher:

Abstract:

What can the statistical structure of natural images teach us about the human brain? Even though the visual cortex is one of the most studied parts of the brain, surprisingly little is known about how exactly images are processed to leave us with a coherent percept of the world around us, so we can recognize a friend or drive on a crowded street without any effort. By constructing probabilistic models of natural images, the goal of this thesis is to understand the structure of the stimulus that is the raison d'être for the visual system. Following the hypothesis that the optimal processing has to be matched to the structure of that stimulus, we attempt to derive computational principles, features that the visual system should compute, and properties that cells in the visual system should have. Starting from machine learning techniques such as principal component analysis and independent component analysis, we construct a variety of statistical models to discover structure in natural images that can be linked to receptive field properties of neurons in primary visual cortex, such as simple and complex cells. We show that by representing images with phase-invariant, complex cell-like units, a better statistical description of the visual environment is obtained than with linear simple cell units, and that complex cell pooling can be learned by estimating both layers of a two-layer model of natural images. We investigate how a simplified model of the processing in the retina, where adaptation and contrast normalization take place, is connected to the natural stimulus statistics. Analyzing the effect that retinal gain control has on later cortical processing, we propose a novel method to perform gain control in a data-driven way. Finally, we show how models like those presented here can be extended to capture whole visual scenes rather than just small image patches.
By using a Markov random field approach we can model images of arbitrary size, while still being able to estimate the model parameters from the data.
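As a toy illustration of the starting point mentioned above, the sketch below finds the first principal component of synthetic two-pixel "patches" by power iteration on their covariance matrix. The correlated-pixel data are an invented stand-in for natural image patches; the models in the thesis operate on much larger patches with ICA and two-layer extensions.

```python
import math
import random

random.seed(1)

# Synthetic two-pixel patches: neighbouring pixels are strongly
# correlated, mimicking the smoothness of natural images.
patches = []
for _ in range(1000):
    base = random.gauss(0.0, 1.0)
    patches.append([base + random.gauss(0.0, 0.2), base + random.gauss(0.0, 0.2)])

def covariance(data):
    """Sample covariance matrix of a list of equal-length rows."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for row in data:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (row[i] - means[i]) * (row[j] - means[j]) / (n - 1)
    return cov

def power_iteration(mat, steps=200):
    """Leading eigenvector of a symmetric matrix by repeated multiplication."""
    d = len(mat)
    v = [1.0] + [0.0] * (d - 1)
    for _ in range(steps):
        w = [sum(mat[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

pc1 = power_iteration(covariance(patches))
print(pc1)  # close to [0.707, 0.707]: the shared "mean intensity" direction
```

The dominant component captures the common intensity of the correlated pixels, the simplest example of how image statistics dictate the features a model should extract.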

Relevance:

20.00%

Publisher:

Abstract:

Industrial ecology is an important field of sustainability science. It can be applied to study environmental problems in a policy-relevant manner. Industrial ecology uses an ecosystem analogy; it aims at closing the loops of materials and substances and at the same time reducing resource consumption and environmental emissions. Emissions from human activities are related to human interference in material cycles. Carbon (C), nitrogen (N) and phosphorus (P) are essential elements for all living organisms, but in excess they have negative environmental impacts, such as climate change (CO2, CH4, N2O), acidification (NOx) and eutrophication (N, P). Several indirect macro-level drivers affect changes in emissions. Population and affluence (GDP/capita) often act as upward drivers of emissions. Technology, as emissions per service used, and consumption, as the economic intensity of use, may act as drivers resulting in a reduction in emissions. In addition, the development of country-specific emissions is affected by international trade. The aim of this study was to analyse changes in emissions as affected by macro-level drivers in different European case studies. ImPACT decomposition analysis (IPAT identity) was applied as the method in papers I-III. The macro-level perspective was applied to evaluate CO2 emission reduction targets (paper II) and the sharing of greenhouse gas emission reduction targets (paper IV) in the European Union (EU27) up to the year 2020. Data for the study were mainly gathered from official statistics. In all cases, the results were discussed from an environmental policy perspective. The development of nitrogen oxide (NOx) emissions in the Finnish energy sector was analysed over a long time period, 1950-2003 (paper I). Finnish emissions of NOx began to decrease in the 1980s as progress in technology, in terms of NOx/energy, curbed the impact of the growth in affluence and population.
Carbon dioxide (CO2) emissions related to energy use during 1993-2004 (paper II) were analysed by country and region within the European Union. Considering energy-based CO2 emissions in the European Union, dematerialisation and decarbonisation did occur, but not sufficiently to offset population growth and the rapidly increasing affluence during 1993-2004. The development of nitrogen and phosphorus loads from aquaculture in relation to salmonid consumption in Finland during 1980-2007 was examined, including international trade in the analysis (paper III). A regional environmental issue, eutrophication of the Baltic Sea, and a marginal yet locally important source of nutrients were used as a case. Nutrient emissions from Finnish aquaculture decreased from the 1990s onwards: although population, affluence and salmonid consumption steadily increased, aquaculture technology improved and the relative share of imported salmonids increased. According to the sustainability challenge in industrial ecology, the environmental impact of the growing population size and affluence should be compensated for by improvements in technology (emissions per service used) and by dematerialisation. In the studied cases, the emission intensity of energy production could be lowered for NOx by cleaning the exhaust gases. Reorganisation of the structure of energy production as well as technological innovations will be essential in lowering the emissions of both CO2 and NOx. Regarding the intensity of energy use, making the combustion of fuels more efficient and reducing energy use are essential. In reducing nutrient emissions from Finnish aquaculture to the Baltic Sea (paper III) through technology, the limits of the biological and physical properties of cultured fish, among others, will eventually be faced. Regarding consumption, salmonids are preferred to many other protein sources. Regarding trade, increasing the proportion of imports will outsource the impacts.
Besides improving technology and dematerialisation, other viewpoints may also be needed. Reducing the total amount of nutrients cycling in energy systems, and eventually contributing to NOx emissions, needs to be emphasised. Considering aquaculture emissions, nutrient cycles can be partly closed by using local fish as feed to replace imported feed. In particular, the reduction of CO2 emissions in the future is a very challenging task when considering the necessary rates of dematerialisation and decarbonisation (paper II). Climate change mitigation may have to focus on greenhouse gases other than CO2 and on the potential role of biomass as a carbon sink, among others. The global population is growing and scaling up the environmental impact. Population issues and growing affluence must be considered when discussing emission reductions. Climate policy has only very recently had an influence on emissions, and strong actions are now called for in climate change mitigation. Environmental policies in general must cover all the regions related to production and impacts in order to avoid the outsourcing of emissions and leakage effects. The macro-level drivers affecting changes in emissions can be identified with the ImPACT framework. Statistics for generally known macro-indicators are currently relatively well available for different countries, and the method is transparent. In the papers included in this study, a similar method was successfully applied in different types of case studies. Using transparent macro-level figures and a simple top-down approach is also appropriate in evaluating and setting international emission reduction targets, as demonstrated in papers II and IV. The projected rates of population and affluence growth are especially worth considering when setting targets. However, sensitivities in the calculations must be carefully acknowledged. In the basic form of the ImPACT model, the economic intensity of consumption and the emission intensity of use are included.
To examine consumption, and also international trade, in more detail, imports were included in paper III. This example demonstrates well how the outsourcing of production influences domestic emissions. Country-specific production-based emissions have often been used in similar decomposition analyses. Nevertheless, trade-related issues must not be ignored.
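The multiplicative logic of the ImPACT identity can be sketched in a few lines: emissions are written as Im = P × A × C × T (population, affluence, consumption intensity, emission intensity of use), so the change in emissions between two years factors exactly into the driver ratios. All numbers below are invented for illustration, not data from the study.

```python
def impact(population, affluence, consumption_intensity, emission_intensity):
    """Emissions as the product of the four ImPACT drivers."""
    return population * affluence * consumption_intensity * emission_intensity

# Hypothetical country in two years:
# (P, A = GDP/capita, C = energy/GDP, T = CO2/energy)
year_0 = (5.0e6, 20_000.0, 0.008, 0.25)
year_1 = (5.2e6, 26_000.0, 0.007, 0.22)

em_0, em_1 = impact(*year_0), impact(*year_1)
total_change = em_1 / em_0

# Multiplicative decomposition: the driver ratios reproduce the
# total emission change exactly, with no residual term.
driver_ratios = [b / a for a, b in zip(year_0, year_1)]
product = 1.0
for r in driver_ratios:
    product *= r

print(f"emission change factor: {total_change:.3f}")
for name, r in zip(["population", "affluence", "consumption", "technology"],
                   driver_ratios):
    print(f"  {name}: x{r:.3f}")
```

In this invented example, growing population and affluence push emissions up while improving consumption and emission intensities pull them down, leaving a small net increase; the same bookkeeping underlies the country-level analyses in papers I-III.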

Relevance:

20.00%

Publisher:

Abstract:

The first aim of the current study was to evaluate the survival of total hip arthroplasty (THA) in patients aged 55 years and older on a nation-wide level. The second aim was to evaluate, on a nation-wide basis, the geographical variation in the incidence of primary THA for primary OA, and to identify the variables possibly associated with this variation. The third aim was to evaluate the effects of hospital volume on the length of stay, the number of re-admissions and the number of complications of THA at the population level in Finland. The survival of implants was analysed based on data from the Finnish Arthroplasty Register. The incidence and hospital volume data were obtained from the Hospital Discharge Register. Cementless total hip replacements had a significantly reduced risk of revision for aseptic loosening compared with cemented hip replacements. When revision for any reason was the end point in the survival analyses, no significant differences were found between the groups. Adjusted incidence ratios of THA varied from 1.9- to 3.0-fold during the study period. Neither the average income within a region nor the morbidity index was associated with the incidence of THA. Across the four categories of hospital volume of total hip replacements, the length of the surgical treatment period was shorter for the highest-volume group than for the lowest-volume group. The odds ratio for dislocations was significantly lower in the high-volume group than in the low-volume group. In patients who were 55 years of age or older, the survival of cementless total hip replacements was as good as that of the cemented replacements. However, multiple wear-related revisions of the cementless cups indicate that excessive polyethylene wear was a major clinical problem with modular cementless cups. The variation in the long-term rates of survival for different cemented stems was considerable.
Cementless proximal porous-coated stems were found to be a good option for elderly patients. In hospitals where hip surgery was performed with a large repertoire of procedures, the indications for performing THA due to primary OA were strict. The socio-economic status of the patient had no apparent effect on the THA rate. Concentrating hip replacements in high-volume hospitals should reduce costs by significantly shortening the length of stay, and may reduce the dislocation rate.
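Register-based implant survival of this kind is typically summarised with the Kaplan-Meier estimator: at each revision time the survival curve is multiplied by the fraction of implants at risk that were not revised. The sketch below is a minimal pure-Python version with invented follow-up data (times in years, revision as the event); it illustrates the method, not the actual register analysis.

```python
def kaplan_meier(records):
    """Return [(time, survival)] at each revision (event) time.

    records: list of (time, event) pairs where event is True for a
    revision and False for censoring (e.g. death or end of follow-up).
    """
    # Sort by time; at a tied time, process events before censorings,
    # the usual Kaplan-Meier convention.
    ordered = sorted(records, key=lambda r: (r[0], not r[1]))
    at_risk = len(records)
    survival = 1.0
    curve = []
    for time, event in ordered:
        if event:
            survival *= (at_risk - 1) / at_risk
            curve.append((time, survival))
        at_risk -= 1  # each record leaves the risk set after its time
    return curve

# Hypothetical follow-up of 8 hips: 3 revisions, 5 censored
records = [(1.0, False), (2.0, True), (3.0, False), (4.0, True),
           (5.0, False), (6.0, True), (7.0, False), (8.0, False)]
for t, s in kaplan_meier(records):
    print(f"{t:.0f} y: survival {s:.3f}")
```

Register analyses additionally compare such curves between implant groups (e.g. cemented versus cementless) with models that adjust for patient characteristics.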