40 results for derivation


Relevance: 10.00%

Abstract:

The work described in this thesis is the development of an ultrasonic tomogram to provide outlines of cross-sections of the ulna in vivo. This instrument, used in conjunction with X-ray densitometry previously developed in this department, would provide actual bone mineral density to a high resolution. It was hoped that the accuracy of the plot obtained from the tomogram would exceed that of existing ultrasonic techniques by about five times. Repeat measurements with these instruments to follow bone mineral changes would involve very low X-ray doses. A theoretical study has been made of acoustic diffraction, using a geometrical transform applicable to the integration of three different Green's functions, for axisymmetric systems. This has involved the derivation of one of these in a form amenable to computation. It is considered that this function fits the boundary conditions occurring in medical ultrasonography more closely than those used previously. A three-dimensional plot of the pressure field using this function has been made for a ring transducer, in addition to that for disc transducers using all three functions. It has been shown how the theory may be extended to investigate the nature and magnitude of the particle velocity, at any point in the field, for the three functions mentioned. From this study, a concept of diffraction fronts has been developed, which has made it possible to determine energy flow in a diffracting system as well. Intensity has been displayed in a manner similar to that used for pressure. Plots have been made of diffraction fronts and energy flow direction lines.
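
For orientation, the standard closed-form on-axis field of a baffled circular piston (disc) transducer is the reference case against which such diffraction computations are usually checked. The sketch below plots only that textbook special case; the 2 MHz frequency, 5 mm radius and water-like medium are illustrative assumptions, not the thesis's Green's-function formulation.

```python
import numpy as np

# On-axis pressure magnitude of a baffled circular piston transducer:
#   |p(z)| = 2*rho*c*u0 * |sin( (k/2) * (sqrt(z^2 + a^2) - z) )|
# A textbook special case used as a sanity check, not the thesis's
# axisymmetric Green's-function computation. All parameters are assumed.
rho, c = 1000.0, 1500.0            # density (kg/m^3), sound speed (m/s)
f, a, u0 = 2e6, 5e-3, 1e-3         # frequency (Hz), disc radius (m), face velocity (m/s)
k = 2 * np.pi * f / c              # wavenumber (1/m)

z = np.linspace(1e-4, 0.2, 2000)   # axial distances (m)
p = 2 * rho * c * u0 * np.abs(np.sin(0.5 * k * (np.sqrt(z**2 + a**2) - z)))

# The last axial maximum marks the near-field/far-field transition (~ a^2/lambda).
print("near-field length ~ %.1f mm" % (1e3 * a**2 * f / c))
```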

Relevance: 10.00%

Abstract:

Recent and potential changes in technology have resulted in the anticipation of increases in the frequency of job changes. This has led manpower policy makers to investigate the feasibility of incorporating the employment skills of job groups in the general prediction of future job learning and performance, with a view to the establishment of "job families" within which transfer might be considered reciprocally high. A structured job analysis instrument (the Position Analysis Questionnaire) is evaluated in terms of two distinct sets of scores: job dimensions and synthetically established attribute/trait profiles. Studies demonstrate that estimates of a job's structure/dimensions and requisite human attributes can be reliably established. Three alternative techniques of statistically assembling profiles of the requisite human attributes for jobs are found to have differential levels of reliability and differential degrees of validity in their estimation of the "actual" ability requirements of jobs. The utility of these two sets of job descriptors as representations of the cognitive structure similarity of job groups is investigated in a study which simulates a job transfer situation. The central role of the index of similarity used to assess the relationship between "target" and "present" job is demonstrated. The relative extents to which job structure similarity and job attribute similarity are associated with positive transfer are investigated. The studies demonstrate that the dimensions of jobs and, more fruitfully, their requisite human attributes can serve as bases to predict job transfer learning and performance. The nature of the index of similarity used to optimally formulate predictions of transfer is such that networks of jobs might be establishable to which current job incumbents could be expected to transfer positively. The derivation of "job families" with anticipated reciprocal transfer consequences is considered to be less appropriate.
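
Because the abstract gives the similarity index a central role, a minimal sketch may help show how two common indices can disagree about the same pair of jobs. The attribute profiles and the choice of indices below are illustrative assumptions, not the thesis's PAQ data or its actual index.

```python
import numpy as np

# Two common profile-similarity indices between a "present" and a
# "target" job, each described by the same vector of requisite-attribute
# ratings. Profiles are invented for illustration.
present = np.array([3.2, 4.1, 2.0, 3.8, 1.5])   # attribute ratings, present job
target  = np.array([4.2, 5.1, 3.0, 4.8, 2.5])   # same shape, uniformly higher level

# Shape-only similarity: Pearson correlation (exactly 1.0 here, because
# the target profile is just the present profile shifted upward).
r = np.corrcoef(present, target)[0, 1]

# Shape-plus-level similarity: Euclidean distance (non-zero here,
# because the overall level differs).
d = np.linalg.norm(present - target)

print(f"correlation index = {r:.2f}, distance index = {d:.2f}")
```

The two indices rank this pair very differently, which is exactly why the choice of index matters for predicting transfer.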

Relevance: 10.00%

Abstract:

The promoters of the large groundwater developments implemented in the 1970s paid little attention to the effects of pumping on soil moisture. A field study, conducted in 1979 in the Tern Area of the Shropshire Groundwater Scheme, revealed that significant quantities of the available moisture could be removed from the root zone of vegetation when drawdown of shallow water tables occurred. Arguments to this effect, supported by the field study evidence, were successfully presented at the Shropshire Groundwater Scheme public inquiry. The aim of this study has been to expand the work which was undertaken in connection with the Shropshire Groundwater Scheme, and to develop a method whereby the effects of groundwater pumping on vegetation can be assessed and the impacts thereby minimised. Two concepts, the critical height and the soil sensitivity depth, formulated during the initial work, are at the core of the Environmental Impact Assessment method whose development is described. A programme of laboratory experiments on soil columns is described, as is the derivation of relationships for determining critical heights and field capacity moisture profiles. These relationships are subsequently employed in evaluating the effects of groundwater drawdown. In employing the environmental assessment technique, digitised maps of relevant features of the Tern Area are combined to produce composite maps delineating the extent of the areas which are potentially sensitive to groundwater drawdown. A series of crop yield/moisture loss functions is then employed to estimate the impact of simulated pumping events on the agricultural community of the Tern Area. Finally, guidelines, based on experience gained through evaluation of the Tern Area case study, are presented for use in the design of soil moisture monitoring systems and in the siting of boreholes. In addition, recommendations are made for development of the EIA technique, and further research needs are identified.

Relevance: 10.00%

Abstract:

The Priestlaw and Cockburn Law intrusions are zoned granitoid plutons intruded into Lower Palaeozoic sediments at the margin of, and prior to closure of, the Iapetus Ocean. They vary from marginal basic rocks to more acid rocks towards their centres. The parental magmas to the plutons were derived from an isotopically depleted mantle modified by melts/fluids during subduction. Zonation in the plutons was caused by combined assimilation and fractional crystallisation (AFC), and rates of assimilation were low relative to rates of fractionation. A series of pyroxene-mica diorites in Priestlaw are, however, hybrids formed by simple mixing. Porphyrite-acid porphyrite dykes, associated with the plutons, represent chilled portions of the pluton magmas; more evolved quartz porphyry dykes represent crustal melts. Lamprophyre dykes have high LILE and LREE abundances and relative depletions of HFS elements, typical of subduction-related ultra-potassic magmas. High Mg numbers, Ni and Cr contents, and experimental constraints imply near-primary status for the least evolved lamprophyres. Their enrichment in incompatible elements, high La/Nb and La/Yb, high initial 87Sr/86Sr and low εNd indicate derivation from a previously metasomatised mantle source. Granitoid plutons and lavas in the northern Southern Uplands have high εNd and low initial 87Sr/86Sr, whereas the younger plutons of the southern Southern Uplands have higher initial 87Sr/86Sr and La/Yb and lower εNd, consistent with derivation from a more enriched source. No plutons, however, have remained closed systems. Three magmatic suites are present in southern Scotland: (1) the Midland Valley Suite, (2) the Northern Southern Uplands Suite and (3) the Southern Southern Uplands Suite, consistent with previous models indicating northward underthrusting of English lithosphere below the southern Southern Uplands. Further underthrusting of decoupled lithospheric mantle is indicated by the presence of lamprophyres in the eastern Southern Uplands, and took place between 410 Ma and 400 Ma.
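
For readers unfamiliar with AFC modelling, the trace-element evolution described here is conventionally quantified with the DePaolo (1981) AFC equation, reproduced below as background; the abstract does not give the thesis's own parameter values.

```latex
% DePaolo (1981) AFC equation: concentration C_m of a trace element in a
% magma undergoing combined assimilation and fractional crystallisation.
% r = rate of assimilation / rate of crystallisation (inferred to be low
% for these plutons); D = bulk partition coefficient; F = melt fraction
% remaining; C_a = assimilant concentration; C_m^0 = initial melt value.
\[
  \frac{C_m}{C_m^0} \;=\; F^{-z} \;+\; \frac{r}{r-1}\,\frac{C_a}{z\,C_m^0}\,
  \left(1 - F^{-z}\right),
  \qquad
  z \;=\; \frac{r + D - 1}{r - 1}.
\]
```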

Relevance: 10.00%

Abstract:

Continental red bed sequences are host, on a worldwide scale, to a characteristic style of mineralisation which is dominated by copper, lead, zinc, uranium and vanadium. This study examines the features of sediment-hosted ore deposits in the Permo-Triassic basins of Western Europe, with particular reference to the Cu-Pb-Zn-Ba mineralisation in the Cheshire Basin, northwest England, the Pb-Ba-F deposits of the Inner Moray Firth Basin, northeast Scotland, and the Pb-rich deposits of the Eifel and Oberpfalz regions, West Germany. The deposits occur primarily but not exclusively in fluvial and aeolian sandstones on the margins of deep, avolcanic sedimentary basins containing red beds, evaporites and occasionally hydrocarbons. The host sediments range in age from Permian to Rhaetian and often contain (or can be inferred to have originally contained) organic matter. Textural studies have shown that early diagenetic quartz overgrowths precede the main episode of sulphide deposition. Fluid inclusion and sulphur isotope data have significantly constrained the genetic hypotheses for the mineralisation, and a model involving the expulsion of diagenetic fluids and basinal brines up the faulted margins of sedimentary basins is favoured. Consideration of the development of these sedimentary basins suggests that ore emplacement occurred during the tectonic stage of basin evolution or during basin inversion in the Tertiary. δ34S values for barite in the Cheshire Basin range from +13.8‰ to +19.3‰ and support the theory that the Upper Triassic evaporites were the principal sulphur source for the mineralisation and provided the means by which mineralising fluids became saline. In contrast, δ34S values for barite in the Inner Moray Firth Basin (mean δ34S = +29‰) are not consistent with simple derivation of sulphur from the evaporite horizons in the basin, and it is likely that sulphur-rich Jurassic shales supplied the sulphur for the mineralisation at Elgin. Possible sources of sulphur for the mineralisation in West Germany include hydrothermal vein sulphides in the underlying Devonian sediments and evaporites in the overlying Muschelkalk. Textural studies of the deeply buried sandstones in the Cheshire Basin reveal widespread dissolution and replacement of detrital phases and support the theory that red bed diagenetic processes are responsible for the release of metals into pore fluids. The ore solutions are envisaged as being warm (60-150°C), saline (9-22 wt % equiv. NaCl) fluids in which metals were transported as chloride complexes. The distributions of δ34S values for sulphides in the Cheshire Basin (-1.8‰ to +16‰), the Moray Firth Basin (-4.8‰ to +27‰) and the German Permo-Triassic Basins (-22.2‰ to -12.2‰) preclude a magmatic source for the sulphides and support the contention that sulphide precipitation resulted principally from sulphate reduction processes, although a decrease in temperature of the ore fluid or reaction with carbonates may also be important. Methane is invoked as the principal reducing agent in the Cheshire Basin, whilst terrestrial organic debris and bacterial reduction processes are thought to have played a major part in the genesis of the German ore deposits.
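
As background to the sulphate-reduction argument, the evolution of δ34S in a closed sulphate reservoir is conventionally described by a Rayleigh fractionation relation; the form below is the standard textbook version, not an equation quoted from the thesis.

```latex
% Rayleigh fractionation of residual sulphate during progressive
% sulphate reduction (closed system). f is the fraction of sulphate
% remaining; \varepsilon = \delta^{34}S(sulphide) - \delta^{34}S(sulphate)
% is the (negative) kinetic fractionation, so the residual sulphate,
% and sulphides formed from it later, become isotopically heavier.
\[
  \delta^{34}\mathrm{S}_{\mathrm{SO_4}}
  \;\approx\;
  \delta^{34}\mathrm{S}_{\mathrm{SO_4}}^{\,0} \;+\; \varepsilon \ln f .
\]
```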

Relevance: 10.00%

Abstract:

A critical review of previous research revealed that visual attention tests, such as the Useful Field of View (UFOV) test, provided the best means of detecting age-related changes to the visual system that could potentially increase crash risk. However, the question was raised as to whether the UFOV, which was regarded as a static visual attention test, could be improved by the inclusion of kinetic targets that more closely represent the driving task. A computer program was written to provide more information about the derivation of UFOV test scores. Although this investigation succeeded in providing new information, some of the commercially protected UFOV test procedures still remain unknown. Two kinetic visual attention tests (DRTS1 and 2), developed at Aston University to investigate the inclusion of kinetic targets in visual attention tests, were introduced. The UFOV was found to be more repeatable than either of the kinetic visual attention tests; neither learning effects nor age influenced these findings. Determinants of static and kinetic visual attention were explored. Increasing target eccentricity led to reduced performance on the UFOV and DRTS1 tests. The DRTS2 was not affected by eccentricity, but this may have been due to the style of presentation of its targets. This might also have explained why only the DRTS2 showed laterality effects (i.e. better performance for targets presented on the left-hand side of the road). Radial location, explored using the UFOV test, showed that subjects responded best to targets positioned on the horizontal meridian. Distraction had opposite effects on static and kinetic visual attention. While UFOV test performance declined with distraction, DRTS1 performance increased. Previous research had shown that this striking difference was to be expected: whereas the detection of static targets is attenuated in the presence of distracting stimuli, distracting stimuli that move in a structured flow field enhance the detection of moving targets. Subjects reacted more slowly to kinetic than to static targets, to longitudinal than to angular motion, and to increased self-motion. However, the effects of longitudinal motion, angular motion, self-motion and even target eccentricity were caused by target edge speed variations arising from optic flow field effects. The UFOV test was more able to detect age-related changes to the visual system than was either of the kinetic visual attention tests. The driving samples investigated were too limited to draw firm conclusions. Nevertheless, the results presented showed that neither the DRTS2 nor the UFOV test was a powerful tool for the identification of drivers prone to crashes or poor driving performance.

Relevance: 10.00%

Abstract:

The Octopus Automated Perimeter was validated in a comparative study and found to offer many advantages in the assessment of the visual field. The visual evoked potential was investigated in an extensive study using a variety of stimulus parameters to simulate hemianopia and central visual field defects. The scalp potential was recorded topographically, and a technique to compute the source derivation of the scalp potential was developed. This enabled clarification of the expected scalp distribution to half field stimulation using different electrode montages. The visual evoked potential following full field stimulation was found to be asymmetrical around the midline, with a bias over the left occiput, particularly when the foveal polar projections of the occipital cortex were preferentially stimulated. The half field response reflected the distribution asymmetry. Masking of the central 3° resulted in a response which was approximately symmetrical around the midline, but there was no evidence of the PNP-complex. A method for visual field quantification was developed, based on the neural representation of visual space (Drasdo and Peaston 1982), in an attempt to relate visual field deprivation to the resultant visual evoked potentials. There was no form of simple, diffuse summation between the scalp potential and the cortical generators. It was, however, possible to quantify the degree of scalp potential attenuation for M-scaled full field stimuli. The results obtained from patients exhibiting pre-chiasmal lesions suggested that the PNP-complex is not scotomatous in nature but confirmed that it is most likely to be related to specific diseases (Harding and Crews 1982). There was a strong correlation between the percentage information loss of the visual field and the diagnostic value of the visual evoked potential in patients exhibiting chiasmal lesions.

Relevance: 10.00%

Abstract:

The relationship between accommodation and intraocular pressure (IOP) has not been addressed as a research question for over 20 years, when measurement of both of these parameters was less advanced than today. Hence the central aim of this thesis was to evaluate the effects of accommodation on IOP. The instrument of choice throughout this thesis was the Pulsair EasyEye non-contact tonometer (NCT), due principally to its slim-line design, which allowed the measurement of IOP in one eye and simultaneous stimulation of accommodation in the other eye. A second reason for using the Pulsair EasyEye NCT was that, through collaboration with the manufacturers (Keeler, UK), the instrument's operational technology was made accessible. Hence, the principal components underpinning non-contact IOP measures of 0.1 mmHg resolution (an order of magnitude greater than other methods) were made available. The relationship between the pressure output and corneal response has been termed the pressure-response relationship, aspects of which have been shown to be related to ocular biometric parameters. Further, analysis of the components of the pressure-response relationship, together with high-speed photography of the cornea during tonometry, has enhanced our understanding of the derivation of an IOP measure with the Pulsair EasyEye NCT. The NCT samples the corneal response to the pressure pulse photoelectronically over a 19 ms cycle, but computes the subject's IOP using the data collected in the first 2.34 ms. The relatively instantaneous nature of the IOP measurement renders the measures susceptible to variations in the steady-state IOP caused by the respiratory and cardiac cycles. As such, the variance associated with these cycles was minimised by synchronising the IOP measures with the cardiac trace and maintaining a constant-paced respiratory cycle of 15 breaths/minute. It is apparent that synchronising the IOP measures with the peak, middle or trough of the cardiac trace significantly reduced the spread of consecutive measures. Of the 3 locations investigated, synchronisation with the middle location demonstrated the least variance (coefficient of variation = 9.1%) and a strong correlation (r = 0.90, p < 0.001) with IOP values obtained with Goldmann contact tonometry (n = 50). Accordingly, IOP measures synchronised with the middle location of the cardiac cycle were taken in the RE while the LE fixated low (L; zero D), intermediate (I; 1.50 D) and high (H; 4 D) accommodation targets. Quasi-continuous measures of accommodation responses were obtained during the IOP measurement period using the portable infrared Grand Seiko FR-5000 autorefractor. The IOP reduced between L and I accommodative levels by approximately 0.61 mmHg (p < 0.001). No significant reduction in IOP between L and H accommodation levels was elicited (p = 0.65) (n = 40). The relationship between accommodation and IOP was characterised by substantial inter-subject variation. Myopes demonstrated a tendency to show a reduction in IOP with accommodation, which was significant only with I accommodation levels when measured with the NCT (r = 0.50, p = 0.01). However, the relationship between myopia and IOP change with accommodation reached significance for both I (r = 0.61, p = 0.003) and H (r = 0.531, p = 0.01) accommodation levels when measured with the Ocular Blood Flow Analyser (OBFA).
Investigation of the effects of accommodation on the parameters measured by the OBFA demonstrated that with H accommodation levels the pulse amplitude (PA) and pulse rate (PR) responses differed between myopes and emmetropes (PA: p = 0.03; PR: p = 0.004). As the axial length increased, there was a tendency for the pulsatile ocular blood flow (POBF) to reduce with accommodation, which was significant only with H accommodation levels (r = 0.38, p = 0.02). It is proposed that emmetropes are able to regulate the POBF responses to changes in ocular perfusion pressure caused by changes in IOP with I (r = 0.77, p < 0.001) and H (r = 0.73, p = 0.001) accommodation levels. However, the relationship between IOP and POBF changes in the myopes was not significant for either I (r = 0.33, p = 0.20) or H (r = 0.05, p = 0.85) accommodation levels. The thesis presents new data on the relationships between accommodation, IOP and the parameters of the OBFA, and provides evidence for possible IOP and choroidal blood flow regulatory mechanisms. Further, the data highlight possible deficits in the vascular regulation of the myopic eye during accommodation, which may play a putative role in the aetiology of myopia development.

Relevance: 10.00%

Abstract:

The research is concerned with the measurement of residents' evaluations of the environmental quality of residential areas. The research reflects the increased attention being given to residents' values in planning decisions affecting the residential environment. The work was undertaken in co-operation with a local authority which was in the process of revising its housing strategy, and in particular the priorities for improvement action. The study critically examines the existing evidence on environmental values and their relationship to the environment, and points to a number of methodological and conceptual deficiencies. The research strategy developed on the basis of the research review was constrained by the need to keep any survey methods simple, so that they could easily be repeated, when necessary, by the sponsoring authority. A basic perception model was assumed, and a social survey carried out to measure residents' responses to different environmental conditions. The data were assumed to have only ordinal properties, necessitating the extensive use of non-parametric statistics. Residents' expressions of satisfaction with the component elements of the environment (ranging from convenience to upkeep and privacy) were successfully related to 'objective' measures of the environment. However, the survey evidence did not justify the use of the 'objective' variables as environmental standards. A method of using the social survey data directly as an aid to decision-making is discussed. Alternative models of the derivation of overall satisfaction with the environment are tested, and the values implied by the additive model compared with residents' preferences as measured directly in the survey. Residents' overall satisfaction with the residential environment was most closely related to their satisfaction with the "Appearance" and the "Reputation" of their areas. By contrast, the most important directly measured preference was "Friendliness of area". The differences point to the need to define concepts used in social research clearly in operational terms, and to take care in the use of values 'measured' by different methods.
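
Given that the responses were treated as ordinal, the workhorse statistic for relating satisfaction ratings to 'objective' environmental measures would be a rank correlation. A minimal sketch follows; the data are invented for illustration and are not the survey's actual responses.

```python
from scipy import stats

# Non-parametric association suited to ordinal survey data:
# Spearman rank correlation between residents' satisfaction ratings
# (1-5 scale) and an 'objective' measure such as an upkeep score.
satisfaction = [4, 2, 5, 3, 1, 4, 2, 5, 3, 4]                 # ordinal ratings
upkeep_score = [7.2, 3.1, 8.5, 5.0, 2.4, 6.8, 3.9, 9.1, 4.7, 7.0]

rho, p = stats.spearmanr(satisfaction, upkeep_score)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```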

Relevance: 10.00%

Abstract:

This research investigated expertise in hazardous substance risk assessment (HSRA). Competent, pro-active risk assessment is needed to prevent future occupational ill-health caused by hazardous substance exposure. In recent years there has been a strong demand for HSRA expertise and a shortage of expert practitioners. The discipline of Occupational Hygiene was identified as the key repository of knowledge and skills for HSRA, and one objective of this research was to develop a method to elicit this expertise from experienced occupational hygienists. In the study of generic expertise, many methods of knowledge elicitation (KE) have been investigated, since this has been relevant to the development of 'expert systems' (thinking computers). Here, knowledge needed to be elicited from human experts, and this stage was often a bottleneck in system development, since experts could not explain the basis of their expertise. At an intermediate stage, the information collected was used to structure a basic model of hazardous substance risk assessment activity (HSRA Model B), and this formed the basis of tape transcript analysis in the main study, with derivation of a 'classification' and a 'performance matrix'. The study aimed to elicit the expertise of occupational hygienists and compare their performance with that of other health and safety professionals (occupational health physicians, occupational health nurses, health and safety practitioners and trainee health and safety inspectors), as evaluated using the matrix. As a group, the hygienists performed best in the exercise, and were particularly good at process elicitation and at recommending specific control measures, although the other groups also performed well in selected aspects of the matrix and the work provided useful findings and insights. From the research, two models of HSRA have been derived, together with an HSRA aid, a novel videotape KE technique and interesting research findings. The implications of this are discussed with respect to the future training of health and safety professionals and the wider application of the videotape KE method.

Relevance: 10.00%

Abstract:

Time after time… and aspect and mood. Over the last twenty-five years, the study of time, aspect and - to a lesser extent - mood acquisition has enjoyed increasing popularity and a constant widening of its scope. In such a teeming field, what can be the contribution of this book? We believe that it is unique in several respects. First, this volume encompasses studies from different theoretical frameworks: functionalism vs generativism, or function-based vs form-based approaches. It also brings together various sub-fields (first and second language acquisition, child and adult acquisition, bilingualism) that tend to evolve in parallel rather than learn from each other. A further originality is that it focuses on a wide range of typologically different languages, and features less studied languages such as Korean and Bulgarian. Finally, the book gathers some well-established scholars, young researchers, and even research students, in a rich inter-generational exchange that ensures the survival but also the renewal and refreshment of the discipline.

The book at a glance. The first part of the volume is devoted to the study of child language acquisition in monolingual, impaired and bilingual acquisition, while the second part focuses on adult learners. In this section, we provide an overview of each chapter. The first study, by Aviya Hacohen, explores the acquisition of compositional telicity in Hebrew L1. Her psycholinguistic approach contributes valuable data to refine theoretical accounts. Through an innovative methodology, she gathers information from adults and children on the influence of definiteness, number, and the mass vs countable distinction on the constitution of a telic interpretation of the verb phrase. She notices that the notion of definiteness is mastered by children as young as 10, while the mass/count distinction does not appear before 10;7. However, this does not entail an adult-like use of telicity. She therefore concludes that, beyond definiteness and noun type, pragmatics may play an important role in the derivation of Hebrew compositional telicity. For the second chapter we move from a Semitic language to a Slavic one. Milena Kuehnast focuses on the acquisition of negative imperatives in Bulgarian, a form that is grammatical only with the imperfective form of the verb. The study examines how 40 Bulgarian children, distributed across two age groups (15 aged 2;11-3;11 and 25 aged 4;00-5;00), develop with respect to the acquisition of imperfective viewpoints and the use of imperfective morphology. It shows an evolution in the recourse to the expression of force in the use of negative imperatives, as well as the influence of morphological complexity on the successful production of forms. With Yi-An Lin’s study, we turn to another type of informant and another framework. Indeed, he studies the production of children suffering from Specific Language Impairment (SLI), a developmental language disorder the causes of which exclude cognitive impairment, psycho-emotional disturbance, and motor-articulatory disorders. Using the Leonard corpus in CLAN, Lin aims to test two competing accounts of SLI (the Agreement and Tense Omission Model [ATOM] and his own Phonetic Form Deficit Model [PFDM]) that conflict on the role attributed to spellout in the impairment.
Spellout is the point at which the Computational System for Human Language (CHL) passes over the most recently derived part of the derivation to the interface components, Phonetic Form (PF) and Logical Form (LF). ATOM claims that SLI sufferers have a deficit in their syntactic representation, while PFDM suggests that the problem only occurs at the spellout level. After studying the corpus from the point of view of tense/agreement marking, case marking, argument movement and auxiliary inversion, Lin finds further support for his model. Olga Gupol, Susan Rothstein and Sharon Armon-Lotem’s chapter offers a welcome bridge between child language acquisition and multilingualism. Their study explores the influence of intensive exposure to L2 Hebrew on the development of L1 Russian tense and aspect morphology through an elicited narrative. Their informants are 40 Russian-Hebrew sequential bilingual children distributed across two age groups, 4;0-4;11 and 7;0-8;0. They come to the conclusion that bilingual children anchor their narratives in the perfective, like monolinguals. However, while aware of grammatical aspect, bilinguals lack the full form-function mapping and tend to overgeneralize the imperfective on the principles of simplicity (as imperfectives are the least morphologically marked forms), universality (as it covers more functions) and interference. Rafael Salaberry opens the second section, on foreign language learners. In his contribution, he reflects on the difficulty L2 learners of Spanish encounter when it comes to distinguishing between iterativity (conveyed with the use of the preterite) and habituality (expressed through the imperfect). He examines in turn the theoretical views that see, on the one hand, habituality as part of grammatical knowledge and iterativity as pragmatic knowledge, and, on the other hand, both habituality and iterativity as grammatical knowledge. He comes to the conclusion that the use of the preterite as a default past tense marker may explain the impoverished system of aspectual distinctions, not only at beginner but also at advanced levels, which may indicate that the system is differentially represented among L1 and L2 speakers. Acquiring the vast array of functions conveyed by a form is therefore no mean feat, as confirmed by the next study. Based on prototype theory, Kathleen Bardovi-Harlig’s chapter focuses on the development of the progressive in L2 English. It opens with an overview of the functions of the progressive in English. Then a review of acquisition research on the progressive in English and other languages is provided. The bulk of the chapter reports on a longitudinal study of 16 learners of L2 English and shows how their use of the progressive expands from the prototypical uses of process and continuousness to the less prototypical uses of repetition and future. The study concludes that the progressive spreads in interlanguage in accordance with prototype accounts. However, it suggests additional stages, not predicted by the Aspect Hypothesis, in the development from activities and accomplishments, at least for the meaning of repeatedness. A similar theoretical framework is adopted in the following chapter, but it deals with a lesser studied language. Hyun-Jin Kim revisits the claims of the Aspect Hypothesis in relation to the acquisition of L2 Korean by two L1 English learners.
Inspired by studies on L2 Japanese, she focuses on the emergence and spread of the past/perfective marker -ess- and the progressive -ko iss- in the interlanguage of her informants throughout their third and fourth semesters of study. The data, collected through six sessions of conversational interviews and picture description tasks, seem to support the Aspect Hypothesis. Indeed, learners show a strong association between past tense and accomplishments/achievements at the start and a gradual extension to other types; a limited use of the past/perfective marker with states; and an affinity of the progressive with activities/accomplishments and, later, achievements. In addition, -ko iss- moves from progressive to resultative in the specific category of Korean verbs meaning wear/carry. While the previous contributions focus on function, Evgeniya Sergeeva and Jean-Pierre Chevrot’s contribution is interested in form. The authors explore the acquisition of verbal morphology in L2 French by 30 instructed native speakers of Russian distributed across low and high proficiency levels. They use an elicitation task for verbs with different models of stem alternation and study how token frequency and base forms influence stem selection. The analysis shows that frequency affects correct production, especially among learners with high proficiency. As for substitution errors, it appears that forms with a simple structure are systematically more frequent than the target forms they replace. When a complex form serves as a substitute, it is more frequent only when it is replacing another complex form. As regards the use of base forms, the 3rd person singular of the present - and to some extent the infinitive - play this role in the corpus. The authors therefore conclude that the processing of surface forms can be influenced positively or negatively by the frequency of the target forms and of other competing stems, and by the proximity of the target stem to a base form. Finally, Martin Howard’s contribution takes up the challenge of focusing on the poorer relation of the TAM system. On the basis of L2 French data obtained through sociolinguistic interviews, he studies the expression of futurity, the conditional and the subjunctive in three groups of university learners with classroom teaching only (two or three years of university teaching) or with a mixture of classroom teaching and naturalistic exposure (2 years at university + 1 year abroad). An analysis of relative frequencies leads him to suggest a continuum of use going from the futurate present to the conditional with past hypothetical conditional clauses in si, which needs to be confirmed by further studies.

Acknowledgements. The present volume was inspired by the conference Acquisition of Tense - Aspect - Mood in First and Second Language, held on 9th and 10th February 2008 at Aston University (Birmingham, UK), where over 40 delegates from four continents and over a dozen countries met for lively and enjoyable discussions.
This collection of papers was double peer-reviewed by an international scientific committee made up of Kathleen Bardovi-Harlig (Indiana University), Christine Bozier (Lund Universitet), Alex Housen (Vrije Universiteit Brussel), Martin Howard (University College Cork), Florence Myles (Newcastle University), Urszula Paprocka (Catholic University of Lublin), †Clive Perdue (Université Paris 8), Michel Pierrard (Vrije Universiteit Brussel), Rafael Salaberry (University of Texas at Austin), Suzanne Schlyter (Lund Universitet), Richard Towell (Salford University), and Daniel Véronique (Université d’Aix-en-Provence). We are very much indebted to that scientific committee for their insightful input at each step of the project. We are also thankful for the financial support of the Association for French Language Studies through its workshop grant, and to the Aston Modern Languages Research Foundation for funding the proofreading of the manuscript.

Relevance: 10.00%

Abstract:

This thesis presents research within empirical financial economics, with a focus on liquidity and portfolio optimisation in the stock market. The discussion of liquidity is focused on measurement issues, including TAQ data processing and the measurement of systematic liquidity factors; portfolio optimisation is treated within the full-scale optimisation (FSO) framework. Furthermore, a framework for treatment of the two topics in combination is provided. The liquidity part of the thesis gives a conceptual background to liquidity and discusses several different approaches to liquidity measurement. It contributes to liquidity measurement by providing detailed guidelines on the data processing needed for applying TAQ data to liquidity research. The main focus, however, is the derivation of systematic liquidity factors. The principal component approach to systematic liquidity measurement is refined by the introduction of moving and expanding estimation windows, allowing for time-varying liquidity co-variances between stocks. Under several liquidity specifications, this improves the ability to explain stock liquidity and returns, as compared to static-window PCA and market-average approximations of systematic liquidity. The highest ability to explain stock returns is obtained when using inventory cost as the liquidity measure and a moving-window PCA as the systematic liquidity derivation technique. Systematic factors of this setting also have a strong ability to explain cross-sectional liquidity variation. Portfolio optimisation in the FSO framework is tested in two empirical studies. These contribute to the assessment of FSO by expanding its applicability to stock indexes and individual stocks, by considering a wide selection of utility function specifications, and by showing explicitly how the full-scale optimum can be identified using either grid search or the heuristic search algorithm of differential evolution. The studies show that, relative to mean-variance portfolios, FSO performs well in these settings and that the computational expense can be mitigated dramatically by the application of differential evolution.
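
A minimal sketch of the FSO idea may be useful: choose portfolio weights that maximise average utility over historical return scenarios, searched with differential evolution rather than grid search. The simulated returns, the power-utility specification and the risk-aversion parameter below are illustrative assumptions, not the thesis's data or exact setup.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Full-scale optimisation (FSO) sketch: maximise the scenario-average
# utility of end-of-period wealth over portfolio weights.
rng = np.random.default_rng(0)
R = rng.normal(0.0005, 0.01, size=(500, 5))   # 500 days x 5 assets (simulated)

def neg_expected_utility(w):
    w = np.abs(w) / np.abs(w).sum()                # long-only, fully invested
    wealth = 1.0 + R @ w                           # wealth per scenario
    gamma = 5.0                                    # risk aversion (assumed)
    u = (wealth ** (1 - gamma) - 1) / (1 - gamma)  # power (CRRA) utility
    return -u.mean()                               # minimise the negative

res = differential_evolution(neg_expected_utility, bounds=[(0, 1)] * 5, seed=1)
w_opt = np.abs(res.x) / np.abs(res.x).sum()
print("FSO weights:", np.round(w_opt, 3))
```

Unlike mean-variance optimisation, nothing here restricts the return distribution or the utility function, which is the appeal of FSO; differential evolution handles the resulting non-smooth search space.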

Relevance: 10.00%

Abstract:

Natural language understanding (NLU) aims to map sentences to their semantic meaning representations. Statistical approaches to NLU normally require fully-annotated training data where each sentence is paired with its word-level semantic annotations. In this paper, we propose a novel learning framework which trains Hidden Markov Support Vector Machines (HM-SVMs) without the use of expensive fully-annotated data. In particular, our learning approach takes as input a training set of sentences labeled with abstract semantic annotations encoding underlying embedded structural relations, and automatically induces derivation rules that map sentences to their semantic meaning representations. The proposed approach has been tested on the DARPA Communicator data and achieved an F-measure of 93.18%, which outperforms the previously proposed approaches of training the hidden vector state model or conditional random fields from unaligned data, with relative error reduction rates of 43.3% and 10.6% respectively.

Relevance: 10.00%

Abstract:

Natural language understanding aims to specify a computational model that maps sentences to their semantic meaning representations. In this paper, we propose a novel framework to train statistical models without using expensive fully-annotated data. In particular, the input to our framework is a set of sentences labeled with abstract semantic annotations. These annotations encode the underlying embedded semantic structural relations without explicit word/semantic-tag alignment. The proposed framework can automatically induce derivation rules that map sentences to their semantic meaning representations. The learning framework is applied to two statistical models, conditional random fields (CRFs) and hidden Markov support vector machines (HM-SVMs). Our experimental results on the DARPA Communicator data show that both CRFs and HM-SVMs outperform the baseline approach, the previously proposed hidden vector state (HVS) model, which is also trained on abstract semantic annotations. In addition, the proposed framework shows superior performance to two other baseline approaches, a hybrid framework combining HVS and HM-SVMs and discriminative training of HVS, with relative error reduction rates of about 25% and 15% being achieved in F-measure.
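
For readers unfamiliar with the supervised end of this pipeline, the sketch below trains a plain linear-chain CRF on word-level semantic tags. It deliberately shows the fully supervised case that these papers avoid needing; the toy sentence, ATIS-style labels and feature set are all illustrative assumptions, and it assumes the sklearn-crfsuite package is available.

```python
import sklearn_crfsuite

# Fully supervised linear-chain CRF tagging words with semantic labels.
# The papers' contribution is inducing such mappings from abstract
# (unaligned) annotations, which is NOT what this sketch does.
def features(sent, i):
    w = sent[i]
    return {"word": w.lower(), "is_digit": w.isdigit(),
            "prev": sent[i - 1].lower() if i > 0 else "<s>"}

train_sents = [["flights", "from", "boston", "to", "denver"]]
train_tags  = [["O", "O", "FROMLOC", "O", "TOLOC"]]   # toy ATIS-style labels

X = [[features(s, i) for i in range(len(s))] for s in train_sents]
y = train_tags

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X)[0])   # word-level semantic tag sequence
```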

Relevance: 10.00%

Abstract:

As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents. Like many new technologies, however, online questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors. Although, like all survey errors, coverage error (“the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey” [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. Indicating that familiarity with information technologies is increasing, these trends suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing positively reinforce the advantages of online questionnaire delivery. The second error type – the non-response error – occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today’s societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be relatively easily addressed. Unlike traditional questionnaires, the delivery mechanism for online questionnaires makes estimation of questionnaire length and the time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online questionnaire, it is possible to facilitate such estimation – and indeed, to provide respondents with context-sensitive assistance during the response process – and thereby reduce abandonment while eliciting feelings of accomplishment [6]. For online questionnaires, sampling error (“the result of attempting to survey only some, and not all, of the units in the survey population” [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors (“the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained” [2, pg. 11]) will lead to respondents becoming confused and frustrated.
Sampling, measurement, and non-response errors are likely to occur when an online questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online questionnaire delivery will not be fully realized. To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online questionnaires. Many design guidelines exist for paper-based questionnaire design (e.g. [7-14]); the same is not true for the design of online questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online questionnaires reduce traditional delivery costs (e.g. paper, mail-out, and data entry), set-up costs can be high given the need to either adopt and acquire training in questionnaire development software or secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge in questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.