957 results for six minute step test
Abstract:
Photo album title: Three Men in a Six. President H. B. Joy; Chief Engineer Russell Huff; General Superintendent B. F. Roberts
Abstract:
To understand performance of evasive and interceptive actions it is important to know how people decide when to initiate a movement - initiating at the 'right' moment is often essential for successful performance. It has been proposed that initiation is triggered when a perceptually derived quantity reaches an invariant criterion value. Candidate quantities include time-to-collision (TTC), distance, and rate of image expansion (ROE), all of which have received empirical support. We studied initiation of an evasive manoeuvre in a computer-simulated steering task in which the observer was required to steer through a stationary visual environment and avoid colliding with an obstacle in their path. The results could not be explained by hypotheses which propose that evasive manoeuvre initiation is based on a fixed criterion value of TTC or distance. The overall pattern was, however, consistent with the use of a criterion ROE value. This was further tested by analyses designed to directly evaluate whether the ROE value used to initiate the response was the same across experimental conditions. Only two of the six participants showed evidence for using the ROE strategy.
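The candidate trigger variables can be made concrete numerically. For an obstacle of physical size S at distance Z closing at speed v, TTC = Z/v, while under the small-angle approximation the image's angular size is about S/Z, so ROE ≈ S·v/Z². The values below are hypothetical, purely to illustrate why a fixed-ROE criterion predicts initiation at different TTCs for different approach speeds:

```python
def ttc(distance, speed):
    """Time-to-collision: distance to obstacle / closing speed."""
    return distance / speed

def roe(size, distance, speed):
    """Rate of image expansion under the small-angle approximation.

    Angular size theta ~ size / distance, so
    d(theta)/dt = size * speed / distance**2.
    """
    return size * speed / distance ** 2

# Hypothetical values: the same obstacle approached at two speeds
# reaches a fixed ROE criterion at different distances and TTCs,
# which is how the competing hypotheses can be teased apart.
size = 1.0            # obstacle width, m (assumed)
criterion_roe = 0.05  # rad/s (assumed threshold)

for speed in (10.0, 20.0):   # m/s
    # Distance at which ROE crosses the criterion:
    # size * speed / d**2 = criterion  =>  d = sqrt(size * speed / criterion)
    d = (size * speed / criterion_roe) ** 0.5
    print(f"speed {speed:4.1f} m/s -> criterion at d = {d:5.1f} m, TTC = {ttc(d, speed):.2f} s")
```

A fixed-TTC criterion would instead predict initiation at the same TTC regardless of speed, which is the dissociation the experiment exploits.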
Abstract:
Dizziness and/or unsteadiness, associated with episodes of loss of balance, are frequent complaints in those suffering from persistent problems following a whiplash injury. Research has been inconclusive with respect to possible aetiology, discriminative tests and the analyses used. The aim of this pilot research was to identify the test conditions and the most appropriate method for the analysis of sway that may differentiate subjects with persistent whiplash associated disorders (WAD) from healthy controls. The six conditions of the Clinical Test for Sensory Interaction in Balance were performed in both comfortable and tandem stance in 20 subjects with persistent WAD and 20 control subjects. The analyses compared a traditional method of measurement, total sway distance, with results obtained from the use of wavelet analysis. Subjects with WAD were significantly less able to complete the tandem stance tests on a firm surface than controls. In comfortable stance, using wavelet analysis, significant differences between subjects with WAD and the control group were evident in total energy of the trace for all test conditions apart from eyes open on the firm surface. In contrast, the analysis using total sway distance revealed no significant differences between groups across all six conditions. Wavelet analysis may be more appropriate for detecting disturbances in balance in whiplash subjects because the technique allows separation of the noise from the underlying systematic effect of sway. These findings will be used to direct future studies on the aetiology of balance disturbances in WAD. (c) 2004 Elsevier B.V. All rights reserved.
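The rationale for wavelet analysis, separating broadband noise from the systematic component of sway, can be illustrated with an orthonormal Haar decomposition. This is only a sketch of the idea, not the authors' actual pipeline, and the sway traces are invented:

```python
import random

def haar_step(x):
    """One level of the orthonormal Haar transform: (approximation, detail)."""
    s = 2 ** -0.5
    approx = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def detail_energy(signal, levels=3):
    """Total energy in the detail (fine-scale) bands.

    Signal length must be divisible by 2**levels; the orthonormal
    transform preserves total energy across bands.
    """
    energy = 0.0
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        energy += sum(d * d for d in detail)
    return energy

# Invented sway traces: a slow drift alone vs. the same drift plus jitter.
random.seed(0)
drift = [0.1 * t for t in range(64)]
noisy = [x + random.gauss(0.0, 0.5) for x in drift]
print(f"smooth trace detail energy: {detail_energy(drift):.2f}")
print(f"noisy trace detail energy:  {detail_energy(noisy):.2f}")
```

A summary measure like total sway distance mixes both components, whereas band-wise energies let slow systematic sway and fast noise be examined separately.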
Abstract:
The focus of this research was defined by a poorly characterised filtration train employed to clarify culture broth containing monoclonal antibodies secreted by GS-NSO cells: the filtration train blinded unpredictably and the ability of the positively charged filters to adsorb DNA from process material was unknown. To direct the development of an assay to quantify the ability of depth filters to adsorb DNA, the molecular weight of DNA from a large-scale, fed-batch, mammalian cell culture vessel was evaluated as process material passed through the initial stages of the purification scheme. High molecular weight DNA was substantially cleared from the broth after passage through a disc stack centrifuge and the remaining low molecular weight DNA was largely unaffected by passage through a series of depth filters and a sterilising grade membrane. Removal of high molecular weight DNA was shown to be coupled with clarification of the process stream. The DNA from cell culture supernatant showed a pattern of internucleosomal cleavage of chromatin when fractionated by electrophoresis, but the presence of both necrotic and apoptotic cells throughout the fermentation meant that the origin of the fragmented DNA could not be unequivocally determined. An intercalating fluorochrome, PicoGreen, was selected for development of a suitable DNA assay because of its ability to respond to low molecular weight DNA. It was assessed for its ability to determine the concentration of DNA in clarified mammalian cell culture broths containing pertinent monoclonal antibodies. Fluorescent signal suppression was ameliorated by sample dilution or by performing the assay above the pI of secreted IgG. The source of fluorescence in clarified culture broth was validated by incubation with RNase A and DNase I. At least 89.0% of fluorescence was attributable to nucleic acid, and pre-digestion with RNase A was shown to be a requirement for successful quantification of DNA in such samples.
Application of the fluorescence-based assay resulted in characterisation of the physical parameters governing adsorption of DNA by various positively charged depth filters and membranes in test solutions, and of the DNA adsorption profile of the manufacturing-scale filtration train. Buffers that reduced or neutralised the depth filter or membrane charge, and those that impeded hydrophobic interactions, were shown to affect their operational capacity, demonstrating that DNA was adsorbed by a combination of electrostatic and hydrophobic interactions. Production-scale centrifugation of harvest broth containing therapeutic protein reduced total DNA in the process stream from 79.8 μg ml-1 to 9.3 μg ml-1, whereas the concentration of DNA in the supernatant of pre- and post-filtration samples was only marginally reduced: from 6.3 to 6.0 μg ml-1 respectively. Hence the filtration train was shown to be ineffective in DNA removal. Historically, blinding of the depth filters had been unpredictable, with data such as numbers of viable cells, non-viable cells, product titre, or process shape (batch, fed-batch, or draw and fill) failing to inform on the durability of depth filters in the harvest step. To investigate this, key fouling contaminants were identified by challenging depth filters with the same mass of one of the following: viable healthy cells, cells that had died by the process of apoptosis, and cells that had died through the process of necrosis. The pressure increase across a Cuno Zeta Plus 10SP depth filter was 2.8 and 16.5 times more sensitive to debris from apoptotic and necrotic cells respectively, when compared to viable cells. The condition of DNA released into the culture broth was assessed. Necrotic cells released predominantly high molecular weight DNA, in contrast to apoptotic cells, which released chiefly low molecular weight DNA.
The blinding of the filters was found to be largely unaffected by variations in the particle size distribution of material in, and viscosity of, solutions with which they were challenged. The exceptional response of the depth filters to necrotic cells may suggest the cause of previously noted unpredictable filter blinding whereby a number of necrotic cells have a more significant impact on the life of a depth filter than a similar number of viable or apoptotic cells. In a final set of experiments the pressure drop caused by non-viable necrotic culture broths which had been treated with DNase I or benzonase was found to be smaller when compared to untreated broths: the abilities of the enzyme treated cultures to foul the depth filter were reduced by 70.4% and 75.4% respectively indicating the importance of DNA in the blinding of the depth filter studied.
Abstract:
Time after time… and aspect and mood. Over the last twenty-five years, the study of time, aspect and - to a lesser extent - mood acquisition has enjoyed increasing popularity and a constant widening of its scope. In such a teeming field, what can be the contribution of this book? We believe that it is unique in several respects. First, this volume encompasses studies from different theoretical frameworks: functionalism vs generativism, or function-based vs form-based approaches. It also brings together various sub-fields (first and second language acquisition, child and adult acquisition, bilingualism) that tend to evolve in parallel rather than learn from each other. A further originality is that it focuses on a wide range of typologically different languages, and features less studied languages such as Korean and Bulgarian. Finally, the book gathers some well-established scholars, young researchers, and even research students, in a rich inter-generational exchange that ensures not only the survival but also the renewal and refreshment of the discipline.

The book at a glance

The first part of the volume is devoted to the study of child language acquisition in monolingual, impaired and bilingual acquisition, while the second part focuses on adult learners. In this section, we provide an overview of each chapter. The first study, by Aviya Hacohen, explores the acquisition of compositional telicity in Hebrew L1. Her psycholinguistic approach contributes valuable data to refine theoretical accounts. Through an innovative methodology, she gathers information from adults and children on the influence of definiteness, number, and the mass vs countable distinction on the constitution of a telic interpretation of the verb phrase. She notices that the notion of definiteness is mastered by children as young as 10, while the mass/count distinction does not appear before 10;7. However, this does not entail an adult-like use of telicity.
She therefore concludes that beyond definiteness and noun type, pragmatics may play an important role in the derivation of Hebrew compositional telicity. For the second chapter we move from a Semitic language to a Slavic one. Milena Kuehnast focuses on the acquisition of negative imperatives in Bulgarian, a form that presents the specificity of being grammatical only with the imperfective form of the verb. The study examines how 40 Bulgarian children distributed in two age groups (15 between 2;11 and 3;11, and 25 between 4;00 and 5;00) develop with respect to the acquisition of imperfective viewpoints and the use of imperfective morphology. It shows an evolution in the recourse to expression of force in the use of negative imperatives, as well as the influence of morphological complexity on the successful production of forms. With Yi-An Lin’s study, we turn to another type of informant and another framework. Indeed, he studies the production of children suffering from Specific Language Impairment (SLI), a developmental language disorder the causes of which exclude cognitive impairment, psycho-emotional disturbance, and motor-articulatory disorders. Using the Leonard corpus in CLAN, Lin aims to test two competing accounts of SLI (the Agreement and Tense Omission Model [ATOM] and his own Phonetic Form Deficit Model [PFDM]) that conflict on the role attributed to spellout in the impairment. Spellout is the point at which the Computational System for Human Language (CHL) passes over the most recently derived part of the derivation to the interface components, Phonetic Form (PF) and Logical Form (LF). ATOM claims that SLI sufferers have a deficit in their syntactic representation, while PFDM suggests that the problem only occurs at the spellout level. After studying the corpus from the point of view of tense/agreement marking, case marking, argument movement and auxiliary inversion, Lin finds further support for his model.
Olga Gupol, Susan Rohstein and Sharon Armon-Lotem’s chapter offers a welcome bridge between child language acquisition and multilingualism. Their study explores the influence of intensive exposure to L2 Hebrew on the development of L1 Russian tense and aspect morphology through an elicited narrative. Their informants are 40 Russian-Hebrew sequential bilingual children distributed in two age groups, 4;0-4;11 and 7;0-8;0. They come to the conclusion that bilingual children anchor their narratives in the perfective, like monolinguals. However, while aware of grammatical aspect, bilinguals lack the full form-function mapping and tend to overgeneralize the imperfective on the principles of simplicity (as imperfectives are the least morphologically marked forms), universality (as the imperfective covers more functions) and interference. Rafael Salaberry opens the second section, on foreign language learners. In his contribution, he reflects on the difficulty L2 learners of Spanish encounter when it comes to distinguishing between iterativity (conveyed with the use of the preterite) and habituality (expressed through the imperfect). He examines in turn the theoretical views that see, on the one hand, habituality as part of grammatical knowledge and iterativity as pragmatic knowledge, and, on the other hand, both habituality and iterativity as grammatical knowledge. He comes to the conclusion that the use of the preterite as a default past tense marker may explain the impoverished system of aspectual distinctions, not only at beginner but also at advanced levels, which may indicate that the system is differentially represented among L1 and L2 speakers. Acquiring the vast array of functions conveyed by a form is therefore no mean feat, as confirmed by the next study. Based on prototype theory, Kathleen Bardovi-Harlig’s chapter focuses on the development of the progressive in L2 English. It opens with an overview of the functions of the progressive in English.
Then, a review of acquisition research on the progressive in English and other languages is provided. The bulk of the chapter reports on a longitudinal study of 16 learners of L2 English and shows how their use of the progressive expands from the prototypical uses of process and continuousness to the less prototypical uses of repetition and future. The study concludes that the progressive spreads in interlanguage in accordance with prototype accounts. However, it suggests additional stages, not predicted by the Aspect Hypothesis, in the development from activities and accomplishments, at least for the meaning of repeatedness. A similar theoretical framework is adopted in the following chapter, but it deals with a lesser studied language. Hyun-Jin Kim revisits the claims of the Aspect Hypothesis in relation to the acquisition of L2 Korean by two L1 English learners. Inspired by studies on L2 Japanese, she focuses on the emergence and spread of the past/perfective marker –ess– and the progressive –ko iss– in the interlanguage of her informants throughout their third and fourth semesters of study. The data collected through six sessions of conversational interviews and picture description tasks seem to support the Aspect Hypothesis. Indeed, learners show a strong association between past tense and accomplishments/achievements at the start and a gradual extension to other types; a limited use of the past/perfective marker with states; and an affinity of the progressive with activities/accomplishments and later achievements. In addition, –ko iss– moves from progressive to resultative in the specific category of Korean verbs meaning wear/carry. While the previous contributions focus on function, Evgeniya Sergeeva and Jean-Pierre Chevrot’s is interested in form. The authors explore the acquisition of verbal morphology in L2 French by 30 instructed native speakers of Russian distributed across low and high proficiency levels.
They use an elicitation task for verbs with different models of stem alternation and study how token frequency and base forms influence stem selection. The analysis shows that frequency affects correct production, especially among learners with high proficiency. As for substitution errors, it appears that forms with a simple structure are systematically more frequent than the target forms they replace. When a complex form serves as a substitute, it is more frequent only when it is replacing another complex form. As regards the use of base forms, the 3rd person singular of the present - and to some extent the infinitive - play this role in the corpus. The authors therefore conclude that the processing of surface forms can be influenced positively or negatively by the frequency of the target forms and of other competing stems, and by the proximity of the target stem to a base form. Finally, Martin Howard’s contribution takes up the challenge of focusing on the poorer relation of the TAM system. On the basis of L2 French data obtained through sociolinguistic interviews, he studies the expression of futurity, conditional and subjunctive in three groups of university learners with classroom teaching only (two or three years of university teaching) or with a mixture of classroom teaching and naturalistic exposure (two years at university plus one year abroad). An analysis of relative frequencies leads him to suggest a continuum of use going from the futurate present to the conditional, with past hypothetical conditional clauses in si, which needs to be confirmed by further studies.

Acknowledgements

The present volume was inspired by the conference Acquisition of Tense – Aspect – Mood in First and Second Language held on 9th and 10th February 2008 at Aston University (Birmingham, UK), where over 40 delegates from four continents and over a dozen countries met for lively and enjoyable discussions.
This collection of papers was double peer-reviewed by an international scientific committee made of Kathleen Bardovi-Harlig (Indiana University), Christine Bozier (Lund Universitet), Alex Housen (Vrije Universiteit Brussel), Martin Howard (University College Cork), Florence Myles (Newcastle University), Urszula Paprocka (Catholic University of Lublin), †Clive Perdue (Université Paris 8), Michel Pierrard (Vrije Universiteit Brussel), Rafael Salaberry (University of Texas at Austin), Suzanne Schlyter (Lund Universitet), Richard Towell (Salford University), and Daniel Véronique (Université d’Aix-en-Provence). We are very much indebted to that scientific committee for their insightful input at each step of the project. We are also thankful for the financial support of the Association for French Language Studies through its workshop grant, and to the Aston Modern Languages Research Foundation for funding the proofreading of the manuscript.
Abstract:
Background: Screening for congenital heart defects (CHDs) relies on antenatal ultrasound and postnatal clinical examination; however, life-threatening defects often go undetected. Objective: To determine the accuracy, acceptability and cost-effectiveness of pulse oximetry as a screening test for CHDs in newborn infants. Design: A test accuracy study determined the accuracy of pulse oximetry. Acceptability of testing to parents was evaluated through a questionnaire, and to staff through focus groups. A decision-analytic model was constructed to assess cost-effectiveness. Setting: Six UK maternity units. Participants: These were 20,055 asymptomatic newborns at ≥35 weeks’ gestation, their mothers and health-care staff. Interventions: Pulse oximetry was performed prior to discharge from hospital and the results of this index test were compared with a composite reference standard (echocardiography, clinical follow-up and follow-up through interrogation of clinical databases). Main outcome measures: Detection of major CHDs – defined as causing death or requiring invasive intervention up to 12 months of age (subdivided into critical CHDs causing death or intervention before 28 days, and serious CHDs causing death or intervention between 1 and 12 months of age); acceptability of testing to parents and staff; and the cost-effectiveness in terms of cost per timely diagnosis. Results: Fifty-three of the 20,055 babies screened had a major CHD (24 critical and 29 serious), a prevalence of 2.6 per 1000 live births. Pulse oximetry had a sensitivity of 75.0% [95% confidence interval (CI) 53.3% to 90.2%] for critical cases and 49.1% (95% CI 35.1% to 63.2%) for all major CHDs. When 23 cases were excluded, in which a CHD was already suspected following antenatal ultrasound, pulse oximetry had a sensitivity of 58.3% (95% CI 27.7% to 84.8%) for critical cases (12 babies) and 28.6% (95% CI 14.6% to 46.3%) for all major CHDs (35 babies).
False-positive (FP) results occurred in 1 in 119 babies (0.84%) without major CHDs (specificity 99.2%, 95% CI 99.0% to 99.3%). However, of the 169 FPs, there were six cases of significant but not major CHDs and 40 cases of respiratory or infective illness requiring medical intervention. The prevalence of major CHDs in babies with normal pulse oximetry was 1.4 (95% CI 0.9 to 2.0) per 1000 live births, as 27 babies with major CHDs (6 critical and 21 serious) were missed. Parent and staff participants were predominantly satisfied with screening, perceiving it as an important test to detect ill babies. There was no evidence that mothers given FP results were more anxious after participating than those given true-negative results, although they were less satisfied with the test. White British/Irish mothers were more likely to participate in the study, and were less anxious and more satisfied than those of other ethnicities. The incremental cost-effectiveness ratio of pulse oximetry plus clinical examination compared with examination alone is approximately £24,900 per timely diagnosis in a population in which antenatal screening for CHDs already exists. Conclusions: Pulse oximetry is a simple, safe, feasible test that is acceptable to parents and staff and adds value to existing screening. It is likely to identify cases of critical CHDs that would otherwise go undetected. It is also likely to be cost-effective given current acceptable thresholds. The detection of other pathologies, such as significant CHDs and respiratory and infective illnesses, is an additional advantage. Other pulse oximetry techniques, such as perfusion index, may enhance detection of aortic obstructive lesions.
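The headline accuracy figures can be reproduced from counts reconstructed from the abstract (26 of 53 major CHDs detected, 27 missed on normal oximetry, 169 false positives among the 20,002 babies without a major CHD). A quick check:

```python
def sensitivity(tp, fn):
    """Proportion of true cases the test detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of non-cases the test correctly passes."""
    return tn / (tn + fp)

# Counts reconstructed from the abstract's figures.
screened = 20055
major = 53                 # babies with a major CHD
missed = 27                # major CHDs with normal pulse oximetry
detected = major - missed  # 26 detected
false_pos = 169
no_major = screened - major

print(f"sensitivity, all major CHDs: {sensitivity(detected, missed):.1%}")
print(f"specificity: {specificity(no_major - false_pos, false_pos):.1%}")
print(f"false-positive rate: {false_pos / no_major:.2%}")
```

These recover the reported 49.1% sensitivity for all major CHDs, 99.2% specificity, and 0.84% false-positive rate.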
Abstract:
Three dimensions of subordinate-supervisor relations (affective attachment, deference to supervisor, and personal-life inclusion) that had been found by Y. Chen, Friedman, Yu, Fang, and Lu to be characteristic of a guanxi relationship between subordinates and their supervisors in China were surveyed in Taiwan, Singapore, and six non-Chinese cultural contexts. The Affective Attachment and Deference subscales demonstrated full metric invariance whereas the Personal-Life Inclusion subscale was found to have partial metric invariance across all eight samples. Structural equation modeling revealed that the affective attachment dimension had a cross-nationally invariant positive relationship to affective organizational commitment and a negative relationship to turnover intention. The deference to the supervisor dimension had invariant positive relationships with both affective and normative organizational commitment. The personal-life inclusion dimension was unrelated to all outcomes. These results indicate the relevance of aspects of guanxi to superior-subordinate relations in non-Chinese cultures. Studies of indigenous concepts can contribute to a broader understanding of organizational behavior. © The Author(s) 2014.
Abstract:
Purpose: To evaluate the effect of reducing the number of visual acuity measurements made in a defocus curve on the quality of data quantified. Setting: Midland Eye, Solihull, United Kingdom. Design: Evaluation of a technique. Methods: Defocus curves were constructed by measuring visual acuity on a distance logMAR letter chart, randomizing the test letters between lens presentations. The lens powers evaluated ranged between +1.50 diopters (D) and -5.00 D in 0.50 D steps, which were also presented in a randomized order. Defocus curves were measured binocularly with the Tecnis diffractive, Rezoom refractive, Lentis rotationally asymmetric segmented (+3.00 D addition [add]), and Finevision trifocal multifocal intraocular lenses (IOLs) implanted bilaterally, and also for the diffractive IOL and refractive or rotationally asymmetric segmented (+3.00 D and +1.50 D adds) multifocal IOLs implanted contralaterally. Relative and absolute range of clear-focus metrics and area metrics were calculated for curves fitted using 0.50 D, 1.00 D, and 1.50 D steps and a near add-specific profile (ie, distance, half the near add, and the full near-add powers). Results: A significant difference in simulated results was found in at least 1 of the relative or absolute range of clear-focus or area metrics for each of the multifocal designs examined when the defocus-curve step size was increased (P<.05). Conclusion: Faster methods of capturing defocus curves from multifocal IOL designs appear to distort the metric results and are therefore not valid. Financial Disclosure: No author has a financial or proprietary interest in any material or method mentioned. © 2013 ASCRS and ESCRS.
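The area metrics at issue are, in essence, numerical integrals of acuity over defocus power, so coarsening the sampling step changes the computed value. A minimal sketch with an invented defocus curve (the study's actual curves and fitting procedure are more elaborate):

```python
def curve_area(defocus, acuity):
    """Trapezoidal area under a defocus curve (diopters x logMAR).

    Points must be ordered by defocus power.
    """
    area = 0.0
    for i in range(len(defocus) - 1):
        area += 0.5 * (acuity[i] + acuity[i + 1]) * (defocus[i + 1] - defocus[i])
    return area

# Invented curve sampled at 0.50 D steps from -5.00 D to +1.00 D;
# the quadratic acuity profile is hypothetical, not study data.
fine_d = [-5.0 + 0.5 * i for i in range(13)]
fine_a = [0.2 + 0.02 * (d + 1.75) ** 2 for d in fine_d]

coarse_d = fine_d[::3]   # same endpoints, 1.50 D steps
coarse_a = fine_a[::3]

print(f"area, 0.50 D steps: {curve_area(fine_d, fine_a):.3f}")
print(f"area, 1.50 D steps: {curve_area(coarse_d, coarse_a):.3f}")
```

Even over identical endpoints the coarser sampling shifts the computed area, which is the kind of distortion the study reports for faster capture methods.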
Abstract:
Field material testing provides firsthand information on pavement conditions, which is most helpful in evaluating performance and identifying preventive maintenance or overlay strategies. High variability of field asphalt concrete due to construction raises the demand for accuracy of the test. Accordingly, the objective of this study is to propose a reliable and repeatable methodology to evaluate the fracture properties of field-aged asphalt concrete using the overlay test (OT). The OT is selected because of its efficiency and feasibility for asphalt field cores with diverse dimensions. The fracture properties refer to the Paris’ law parameters based on the pseudo J-integral (A and n) because of the sound physical significance of the pseudo J-integral with respect to characterizing the cracking process. In order to determine A and n, a two-step OT protocol is designed to characterize the undamaged and damaged behaviors of asphalt field cores. To ensure the accuracy of the determined undamaged and fracture properties, a new analysis method is then developed for data processing, which combines finite element simulations with mechanical analysis of viscoelastic force equilibrium and the evolution of pseudo displacement work in the OT specimen. Finally, theoretical equations are derived to calculate A and n directly from the OT data. The accuracy of the determined fracture properties is verified. The proposed methodology is applied to a total of 27 asphalt field cores obtained from a field project in Texas, including the control Hot Mix Asphalt (HMA) and two types of warm mix asphalt (WMA). The results demonstrate a high linear correlation between n and −log A for all the tested field cores. Investigations of the effect of field aging on the fracture properties confirm that n is a good indicator to quantify the cracking resistance of asphalt concrete. It is also indicated that summer climatic conditions clearly accelerate the rate of aging.
The impact of the WMA technologies on the fracture properties of asphalt concrete is visualized by comparing the n-values. It shows that the Evotherm WMA technology slightly improves the cracking resistance, while the foaming WMA technology provides fracture properties comparable with those of the HMA. After 15 months of aging in the field, the cracking resistance does not exhibit a significant difference between the HMA and WMAs, which is confirmed by observations of field distresses.
Abstract:
This study developed a reliable and repeatable methodology to evaluate the fracture properties of asphalt mixtures with an overlay test (OT). In the proposed methodology, first, a two-step OT protocol was used to characterize the undamaged and damaged behaviors of asphalt mixtures. Second, a new methodology combining the mechanical analysis of viscoelastic force equilibrium in the OT specimen and finite element simulations was used to determine the undamaged properties and crack growth function of asphalt mixtures. Third, a modified Paris's law replacing the stress intensity factor by the pseudo J-integral was employed to characterize the fracture behavior of asphalt mixtures. Theoretical equations were derived to calculate the parameters A and n (defined as the fracture properties) in the modified Paris's law. The study used a detailed example to calculate A and n from the OT data. The proposed methodology was successfully applied to evaluate the impact of warm-mix asphalt (WMA) technologies on fracture properties. The results of the tested specimens showed that Evotherm WMA technology slightly improved the cracking resistance of asphalt mixtures, while foaming WMA technology provided comparable fracture properties. In addition, the study found that A decreased with the increase in n in general. A linear relationship between 2log(A) and n was established.
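Both this and the preceding abstract report the modified Paris' law parameters A and n and a linear relation between log A and n. A minimal sketch with invented (A, n) pairs shows the crack-growth form and the regression; none of the numbers below are study data:

```python
import math

def crack_growth_rate(pseudo_j, a_coef, n_exp):
    """Modified Paris' law: crack growth per load cycle, dc/dN = A * J**n,
    with the stress intensity factor replaced by the pseudo J-integral."""
    return a_coef * pseudo_j ** n_exp

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y ~ slope*x + intercept."""
    k = len(x)
    mx, my = sum(x) / k, sum(y) / k
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical (A, n) pairs for a set of cores; the reported trend is that
# -log A grows roughly linearly with n (A decreases as n increases).
pairs = [(1e-4, 2.1), (3e-5, 2.6), (8e-6, 3.2), (2e-6, 3.8)]
ns = [n for _, n in pairs]
neg_log_a = [-math.log10(a) for a, _ in pairs]
slope, intercept = linear_fit(ns, neg_log_a)
print(f"-log10(A) ~ {slope:.2f} * n + {intercept:.2f}")
```

Such a log-linear relation means a single parameter (n) largely orders the cores by cracking resistance, which is why the studies use n as the headline indicator.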
Abstract:
This research document is motivated by the need for a systemic, efficient quality improvement methodology at universities. There exists no methodology designed for a total quality management (TQM) program in a university. The main objective of this study is to develop a TQM methodology that enables a university to efficiently develop an integral total quality improvement plan. Current research focuses on the need to improve the quality of universities, the study of the perceived best quality universities, and the measurement of the quality of universities through rankings. There is no evidence of research on how to plan an integral quality improvement initiative for the university as a whole, which is the main contribution of this study. This research is built on various reference TQM models and criteria provided by ISO 9000, Baldrige and Six Sigma, and educational accreditation criteria found in ABET and SACS. The TQM methodology is proposed by following a seven-step meta-methodology. The proposed methodology guides the user to develop a TQM plan in five sequential phases: initiation, assessment, analysis, preparation and acceptance. Each phase defines for the user its purpose, key activities, input requirements, controls, deliverables, and tools to use. The application of quality concepts in education and higher education is particular, since there are unique factors in education which ought to be considered. These factors shape the quality dimensions in a university and are the main inputs to the methodology. The proposed TQM methodology is used to guide the user to collect and transform appropriate inputs into a holistic TQM plan, ready to be implemented by the university. Different input data will lead to a unique TQM plan for the specific university at the time.
It may not necessarily transform the university into a world-class institution, but it aims to strive for stakeholder-oriented improvements, leading to a better alignment with its mission and total quality advancement. The proposed TQM methodology is validated in three steps. First, it is verified by going through a test activity as part of the meta-methodology. Secondly, the methodology is applied to a case university to develop a TQM plan. Lastly, both the methodology and the TQM plan are verified by an expert group consisting of TQM specialists and university administrators. The proposed TQM methodology is applicable to any university at all levels of advancement, regardless of changes in its long-term vision and short-term needs. It helps to assure the quality of a TQM plan, while making the process more systemic, efficient, and cost-effective. This research establishes a framework with a solid foundation for extending the proposed TQM methodology into other industries.
Abstract:
This study is based on rock mechanical tests of samples from platform carbonate strata to document their petrophysical properties and determine their potential for porosity loss by mechanical compaction. Sixteen core-plug samples, including eleven limestones and five dolostones, from Miocene carbonate platforms on the Marion Plateau, offshore northeast Australia, were tested at vertical effective stress, σ1′, of 0-70 MPa, as lateral strain was kept equal to zero. The samples were deposited as bioclastic facies in platform-top settings having paleo-water depths of <10-90 m. They were variably cemented with low-Mg calcite, and five of the samples were dolomitized before burial to present depths of 39-635 m below sea floor with porosities of 8-46%. Ten samples tested under dry conditions had up to 0.22% strain at σ1′ = 50 MPa, whereas six samples tested saturated with brine, under drained conditions, had up to 0.33% strain. The yield strength was reached in five of the plugs. The measured strains show an overall positive correlation with porosity. Vp ranges from 3640 to 5660 m/s and Vs from 1840 to 3530 m/s. The Poisson coefficient is 0.20-0.33 and Young's modulus at 30 MPa ranges between 5 and 40 GPa. Water-saturated samples had lower shear moduli and slightly higher P- to S-wave velocity ratios. Creep at constant stress was observed only in samples affected by pore collapse, indicating propagation of microcracks. Although deposited as loose carbonate sand and mud, the studied carbonates acquired reef-like petrophysical properties through early calcite and dolomite cementation. The small strains observed experimentally at 50 MPa indicate that little mechanical compaction would occur at deeper burial. However, as these rocks are unlikely to preserve their present high porosities to 4-5 km depth, further porosity loss would proceed mainly by chemical compaction and cementation.
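The reported velocities relate to the quoted elastic parameters through the standard dynamic-moduli formulas. The density below is an assumed value for a porous carbonate, not taken from the study, and dynamic moduli generally exceed the static values measured in compaction tests:

```python
def poisson_ratio(vp, vs):
    """Dynamic Poisson's ratio from P- and S-wave velocities (m/s)."""
    return (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))

def young_modulus(rho, vp, vs):
    """Dynamic Young's modulus (Pa) from bulk density (kg/m^3) and velocities."""
    return rho * vs**2 * (3 * vp**2 - 4 * vs**2) / (vp**2 - vs**2)

# Slowest end of the reported velocity range; rho = 2200 kg/m^3 is an
# assumed density for a porous carbonate, not a value from the study.
vp, vs, rho = 3640.0, 1840.0, 2200.0
print(f"Poisson's ratio: {poisson_ratio(vp, vs):.2f}")
print(f"Young's modulus: {young_modulus(rho, vp, vs) / 1e9:.1f} GPa")
```

With these assumed inputs the results fall inside the 0.20-0.33 Poisson range quoted in the abstract, illustrating how the velocity and modulus figures hang together.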
Resumo:
This research analyses the components of the organizational structure of UFRN (Federal University of Rio Grande do Norte) and the extent to which they affect organizational performance. The study, classified as exploratory and descriptive, was conducted in two phases: the first, a pilot test to refine the research instrument and identify the latent components of the organizational structure; the second, to characterize these components and thereby establish relationships with organizational performance. The first phase was conducted in 20 UFRN organizational units with the participation of 84 employees, both technical-administrative staff and teachers, after accounting for missing values and outliers. The second phase occurred in two steps: one conducted with 279 valid cases, consisting of technical-administrative staff and teachers from 37 UFRN units, and another with 112 managers of the institution in the 49 units identified in this research. The instrument adopted in the first phase was composed of 36 indicators of organizational structure, six extracted and adapted from the instrument developed by Medeiros (2003) and 30 prepared from the literature review, drawing on Mintzberg (2012), Hall (1984), Vasconcellos and Hemsley (1997) and Seiffert and Costa (2007), plus 7 performance indicators adapted from Fleury and Mills (2006), Vieira and Vieira (2003) and Kaplan and Norton (1997) and from the self-assessment instrument in use at the university. In this phase the data were analyzed using factor analysis and reliability analysis by means of Cronbach's alpha, aiming to extract the factors representing the components of the organizational structure.
In step 1 of the second phase, the instrument refined and reduced in the previous phase, with 24 organizational structure variables and 6 performance variables, was used; in step 2, a semi-structured interview guide organized into nine organizational structure elements was adopted to gather information for understanding the relationship of structure to performance at UFRN. The techniques used in the second phase were factor analysis and reliability analysis, to characterize the components extracted in the previous phase and to validate the performance variables, and correlation analysis, regression and content analysis, to establish and understand the relationship between structure and performance. The results showed, in both steps, six latent components of organizational structure in the context under study: training and internalization, communication, hierarchy, decentralization, formalization and departmentalization, all with high Cronbach's alpha indexes, which can thereby be characterized as components of UFRN structure. Six performance indicators were validated in this study, proving efficient and highly reliable. Finally, it was found that the formalization, communication, decentralization, and training and internalization components positively affect UFRN performance, while departmentalization has an adverse effect and hierarchy showed no significant relationship. The results achieved in this work are important for future studies to support the development of a structural model that represents the specifics of the university.
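The reliability analysis above rests on Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch of the computation; the Likert responses below are hypothetical, not data from the study:

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(data):
    """Cronbach's alpha for data given as rows = respondents, cols = items."""
    k = len(data[0])                     # number of items
    items = list(zip(*data))             # column-wise item scores
    totals = [sum(row) for row in data]  # per-respondent total score
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# Hypothetical 5-point Likert responses, three items by five respondents:
responses = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
]
print(round(cronbach_alpha(responses), 2))  # → 0.9
```

Values around 0.7 or higher are conventionally read as acceptable internal consistency, which is the sense in which the study reports "high levels of Cronbach's alpha indexes".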
Antecedents of the intention to use online travel reviews in choosing an accommodation
Resumo:
The Internet is present in every step of trip planning, and constant technological advances have brought major changes to the tourism industry. This is noticeable in the growing number of people who share their travel experiences on the Internet. This study aimed to analyze the factors that influence the use of Online Travel Reviews (OTRs) in choosing an accommodation. An investigation was conducted into comments available on the Internet about tourism products and services, specifically accommodations. The research sought to understand the factors influencing OTR use in the Brazilian context through the Technology Acceptance Model, Motivational Theory, Similarity, and Trustworthiness. The methodology was a descriptive-exploratory study with a quantitative approach, supported by bibliographic research. The study used the Structural Equation Modeling technique Partial Least Squares (PLS) to test and evaluate the proposed research model. Data were collected from 308 guests hosted in five hotels in Ponta Negra (Natal/RN) who had used OTRs in choosing an accommodation. The research tested fifteen hypotheses, of which nine were confirmed and six rejected. The results showed that guests have a favorable attitude toward, and intention to use, OTRs when choosing an accommodation.
Resumo:
Fixed-step-size (FSS) and Bayesian staircases are widely used methods for estimating sensory thresholds in 2AFC tasks, although a direct comparison of the two types of procedure under identical conditions had not previously been reported. A simulation study and an empirical test were conducted to compare the performance of optimized Bayesian staircases with that of four optimized variants of the FSS staircase, differing in their up-down rules. The ultimate goal was to determine whether FSS or Bayesian staircases are the better choice in experimental psychophysics. The comparison considered the properties of the estimates (i.e. bias and standard errors) in relation to their cost (i.e. the number of trials to completion). The simulation study showed that mean estimates of Bayesian and FSS staircases are dependable when sufficient trials are given and that, in both cases, the standard deviation (SD) of the estimates decreases with the number of trials, although the SD of Bayesian estimates is always lower than that of FSS estimates (thus, Bayesian staircases are more efficient). The empirical test did not support these conclusions, as (1) neither procedure rendered estimates converging on some value, (2) standard deviations did not follow the expected pattern of decrease with the number of trials, and (3) both procedures appeared to be equally efficient. Potential factors explaining the discrepancies between simulation and empirical results are discussed and, all things considered, a sensible recommendation is for psychophysicists to run no fewer than 18 and no more than 30 reversals of an FSS staircase implementing the 1-up/3-down rule.
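The recommended 1-up/3-down FSS staircase can be sketched as a simulation. Everything below (the simulated observer's logistic psychometric function, the threshold, step size, and starting level) is a hypothetical illustration, not the authors' implementation; the 1-up/3-down rule targets the level at which the probability of three consecutive correct responses is 0.5, i.e. roughly the 79.4%-correct point:

```python
import math
import random

def simulate_staircase(threshold=5.0, slope=1.0, start=10.0, step=0.5,
                       n_reversals=20, seed=42):
    """1-up/3-down FSS staircase run against a simulated 2AFC observer.

    The observer answers correctly with probability given by a logistic
    function with a 0.5 guessing floor. The level moves up one step after
    each incorrect response and down one step after three consecutive
    correct responses; the run stops after n_reversals direction
    reversals, and the threshold estimate is the mean reversal level.
    """
    rng = random.Random(seed)

    def p_correct(level):
        # 2AFC psychometric function: 50% guess rate rising toward 100%.
        return 0.5 + 0.5 / (1 + math.exp(-(level - threshold) / slope))

    level, correct_run, last_direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if rng.random() < p_correct(level):      # correct response
            correct_run += 1
            if correct_run == 3:                 # 3-down: decrease level
                correct_run = 0
                if last_direction == +1:
                    reversals.append(level)      # up-to-down reversal
                last_direction = -1
                level -= step
        else:                                    # 1-up: increase level
            correct_run = 0
            if last_direction == -1:
                reversals.append(level)          # down-to-up reversal
            last_direction = +1
            level += step
    return sum(reversals) / len(reversals)

print(simulate_staircase())
```

With these assumed parameters the estimate should land somewhat above the true 5.0 threshold, near the 79.4%-correct level that the rule converges to; the 18-30 reversal range recommended above corresponds to the `n_reversals` stopping criterion.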