906 results for Placoderm Scales


Relevance:

100.00%

Publisher:

Abstract:

An historical review of the literature relating to placoderm scales preserved in association with articulated dermal plates, or as isolated units in microvertebrate assemblages, is followed by a discussion of their relevance in phylogenetic analyses of the Placodermi. The dentinous tissue forming the tubercles of Early Devonian acanthothoracid scales and dermal bone is similar to that of the dermal bone ornament of some osteostracans, and denticles of the vertebrate Skiichthys from the Ordovician Harding Sandstone. This similarity supports the proposition that the gnathostomes are the sister-group of the Osteostraci, with the Placodermi branching earliest within the gnathostomes, and the Acanthothoraci branching earliest within the Placodermi. The meso-semidentine in acanthothoracid tubercles, rather than semidentine (sensu stricto), is most likely to be synapomorphic for the Placodermi.
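As an informal illustration only, the branching hypothesis supported here can be encoded as a Newick-style tree; the "other_..." labels are placeholders of mine, not taxa discussed in the abstract.

```python
# Informal Newick encoding of the topology argued for above:
# Osteostraci as sister-group of the gnathostomes, Placodermi branching
# earliest within the gnathostomes, Acanthothoraci earliest within Placodermi.
# "other_placoderms" and "other_gnathostomes" are placeholder labels.
newick = "(Osteostraci,((Acanthothoraci,other_placoderms)Placodermi,other_gnathostomes)Gnathostomata);"
print(newick)
```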

Relevance:

60.00%

Publisher:

Abstract:

Calcareous horizons in the Qasr and Hammamiyat members (Lower Devonian, ?Pragian and lower Emsian) of the Jawf Formation, northwestern Saudi Arabia, yielded a rich assemblage of microremains from acanthodian, placoderm, chondrichthyan, and sarcopterygian vertebrates. The most abundant elements are scales from the acanthodians Nostolepis spp., Milesacanthus ancestralis n. sp., Canadalepis? sp., and Gomphonchus? fromensis; scales and dermal bone fragments from acanthothoracid and ?rhenanid placoderms; and teeth from onychodontids. Rarer occurrences include ?chondrichthyan scales of several different morphotypes, and petalichthyid and ?ptyctodontid placoderm elements. The Qasr Member assemblage shows a close resemblance to slightly older faunas from the Lochkovian of Brittany and Spain. The Hammamiyat Member microvertebrate fauna shows closest affinity with that of the stratigraphically lower Qasr Member, with similarities also to coeval faunas from southeastern Australia, late Emsian/Eifelian faunas from west-central Europe, and the Givetian Aztec Siltstone fauna from Antarctica.

Relevance:

20.00%

Publisher:

Abstract:

Principal Topic: It is well known that most new ventures suffer from a significant lack of resources, which increases the risk of failure (Shepherd, Douglas and Shanley, 2000) and makes it difficult to attract stakeholders and financing for the venture (Bhide & Stevenson, 1999). The Resource-Based View (RBV) (Barney, 1991; Wernerfelt, 1984) is a dominant theoretical base increasingly drawn on within strategic management. While theoretical contributions applying RBV in the domain of entrepreneurship can arguably be traced back to Penrose (1959), there has been renewed attention recently (e.g. Alvarez & Busenitz, 2001; Alvarez & Barney, 2004). That said, empirical work is in its infancy, perhaps in part because of a lack of well-developed measuring instruments for testing ideas derived from RBV. The purpose of this study is to develop measurement scales that can assist such empirical investigations. In so doing we try to overcome three deficiencies in current empirical measures used in applying RBV to the entrepreneurship arena.

First, measures need to be developed for the resource characteristics and configurations associated with typical competitive advantages found in entrepreneurial firms. These include alertness and industry knowledge (Kirzner, 1973), flexibility (Ebben & Johnson, 2005), strong networks (Lee et al., 2001) and, within knowledge-intensive contexts, unique technical expertise (Wiklund & Shepherd, 2003). Second, the RBV has the important limitations of being relatively static and modelled on large, established firms; in that context, traditional RBV focuses on competitive advantages. However, newly established firms often face disadvantages, especially those associated with the liabilities of newness (Aldrich & Auster, 1986). It is therefore important in entrepreneurial contexts to extend the analysis to responses to competitive disadvantage through an RBV lens. Conversely, recent research has suggested that resource constraints can actually have a positive effect on firm growth and performance under some circumstances (e.g. George, 2005; Katila & Shane, 2005; Mishina et al., 2004; Mosakowski, 2002; cf. also Baker & Nelson, 2005). Third, current empirical applications of RBV measure the levels or amounts of particular resources available to a firm and infer that these resources deliver competitive advantage by establishing a relationship between resource levels and performance (e.g. via regression on profitability). There is, however, an opportunity to directly measure the characteristics of resource configurations that deliver competitive advantage, such as Barney's well-known VRIO (Valuable, Rare, Inimitable and Organized) framework (Barney, 1997).

Key Propositions and Methods: The aim of our study is to develop and test scales for measuring resource advantages (and disadvantages) and inimitability for entrepreneurial firms. The study proceeds in three stages. The first stage developed our initial scales based on earlier literature; where possible, we adapted scales from previous work. The first block of the scales relates to the level of resource advantages and disadvantages. Respondents were asked the degree to which each resource category represented an advantage or disadvantage relative to other businesses in their industry on a 5-point response scale: Major Disadvantage, Slight Disadvantage, No Advantage or Disadvantage, Slight Advantage and Major Advantage. Items were developed as follows.
Network capabilities (3 items) were adapted from Madsen, Alsos, Borch, Ljunggren and Brastad (2006). Knowledge resources, namely marketing expertise/customer service (3 items) and technical expertise (3 items), were adapted from Wiklund and Shepherd (2003). Flexibility (2 items) and costs (4 items) were adapted from JIBS B97. New scales were developed for industry knowledge/alertness (3 items) and product/service advantages. The second block asked the respondent to nominate the most important resource advantage (and disadvantage) of the firm. For the advantage, they were then asked four questions, on a 5-point Likert scale, to determine how easy it would be for other firms to imitate and/or substitute this resource. For the disadvantage, they were asked corresponding questions about overcoming it. The second stage involved two pre-tests of the instrument to refine the scales. The first was an online convenience sample of 38 respondents. The second pre-test was a telephone interview with a random sample of 31 nascent firms and 47 young firms (< 3 years in operation) generated using a PSED method of randomly calling households (Gartner et al., 2004). Several items were dropped or reworded based on the pre-tests. The third stage (currently in progress) is part of Wave 1 of CAUSEE (nascent firms) and FEDP (young firms), a PSED-type study being conducted in Australia. The scales will be tested and analysed with random samples of approximately 700 nascent and young firms respectively. In addition, a judgement sample of approximately 100 high-potential businesses in each category will be included.

Findings and Implications: The results of the main study (stage 3; data collection currently in progress) will allow comparison of the level of resource advantage/disadvantage across various sub-groups of the population. Of particular interest will be a comparison of the high-potential firms with the random sample. In the smaller pre-tests (N = 38 and N = 78) the factor structure of the items confirmed the distinctiveness of the constructs, and the reliabilities were within an acceptable range (Cronbach alphas from 0.701 to 0.927). The study will provide an opportunity for researchers to better operationalize RBV theory in studies within the domain of entrepreneurship. This is a fundamental requirement for testing hypotheses derived from RBV in systematic, large-scale research studies.
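As a point of reference for the reliabilities quoted above, here is a minimal sketch of how Cronbach's alpha is computed for a block of 5-point items; the ratings matrix is hypothetical, not data from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 respondents rating the 3 network-capability items on
# the 5-point scale (1 = Major Disadvantage ... 5 = Major Advantage).
ratings = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 4],
    [2, 3, 3],
    [4, 4, 5],
])
print(round(cronbach_alpha(ratings), 3))
```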

Relevance:

20.00%

Publisher:

Abstract:

This paper reviews the reliability and validity of visual analogue scales (VAS) in terms of (1) their ability to predict feeding behaviour, (2) their sensitivity to experimental manipulations, and (3) their reproducibility. VAS correlate with, but do not reliably predict, energy intake to the extent that they could be used as a proxy for energy intake. They do predict meal initiation in subjects eating their normal diets in their normal environment. Under laboratory conditions, subjectively rated motivation to eat measured using VAS is sensitive to experimental manipulations and has been found to be reproducible in relation to those experimental regimens. Other work has found them not to be reproducible in relation to repeated protocols. On balance it would appear, in as much as it is possible to quantify, that VAS exhibit a good degree of within-subject reliability and validity, in that they predict meal initiation and amount eaten with reasonable certainty and are sensitive to experimental manipulations. This reliability and validity appears more pronounced under the controlled (but more artificial) conditions of the laboratory, where the signal-to-noise ratio in experiments appears to be elevated relative to real life. It appears that VAS are best used in within-subject, repeated-measures designs, where the effect of different treatments can be compared under similar circumstances. They are best used in conjunction with other measures (e.g. feeding behaviour, changes in plasma metabolites) rather than as proxies for these variables. New hand-held electronic appetite rating systems (EARS) have been developed to increase the reliability of data capture and decrease investigator workload. Recent studies have compared these with traditional pen-and-paper (P&P) VAS. The EARS have been found to be sensitive to experimental manipulations and reproducible relative to P&P. However, subjects appear to exhibit a significantly more constrained use of the scale when using the EARS relative to P&P. For this reason it is recommended that the two techniques not be used interchangeably.
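To illustrate the recommended within-subject, repeated-measures use of VAS, a brief sketch with invented 100-mm ratings follows; the paired t-test is a generic choice, not a method prescribed by the review.

```python
import numpy as np
from scipy import stats

# Hypothetical 100-mm VAS hunger ratings for 8 subjects under two treatments
# (e.g. two different preloads); all values are invented for illustration.
preload_a = np.array([62, 55, 71, 48, 66, 59, 73, 51], dtype=float)
preload_b = np.array([45, 40, 58, 39, 52, 47, 60, 44], dtype=float)

# Paired (within-subject) comparison of the two conditions.
t_stat, p_value = stats.ttest_rel(preload_a, preload_b)
print(round(t_stat, 2), round(p_value, 4))
```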

Relevance:

20.00%

Publisher:

Abstract:

Extensive data used to quantify broad soil C changes (without information about causation), coupled with intensive data used for attribution of changes to specific management practices, could form the basis of an efficient national grassland soil C monitoring network. Based on the variability of extensive (USDA/NRCS pedon database) and intensive field-level soil C data, we evaluated the efficacy of future sample collection to detect changes in soil C in grasslands. Potential soil C changes at a range of spatial scales related to changes in grassland management can be verified (alpha = 0.1) after 5 years with the collection of 34, 224, or 501 samples at the county, state, and national scales, respectively. Farm-level analysis indicates that equivalent numbers of cores and distinct groups of cores (microplots) result in the lowest soil C coefficients of variation for a variety of ecosystems. Our results suggest that grassland soil C changes can be precisely quantified using current technology at scales ranging from farms to the entire nation.
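The sketch below shows a generic sample-size calculation of the kind implied by these numbers; the formula, power level and inputs are assumptions of mine, not the procedure used in the paper.

```python
from math import ceil
from scipy.stats import norm

def n_required(sigma, delta, alpha=0.1, power=0.8):
    """Samples needed to detect a mean soil C change of `delta` given sampling
    standard deviation `sigma`, for a one-sided test at significance `alpha`
    and the stated power. A generic formula, not the authors' exact method."""
    z_alpha = norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    return ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

# Purely illustrative inputs: required n grows as variability (sigma) rises
# relative to the size of the change (delta) one wants to detect.
print(n_required(sigma=2.0, delta=0.7))
print(n_required(sigma=5.0, delta=0.4))
```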

Relevance:

20.00%

Publisher:

Abstract:

The 27-item Intolerance of Uncertainty Scale (IUS) has become one of the most frequently used measures of intolerance of uncertainty. More recently, an abridged, 12-item version of the IUS has been developed. The current research used clinical (n = 50) and non-clinical (n = 56) samples to examine and compare the psychometric properties of both versions of the IUS. The two scales showed good internal consistency at both the total and subscale level and had satisfactory test-retest reliability. Both versions were correlated with worry and trait anxiety and had satisfactory concurrent validity. Significant differences between the scores of the clinical and non-clinical samples supported discriminant validity. Predictive validity was also supported for the two scales: total scores, in the case of the clinical sample, and a subscale, in the case of the non-clinical sample, significantly predicted pathological worry and trait anxiety. Overall, clinicians and researchers can use either version of the IUS with confidence, given their sound psychometric properties.
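A small sketch of the two correlational checks reported above, test-retest reliability and concurrent validity against a worry measure; all scores are invented for illustration, not the study's data.

```python
import numpy as np

# Invented IUS-12 totals for 6 respondents at two time points, plus scores on
# a worry questionnaire; purely illustrative.
ius_time1 = np.array([38, 29, 45, 22, 51, 34], dtype=float)
ius_time2 = np.array([36, 31, 47, 25, 49, 33], dtype=float)
worry     = np.array([52, 40, 61, 33, 66, 48], dtype=float)

test_retest = np.corrcoef(ius_time1, ius_time2)[0, 1]   # stability over time
concurrent  = np.corrcoef(ius_time1, worry)[0, 1]       # validity against worry
print(round(test_retest, 2), round(concurrent, 2))
```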

Relevance:

20.00%

Publisher:

Abstract:

Purpose: The aim of this study was to investigate the extent and pattern of use of grading scales for contact lens complications (‘grading scales’) in optometric practice. Methods: An anonymous postal survey was sent to all 756 members of the Queensland Division of Optometrists Association Australia. Information was elicited relating to level of experience, practice type and location, and mode of usage of grading scales. Results: Survey forms were returned by 237 optometrists, representing a 31 per cent response rate. The majority of respondents (61 per cent) reported using grading scales frequently in practice, and 65 per cent of these preferred to use the Efron Grading Scales for Contact Lens Complications. Seventy-six per cent of optometrists use a method of incremental grading rather than simply grading with whole numbers. Grading scales are more likely to be used by optometrists who have recently graduated (p < 0.001), have a postgraduate certificate in ocular therapeutics (p = 0.018), see more contact lens patients (p = 0.027) and use other forms of grading scales (p < 0.001). The most frequently graded ocular conditions were corneal staining, papillary conjunctivitis and conjunctival redness. The main reasons for not using grading scales included a preference for sketches, photographs or descriptions (87 per cent) and unavailability of scales (29 per cent). Conclusion: Grading scales for contact lens complications are used extensively in optometric practice for a variety of purposes. This tool can now be considered an expected norm in contact lens practice. We advocate the incorporation of such grading scales into professional guidelines and standards for good optometric clinical practice.
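The abstract does not state which statistical test produced its p-values; the sketch below shows one standard way to test such an association, using hypothetical counts rather than the survey's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 counts (not from the survey): rows = recent graduates vs
# earlier graduates; columns = uses grading scales frequently vs does not.
table = [[82, 28],
         [62, 65]]

# Chi-square test of association between graduation cohort and scale use.
chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 2), round(p, 4))
```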

Relevance:

20.00%

Publisher:

Abstract:

Purpose. To devise and validate artist-rendered grading scales for contact lens complications. Methods. Each of eight tissue complications of contact lens wear (listed under 'Results') was painted by a skilled ophthalmic artist (Terry R. Tarrant) in five grades of severity: 0 (normal), 1 (trace), 2 (mild), 3 (moderate) and 4 (severe). A representative slit lamp photograph of a tissue response for each of the eight complications was shown to 404 contact lens practitioners who had never before used clinical grading scales. The practitioners were asked to grade each tissue response to the nearest 0.1 grade unit by interpolation. Results. The standard deviations (s.d.) of the 404 responses for each tissue complication were: corneal staining, 0.5; endothelial polymegethism, 0.7; epithelial microcysts, 0.5; endothelial blebs, 0.4; stromal edema, [value illegible]; conjunctival hyperemia, 0.4; stromal neovascularization, 0.4; papillary conjunctivitis, 0.5. The frequency distributions and best-fit normal curves were also plotted. The precision of grading (s.d. x 2) ranged from 0.8 to 1.4, with a mean precision of 1.0. Conclusions. Grading scales provide contact lens practitioners with a method of quantifying the severity of adverse tissue responses to contact lens wear. It is noteworthy that the statistically verified precision of grading (1.0 scale unit) concurs precisely with the essential design feature of the grading scales that each grading step of 1.0 corresponds to a clinically significant difference in severity. Thus, as a general rule, a difference or change in grade of > 1.0 can be taken to be both clinically and statistically significant when using these grading scales. Trained observers are likely to achieve even greater grading precision. Supported by Hydron Limited.
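Using the recoverable standard deviations above (the value for stromal edema is illegible in this copy), the precision calculation described in the abstract can be sketched as follows; the figures are therefore illustrative rather than an exact reproduction.

```python
# Precision of grading, following the abstract's definition: precision = 2 x s.d.
# Only the seven recoverable s.d. values are used here.
sds = {
    "corneal staining": 0.5,
    "endothelial polymegethism": 0.7,
    "epithelial microcysts": 0.5,
    "endothelial blebs": 0.4,
    "conjunctival hyperemia": 0.4,
    "stromal neovascularization": 0.4,
    "papillary conjunctivitis": 0.5,
}
precisions = {name: 2 * sd for name, sd in sds.items()}
mean_precision = sum(precisions.values()) / len(precisions)
print(precisions)                 # values span 0.8 to 1.4, as reported
print(round(mean_precision, 2))   # close to the reported mean precision of 1.0
```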

Relevance:

20.00%

Publisher:

Abstract:

The concept of local accumulation time (LAT) was introduced by Berezhkovskii and coworkers in 2010–2011 to give a finite measure of the time required for the transient solution of a reaction-diffusion equation to approach the steady-state solution (Biophys J. 99, L59 (2010); Phys Rev E. 83, 051906 (2011)). Such a measure is referred to as a critical time. Here, we show that LAT is, in fact, identical to the concept of mean action time (MAT) that was first introduced by McNabb in 1991 (IMA J Appl Math. 47, 193 (1991)). Although McNabb's initial argument was motivated by considering the mean particle lifetime (MPLT) for a linear death process, he applied the ideas to study diffusion. We extend the work of these authors by deriving expressions for the MAT for a general one-dimensional linear advection-diffusion-reaction problem. Using a combination of continuum and discrete approaches, we show that MAT and MPLT are equivalent for certain uniform-to-uniform transitions; these results provide a practical interpretation for MAT by directly linking the stochastic microscopic processes to a meaningful macroscopic timescale. We find that for more general transitions, the equivalence between MAT and MPLT does not hold. Unlike other critical time definitions, we show that it is possible to evaluate the MAT without solving the underlying partial differential equation (PDE). This makes MAT a simple and attractive quantity for practical situations. Finally, our work explores the accuracy of certain approximations derived using the MAT, showing that useful approximations for nonlinear kinetic processes can be obtained, again without treating the governing PDE directly.
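For reference, the usual construction of the mean action time can be written as follows (notation mine; the abstract itself gives no formulas), for a solution c(x,t) relaxing from its initial state to its steady state:

```latex
% c(x,t) evolves from the initial state c_0(x) to the steady state c_\infty(x).
% F(x,t) rises from 0 at t = 0 to 1 as t -> infinity, so it can be read as a
% cumulative distribution in t; the MAT T(x) is the mean of that distribution.
\[
F(x,t) \;=\; 1 \;-\; \frac{c(x,t) - c_\infty(x)}{c_0(x) - c_\infty(x)},
\qquad
T(x) \;=\; \int_0^\infty t\,\frac{\partial F(x,t)}{\partial t}\,\mathrm{d}t
\;=\; \int_0^\infty \bigl[\,1 - F(x,t)\,\bigr]\,\mathrm{d}t .
\]
```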