944 results for Transitive Inferences
Abstract:
OBJECTIVES: Three dental topography measurements, Dirichlet Normal Energy (DNE), Relief Index (RFI), and Orientation Patch Count Rotated (OPCR), are examined for their interaction with measures of wear, within and between upper and lower molars in Alouatta palliata. Potential inferences about the "dental sculpting" phenomenon are explored. MATERIALS AND METHODS: Fifteen occluding pairs of howling monkey first molars (15 upper, 15 lower), opportunistically collected from La Pacifica, Costa Rica, were selected to sample wear stages ranging from unworn to heavily worn as measured by the Dentine Exposure Ratio (DER). DNE, RFI, and OPCR were measured from three-dimensional surface reconstructions (PLY files) derived from high-resolution CT scans. Relationships among the variables were tested with regression analyses. RESULTS: Upper molars have more cutting edges, exhibiting significantly higher DNE, but have significantly lower RFI values. However, the relationships among the measures are concordant across both sets of molars. DER and enamel-dentine junction length (EDJL) are curvilinearly related. DER is positively correlated with DNE, negatively correlated with RFI, and uncorrelated with OPCR. EDJL is not correlated with DNE or RFI, but is positively correlated with OPCR among lower molars only. DISCUSSION: The relationships among these metrics suggest that howling monkey teeth adaptively engage macrowear. DNE increases with wear in this sample, presumably improving food breakdown. RFI is initially high but declines with wear, suggesting that the initially high RFI safeguards against dental senescence. OPCR values in howling monkey teeth do not show a clear relationship with wear changes.
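The regression analyses described above can be sketched in miniature. The numbers below are hypothetical, and `ols` is an illustrative helper rather than the study's actual analysis pipeline:

```python
from statistics import mean

def ols(x, y):
    """Simple least-squares fit: return (slope, intercept, r)."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / (sxx * syy) ** 0.5

# Hypothetical wear (DER) and sharpness (DNE) values for five molars:
der = [0.0, 0.1, 0.25, 0.4, 0.6]
dne = [150.0, 160.0, 185.0, 200.0, 230.0]
slope, intercept, r = ols(der, dne)
assert slope > 0 and r > 0.9  # DNE rising with wear, as the abstract reports
```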
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend substantial resources obtaining high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario is often not realistic. To address these practical issues, survey organizations can leverage information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or the survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when analysis is based only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data.
We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
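The data-augmentation idea behind the second method can be sketched as follows. The function name and the column/level names ("educ", "emp", and so on) are illustrative assumptions, and the MCMC step for the latent class model itself is not shown:

```python
def augment_with_prior(rows, columns, margin_var, prior_probs, n_aug):
    """Append synthetic records to `rows` (a list of dicts) so that the
    values of margin_var among them follow prior_probs, while every other
    column is left missing (None). More augmented records imply a
    tighter prior on that margin."""
    synth = []
    for level, prob in prior_probs.items():
        for _ in range(round(prob * n_aug)):
            record = {col: None for col in columns}
            record[margin_var] = level
            synth.append(record)
    return rows + synth

# Two observed records plus ten augmented ones carrying the prior margin:
data = [{"educ": "hs", "emp": 1}, {"educ": "ba", "emp": 0}]
augmented = augment_with_prior(data, ["educ", "emp"], "educ",
                               {"hs": 0.3, "ba": 0.7}, n_aug=10)
assert len(augmented) == 12
assert sum(r["emp"] is None for r in augmented) == 10
```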
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
Abstract:
Background: It is well documented that children with Specific Language Impairment (SLI) experience significant grammatical deficits. While much of the focus in the past has been on their morphosyntactic difficulties, less is known about their acquisition of complex syntactic structures such as relative clauses. The role of memory in language performance has also become increasingly prominent in the literature. Aims: This study aims to investigate the control of an important complex syntactic structure, the relative clause, by school-age children with SLI in Ireland, using a newly devised sentence recall task. It also aims to explore the role of verbal short-term and working memory in the performance of children with SLI on the sentence recall task, using a standardized battery of tests based on Baddeley's model of working memory. Methods and Procedures: Thirty-two children with SLI, thirty-two age-matched typically developing children (AM-TD) between the ages of 6 and 7;11 years, and twenty younger typically developing (YTD) children between 4;7 and 5 years completed the task. The sentence recall (SR) task included 52 complex sentences and 17 fillers. It included relative clauses that are used in natural discourse and that reflect a developmental hierarchy. The relative clauses were also controlled for length and varied in syntactic complexity, representing the full range of syntactic roles. There were seven different relative clause types attached to either the predicate nominal of a copular clause (Pn) or the direct object of a transitive clause (Do). Responses were recorded, transcribed and entered into a database for analysis. The Working Memory Test Battery for Children (WMTB-C; Pickering & Gathercole, 2001) was administered in order to explore the role of short-term memory and working memory in the children's performance on the SR task. Outcomes and Results: The children with SLI showed significantly greater difficulty than the AM-TD group and the YTD group.
With the exception of the genitive subject clauses, the children with SLI scored significantly higher on all sentences containing a Pn main clause than on those containing a transitive main clause. Analysis of error types revealed the frequent production of a different type of relative clause than that presented in the task, with a strong word order preference in the NVN direction indicated for the children with SLI. The SR performance of the children with SLI was most highly correlated with expressive language skills and digit recall. Conclusions and Implications: Children with SLI have significantly greater difficulty with relative clauses than YTD children who are on average two years younger; relative clauses are a delay within a delay. Unlike the YTD children, they show a tendency to simplify relative clauses in the noun verb noun (NVN) direction. They show a developmental hierarchy in their production of relative clause constructions and are highly influenced by the frequency distribution of relative clauses in the ambient language.
Abstract:
English has been taught as a core and compulsory subject in China for decades. Recently, the demand for English in China has increased dramatically. China now has the world's largest English-learning population. The traditional English-teaching method cannot continue to be the only approach because it focuses merely on reading, grammar and translation, which cannot meet English learners' and users' needs (i.e., communicative competence and skills in speaking and writing). This study was conducted to investigate whether the Picture-Word Inductive Model (PWIM), a new pedagogical method using pictures and inductive thinking, would benefit English learners in China in terms of potentially higher output in speaking and writing. Using Cognitive Load Theory (CLT), specifically its redundancy effect, I investigated whether processing words and a picture concurrently would present a cognitive overload for English learners in China. I conducted a mixed methods research study. A quasi-experiment (pretest, intervention for seven weeks, and posttest) was conducted with 234 students in four groups in Lianyungang, China (58 fourth graders and 57 seventh graders as an experimental group taught with PWIM, and 59 fourth graders and 60 seventh graders as a control group taught with the traditional method). No significant difference in the effects of PWIM on vocabulary acquisition was found between grade levels. Observations, questionnaires with open-ended questions, and interviews were deployed to answer the three remaining research questions. A few students felt cognitively overloaded when they encountered too many writing samples, too many new words at one time, repeated words, mismatches between words and pictures, and so on. Many students listed and exemplified numerous strengths of PWIM, while a few mentioned weaknesses. The students expressed the idea that PWIM had a positive effect on their English learning.
As integrated inferences, qualitative findings were used to explain the quantitative result that the effects of PWIM did not differ significantly between the experimental and control groups at either grade level, in terms of four contextual aspects: time constraints on PWIM implementation, teachers' resistance, uncertainty about how to use PWIM, and implementation of PWIM in classrooms of more than 55 students.
Abstract:
The paleo-oceanography of the southeastern North Atlantic Ocean during the last 150,000 yr has been studied using biogenous and terrigenous components of hemipelagic sediments sampled close to the northwest African continental margin. Variations of oxygen isotope ratios in shells of benthic calcareous foraminifers in two cores allow the assignment of absolute ages to these cores (in the best case at 1000 yr increments). The uncorrected bulk sedimentation rates of the longest core range from 3.4 to 7.6 cm/1000 yr during Interglacial conditions, and from 6.5 to 9.9 cm/1000 yr during Glacial conditions; all other cores have given results of the same order of magnitude, but with generally increasing values towards the continental edge. The distribution of sediment components allows us to make inferences about paleo-oceanographic changes in this region. Frequencies of biogenic components from benthic organisms, oxygen isotope ratios measured in benthic calcareous foraminiferal shells, the total carbonate contents of the sediment and distributions of biogenic components from planktonic organisms often fluctuate in concert. However, all fluctuations which can be attributed to changes of the bottom water masses (North Atlantic Deep Water) seem to precede by several thousand years those which can be linked to changes of the surface water mass distributions or to changes of the climate over the neighboring land masses. Late Quaternary planktonic foraminiferal assemblages in the cores from the northwest African continental margin can be defined satisfactorily in the same way that distributions of assemblages in sediment surface samples from the northeast Atlantic Ocean have been explained. The distributions of assemblages in the northwest African cores can also be used to estimate past sea surface temperatures and salinities.
The downcore record of these estimates reveals two warm periods during the last 150,000 yr, the lower one corresponding to oxygen isotope stage 5e (equivalent to the Eemian proper in Europe), the upper one to the younger half of the Holocene. Winter surface water temperatures during oxygen isotope stages 6, 4, 3, and 2 are remarkably constant in most cores, while summer sea surface temperatures during stage 3 reach values comparable to those of the warm periods during the Late Holocene and Eemian. Estimated winter sea surface temperatures range from > 16 °C to < 11 °C, the summer sea surface temperatures from > 22 °C to < 15 °C during the last 150,000 yr. Estimates of the winter sea surface salinities fluctuate between 36.6‰ and 35.5‰, the higher values being restricted to the warm periods since the penultimate Glacial. Estimates for sea surface temperatures and salinities for two cores from the center of today's coastal upwelling region show less pronounced fluctuations than the record of the open ocean cores in the case of station 12379 off Cape Barbas, more pronounced in the case of station 12328 off Cape Blanc. Seasonal differences between winter and summer sea surface temperatures derived from the estimated temperatures are today more pronounced in the boundary region of the ocean to the continent than further away from the continent. The differences are generally higher during warm climatic periods of the last 150,000 yr than during cooler ones. The abundance of terrigenous grains in the coarse fractions generally decreases with increasing distance from the continental edge, and also from south to north. The dominant portion of the terrigenous detritus is carried out into the ocean during the relatively cool climatic periods (stage 6, 4, later part of stage 3, stage 2 and oldest part of stage 1).
The enhanced precision of dating, combined with the stratigraphic resolution of these high-deposition-rate cores, makes it clear that the peaks of terrigenous input off this part of the northwest African continental margin occur simultaneously with times of rapid sea level fluctuations resulting from large volume changes of the Glacial ice sheets.
Abstract:
The Jurassic (hemi)pelagic continental margin deposits drilled at Hole 547B, off the Moroccan coast, reveal striking Tethyan affinity. Analogies concern not only types and gross vertical evolution of facies, but also composition and textures of the fine sediment and the pattern of diagenetic alteration. In this context, the occurrence of the nanno-organism Schizosphaerella Deflandre and Dangeard (sometimes as a conspicuous portion of the fine-grained carbonate fraction) is of particular interest. Schizosphaerella, an incertae sedis taxon, has been widely recorded as a sediment contributor from Tethyan Jurassic deeper-water carbonate facies exposed on land. Because of its extremely long range (Hettangian to early Kimmeridgian), the genus Schizosphaerella (two species currently described, S. punctulata Deflandre and Dangeard and S. astrea Moshkovitz) is obviously not of great biostratigraphic interest. However, it is of interest in sedimentology and petrology. Specifically, Schizosphaerella was often the only component of the initial fine-grained fraction of a sediment that was able to resist diagenetic obliteration. However, alteration of the original skeletal structure did occur to various degrees. Crystal habit and mineralogy of the fundamental skeletal elements, as well as their mode of mutual arrangement in the test wall with the implied high initial porosity of the skeleton (60-70%), appear to be responsible for this outstanding resistance. Moreover, the ability to concentrate within and, in the case of the species S. punctulata, around the skeleton, large amounts of diagenetic calcite also contributed to the resistance. 
In both species of Schizosphaerella, occlusion of the original skeletal void space during diagenesis appears to have proceeded in an analogous manner, with an initial slight uniform syntaxial enlargement of the basic lamellar skeletal crystallites followed, upon mutual impingement, by uneven accretion of overgrowth cement in the remaining skeletal voids. However, distinctive fabrics are evident according to the different primary test wall architecture. In S. punctulata, intraskeletal cementation is usually followed by the growth of a radially structured crust of bladed to fibrous calcite around the valves. These crusts are interpreted as a product of aggrading neomorphism, associated with mineralogic stabilization of the original, presumably polyphase, sediment. Data from Hole 547B, along with inferences drawn from the fabric relationships, suggest that the crusts formed, and (inferentially) mineralogic stabilization occurred, at a relatively early time in the diagenetic history in the shallow burial realm. An enhanced rate of lithification at relatively shallow burial depths, and thus the chance for neomorphism to significantly influence the textural evolution of the buried sediment, may be related to a lower Mg/Ca concentration ratio in the oceanic system and, hence, in marine pore waters in pre-Late Jurassic times.
Abstract:
The dominant processes determining biological structure in lakes at millennial timescales are complex. In this study, we used a multi-proxy approach to determine the relative importance of in-lake versus indirect processes on the Holocene development of an oligotrophic lake in SW Greenland (66.99°N, 50.97°W). A 14C- and 210Pb-dated sediment core covering approximately 8500 years BP was analyzed for organic and inorganic carbon content, pigments, diatoms, chironomids, cladocerans, and stable isotopes (δ13C, δ18O). Relationships among the different proxies and a number of independent controlling variables (Holocene temperature, an isotope-inferred cooling period, and immigration of Betula nana into the catchment) were explored using redundancy analysis (RDA) independent of time. The main ecological trajectories in the lake biota were captured by ordination first axis sample scores (18-32% variance explained). The importance of the arrival of Betula (ca. 6500 years BP) into the catchment was indicated by a series of partial-constrained ordinations, uniquely explaining 12-17% of the variance in chironomids and up to 9% in pigments. Climate influences on lake biota were strongest during a short-lived cooling period (identified by altered stable isotopes) early in the development of the lake when all proxies changed rapidly, although only chironomids had a unique component (8% in a partial-RDA) explained by the cooling event. Holocene climate explained less variance than either catchment changes or biotic relationships. The sediment record at this site indicates the importance of catchment factors for lake development, the complexity of community trends even in relatively simple systems (invertebrates are the top predators in the lake) and the challenges of deriving palaeoclimate inferences from sediment records in low-Arctic freshwater lakes.
Abstract:
The hydrologic system beneath the Antarctic Ice Sheet is thought to influence both the dynamics and distribution of fast flowing ice streams, which discharge most of the ice lost by the ice sheet. Despite considerable interest in understanding this subglacial network and its effect on ice flow, in situ observations from the ice sheet bed are exceedingly rare. Here we describe the first sediment cores recovered from an active subglacial lake. The lake, known as Subglacial Lake Whillans, is part of a broader, dynamic hydrologic network beneath the Whillans Ice Stream in West Antarctica. Even though "floods" pass through the lake, the lake floor shows no evidence of erosion or deposition by flowing water. By inference, these floods must have insufficient energy to erode or transport significant volumes of sediment coarser than silt. Consequently, water flow beneath the region is probably incapable of incising continuous channels into the bed and instead follows preexisting subglacial topography and surface slope. Sediment on the lake floor consists of till deposited during intermittent grounding of the ice stream following flood events. The fabrics within the till are weaker than those thought to develop in thick deforming beds, suggesting subglacial sediment fluxes across the ice plain are currently low and unlikely to have a large stabilizing effect on the ice stream's grounding zone.
Abstract:
Consumers are constantly making consumption decisions and engaging in marketplace activities that require some level of competence. In other words, consumers possess and require some knowledge, skills, and abilities to engage in the marketplace and obtain what they want. But what causes consumers to infer they are or are not competent? And what are the consequences of these competence inferences for consumer behaviour? This dissertation examines the role consumption plays in consumers' inferences of their own competence to enhance our understanding of these issues. By integrating the literature on competence and attributions of blame, this dissertation develops a theory for when and how consumption influences self-perceptions of competence and how these self-perceptions of competence impact future consumer behaviours. Evidence from five studies suggests that consumers infer their own competence from their consumption outcomes, regardless of who is actually responsible for causing those outcomes. This means consumers potentially see themselves as incompetent for negative outcomes that are entirely firm-caused. This dissertation argues that people infer their own competence from firm-caused outcomes because they conflate their decisions made prior to an outcome with the cause of that outcome. This dissertation also examines how these variations in self-perceptions of competence can influence future consumer behaviours.
Abstract:
Research on the mechanisms and processes underlying navigation has traditionally been limited by the practical problems of setting up and controlling navigation in a real-world setting. Thanks to advances in technology, a growing number of researchers are making use of computer-based virtual environments to draw inferences about real-world navigation. However, little research has been done on factors affecting human–computer interactions in navigation tasks. In this study female students completed a virtual route learning task and filled out a battery of questionnaires, which determined levels of computer experience, wayfinding anxiety, neuroticism, extraversion, psychoticism and immersive tendencies as well as their preference for a route or survey strategy. Scores on personality traits and individual differences were then correlated with the time taken to complete the navigation task, the length of path travelled, the velocity of the virtual walk and the number of errors. Navigation performance was significantly influenced by wayfinding anxiety, psychoticism, involvement and overall immersive tendencies and was improved in those participants who adopted a survey strategy. In other words, navigation in virtual environments is affected not only by navigational strategy but also by an individual's personality and other factors such as their level of experience with computers. An understanding of these differences is crucial before performance in virtual environments can be generalised to real-world navigational performance.
Abstract:
Previous studies of lithospheric strength in central Iberia fail to resolve the depth of earthquakes because of rheological uncertainties. Therefore, new contributions are considered (the crustal structure from a density model) and several parameters (tectonic regime, mantle rheology, strain rate) are checked in this paper to properly examine the role of lithospheric strength in the intraplate seismicity and the Cenozoic evolution. The strength distribution with depth, the integrated strength, the effective elastic thickness and the seismogenic thickness have been calculated by finite element modelling of the lithosphere across the Central System mountain range and the bordering Duero and Madrid sedimentary basins. Only a dry mantle under strike-slip/extension and a strain rate of 10⁻¹⁵ s⁻¹, or under extension and 10⁻¹⁶ s⁻¹, yields a strong lithosphere. The integrated strength and the elastic thickness are lower in the mountain chain than in the basins. These anisotropies have been maintained since the Cenozoic and determine the mountain uplift and the biharmonic folding of the Iberian lithosphere during the Alpine deformations. The seismogenic thickness bounds the seismic activity to the upper–middle crust, and the decreasing crustal strength from the Duero Basin towards the Madrid Basin is related to a parallel increase in Plio–Quaternary deformations and seismicity. However, elasto-plastic modelling shows that the current African–Eurasian convergence is accommodated elastically or ductilely, which accounts for the low seismicity recorded in this region.
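The strength-with-depth calculation described above can be illustrated with a minimal yield-strength-envelope sketch: at each depth the lithosphere supports the weaker of a frictional (Byerlee-type) limit and a power-law creep limit. The creep parameters, friction coefficient, density, and linear geotherm below are generic textbook-style values chosen for illustration, not the paper's finite-element model:

```python
import math

R = 8.314                      # gas constant, J/(mol K)
n, Q, A = 3.5, 540e3, 2.4e5    # generic dry-olivine-like creep parameters

def strength_mpa(z_km, strain_rate=1e-15, grad_k_per_km=15.0, t0_k=280.0):
    """Differential stress (MPa) supportable at depth z_km: the weaker of
    a Byerlee-type frictional limit (linear in depth) and a power-law
    creep limit evaluated on a linear geotherm."""
    brittle = 0.75 * 3300 * 9.8 * (z_km * 1e3) / 1e6   # ~0.75 * rho * g * z
    temp_k = t0_k + grad_k_per_km * z_km
    ductile = (strain_rate / A) ** (1.0 / n) * math.exp(Q / (n * R * temp_k))
    return min(brittle, ductile)

# Frictional (linear) control near the surface, creep weakening at depth:
assert abs(strength_mpa(2.0) - 2 * strength_mpa(1.0)) < 1e-9
assert strength_mpa(100.0) < strength_mpa(20.0)
```

The strain-rate sensitivity the abstract describes (10⁻¹⁵ versus 10⁻¹⁶ s⁻¹) enters through the `strain_rate` argument: a slower strain rate lowers the creep strength.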
Abstract:
Students reflect more on their learning in course subjects when they participate in managing their teaching–learning environment. As a form of guided participation, peer assessment serves the following purposes: (a) it improves the student's understanding of previously established learning objectives; (b) it is a powerful metacognitive tool; (c) it transfers to the student part of the responsibility for assessing learning, which means deciding which learning activities are important and choosing the degree of effort a course subject will require; (d) it emphasizes the collective aspect of the nature of knowledge; and (e) the educational benefits derived from peer assessment clearly justify the efforts required to implement such activities. This paper reports on the relative merits of a learning portfolio compiled during fine arts-related studies in which peer assessment played an important role. The researchers analyzed the student workload and the final marks students received for compulsory art subjects. They conclude that the use of a closed learning portfolio with a well-structured, sequential and analytical design can have a positive effect on student learning and that, although implementing peer assessment may be complex and students need to become familiar with it, its use is not only feasible but recommendable.
Abstract:
The mental logic theory does not accept the disjunction introduction rule of standard propositional calculus as a natural schema of the human mind. The problem I highlight in this paper, however, is that the theory does admit another, much more complex schema in which that very rule must be applied as a previous step. I argue that this is a serious problem that the mental logic theory needs to solve, and claim that a rival theory, the mental models theory, does not face these difficulties.
Abstract:
We present a method for learning treewidth-bounded Bayesian networks from data sets containing thousands of variables. Bounding the treewidth of a Bayesian network greatly reduces the complexity of inference. Yet, because treewidth is a global property of the graph, enforcing the bound considerably increases the difficulty of the learning process. Our novel algorithm accomplishes this task, scaling both to large domains and to large treewidths, and consistently outperforms the state of the art in experiments with up to thousands of variables.
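Why bounding treewidth matters can be illustrated with a small sketch: exact inference in a Bayesian network is exponential in the treewidth of its moral graph, and a greedy min-degree elimination gives a cheap upper bound on that width. This is a generic illustration of the quantity being bounded, not the paper's learning algorithm:

```python
import itertools

def moralize(parents):
    """Moral graph of a DAG given as {child: [parents]}: connect each
    node to its parents, marry co-parents, drop edge directions."""
    nodes = set(parents) | {p for ps in parents.values() for p in ps}
    adj = {v: set() for v in nodes}
    for child, ps in parents.items():
        for p in ps:
            adj[child].add(p)
            adj[p].add(child)
        for a, b in itertools.combinations(ps, 2):  # marry co-parents
            adj[a].add(b)
            adj[b].add(a)
    return adj

def min_degree_treewidth(adj):
    """Greedy min-degree elimination: an upper bound on treewidth."""
    adj = {v: set(ns) for v, ns in adj.items()}
    width = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))       # cheapest node first
        nbrs = adj.pop(v)
        width = max(width, len(nbrs))                 # bag size minus one
        for a, b in itertools.combinations(nbrs, 2):  # clique on neighbors
            adj[a].add(b)
            adj[b].add(a)
        for nb in nbrs:
            adj[nb].discard(v)
    return width

# A chain A -> B -> C has treewidth 1; a v-structure A -> C <- B
# moralizes to a triangle with treewidth 2:
assert min_degree_treewidth(moralize({"B": ["A"], "C": ["B"]})) == 1
assert min_degree_treewidth(moralize({"C": ["A", "B"]})) == 2
```

Keeping this width at, say, k caps the size of the largest clique a junction-tree algorithm must handle at k + 1 variables, which is the complexity reduction the abstract refers to.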