977 results for Large Classes
Abstract:
In central neurons, monoamine neurotransmitters are taken up and stored within two distinct classes of regulated secretory vesicles: small synaptic vesicles and large dense core vesicles (DCVs). Biochemical and pharmacological evidence has shown that this uptake is mediated by specific vesicular monoamine transporters (VMATs). Recent molecular cloning techniques have identified the vesicular monoamine transporter (VMAT2) that is expressed in brain. This transporter determines the sites of intracellular storage of monoamines and has been implicated in both the modulation of normal monoaminergic neurotransmission and the pathogenesis of related neuropsychiatric disease. We used an antiserum against VMAT2 to examine its ultrastructural distribution in rat solitary tract nuclei, a region that contains a dense and heterogeneous population of monoaminergic neurons. We find that both immunoperoxidase and immunogold labeling for VMAT2 localize to DCVs and small synaptic vesicles in axon terminals, the trans-Golgi network of neuronal perikarya, tubulovesicles of smooth endoplasmic reticulum, and potential sites of vesicular membrane recycling. In axon terminals, immunogold labeling for VMAT2 was preferentially associated with DCVs at sites distant from typical synaptic junctions. The results provide direct evidence that a single VMAT is expressed in two morphologically distinct types of regulated secretory vesicles in central monoaminergic neurons.
Abstract:
It is often debated whether migraine with aura (MA) and migraine without aura (MO) are etiologically distinct disorders. A previous study using latent class analysis (LCA) in Australian twins showed no evidence for separate subtypes of MO and MA. The aim of the present study was to replicate these results in a population of Dutch twins and their parents, siblings and partners (N = 10,144). Latent class analysis of International Headache Society (IHS)-based migraine symptoms resulted in the identification of 4 classes: a class of unaffected subjects (class 0), a mild form of nonmigrainous headache (class 1), a moderately severe type of migraine (class 2), typically without neurological symptoms or aura (8% reporting aura symptoms), and a severe type of migraine (class 3), typically with neurological symptoms, and aura symptoms in approximately half of the cases. Given the overlap of neurological symptoms and the non-mutual exclusivity of aura symptoms, these results do not support the MO and MA subtypes as being etiologically distinct. The heritability of migraine in female twins based on the LCA classification was estimated at .50 (95% confidence interval [CI] .27-.59), similar to the IHS-based migraine diagnosis (h(2) = .49, 95% CI .19-.57). However, using a dichotomous classification (affected-unaffected) decreased heritability for the IHS-based classification (h(2) = .33, 95% CI .00-.60), but not for the LCA-based classification (h(2) = .51, 95% CI .23-.61). Importantly, use of the LCA-based classification increased the number of subjects classified as affected. The heritability of the screening question was similar to that of the more detailed LCA and IHS classifications, suggesting that the screening procedure is an important determining factor in genetic studies of migraine.
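As a rough illustration of how twin data yield heritability estimates of this kind (the study's own figures come from formal biometrical modelling of the twin-family data, so the following is only the textbook approximation for the classical twin design), heritability can be gauged from the difference between monozygotic and dizygotic twin-pair correlations:

```latex
% Falconer's approximation for the classical twin design (illustrative only;
% not the estimator used in the study above).
% r_MZ, r_DZ: phenotypic correlations in monozygotic and dizygotic pairs
% for the migraine trait (IHS- or LCA-based).
h^2 \approx 2\,(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}), \qquad
c^2 \approx 2\,r_{\mathrm{DZ}} - r_{\mathrm{MZ}}, \qquad
e^2 \approx 1 - r_{\mathrm{MZ}} .
```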
Abstract:
The effects of substance P (SP) on nicotinic acetylcholine (ACh)-evoked currents were investigated in parasympathetic neurons dissociated from neonatal rat intracardiac ganglia using standard whole cell, perforated patch, and outside-out recording configurations of the patch-clamp technique. Focal application of SP onto the soma reversibly decreased the peak amplitude of the ACh-evoked current, with half-maximal inhibition occurring at 45 μM and complete block at 300 μM SP. Whole cell current-voltage (I-V) relationships obtained in the absence and presence of SP indicate that the block of ACh-evoked currents by SP is voltage independent. The rate of decay of ACh-evoked currents was increased sixfold in the presence of SP (100 μM), suggesting that SP may increase the rate of receptor desensitization. SP-induced inhibition of ACh-evoked currents was observed following cell dialysis and in the presence of either 1 mM 8-Br-cAMP, a membrane-permeant cAMP analogue, 5 μM H-7, a protein kinase C inhibitor, or 2 mM intracellular AMP-PNP, a nonhydrolyzable ATP analogue. These data suggest that a diffusible cytosolic second messenger is unlikely to mediate SP inhibition of neuronal nicotinic ACh receptor (nAChR) channels. Activation of nAChR channels in outside-out membrane patches by either ACh (3 μM) or cytisine (3 μM) indicates the presence of at least three distinct conductances (20, 35, and 47 pS) in rat intracardiac neurons. In the presence of 3 μM SP, the large conductance nAChR channels are preferentially inhibited. The open probabilities of the large conductance classes activated by either ACh or cytisine were reversibly decreased by 10- to 30-fold in the presence of SP. The single-channel conductances were unchanged, and mean apparent channel open times for the large conductance nAChR channels only were slightly decreased by SP. Given that individual parasympathetic neurons of rat intracardiac ganglia express a heterogeneous population of nAChR subunits represented by the different conductance levels, SP appears to preferentially inhibit those combinations of nAChR subunits that form the large conductance nAChR channels. Since ACh is the principal neurotransmitter of the extrinsic (vagal) innervation of the mammalian heart, SP may play an important role in modulating autonomic control of the heart.
Abstract:
The purpose of this study was to examine the effectiveness of peer counseling on high school students who had previously failed two or more classes in a nine-week quarter. The study was constructed by comparing students who previously failed and were subsequently given peer counseling with a matched group of students who failed and did not receive peer counseling. To test the proposed research question, 324 students from a large urban school system were randomly chosen from a computer-generated list of students who failed courses, matched on the variables of number of classes failed, grade level and gender. One student from each matched pair was randomly placed in either the experimental or the control group. The 162 students in Group 1 (experimental) were assigned a peer counselor, with their pairs assigned to Group 2 (control). Group 1 received peer counseling at least 4 times during the third nine-week academic quarter (Quarter 3), while Group 2 did not. The Grade Point Averages (GPAs) for all students were collected at the end of both Quarter 2 and Quarter 3, at which time peer counseling was terminated. GPAs were also collected nine weeks after counseling was terminated. Results were determined by multiple regression, analysis of covariance and t-tests, with the level of significance set at .05. There was a significant increase in the GPAs of the counseled students immediately after peer counseling and also nine weeks after counseling was terminated, while the group not receiving peer counseling showed no increase. There were also significantly more school drop-outs in the non-counseled group than in the counseled group. The number of classes failed, high school attended, grade level and gender were not found to be significant. The conclusion from this study was that peer counseling has a significant impact on the GPAs of students experiencing academic failure. Recommendations from this study were to implement and expand peer counseling programs with more failing students, continue counseling for longer than one quarter, include students in drop-out programs and students from different socio-economic and racial backgrounds, and conduct subsequent evaluation.
Abstract:
This study investigated the opinions regarding inclusion of parents of both disabled and nondisabled elementary children from a large suburban county. An opinion survey combining Wilczenski's Attitudes Toward Inclusive Education Scale with additional questions was distributed to 1170 children from 24 schools. Three research questions focused on differences between mean parental responses as they related to the inclusion and disability status of the parent's child. Results from the 270 respondents indicated that parents with disabled children had more favorable opinions about inclusion than did those with nondisabled children. Parents with included children were more favorable toward inclusion than were parents whose children were not included. Parents with included disabled children were more accepting of inclusion than were those with nondisabled children in inclusive settings. Parents' answers differed depending on the type of disability being included. Regardless of their child's disability or inclusion status, the ranking of disability types from most acceptable for inclusion to least acceptable was: social, sensory, motor, academic and behavioral. Results across types of questions, including questions relating to acceptance and general inclusion issues, indicated consistently more favorable opinions among parents with disabled children, included children and disabled children in inclusive classes. Two additional research questions examined parental responses as they related to demographic characteristics of the parents and of the schools their children attended. Analysis of variance found only one significant main effect for any parental demographic variable: the number of parents' elementary children, when comparing parents with and without disabled children. The only significant main effects for demographics of the schools the parents' children attended were for the area of the county and for schools with differing percentages of severely disabled students, when comparing responses of parents with disabled and nondisabled children. For all research questions, tests indicated low effect sizes and moderate to high power levels. These results, and the fact that means for all groups were in the middle range of response choices, indicate that there may be little practical significance to the overall results. Further studies should investigate the trends found in this study.
Abstract:
Minimum Student Performance Standards in Computer Literacy and Science were passed by the Florida Legislature through the Educational Reform Act of 1983. This act mandated that all Florida high school graduates receive training in computer literacy. Schools and school systems were charged with the task of determining the best methods to deliver this instruction to their students. The scope of this study is to evaluate one school's response to the state of Florida's computer literacy mandate. The study was conducted at Miami Palmetto Senior High School, located in Dade County, Florida. The administration of Miami Palmetto Senior High School chose to develop and implement a new program to comply with the state mandate: integrating computer literacy into the existing biology curriculum. The study evaluated the curriculum to determine whether computer literacy could be integrated successfully while meeting both the biology and computer literacy objectives. The findings in this study showed that there were no significant differences between the biology scores of students taking the integrated curriculum and those taking a traditional biology curriculum. Students in the integrated curriculum not only met the biology objectives as well as those in the traditional curriculum did, but also successfully completed the intended objectives for computer literacy. Two sets of objectives were completed in the integrated classes in the same amount of time used to complete one set of objectives in the traditional biology classes. Therefore, the integrated curriculum was the more efficient means of meeting the intended objectives of both biology and computer literacy.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" has little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
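For reference, the PARAFAC (latent class) factorization referred to above writes the joint probability mass function of p categorical variables as a finite mixture of product-multinomial kernels. This is the standard form of the model, not a result specific to Chapter 2:

```latex
% Standard latent class / PARAFAC factorization of a probability tensor.
% y_1, ..., y_p categorical; k latent classes with weights lambda_h;
% psi^{(j)}_h a probability vector over the levels of variable j.
\Pr(y_1 = c_1, \ldots, y_p = c_p)
  \;=\; \sum_{h=1}^{k} \lambda_h \prod_{j=1}^{p} \psi^{(j)}_{h c_j},
\qquad \lambda_h \ge 0, \;\; \sum_{h=1}^{k} \lambda_h = 1 .
```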
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
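To make the data-augmentation scheme discussed in Chapter 7 concrete, here is a minimal sketch of a Pólya-Gamma Gibbs sampler for Bayesian logistic regression in the spirit of Polson, Scott and Windle (2013). It is not the thesis code; the `random_polyagamma` draw is assumed to come from the third-party `polyagamma` package, and any Pólya-Gamma sampler could be substituted.

```python
import numpy as np
from polyagamma import random_polyagamma  # assumed third-party dependency

def pg_gibbs_logit(X, y, n_iter=2000, prior_var=100.0, seed=0):
    """Gibbs sampler for Bayesian logistic regression, beta ~ N(0, prior_var * I)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    prior_prec = np.eye(p) / prior_var
    kappa = y - 0.5                      # y must be coded 0/1
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # 1) Data-augmentation step: omega_i | beta ~ PG(1, x_i' beta)
        omega = random_polyagamma(1.0, X @ beta, random_state=rng)
        # 2) beta | omega, y is Gaussian with the covariance and mean below
        V = np.linalg.inv(X.T @ (omega[:, None] * X) + prior_prec)
        m = V @ (X.T @ kappa)
        beta = rng.multivariate_normal(m, V)
        draws[t] = beta
    return draws
```

In the rare-event regime described above (large n with very few observed successes), the autocorrelation of these draws grows quickly, which is the slow-mixing behaviour the chapter quantifies.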
Abstract:
Compulsory education laws oblige primary and secondary schools to give each pupil positive encouragement in, for example, social, emotional, cognitive, creative, and ethical respects. This is a fairly smooth process for most pupils, but it is not as easy to achieve with others. A pattern of pupil, home or family, and school variables turns out to be responsible for a long-term process that may lead to a pupil’s dropping out of education. A systemic approach will do much to introduce more clarity into the diagnosis, potential reduction and possible prevention of some persistent educational problems that express themselves in related phenomena, for example low school motivation and achievement; forced underachievement of high ability pupils; concentration of bullying and violent behaviour in and around some types of classes and schools; and drop-out percentages that are relatively constant across time. Such problems have a negative effect on pupils, teachers, parents, schools, and society alike. In this address, I would therefore like to clarify some of the systemic causes and processes that we have identified between specific educational and pupil characteristics. Both theory and practice can assist in developing, implementing, and checking better learning methods and coaching procedures, particularly for pupils at risk. This development approach will take time and require co-ordination, but it will result in much better processes and outcomes than we are used to. First, I will diagnose some systemic aspects of education that do not seem to optimise the learning processes and school careers of some types of pupils in particular. Second, I will specify cognitive, social, motivational, and self-regulative aspects of learning tasks and relate corresponding learning processes to relevant instructional and wider educational contexts. I will elaborate these theoretical notions into an educational design with systemic instructional guidelines and multilevel procedures that may improve learning processes for different types of pupils. Internet-based Information and Communication Technology, or ICT, also plays a major role here. Third, I will report on concrete developments made in prototype research and trials. The development process concerns ICT-based differentiation of learning materials and procedures, and ICT-based strategies to improve pupil development and learning. Fourth, I will focus on the experience gained in primary and secondary educational practice with respect to implementation. We can learn much from such practical experience, in particular about the conditions for developing and implementing the necessary changes in and around schools. Finally, I will propose future research. As I hope to make clear, theory-based development and implementation research can join forces with systemic innovation and differentiated assessment in educational practice, to pave the way for optimal “learning for self-regulation” for pupils, teachers, parents, schools, and society at large.
Abstract:
In the context of f(R) gravity theories, we show that the apparent mass of a neutron star as seen by an observer at infinity is numerically calculable but requires careful matching, first at the star's edge, between interior and exterior solutions, neither of them being totally Schwarzschild-like but presenting instead small oscillations of the curvature scalar R; and second at large radii, where the Newtonian potential is used to identify the mass of the neutron star. We find that for the same equation of state, this mass definition is always larger than its general relativistic counterpart. We exemplify this with quadratic R^2 and Hu-Sawicki-like modifications of the standard General Relativity action. Therefore, the finding of two-solar-mass neutron stars basically imposes no constraint on stable f(R) theories. However, star radii are in general smaller than in General Relativity, which can give an observational handle on such classes of models at the astrophysical level. Both larger masses and smaller matter radii are due to much of the apparent effective energy residing in the outer metric for scalar-tensor theories. Finally, because the f(R) neutron star masses can be much larger than their General Relativity counterparts, the total energy available for radiating gravitational waves could be of the order of several solar masses, and thus a merger of these stars constitutes an interesting wave source.
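For context, the two families of modifications mentioned above correspond (up to the sign and normalization conventions, which vary in the literature) to the following choices of f(R) in the gravitational action:

```latex
% f(R) gravity action in the metric formalism; G is Newton's constant and
% S_m the matter action.  Conventions for a, c_1, c_2, n, m differ by author.
S \;=\; \frac{1}{16\pi G} \int d^4x \, \sqrt{-g}\, f(R) \;+\; S_m ,
\qquad
f(R) = R + a R^2 \;\; \text{(quadratic model)},
\qquad
f(R) = R - m^2 \,\frac{c_1 (R/m^2)^n}{c_2 (R/m^2)^n + 1} \;\; \text{(Hu--Sawicki)} .
```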
Abstract:
The influence of particle recycling on the geochemistry of sediments in a large tropical dam lake in the Amazonian region, Brazil (Journal of South American Earth Sciences 72, December 2016, DOI: 10.1016/j.jsames.2016.09.012; Rita Fonseca, Catarina Pinho and Manuela Oliveira, Universidade de Évora). As a result of the over-erosion of soils, the fine particles, which contain the majority of nutrients, are easily washed away and accumulate in lakes, leaving the soils deficient in a host of components. On the one hand, the accumulation of nutrient-rich sediments is a problem, as they affect the quality of the overlying water and decrease the water storage capacity of the system; on the other hand, sediments may constitute an important resource, as they are often extremely rich in organic and inorganic nutrients in readily available forms. In the framework of an extensive work on the use of rock-related materials to enhance the fertility of impoverished soils, this study aimed to evaluate the role, in the nutrient cycle, of particle recycling processes from the watershed to the bottom of a large dam reservoir in a wet tropical region under high weathering conditions. The study focuses on the mineralogical transformations that clay particles undergo from the soils of the drainage basin to their final deposition within the reservoir, and their influence on the geochemical characteristics of the sediments. We studied the bottom sediments that accumulated in two distinct seasonal periods in the Tucuruí reservoir, located in the Amazonian Basin, Brazil, and soils from its drainage basin. The surface layers of sediments at twenty sampling points with variable depths are representative of the different morphological sections of the reservoir. Nineteen soil samples, representing the main soil classes, were collected near the margins of the reservoir. Sediments and soils were subjected to the same array of physical, mineralogical and geochemical analyses: (1) texture, (2) characterization and semi-quantification of the clay fraction mineralogy and (3) geochemical analysis of the total concentration of major elements, organic compounds (organic C and nitrogen), soluble fractions of nutrients (P and K), exchangeable fractions (cation exchange capacity, exchangeable bases and acidity) and pH(H2O).
Abstract:
Assessment of central blood pressure (BP) has grown substantially over recent years because evidence has shown that central BP is more relevant to cardiovascular outcomes than peripheral BP. Moreover, different classes of antihypertensive drugs have different effects on central BP despite similar reductions in brachial BP. The aim of this study was to investigate the effect of nebivolol, a β-blocker with vasodilator properties, on the biochemical and hemodynamic parameters of hypertensive patients. This was an experimental single-cohort study conducted in the outpatient clinic of a university hospital. Twenty-six patients were recruited. All of them underwent biochemical and hemodynamic evaluation (BP, heart rate (HR), central BP and augmentation index) before and after 3 months of using nebivolol. 88.5% of the patients were male; their mean age was 49.7 ± 9.3 years and most of them were overweight (29.6 ± 3.1 kg/m2) with a large abdominal waist (102.1 ± 7.2 cm). There were significant decreases in peripheral systolic BP (P = 0.0020), diastolic BP (P = 0.0049), HR (P < 0.0001) and central BP (129.9 ± 12.3 versus 122.3 ± 10.3 mmHg; P = 0.0083) after treatment, in comparison with the baseline values. There was no statistical difference in the augmentation index or in the biochemical parameters from before to after the treatment. Nebivolol use seems to be associated with a significant reduction of central BP in stage I hypertensive patients, in addition to reductions in brachial systolic and diastolic BP.
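For readers unfamiliar with the hemodynamic measure reported as unchanged here, the augmentation index is conventionally defined from the central pressure waveform as the augmentation pressure expressed as a percentage of pulse pressure; this is the standard definition, not a detail taken from the study itself:

```latex
% Conventional definition of the central augmentation index AIx.
% AP: augmentation pressure (late systolic peak minus the pressure at the
% first systolic shoulder); PP: central pulse pressure.
\mathrm{AIx} \;=\; \frac{\mathrm{AP}}{\mathrm{PP}} \times 100\% ,
\qquad \mathrm{PP} \;=\; P_{\mathrm{systolic}} - P_{\mathrm{diastolic}} .
```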
Abstract:
The aim of this study was to analyze the reasons for missed appointments in dental Family Health Units (FHU) and to implement strategies to reduce them through action research. The study was conducted in 12 FHUs in Piracicaba, in the State of São Paulo, from January 1 to December 31, 2010. The sample was composed of 385 users of these health units, who were interviewed over the phone and asked about their reasons for missing dental appointments, as well as 12 dentists and 12 nurses. Two workshops were staged with the professionals: the first to assess the data collected in the interviews and develop strategies, and the second for evaluation after 4 months. The primary cause of missed appointments was the opening hours of the units coinciding with the work schedules of the users. Among the strategies suggested were lectures on oral health, ongoing education in team meetings, training of Community Health Agents, participation in therapeutic groups and partnerships between Oral Health Teams and the social infrastructure of the community. The adoption of the single medical record was the strategy proposed by the professionals. The strategies implemented led to a 66.6% reduction in missed appointments at the units, and the motivating nature of the workshops elicited critical reflection to redirect health practices.
Abstract:
Spinocerebellar ataxia type 1 (SCA1), spinocerebellar ataxia type 2 (SCA2) and Machado-Joseph disease or spinocerebellar ataxia type 3 (MJD/SCA3) are three distinctive forms of autosomal dominant spinocerebellar ataxia (SCA) caused by expansions of an unstable CAG repeat localized in the coding region of the causative genes. Another related disease, dentatorubropallidoluysian atrophy (DRPLA), is also caused by an unstable triplet repeat and can present as SCA in late-onset patients. We investigated the frequency of the SCA1, SCA2, MJD/SCA3 and DRPLA mutations in 328 Brazilian patients with SCA, belonging to 90 unrelated families with various patterns of inheritance and originating in different geographic regions of Brazil. We found mutations in 35 families (39%), 32 of them with a clear autosomal dominant inheritance. The frequency of the SCA1 mutation was 3% of all patients and 6% of the dominantly inherited SCAs. We identified the SCA2 mutation in 6% of all families and in 9% of the families with autosomal dominant inheritance. The MJD/SCA3 mutation was detected in 30% of all patients and in 44% of the dominantly inherited cases. We found no DRPLA mutations. In addition, we observed variability in the frequency of the different mutations according to the geographic origin of the patients, which is probably related to the distinct colonization of different parts of Brazil. These results suggest that SCA may occasionally be caused by the SCA1 and SCA2 mutations in the Brazilian population, and that the MJD/SCA3 mutation is the most common cause of dominantly inherited SCA in Brazil.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física