982 results for Attention Problems


Relevance:

30.00%

Publisher:

Abstract:

The paper focuses on the rapid development of digital culture and the challenges it poses to human creativity. It analyses e-learning, digital entertainment, digital art, and issues of creativity and improvisation. It also presents a classification of the levels in the creative structure: hardware and software tools; product developers; creators; and end users. Special attention is paid to the advantages of the new digital culture and the responsibilities of all people who create or use it. We conclude that more attention should be paid to the threats, and to ways of boosting positive creativity, in the various fields of application of information and communication technologies.

Relevance:

30.00%

Publisher:

Abstract:

Commonly used paradigms for studying child psychopathology emphasize individual-level factors and often neglect the role of context in shaping risk and protective factors among children, families, and communities. To address this gap, we evaluated the influences of ecocultural contextual factors on the definitions, development of, and responses to child behavior problems, and examined how contextual knowledge can inform culturally responsive interventions. We drew on Super and Harkness' "developmental niche" framework to evaluate the influences of physical and social settings, childcare customs and practices, and parental ethnotheories in a community in rural Nepal. Data were collected between February and October 2014 through in-depth interviews with a purposive sampling strategy targeting parents (N = 10), teachers (N = 6), and community leaders (N = 8) familiar with child-rearing. Results were supplemented by focus group discussions with children (N = 9) and teachers (N = 8), pile-sort interviews with mothers (N = 8) of school-aged children, and direct observations in homes, schools, and community spaces. Behavior problems were largely defined in light of parents' socialization goals and role expectations for children. Certain physical settings and times were seen to carry greater risk for problematic behavior when children were unsupervised. Parents and other adults attempted to mitigate behavior problems by supervising children and their social interactions, providing for their physical needs, educating them, and through a shared verbal reminding strategy (samjhaune). The findings of our study illustrate the transactional nature of behavior problem development, which involves context-specific goals, roles, and concerns that are likely to affect adults' interpretations of, and responses to, children's behavior.
Ultimately, employing a developmental niche framework will elucidate setting-specific risk and protective factors for culturally compelling intervention strategies.

Relevance:

30.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, even though uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first case, the focus is on joint inference outside the standard problem of multivariate continuous data that has been the major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
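
As a purely illustrative sketch of the latent structure idea (not the thesis's code; every name and dimension below is hypothetical), a PARAFAC-type latent class model writes the joint pmf of multivariate categorical data as a mixture of product multinomials, i.e. a rank-k nonnegative factorization of the probability tensor:

```python
import numpy as np
from itertools import product

# Hypothetical sketch of a PARAFAC-type latent class model:
# P(x_1, ..., x_p) = sum_h nu_h * prod_j lam[j, h, x_j].
rng = np.random.default_rng(0)
k, p, d = 3, 4, 2                              # classes, variables, levels
nu = rng.dirichlet(np.ones(k))                 # mixture weights, sum to 1
lam = rng.dirichlet(np.ones(d), size=(p, k))   # lam[j, h] is a pmf over levels

def joint_pmf(x):
    """Probability of one cell x = (x_1, ..., x_p) of the d**p table."""
    per_class = [nu[h] * np.prod([lam[j, h, x[j]] for j in range(p)])
                 for h in range(k)]
    return float(sum(per_class))

# The d**p cell probabilities of the implied contingency table sum to 1.
total = sum(joint_pmf(x) for x in product(range(d), repeat=p))
print(round(total, 6))  # -> 1.0
```

Here the number of classes k is the nonnegative rank of the factorization, which is precisely the notion of dimensionality reduction that Chapter 2 relates to sparsity in the log-linear parametrization.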

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
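
The chapter's KL-optimal Gaussian approximation is specific to log-linear models with Diaconis--Ylvisaker priors; as a generic stand-in for the idea of replacing a posterior by a Gaussian, the sketch below shows the simpler classical Laplace approximation (Gaussian centred at the posterior mode, with variance from the negative inverse Hessian) on a hypothetical one-parameter target:

```python
# Illustrative stand-in only: the classical Laplace approximation, not the
# KL-optimal Gaussian derived in the chapter. The target below is a toy
# unnormalized Gamma(shape=5, rate=2) log-density, chosen so the answer is
# easy to verify by hand.

def laplace_approx(grad, hess, theta0, iters=50):
    """Newton ascent to the mode, then variance = -1 / hess(mode)."""
    theta = theta0
    for _ in range(iters):
        theta = theta - grad(theta) / hess(theta)
    return theta, -1.0 / hess(theta)

# log p(t) = 4*log(t) - 2*t (+ const), so the mode is 4/2 = 2.
grad = lambda t: 4.0 / t - 2.0
hess = lambda t: -4.0 / t**2

mode, var = laplace_approx(grad, hess, theta0=1.0)
print(round(mode, 3), round(var, 3))  # -> 2.0 1.0
```

Unlike this mode-based construction, the optimal Gaussian of Chapter 4 minimizes the Kullback-Leibler divergence to the exact posterior, which is what the convergence-rate and finite-sample bounds quantify.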

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
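
The basic summary underlying this paradigm — waiting times between exceedances of a high threshold — is simple to sketch; the heavy-tailed series and threshold level below are hypothetical stand-ins for real data:

```python
import numpy as np

# Hypothetical sketch: inter-exceedance waiting times above a high threshold.
rng = np.random.default_rng(1)
x = rng.standard_t(df=3, size=5000)      # heavy-tailed toy series

u = np.quantile(x, 0.98)                 # high threshold (98th percentile)
exceed_times = np.flatnonzero(x > u)     # indices of threshold exceedances
waits = np.diff(exceed_times)            # inter-exceedance waiting times

# Clustering of extremes shows up as an excess of short waits relative to
# the roughly geometric waits an independent series would produce.
print(len(exceed_times), round(float(waits.mean()), 1))
```

In the chapter's framework, the distribution of these waiting times encodes both the strength of tail dependence and its temporal structure.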

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
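
One minimal instance of an approximated transition kernel is a naively subsampled Metropolis--Hastings chain, sketched below; this is only an illustration of the error/cost trade-off at issue, not the framework of Chapter 6 itself, and every number is hypothetical:

```python
import numpy as np

# Naive subsampled Metropolis--Hastings: the log-likelihood in the
# acceptance ratio is estimated from a random data subset and rescaled.
rng = np.random.default_rng(2)
data = rng.normal(loc=3.0, scale=1.0, size=10_000)

def subsampled_loglik(mu, batch=2_000):
    idx = rng.integers(0, len(data), size=batch)
    # Rescale the subset log-likelihood to the full sample size.
    return -0.5 * np.sum((data[idx] - mu) ** 2) * (len(data) / batch)

mu, chain = 0.0, []
for _ in range(2_000):
    prop = mu + rng.normal(scale=0.05)
    if np.log(rng.uniform()) < subsampled_loglik(prop) - subsampled_loglik(mu):
        mu = prop
    chain.append(mu)

# The chain concentrates near the true mean (3.0), but the subsampling
# noise in the acceptance ratio inflates its variability.
print(round(np.mean(chain[1_000:]), 2))
```

The framework described above asks precisely how much of this kernel error can be tolerated for a given loss function and computational budget.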

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
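
The truncated-normal sampler referred to here is the classic Albert--Chib data augmentation scheme; a minimal sketch for an intercept-only probit model with a flat prior, on hypothetical rare-event data (5 successes in 100) chosen to echo the slow-mixing regime discussed above, looks like this:

```python
import numpy as np
from scipy.stats import truncnorm

# Albert--Chib truncated-normal data augmentation Gibbs sampler for an
# intercept-only probit model with a flat prior on beta (illustrative data).
rng = np.random.default_rng(3)
y = np.array([1] * 5 + [0] * 95)   # rare events: 5 successes in 100 trials
n = len(y)

beta, draws = 0.0, []
for _ in range(2_000):
    # Step 1: z_i ~ N(beta, 1) truncated to (0, inf) if y_i = 1, else (-inf, 0).
    lo = np.where(y == 1, -beta, -np.inf)   # bounds standardized around beta
    hi = np.where(y == 1, np.inf, -beta)
    z = beta + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # Step 2: beta | z ~ N(mean(z), 1/n) under the flat prior.
    beta = rng.normal(z.mean(), 1.0 / np.sqrt(n))
    draws.append(beta)

# The posterior mean sits near Phi^{-1}(5/100), roughly -1.6, but the
# draws are highly autocorrelated in this rare-event regime.
print(round(np.mean(draws[500:]), 2))
```

Monitoring the autocorrelation of `draws` makes the chapter's point concrete: as the success proportion shrinks, the effective sample size of this sampler collapses.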

Relevance:

30.00%

Publisher:

Abstract:

Requirements Engineering (RE) has received much attention in research and practice due to its importance to software project success. Its interdisciplinary nature, its dependency on the customer, and its inherent uncertainty still render the discipline difficult to investigate. This results in a lack of empirical data, which is necessary, however, to demonstrate which practically relevant RE problems exist and to what extent they matter. Motivated by this situation, we initiated the Naming the Pain in Requirements Engineering (NaPiRE) initiative, which constitutes a globally distributed, bi-yearly replicated family of surveys on the status quo and problems in practical RE.

In this article, we report on the analysis of data obtained from 228 companies in 10 countries. We apply Grounded Theory to the data obtained from NaPiRE and reveal which contemporary problems practitioners encounter. To this end, we analyse 21 problems derived from the literature with respect to their relevance and criticality in relation to their context, and we complement this picture with a cause-effect analysis showing the causes and effects surrounding the most critical problems.

Our results give us a better understanding of which problems exist and how they manifest themselves in practical environments. Thus, we provide a first step to ground contributions to RE on empirical observations, which until now have been dominated by conventional wisdom alone.

Relevance:

30.00%

Publisher:

Abstract:

While it has been advanced that remnant movement (RM) serves as a replacement for head movement and derives certain permutations in word order while disallowing others (e.g. Cinque (2005)), little attention has been devoted to the consequences RM has for clausal syntax. In this work, I illustrate one such consequence, namely the rise of crossing and nesting movement dependencies and their reflexes. In particular, I make a case for the existence of massive RM that involves entire clausal subtrees in Polish. The analysis provides a uniform and straightforward solution to three robust puzzles in the Polish OVS construction.

Relevance:

30.00%

Publisher:

Abstract:

In this article we consider the a posteriori error estimation and adaptive mesh refinement of discontinuous Galerkin finite element approximations of the hydrodynamic stability problem associated with the incompressible Navier-Stokes equations. Particular attention is given to the reliable error estimation of the eigenvalue problem in channel and pipe geometries. Here, computable a posteriori error bounds are derived based on employing the generalization of the standard Dual-Weighted-Residual approach, originally developed for the estimation of target functionals of the solution, to eigenvalue/stability problems. The underlying analysis consists of constructing both a dual eigenvalue problem and a dual problem for the original base solution. In this way, errors stemming from both the numerical approximation of the original nonlinear flow problem, as well as the underlying linear eigenvalue problem are correctly controlled. Numerical experiments highlighting the practical performance of the proposed a posteriori error indicator on adaptively refined computational meshes are presented.
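
Schematically — as a generic sketch of the Dual-Weighted-Residual idea, not this paper's exact derivation — for an eigenvalue problem $a(u,\varphi) = \lambda\, m(u,\varphi)$ with discrete eigenpair $(u_h, \lambda_h)$, dual solution $z$, and discrete dual approximation $z_h$, the eigenvalue error is estimated by weighting the primal residual with the dual solution:

```latex
\lambda - \lambda_h \;\approx\; \rho(u_h, \lambda_h)(z - z_h),
\qquad
\rho(u_h, \lambda_h)(\varphi) \;:=\; \lambda_h\, m(u_h, \varphi) - a(u_h, \varphi).
```

Localizing $\rho$ element by element yields the computable indicators that drive the adaptive refinement reported in the numerical experiments.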

Relevance:

30.00%

Publisher:

Abstract:

In this article we consider the a posteriori error estimation and adaptive mesh refinement of discontinuous Galerkin finite element approximations of the bifurcation problem associated with the steady incompressible Navier-Stokes equations. Particular attention is given to the reliable error estimation of the critical Reynolds number at which a steady pitchfork or Hopf bifurcation occurs when the underlying physical system possesses reflectional or Z_2 symmetry. Here, computable a posteriori error bounds are derived based on employing the generalization of the standard Dual-Weighted-Residual approach, originally developed for the estimation of target functionals of the solution, to bifurcation problems. Numerical experiments highlighting the practical performance of the proposed a posteriori error indicator on adaptively refined computational meshes are presented.

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Congenital mirror movement disorder designates involuntary movements on one side of the body that mirror intentional movements on the contralateral side. Colpocephaly is described as persistence of the fetal configuration of the lateral ventricles. Case Presentation: A two-month-old male infant was brought to the hospital due to bilateral identical movements of the hands. Except for bilateral involuntary synkinetic imitative movements of the hands, the neurological and physical examination was normal. Cranial MRI showed corpus callosum dysgenesis, hypogenesis, and dilation of the posterior horns of both lateral ventricles (colpocephaly). At the age of 7 years, he was started on methylphenidate to mitigate attention deficit hyperactivity disorder. The mirror movements decreased in amplitude over the years and were not severe enough to affect normal life activities. Conclusions: Mirror movements, usually diagnosed during childhood, may be congenital or secondary to neurological diseases. Although they generally do not affect normal life activities, in some cases the severity of mirror movements causes a truly debilitating disease. In our case the patient was diagnosed at the age of 2 months, and on follow-up no debilitating problems were observed. This is the first case to describe the association of colpocephaly and mirror movements. The exact mechanism of this association is not known. Although it is known that mirror movements may be related to some psychiatric pathologies, this is the first report of attention deficit hyperactivity disorder in conjunction with mirror movements and/or colpocephaly. Managing comorbidities, whether physical or psychological, will help the patient to live in good health without having to cope with other pathological conditions.

Relevance:

30.00%

Publisher:

Abstract:

Massive Open Online Courses (MOOCs) may be considered a new form of virtual technology-enhanced learning environment. Since their first appearance in 2008, the increase in the number of MOOCs has been dramatic. The hype about MOOCs was accompanied by great expectations: 2012 was named the Year of the MOOC, and it was expected that MOOCs would revolutionise higher education. Two types of MOOCs may be distinguished: cMOOCs, as proposed by Siemens and based on his ideas of connectivism, and xMOOCs, developed in institutions such as Stanford and MIT. Although MOOCs have received a great deal of attention, they have also met with criticism. The time has therefore come to reflect critically upon this phenomenon.

Relevance:

20.00%

Publisher:

Abstract:

Diabetic Retinopathy (DR) is a complication of diabetes that can lead to blindness if not readily discovered. Automated screening algorithms have the potential to improve identification of patients who need further medical attention. However, the identification of lesions must be accurate to be useful for clinical application. The bag-of-visual-words (BoVW) algorithm employs a maximum-margin classifier in a flexible framework that is able to detect the most common DR-related lesions, such as microaneurysms, cotton-wool spots and hard exudates. BoVW makes it possible to bypass the need for pre- and post-processing of the retinographic images, as well as the need for specific ad hoc techniques to identify each type of lesion. An extensive evaluation of the BoVW model was performed using three large retinographic datasets (DR1, DR2 and Messidor) with different resolutions, collected by different healthcare personnel. The results demonstrate that the BoVW classification approach can identify different lesions within an image without having to utilize different algorithms for each lesion, reducing processing time and providing a more flexible diagnostic system. Our BoVW scheme is based on sparse low-level feature detection with a Speeded-Up Robust Features (SURF) local descriptor, and mid-level features based on semi-soft coding with max pooling. The best BoVW representation for retinal image classification achieved an area under the receiver operating characteristic curve (AUC-ROC) of 97.8% (exudates) and 93.5% (red lesions), applying a cross-dataset validation protocol. To assess the accuracy for detecting cases that require referral within one year, the sparse extraction technique associated with semi-soft coding and max pooling obtained an AUC of 94.2 ± 2.0%, outperforming current methods.
Those results indicate that, for retinal image classification tasks in clinical practice, BoVW is equal and, in some instances, surpasses results obtained using dense detection (widely believed to be the best choice in many vision problems) for the low-level descriptors.
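
The mid-level encoding step of a BoVW pipeline can be sketched as follows; random vectors stand in for SURF descriptors, and the distance-decay weighting is a simplified stand-in for the paper's semi-soft coding — only the overall shape of the pipeline (codebook, coding, max pooling) is the point:

```python
import numpy as np

# Hypothetical sketch of BoVW mid-level encoding with max pooling.
rng = np.random.default_rng(4)

codebook = rng.normal(size=(50, 64))       # 50 visual words, 64-dim each
descriptors = rng.normal(size=(200, 64))   # 200 local features from one image

# Soft assignment: weight each visual word by a kernel of its squared distance.
d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
codes = np.exp(-d2 / d2.mean())
codes /= codes.sum(axis=1, keepdims=True)  # each descriptor's weights sum to 1

# Max pooling turns the 200 x 50 code matrix into one image-level vector,
# which would then feed the maximum-margin classifier.
image_vector = codes.max(axis=0)
print(image_vector.shape)  # -> (50,)
```

In practice the codebook is learned by clustering training descriptors, and one such pooled vector is computed per retinal image before classification.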

Relevance:

20.00%

Publisher:

Abstract:

Wild animals have been kept as pets for centuries; in Brazil, companionship is one of the main reasons why wild species are legally bred and traded. This paper is an attempt to call attention to problems concerning the welfare of wild pets involved in the trading system in Brazil. Some issues presented are: a) the significant increase in the number of wildlife breeders and traders and the difficulties faced by the Brazilian government in controlling this activity; b) the main welfare issues faced by breeders and owners of wild pets; and c) the destination of wild pets that are no longer wanted. Finally, some recommendations are made with the welfare of the animals as a priority.

Relevance:

20.00%

Publisher:

Abstract:

Substantial complexity has been introduced into treatment regimens for patients with human immunodeficiency virus (HIV) infection. Many drug-related problems (DRPs) are detected in these patients, such as low adherence, therapeutic inefficacy, and safety issues. We evaluated the impact of pharmacist interventions on CD4+ T-lymphocyte count, HIV viral load, and DRPs in patients with HIV infection. In this 18-month prospective controlled study, 90 outpatients were selected by convenience sampling from the Hospital Dia-University of Campinas Teaching Hospital (Brazil). Forty-five patients comprised the pharmacist intervention group and 45 the control group; all patients had HIV infection with or without acquired immunodeficiency syndrome. Pharmaceutical appointments were conducted based on the Pharmacotherapy Workup method, although DRPs and pharmacist intervention classifications were modified for applicability to institutional service limitations and research requirements. Pharmacist interventions were performed immediately after detection of DRPs. The main outcome measures were DRPs, CD4+ T-lymphocyte count, and HIV viral load. After pharmacist intervention, DRPs decreased from 5.2 (95% confidence interval [CI] = 4.1-6.2) to 4.2 (95% CI = 3.3-5.1) per patient (P = 0.043). A total of 122 pharmacist interventions were proposed, with an average of 2.7 interventions per patient. All the pharmacist interventions were accepted by physicians, and among patients, the interventions were well accepted during the appointments, but compliance with the interventions was not measured. A statistically significant increase in CD4+ T-lymphocyte count in the intervention group was found (260.7 cells/mm³ [95% CI = 175.8-345.6] to 312.0 cells/mm³ [95% CI = 23.5-40.6], P = 0.015), which was not observed in the control group. There was no statistical difference between the groups regarding HIV viral load.
This study suggests that pharmacist interventions in patients with HIV infection can cause an increase in CD4+ T-lymphocyte counts and a decrease in DRPs, demonstrating the importance of an optimal pharmaceutical care plan.

Relevance:

20.00%

Publisher:

Abstract:

Basilar invagination (BI) is a congenital craniocervical junction (CCJ) anomaly characterized by prolapse of the spine into the skull base, which can result in severe neurological impairment. In this paper, we retrospectively evaluate the surgical treatment of 26 patients operated on for symptomatic BI. BI was classified according to instability and neural abnormality findings. Clinical outcome was evaluated using the Nürick grading system. A total of 26 patients were included, ranging in age from 15 to 67 years (mean 38); 10 patients were male (38%) and 16 (62%) were female. All patients had some degree of tonsillar herniation, and 25 were treated with foramen magnum decompression. Nine patients required craniocervical fixation. Six patients had undergone prior surgery and required a new surgical procedure for progression of neurological symptoms associated with new compression or instability. Most patients with neurological symptoms secondary to brainstem compression showed some improvement during follow-up. There was one death in this series, 1 month after surgery, associated with late removal of the tracheal cannula. Management of BI can provide improvements in neurological outcomes, but requires analysis of the neural and bony anatomy of the CCJ, as well as of occult instability. The complexity and heterogeneous presentation of BI require attention to occult instability on examination and to airway problems secondary to concomitant facial malformations.

Relevance:

20.00%

Publisher:

Abstract:

In this study, transmission-line modeling (TLM) applied to bio-thermal problems was improved by incorporating several novel computational techniques, including graded meshes, which made computation 9 times faster and used only a fraction (16%) of the computational resources required by regular meshes when analyzing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that considers thermal properties, resulting in more realistic modeling of complex problems, is introduced, along with a new way of calculating an error parameter. The calculated temperatures between nodes were compared against results from the literature and agreed to within less than 1%. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential for heat transfer analysis of biological systems.
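
The paper's contribution is to the TLM scheme itself, but the underlying physics can be illustrated with a deliberately simple substitute: an explicit finite-difference solver for the 1-D Pennes bio-heat equation over a thin slab of tissue, with a warm core on one side and a cold skin surface on the other. All parameter values below are round hypothetical numbers, not the paper's:

```python
import numpy as np

# Explicit finite-difference sketch of the 1-D Pennes bio-heat equation:
# rho*c dT/dt = k d2T/dx2 + w_perf*(T_art - T), over 10 mm of tissue.
nx, dx, dt = 101, 1e-4, 0.01            # grid points, 0.1 mm spacing, 10 ms
k, rho_c = 0.5, 4.0e6                   # conductivity, volumetric heat capacity
w_perf = 2.0e3                          # perfusion term w_b*rho_b*c_b [W/m^3/K]
T_art, T_core, T_skin = 37.0, 37.0, 20.0

T = np.full(nx, T_core)
T[-1] = T_skin                          # fixed cold boundary at the skin
alpha = k * dt / (rho_c * dx**2)        # 0.125 here; must stay below 0.5

for _ in range(100_000):                # ~1000 s, close to steady state
    lap = T[:-2] - 2 * T[1:-1] + T[2:]
    T[1:-1] += alpha * lap + dt * w_perf * (T_art - T[1:-1]) / rho_c
    T[0] = T_core                       # deep tissue held at core temperature

# Perfusion warms the profile slightly above the straight conduction line.
print(round(T[nx // 2], 1))
```

The TLM approach solves the same kind of problem by propagating voltage pulses on an equivalent transmission-line network, and the paper's graded meshes refine that network only where the geometry demands it.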

Relevance:

20.00%

Publisher:

Abstract:

The scope of this study is to identify the prevalence of access to information about how to prevent oral problems among schoolchildren in the public school network, as well as the factors associated with such access. This is a cross-sectional, analytical study conducted among 12-year-old schoolchildren in a Brazilian municipality with a large population. The examinations were performed by 24 trained and calibrated dentists with the aid of 24 recorders. Data collection occurred in 36 public schools selected from the 89 public schools of the city. Descriptive, univariate and multiple analyses were conducted. Of the 2510 schoolchildren included in the study, 2211 reported having received information about how to prevent oral problems. Access to such information was greater among those who used private dental services, and lower among those who used the service for treatment, who evaluated the service as fair or bad/awful, who used only a toothbrush, or a toothbrush and tongue scrubbing, for oral hygiene, and who reported not being satisfied with the appearance of their teeth. The conclusion drawn is that the majority of schoolchildren had access to information about how to prevent oral problems, though access was associated with characteristics of the health services, health behavior, and outcomes.