862 results for Topological data analysis


Relevance:

90.00%

Publisher:

Abstract:

Attention Deficit Hyperactivity Disorder (ADHD) is one of the most prevalent childhood diagnoses. There is limited research available from the perspective of the child or young person with ADHD. The current research explored how young people perceive ADHD. A secondary aim of the study was to explore to what extent they identify with ADHD. Five participants took part in this study. Their views were explored using semi-structured interviews guided by methods from Personal Construct Psychology. The data were analysed using Interpretative Phenomenological Analysis (IPA). Data analysis suggests that the young people's views of ADHD are complex and, at times, contradictory. Four super-ordinate themes were identified: "What is ADHD?", "The role and impact of others on the experience of ADHD", "Identity conflict" and "My relationship with ADHD". The young people's contradictory views on ADHD are reflective of portrayals of ADHD in the media. A power imbalance was also identified, whereby the young people perceive that they play a passive role in the management of their treatment. Finally, the young people's accounts revealed a variety of approaches taken to make sense of their condition.

Relevance:

90.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

90.00%

Publisher:

Abstract:

Understanding how aquatic species grow is fundamental in fisheries because stock assessment often relies on growth dependent statistical models. Length-frequency-based methods become important when more applicable data for growth model estimation are either not available or very expensive. In this article, we develop a new framework for growth estimation from length-frequency data using a generalized von Bertalanffy growth model (VBGM) framework that allows for time-dependent covariates to be incorporated. A finite mixture of normal distributions is used to model the length-frequency cohorts of each month with the means constrained to follow a VBGM. The variances of the finite mixture components are constrained to be a function of mean length, reducing the number of parameters and allowing for an estimate of the variance at any length. To optimize the likelihood, we use a minorization–maximization (MM) algorithm with a Nelder–Mead sub-step. This work was motivated by the decline in catches of the blue swimmer crab (BSC) (Portunus armatus) off the east coast of Queensland, Australia. We test the method with a simulation study and then apply it to the BSC fishery data.
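
For readers unfamiliar with the model, the sketch below is not the authors' implementation: it fits a finite mixture of normals whose component means follow a von Bertalanffy growth curve, with the component variances tied to mean length through a single coefficient of variation, and optimises the mixture likelihood directly with Nelder-Mead. The cohort ages, parameter names and starting values are illustrative assumptions; the paper's MM algorithm with a Nelder-Mead sub-step and its time-dependent covariates are not reproduced here.

```python
# Minimal sketch: finite mixture of normals with VBGM-constrained means.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def vbgm_mean(age, L_inf, K, t0):
    """Mean length at age under the von Bertalanffy growth model."""
    return L_inf * (1.0 - np.exp(-K * (age - t0)))

def neg_log_lik(params, lengths, cohort_ages):
    """Mixture negative log-likelihood: one normal component per cohort age.

    Component standard deviations are cv * mean length, mirroring the idea
    of constraining the variance to be a function of mean length."""
    L_inf, K, t0, cv = params[:4]
    if L_inf <= 0 or K <= 0 or cv <= 0:
        return np.inf
    mus = vbgm_mean(cohort_ages, L_inf, K, t0)
    if np.any(mus <= 0):
        return np.inf
    w = np.exp(params[4:])                     # unnormalised mixture weights
    w = w / w.sum()
    sds = cv * mus
    dens = norm.pdf(lengths[:, None], loc=mus[None, :], scale=sds[None, :])
    return -np.sum(np.log(dens @ w + 1e-300))

# Example: three cohorts (ages 1, 2, 3 years) observed in one month.
rng = np.random.default_rng(0)
ages = np.array([1.0, 2.0, 3.0])
true_mu = vbgm_mean(ages, 180.0, 0.8, -0.1)
lengths = np.concatenate([rng.normal(m, 0.1 * m, 200) for m in true_mu])

x0 = np.array([150.0, 0.5, 0.0, 0.15, 0.0, 0.0, 0.0])
fit = minimize(neg_log_lik, x0, args=(lengths, ages), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
print(fit.x[:4])   # estimated L_inf, K, t0, cv
```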

Relevance:

90.00%

Publisher:

Abstract:

The protein lysate array is an emerging technology for quantifying protein concentration ratios in multiple biological samples. It is gaining popularity, and has the potential to answer questions about post-translational modifications and protein pathway relationships. Statistical inference for a parametric quantification procedure has been inadequately addressed in the literature, mainly due to two challenges: the increasing dimension of the parameter space and the need to account for dependence in the data. Each chapter of this thesis addresses one of these issues. In Chapter 1, an introduction to protein lysate array quantification is presented, followed by the motivations and goals for this thesis work. In Chapter 2, we develop a multi-step procedure for sigmoidal models, ensuring consistent estimation of the concentration level with full asymptotic efficiency. The results obtained in this chapter justify inferential procedures based on large-sample approximations. Simulation studies and real data analysis are used to illustrate the performance of the proposed method in finite samples. The multi-step procedure is simpler in both theory and computation than the single-step least squares method used in current practice. In Chapter 3, we introduce a new model that accounts for the dependence structure of the errors through a nonlinear mixed effects model. We consider a method to approximate the maximum likelihood estimator of all the parameters. Using simulation studies on various error structures, we show that for data with non-i.i.d. errors the proposed method leads to more accurate estimates and better confidence intervals than the existing single-step least squares method.
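
As a rough illustration of the kind of sigmoidal quantification model discussed, the sketch below fits a four-parameter logistic curve to a simulated dilution series with a single least-squares step. The parameterisation and all names are assumptions; the thesis's multi-step estimator and its nonlinear mixed-effects extension are not implemented here.

```python
# Illustrative single-step fit of a sigmoidal dilution-response curve.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_conc, lower, upper, slope, log_ec50):
    """Four-parameter logistic: signal as a function of log concentration."""
    return lower + (upper - lower) / (1.0 + np.exp(-slope * (log_conc - log_ec50)))

# Simulated dilution series for one sample (2-fold dilutions).
rng = np.random.default_rng(1)
log_conc = np.log2(np.array([1, 2, 4, 8, 16, 32, 64, 128], dtype=float))
signal = four_pl(log_conc, 200.0, 6000.0, 1.2, 3.5) + rng.normal(0, 80, log_conc.size)

popt, pcov = curve_fit(four_pl, log_conc, signal,
                       p0=[signal.min(), signal.max(), 1.0, np.median(log_conc)])
print(dict(zip(["lower", "upper", "slope", "log_ec50"], popt)))
```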

Relevance:

90.00%

Publisher:

Abstract:

Datacenters have emerged as the dominant form of computing infrastructure over the last two decades. The tremendous increase in the requirements of data analysis has led to a proportional increase in power consumption, and datacenters are now one of the fastest growing electricity consumers in the United States. Another rising concern is the loss of throughput due to network congestion. Scheduling models that do not explicitly account for data placement may lead to a transfer of large amounts of data over the network, causing unacceptable delays. In this dissertation, we study different scheduling models that are inspired by the dual objectives of minimizing energy costs and network congestion in a datacenter. As datacenters are equipped to handle peak workloads, the average server utilization in most datacenters is very low. As a result, one can achieve huge energy savings by selectively shutting down machines when demand is low. In this dissertation, we introduce the network-aware machine activation problem to find a schedule that simultaneously minimizes the number of machines necessary and the congestion incurred in the network. Our model significantly generalizes well-studied combinatorial optimization problems such as hard-capacitated hypergraph covering and is thus strongly NP-hard. As a result, we focus on finding good approximation algorithms. Data-parallel computation frameworks such as MapReduce have popularized the design of applications that require a large amount of communication between different machines. Efficient scheduling of these communication demands is essential to guarantee efficient execution of the different applications. In the second part of the thesis, we study the approximability of the co-flow scheduling problem, which has recently been introduced to capture these application-level demands. Finally, we also study the question, "In what order should one process jobs?" Often, precedence constraints specify a partial order over the set of jobs and the objective is to find suitable schedules that satisfy the partial order. However, in the presence of hard deadline constraints, it may be impossible to find a schedule that satisfies all precedence constraints. In this thesis, we formalize different variants of job scheduling with soft precedence constraints and conduct the first systematic study of these problems.
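
The toy greedy heuristic below is not the dissertation's approximation algorithm; it only illustrates the tension the abstract describes, trading the cost of activating machines against the congestion that job placement adds to network links. Job sizes, capacities and cost weights are invented for the example.

```python
# Toy greedy placement balancing machine activation against link congestion.
from dataclasses import dataclass, field

@dataclass
class Machine:
    capacity: float            # how much work the machine can host
    link_load: float = 0.0     # traffic currently on its network link
    used: float = 0.0
    active: bool = False
    jobs: list = field(default_factory=list)

def place_jobs(jobs, machines, activation_cost=10.0, congestion_weight=1.0):
    """For each job, pick the feasible machine with the smallest marginal cost:
    an activation charge if the machine is off, plus the traffic the job adds
    to that machine's link."""
    for job_id, (work, traffic) in enumerate(jobs):
        best, best_cost = None, float("inf")
        for m in machines:
            if m.used + work > m.capacity:
                continue
            cost = (0.0 if m.active else activation_cost) \
                   + congestion_weight * (m.link_load + traffic)
            if cost < best_cost:
                best, best_cost = m, cost
        if best is None:
            raise RuntimeError(f"no machine can host job {job_id}")
        best.active = True
        best.used += work
        best.link_load += traffic
        best.jobs.append(job_id)
    return [m for m in machines if m.active]

# Example: five jobs (work units, data traffic) on three machines.
jobs = [(4, 2.0), (3, 1.0), (5, 3.0), (2, 0.5), (6, 2.5)]
machines = [Machine(capacity=10.0) for _ in range(3)]
for m in place_jobs(jobs, machines):
    print(m.jobs, m.used, m.link_load)
```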

Relevance:

90.00%

Publisher:

Abstract:

This dissertation research identifies two major problems with current Knowledge Organization (KO) systems, such as subject gateways or web directories: (1) the current systems use traditional controlled-vocabulary schemes that are poorly suited to web resources, and (2) information is organized by professionals rather than by users, so it does not reflect users' current needs as they express them intuitively and immediately. In order to explore users' needs, I examined social tags, a form of user-generated uncontrolled vocabulary. As investment in professionally developed subject gateways and web directories diminishes (support for both BUBL and Intute, examined in this study, is being discontinued), understanding the characteristics of social tagging becomes even more critical. Several researchers have discussed social tagging behavior and its usefulness for classification or retrieval; however, further research is needed to qualitatively and quantitatively investigate social tagging in order to verify its quality and benefit. This research particularly examined the indexing consistency of social tagging in comparison to professional indexing in order to assess the quality and efficacy of tagging. The data analysis was divided into three phases: analysis of indexing consistency, analysis of tagging effectiveness, and analysis of tag attributes. Most indexing consistency studies have been conducted with a small number of professional indexers, and they tended to exclude users. Furthermore, the studies have mainly focused on physical library collections. This dissertation research bridged these gaps by (1) extending the scope of resources to various web documents indexed by users and (2) employing the Information Retrieval (IR) Vector Space Model (VSM)-based indexing consistency method, since it is suitable for dealing with a large number of indexers. As a second phase, an analysis of tagging effectiveness with tagging exhaustivity and tag specificity was conducted to ameliorate the drawbacks of consistency analysis based only on quantitative measures of vocabulary matching. Finally, to investigate tagging patterns and behaviors, a content analysis of tag attributes was conducted based on the FRBR model. The findings revealed that there was greater consistency across all subjects among taggers than between the two groups of professionals. Examination of the exhaustivity and specificity of social tags provided insights into particular characteristics of tagging behavior and its variation across subjects. To further investigate the quality of tags, a Latent Semantic Analysis (LSA) was conducted to determine to what extent tags are conceptually related to professionals' keywords, and it was found that tags of higher specificity tended to have a higher semantic relatedness to professionals' keywords. This leads to the conclusion that a term's power as a differentiator is related to its semantic relatedness to documents. The findings on tag attributes identified important bibliographic attributes of tags beyond describing the subjects or topics of a document. The findings also showed that tags have essential attributes matching those defined in FRBR.
Furthermore, in terms of specific subject areas, the findings showed that taggers exhibited distinct tagging behaviors, with different features and tendencies, across web documents representing heterogeneous digital media resources. These results lead to the conclusion that there should be an increased awareness of diverse user needs by subject in order to improve metadata in practical applications. This dissertation research is the first necessary step towards utilizing social tagging in digital information organization by verifying the quality and efficacy of social tagging. It combined quantitative (statistical) and qualitative (content analysis using FRBR) approaches to the vocabulary analysis of tags, which provided a more complete examination of tag quality. Through the detailed analysis of tag properties undertaken in this dissertation, we have a clearer understanding of the extent to which social tagging can be used to replace (and in some cases to improve upon) professional indexing.
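
A minimal sketch of the VSM-style consistency measure mentioned above: the tags assigned to a document and the keywords assigned by professional indexers are turned into term-frequency vectors over a shared vocabulary and compared by cosine similarity. The data structures and the unweighted term counts are assumptions; the dissertation's exact formulation may differ.

```python
# Cosine-similarity indexing consistency between tags and professional keywords.
from collections import Counter
from math import sqrt

def term_vector(terms, vocabulary):
    counts = Counter(terms)
    return [counts.get(t, 0) for t in vocabulary]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def indexing_consistency(tagger_terms, indexer_terms):
    vocabulary = sorted(set(tagger_terms) | set(indexer_terms))
    return cosine(term_vector(tagger_terms, vocabulary),
                  term_vector(indexer_terms, vocabulary))

# Hypothetical example: social tags for one web document vs. the subject
# terms assigned by professional indexers.
tags = ["python", "tutorial", "programming", "programming", "beginners"]
keywords = ["programming", "python", "computer science", "education"]
print(round(indexing_consistency(tags, keywords), 3))
```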

Relevance:

90.00%

Publisher:

Abstract:

Dissertation (Master's)--Universidade de Brasília, Instituto de Geociências, 2016.

Relevance:

90.00%

Publisher:

Abstract:

An overview is given of a user interaction monitoring and analysis framework called BaranC. Monitoring and analysing human-digital interaction is an essential part of developing a user model as the basis for investigating user experience. The primary human-digital interaction, such as on a laptop or smartphone, is best understood and modelled in the wider context of the user and their environment. The BaranC framework provides monitoring and analysis capabilities that not only record all user interaction with a digital device (e.g. a smartphone), but also collect all available context data (such as from sensors in the digital device itself, a fitness band, or smart appliances). The data collected by BaranC is recorded as a User Digital Imprint (UDI) which is, in effect, the user model and provides the basis for data analysis. BaranC provides functionality that is useful for user experience studies, user interface design evaluation, and providing user assistance services. An important concern for personal data is privacy, and the framework gives the user full control over the monitoring, storing and sharing of their data.
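
Purely as an illustration of the idea of a User Digital Imprint, the sketch below bundles interaction events and context readings into one record together with a user-controlled sharing field. The class and field names are assumptions and do not reflect BaranC's actual data model or API.

```python
# Hypothetical UDI-style record combining interaction events and context data.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InteractionEvent:
    timestamp: datetime
    device: str          # e.g. "smartphone", "laptop"
    app: str
    action: str          # e.g. "tap", "keypress", "app_open"

@dataclass
class ContextReading:
    timestamp: datetime
    source: str          # e.g. "accelerometer", "fitness_band", "smart_thermostat"
    value: dict

@dataclass
class UserDigitalImprint:
    user_id: str
    events: list = field(default_factory=list)
    context: list = field(default_factory=list)
    sharing_consent: dict = field(default_factory=dict)  # user-controlled privacy settings

    def events_between(self, start: datetime, end: datetime):
        """Slice of interaction events inside a time window, for analysis."""
        return [e for e in self.events if start <= e.timestamp <= end]

udi = UserDigitalImprint(user_id="u-001")
now = datetime.now(timezone.utc)
udi.events.append(InteractionEvent(now, "smartphone", "mail", "app_open"))
udi.context.append(ContextReading(now, "fitness_band", {"heart_rate": 72}))
start_of_day = now.replace(hour=0, minute=0, second=0, microsecond=0)
print(len(udi.events_between(start_of_day, now)))
```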

Relevance:

80.00%

Publisher:

Abstract:

Participatory evaluation and participatory action research (PAR) are increasingly used in community-based programs and initiatives and there is a growing acknowledgement of their value. These methodologies focus more on knowledge generated and constructed through lived experience than through social science (Vanderplaat 1995). The scientific ideal of objectivity is usually rejected in favour of a holistic approach that acknowledges and takes into account the diverse perspectives, values and interpretations of participants and evaluation professionals. However, evaluation rigour need not be lost in this approach. Increasing the rigour and trustworthiness of participatory evaluations and PAR increases the likelihood that results are seen as credible and are used to continually improve programs and policies.

Drawing on learnings and critical reflections about the use of feminist and participatory forms of evaluation and PAR over a 10-year period, significant sources of rigour identified include:

• participation and communication methods that develop relations of mutual trust and open communication
• using multiple theories and methodologies, multiple sources of data, and multiple methods of data collection
• ongoing meta-evaluation and critical reflection
• critically assessing the intended and unintended impacts of evaluations, using relevant theoretical models
• using rigorous data analysis and reporting processes
• participant reviews of evaluation case studies, impact assessments and reports.

Relevance:

80.00%

Publisher:

Abstract:

There exists a general consensus in the science education literature around the goal of enhancing students' and teachers' views of nature of science (NOS). An emerging area of research in science education explores NOS and argumentation, and the aim of this study was to explore the effectiveness of a science content course incorporating explicit NOS and argumentation instruction on preservice primary teachers' views of NOS. A constructivist perspective guided the study, and the research strategy employed was case study research. Five preservice primary teachers were selected for intensive investigation in the study, which incorporated explicit NOS and argumentation instruction, and utilised scientific and socioscientific contexts for argumentation to provide opportunities for participants to apply their NOS understandings to their arguments. Four primary sources of data were used to provide evidence for the interpretations, recommendations, and implications that emerged from the study. These data sources included questionnaires and surveys, interviews, audio- and video-taped class sessions, and written artefacts. Data analysis involved the formation of various assertions that informed the major findings of the study, and a variety of validity and ethical protocols were considered during the analysis to ensure the findings and interpretations emerging from the data were valid. Results indicated that the science content course was effective in enabling four of the five participants' views of NOS to be changed. All of the participants expressed predominantly limited views of the majority of the examined NOS aspects at the commencement of the study. Many positive changes were evident at the end of the study with four of the five participants expressing partially informed and/or informed views of the majority of the examined NOS aspects. A critical analysis of the effectiveness of the various course components designed to facilitate the development of participants' views of NOS in the study, led to the identification of three factors that mediated the development of participants' NOS views: (a) contextual factors (including context of argumentation, and mode of argumentation), (b) task-specific factors (including argumentation scaffolds, epistemological probes, and consideration of alternative data and explanations), and (c) personal factors (including perceived previous knowledge about NOS, appreciation of the importance and utility value of NOS, and durability and persistence of pre-existing beliefs). A consideration of the above factors informs recommendations for future studies that seek to incorporate explicit NOS and argumentation instruction as a context for learning about NOS.

Relevance:

80.00%

Publisher:

Abstract:

The rising problems associated with construction, such as decreasing quality and productivity, labour shortages, occupational safety, and inferior working conditions, have opened the possibility of more revolutionary solutions within the industry. One prospective option is the implementation of innovative technologies such as automation and robotics, which have the potential to improve the industry in terms of productivity, safety and quality. The construction work site could, theoretically, be contained in a safer environment, with more efficient execution of the work, greater consistency of the outcome and a higher level of control over the production process. By identifying the barriers to the implementation of construction automation and robotics, and investigating ways in which to overcome them, contributions could be made towards better understanding and facilitating, where relevant, greater use of these technologies in the construction industry so as to promote its efficiency. This research aims to ascertain and explain the barriers to construction automation and robotics implementation by exploring and establishing the relationship between characteristics of the construction industry and attributes of existing construction automation and robotics technologies, on the one hand, and the level of usage and implementation, on the other, in three selected countries: Japan, Australia and Malaysia. These three countries were chosen because their construction industry characteristics provide contrast in terms of culture, gross domestic product, technology application, organisational structure and labour policies. This research uses a mixed method approach to gathering data, both quantitative and qualitative, employing a questionnaire survey and an interview schedule, with a wide-ranging sample from management through to on-site users, working in companies from small (less than AUD 0.2 million) to large (more than AUD 500 million) and involved in a broad range of business types and construction sectors. Detailed quantitative (statistical) and qualitative (content) data analysis is performed to provide a set of descriptions, relationships, and differences. The statistical tests selected include cross-tabulations and bivariate and multivariate analysis for investigating possible relationships between variables, and the Kruskal-Wallis and Mann-Whitney U tests for independent samples for hypothesis testing and for generalising from the research sample to the construction industry population. Findings and conclusions arising from the research work, which include ranking schemes produced for four key areas (construction attributes affecting level of usage, barrier variables, differing levels of usage between countries, and future trends), have established a number of potential areas that could impact the level of implementation both globally and for individual countries.
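
The snippet below shows, on made-up usage-level scores, how the two non-parametric tests named above are typically run with scipy; the data and variable names are illustrative only.

```python
# Kruskal-Wallis across three groups, Mann-Whitney U as a pairwise follow-up.
from scipy.stats import kruskal, mannwhitneyu

japan     = [5, 4, 5, 4, 3, 5, 4]   # hypothetical usage-level ratings
australia = [3, 3, 2, 4, 3, 2, 3]
malaysia  = [2, 3, 2, 1, 2, 3, 2]

# Kruskal-Wallis: do the three independent samples share a common distribution?
h_stat, p_overall = kruskal(japan, australia, malaysia)
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_overall:.4f}")

# Mann-Whitney U: pairwise comparison between two countries.
u_stat, p_pair = mannwhitneyu(japan, australia, alternative="two-sided")
print(f"Mann-Whitney U={u_stat:.2f}, p={p_pair:.4f}")
```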