930 results for Multidimensional data analysis


Relevance:

90.00%

Abstract:

The Solidary Economy is a field that has shown traits unusual relative to what is preached in traditional economic organizations, even in organizations with very similar principles, such as some cooperatives. These traits approach the concept of isonomy proposed by Ramos (1989). Given this context, and the notion that isonomy is an ideal type, the objective of this work was to identify particulars of the isonomic environment in solidary economic experiences, taking as the empirical research setting the Grupo de Mulheres Decididas a Vencer, considered a solidary economic enterprise. To this end, we used descriptive-exploratory research of a qualitative nature, in which the object of the research is the enterprise in question; the study is therefore also characterized as a case study, whose research subjects were six associates, the most active in the enterprise. Data analysis was built from the five categories that characterize isonomy - prescription of minimal norms, self-gratifying activity, activities undertaken as a vocation, a broad decision-making system, and primary interpersonal relations - and from the traits of a solidary economic enterprise, through content analysis, specifically categorial analysis. Given this context and the reality of the Grupo de Mulheres Decididas a Vencer - with minimal rules and procedures for conducting activities, which the women compare to therapy; with women choosing to join that environment; and with a democratic space, free of bureaucracy in professional interpersonal relationships, in other words an organizational space showing signs of substantive rationality - it was possible to conclude that the Group shares experiences and characteristics of isonomy. This finding meets the multidimensional social life presupposed by the Paraeconomic Paradigm, enabling people to enter different social environments of the economy in search of self-actualization.

Relevance:

90.00%

Abstract:

Attention Deficit Hyperactivity Disorder (ADHD) is one of the most prevalent childhood diagnoses. There is limited research available from the perspective of the child or young person with ADHD. The current research explored how young people perceive ADHD. A secondary aim of the study was to explore to what extent they identify with ADHD. Five participants took part in this study. Their views were explored using semi-structured interviews guided by methods from Personal Construct Psychology. The data were analysed using Interpretative Phenomenological Analysis (IPA). Data analysis suggests that the young people’s views of ADHD are complex and, at times, contradictory. Four super-ordinate themes were identified: What is ADHD?, The role and impact of others on the experience of ADHD, Identity conflict and My relationship with ADHD. The young people’s contradictory views on ADHD are reflective of portrayals of ADHD in the media. A power imbalance was also identified, whereby the young people perceive that they play a passive role in the management of their treatment. Finally, the young people’s accounts revealed a variety of approaches taken to make sense of their condition.

Relevance:

90.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

90.00%

Abstract:

Finding rare events in multidimensional data is an important detection problem with applications in many fields, such as risk estimation in the insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, and safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class. In some cases, such events may never have been observed, so the only information available is a set of normal samples and an assumed pairwise similarity function. Such a metric may be known only up to a certain number of unspecified parameters, which would either need to be learned from training data or fixed by a domain expert. Sometimes the anomalous condition can be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of that measure. Change detection methods used in time series analysis are not easily extendable to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions data exhibit more complex interdependencies, and there is redundancy that can be exploited to adaptively model the normal data. In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks. In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets allows us to pose this anomaly detection problem as basis pursuit optimization. We therefore pose the problem of detecting curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints, and propose an iterative shrinkage algorithm to solve it. Taking advantage of the parallel nature of this algorithm, we describe how it can be accelerated using graphics processing units (GPUs). We then propose a new method for finding defective components on railway tracks using cameras mounted on a train, describing how to extract features and use a combination of classifiers to solve this problem. Next, we scale anomaly detection to bigger datasets with complex interdependencies. We show that the anomaly detection problem naturally fits in the multitask learning framework: the first task consists of learning a compact representation of the good samples, while the second consists of learning the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples. In sequential detection problems, the presence of time-variant nuisance parameters affects detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory in a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work.
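The shrinkage step at the core of such an algorithm can be illustrated with a generic iterative shrinkage-thresholding (ISTA) sketch for a tiny sparsity-penalized least-squares problem; the dictionary `A`, observations `y`, and all parameter values below are illustrative assumptions, not the shearlet-based formulation used in the dissertation:

```python
def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink x toward zero by t."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def ista(A, y, lam, step, iters=500):
    """Iterative shrinkage-thresholding: minimize 0.5*||Ax - y||^2 + lam*||x||_1."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - y
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # gradient of the smooth term: g = A^T r
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by soft-thresholding (the "shrinkage")
        x = [soft_threshold(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# Toy instance: y is exactly reproduced by the sparse vector [2, 0, 0].
A = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5]]
y = [2.0, 0.0]
x_hat = ista(A, y, lam=0.1, step=0.5)
```

In a blind source separation setting, the same shrinkage iteration is applied in a transform domain (here it would be the shearlet frame), where the anomalous curvilinear component is sparse while the textured background is not.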

Relevance:

90.00%

Abstract:

In the presence of an ostomy, older adults may experience a greater degree of complications and difficulties in the adaptation process, perhaps because the ostomy carries meanings related to self-image and to the presence or increase of dependence. Recovering the older adult's need for self-care in relation to health is fundamental so that, in care, their participation and daily engagement in shaping the health education process can be obtained. This study aims to identify the characteristics of older adults with ostomies attended at a stomal therapy service and to propose an educational gerontotechnology that may contribute to the care of older adults with ostomies, in the light of Edgar Morin's Complexity. A qualitative case study was carried out at a stomal therapy service, which provided access to registration records and telephone contact with the subjects, followed by visits to the older adults' homes, in Rio Grande, Rio Grande do Sul, Brazil. The study subjects totalled four older adults: three women and one man. A form and an ecomap were used as instruments, and interviews, unsystematic observation and audio recording as techniques. Ethical standards were respected. Data analysis proceeded by: 1) exhaustive reading of the data; 2) presentation of the cases and their ecomaps, supported by Edgar Morin's Complexity; 3) rediscovery of a priori concepts illustrating the view of the older adult with an ostomy: the complex older human being with an ostomy, the complex health of the older adult with an ostomy, and complex care for the older adult with an ostomy and their family. Finally, an educational booklet was developed together with the older adults to facilitate self-care. Regarding the results, it was found that each older adult with an ostomy is permeated by specific situations and, from these, different recursive conceptions and ways of coping with adaptation were perceived. Acceptance of bodily and psychological change proved easier when there was family support, prior technical instruction or the presence of acquaintances. Coping with the disease and the possibility of death are present concerns for these older adults and limit their activities of daily living. Thus, concepts emerged that contemplated the human being in their totality, with uncertainties and meanings. A view that transcends the complexity of the ostomy encompasses care that seeks to integrate the family of the older adult with an ostomy, encouraging self-care and daily acceptance and reinforcing self-esteem as a strategy for starting over amid permanent challenges: being old and having an ostomy. Taking a fresh look at the topic of older adults with ostomies is complex, requiring a multidimensional approach to the characteristics involved. It is believed that favourable changes may come after the implementation of a health education programme that contemplates the singularity of the issues involving older adults with ostomies and their needs.

Relevance:

90.00%

Abstract:

Understanding how aquatic species grow is fundamental in fisheries because stock assessment often relies on growth dependent statistical models. Length-frequency-based methods become important when more applicable data for growth model estimation are either not available or very expensive. In this article, we develop a new framework for growth estimation from length-frequency data using a generalized von Bertalanffy growth model (VBGM) framework that allows for time-dependent covariates to be incorporated. A finite mixture of normal distributions is used to model the length-frequency cohorts of each month with the means constrained to follow a VBGM. The variances of the finite mixture components are constrained to be a function of mean length, reducing the number of parameters and allowing for an estimate of the variance at any length. To optimize the likelihood, we use a minorization–maximization (MM) algorithm with a Nelder–Mead sub-step. This work was motivated by the decline in catches of the blue swimmer crab (BSC) (Portunus armatus) off the east coast of Queensland, Australia. We test the method with a simulation study and then apply it to the BSC fishery data.
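The constrained-mean idea can be sketched as follows; the parameter values and the linear form of the length-variance relationship are illustrative assumptions, not the estimates from the BSC fishery:

```python
import math

def vbgm_mean_length(t, l_inf, k, t0):
    """Von Bertalanffy growth model: expected length at age t,
    L(t) = L_inf * (1 - exp(-k * (t - t0)))."""
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

# Illustrative (not estimated) parameters: asymptotic length 18 cm,
# growth coefficient 1.2 per year, theoretical age at length zero -0.1 years.
l_inf, k, t0 = 18.0, 1.2, -0.1

def component_sd(mean_length, a=0.05, b=0.5):
    """Variance constraint: each mixture component's standard deviation is a
    function of its mean length (an assumed linear form for illustration)."""
    return a * mean_length + b

# In the mixture framework, each cohort present in a month contributes a normal
# component whose mean is constrained to lie on the VBGM curve:
cohort_ages = [0.5, 1.0, 1.5, 2.0]
means = [vbgm_mean_length(t, l_inf, k, t0) for t in cohort_ages]
sds = [component_sd(m) for m in means]
```

Constraining the means to the growth curve and the variances to a function of mean length is what reduces the parameter count relative to a free mixture, leaving the curve parameters, the variance function, and the mixing proportions to be estimated.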

Relevance:

90.00%

Abstract:

The protein lysate array is an emerging technology for quantifying protein concentration ratios in multiple biological samples. It is gaining popularity, and has the potential to answer questions about post-translational modifications and protein pathway relationships. Statistical inference for a parametric quantification procedure has been inadequately addressed in the literature, mainly due to two challenges: the increasing dimension of the parameter space and the need to account for dependence in the data. Each chapter of this thesis addresses one of these issues. In Chapter 1, an introduction to protein lysate array quantification is presented, followed by the motivations and goals for this thesis work. In Chapter 2, we develop a multi-step procedure for sigmoidal models, ensuring consistent estimation of the concentration level with full asymptotic efficiency. The results obtained in this chapter justify inferential procedures based on large-sample approximations. Simulation studies and real data analysis are used to illustrate the performance of the proposed method in finite samples. The multi-step procedure is simpler in both theory and computation than the single-step least squares method used in current practice. In Chapter 3, we introduce a new model that accounts for the dependence structure of the errors via a nonlinear mixed effects model. We consider a method to approximate the maximum likelihood estimator of all the parameters. Using simulation studies on various error structures, we show that for data with non-i.i.d. errors the proposed method leads to more accurate estimates and better confidence intervals than the existing single-step least squares method.
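A minimal sketch of the kind of sigmoidal response model underlying such quantification; the four-parameter logistic form and all parameter values are illustrative assumptions, not the specific model developed in Chapter 2:

```python
import math

def sigmoid_response(x, a, b, c, d):
    """Four-parameter logistic: observed spot intensity as a function of
    log-concentration x. a = lower asymptote (background), b = upper
    asymptote (saturation), c = slope, d = inflection point."""
    return a + (b - a) / (1.0 + math.exp(-c * (x - d)))

# Illustrative curve parameters shared across samples on one array.
a, b, c, d = 0.2, 3.0, 1.5, 0.0

# A dilution series for one sample: every sample traces out the same sigmoid,
# horizontally shifted by its own unknown log-concentration level, which is
# the quantity the quantification procedure estimates.
dilution_steps = [-2.0, -1.0, 0.0, 1.0, 2.0]   # log dilution offsets
sample_level = 0.7                              # hypothetical concentration level
intensities = [sigmoid_response(x + sample_level, a, b, c, d)
               for x in dilution_steps]
```

The estimation challenge the thesis addresses comes from fitting one such shift per sample jointly with the shared curve parameters, so the parameter space grows with the number of samples.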

Relevance:

90.00%

Abstract:

Datacenters have emerged as the dominant form of computing infrastructure over the last two decades. The tremendous increase in the requirements of data analysis has led to a proportional increase in power consumption and datacenters are now one of the fastest growing electricity consumers in the United States. Another rising concern is the loss of throughput due to network congestion. Scheduling models that do not explicitly account for data placement may lead to a transfer of large amounts of data over the network causing unacceptable delays. In this dissertation, we study different scheduling models that are inspired by the dual objectives of minimizing energy costs and network congestion in a datacenter. As datacenters are equipped to handle peak workloads, the average server utilization in most datacenters is very low. As a result, one can achieve huge energy savings by selectively shutting down machines when demand is low. In this dissertation, we introduce the network-aware machine activation problem to find a schedule that simultaneously minimizes the number of machines necessary and the congestion incurred in the network. Our model significantly generalizes well-studied combinatorial optimization problems such as hard-capacitated hypergraph covering and is thus strongly NP-hard. As a result, we focus on finding good approximation algorithms. Data-parallel computation frameworks such as MapReduce have popularized the design of applications that require a large amount of communication between different machines. Efficient scheduling of these communication demands is essential to guarantee efficient execution of the different applications. In the second part of the thesis, we study the approximability of the co-flow scheduling problem that has been recently introduced to capture these application-level demands. 
Finally, we also study the question, "In what order should one process jobs?" Often, precedence constraints specify a partial order over the set of jobs, and the objective is to find suitable schedules that satisfy the partial order. However, in the presence of hard deadline constraints, it may be impossible to find a schedule that satisfies all precedence constraints. In this thesis, we formalize different variants of job scheduling with soft precedence constraints and conduct the first systematic study of these problems.
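To give the flavor of the covering problems mentioned above, here is the classic greedy approximation for plain set cover, a much simpler special case than the network-aware machine activation problem; the instance is a made-up toy, not data from the dissertation:

```python
def greedy_set_cover(universe, sets):
    """Greedy ln(n)-approximation for set cover: repeatedly pick the set
    covering the most still-uncovered elements. Returns indices of chosen sets."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(uncovered & sets[i]))
        if not uncovered & sets[best]:
            raise ValueError("instance is infeasible: some element is uncoverable")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Toy instance: candidate machines (sets) covering job demands (elements).
jobs = {1, 2, 3, 4, 5}
machines = [{1, 2, 3}, {2, 4}, {4, 5}, {5}]
picked = greedy_set_cover(jobs, machines)
```

The machine activation problem layers network congestion on top of this covering structure, which is why it generalizes hard-capacitated hypergraph covering and requires more sophisticated approximation algorithms than this greedy rule.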

Relevance:

90.00%

Abstract:

This dissertation research points out major challenges with current Knowledge Organization (KO) systems, such as subject gateways and web directories: (1) the current systems use traditional knowledge organization schemes based on controlled vocabulary, which are not well suited to web resources, and (2) information is organized by professionals rather than by users, so it does not reflect users' intuitively and instantaneously expressed current needs. In order to explore users' needs, I examined social tags, which are user-generated uncontrolled vocabulary. As investment in professionally developed subject gateways and web directories diminishes (support for both BUBL and Intute, examined in this study, is being discontinued), understanding the characteristics of social tagging becomes even more critical. Several researchers have discussed social tagging behavior and its usefulness for classification or retrieval; however, further research is needed to qualitatively and quantitatively investigate social tagging in order to verify its quality and benefit. This research particularly examined the indexing consistency of social tagging in comparison to professional indexing, to assess the quality and efficacy of tagging. The data analysis was divided into three phases: analysis of indexing consistency, analysis of tagging effectiveness, and analysis of tag attributes. Most indexing consistency studies have been conducted with a small number of professional indexers, and they have tended to exclude users; furthermore, they have mainly focused on physical library collections. This dissertation research bridged these gaps by (1) extending the scope of resources to various web documents indexed by users and (2) employing the Information Retrieval (IR) Vector Space Model (VSM)-based indexing consistency method, since it is suitable for dealing with a large number of indexers. In the second phase, an analysis of tagging effectiveness, in terms of tagging exhaustivity and tag specificity, was conducted to ameliorate the drawbacks of consistency analysis based only on quantitative measures of vocabulary matching. Finally, to investigate tagging patterns and behaviors, a content analysis of tag attributes was conducted based on the FRBR model. The findings revealed greater consistency across all subjects among taggers than between the two groups of professionals. Examination of the exhaustivity and specificity of social tags provided insights into particular characteristics of tagging behavior and its variation across subjects. To further investigate the quality of tags, a Latent Semantic Analysis (LSA) was conducted to determine to what extent tags are conceptually related to professionals' keywords; it was found that tags of higher specificity tended to have higher semantic relatedness to professionals' keywords, leading to the conclusion that a term's power as a differentiator is related to its semantic relatedness to documents. The findings on tag attributes identified important bibliographic attributes of tags beyond describing the subjects or topics of a document, and showed that tags have essential attributes matching those defined in FRBR. Furthermore, in terms of specific subject areas, the findings identified for the first time that taggers exhibited different tagging behaviors, with distinctive features and tendencies, on web documents characterizing heterogeneous digital media resources. These results lead to the conclusion that there should be increased awareness of diverse user needs by subject in order to improve metadata in practical applications. This dissertation research is a first necessary step toward utilizing social tagging in digital information organization by verifying its quality and efficacy. It combined quantitative (statistical) and qualitative (content analysis using FRBR) approaches to the vocabulary analysis of tags, providing a more complete examination of their quality. Through the detailed analysis of tag properties undertaken in this dissertation, we have a clearer understanding of the extent to which social tagging can be used to replace (and in some cases improve upon) professional indexing.
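The VSM-based consistency idea can be sketched as a cosine similarity between two indexers' term vectors for the same document; the term sets below are hypothetical, and the binary weighting is an assumed simplification of the method:

```python
import math

def cosine_consistency(terms_a, terms_b):
    """Vector-space indexing consistency between two indexers: represent each
    indexer's term assignments for a document as a binary term vector over the
    combined vocabulary and take the cosine between them. 1.0 = identical."""
    vocab = sorted(set(terms_a) | set(terms_b))
    va = [1.0 if t in terms_a else 0.0 for t in vocab]
    vb = [1.0 if t in terms_b else 0.0 for t in vocab]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = (math.sqrt(sum(x * x for x in va))
            * math.sqrt(sum(y * y for y in vb)))
    return dot / norm if norm else 0.0

# Hypothetical example: a professional indexer's terms vs. a tagger's tags.
professional = {"information retrieval", "classification", "metadata"}
tagger = {"information retrieval", "tagging", "metadata", "web"}
score = cosine_consistency(professional, tagger)
```

Because the cosine is computed pairwise over vectors, it scales naturally to a large number of indexers, which is the property that motivated choosing the VSM-based measure over traditional pairwise term-matching consistency formulas.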

Relevance:

90.00%

Abstract:

Stress serves as an adaptive mechanism and helps organisms cope with life-threatening situations. However, individual vulnerability to stress and dysregulation of this system may precipitate stress-related disorders such as depression. The neurobiological circuitry in charge of dealing with stressors has been widely studied in animal models. Recently, our group demonstrated a role for lysophosphatidic acid (LPA), through the LPA1 receptor, in vulnerability to stress; in particular, the lack of this receptor is related to a robust decrease in adult hippocampal neurogenesis and the induction of anxious and depressive states. Nevertheless, the specific abnormalities in the limbic circuit in reaction to stress remain unclear. The aim of this study is to examine differences in the brain activation pattern in the presence or absence of the LPA1 receptor after acute stress. For this purpose, we studied the response of maLPA1-null male mice and normal wild-type mice to an intense stressor: the Tail Suspension Test. Behaviour-induced activation of brain regions involved in mood regulation was analysed by stereological quantification of c-Fos-immunoreactive cells. We also conducted a multidimensional scaling analysis in order to unravel coactivation between structures. Our results revealed hyperactivity of stress-related structures such as the amygdala and the paraventricular nucleus of the hypothalamus in the knockout model, and different patterns of coactivation in the two genotypes in the resulting multidimensional map. These data provide further evidence of the engagement of LPA1 receptors in stress regulation and shed light on the different neural pathways, under normal and vulnerable conditions, that can lead to mood disorders.
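As a rough illustration of how coactivation between structures can be quantified before a multidimensional scaling analysis, one can correlate per-animal c-Fos counts across pairs of regions; the counts and the region pairing below are made up for the sketch, not data from the study:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of per-animal counts."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical c-Fos+ cell counts per animal in two structures. The full
# pairwise correlation (or derived distance) matrix over all structures is
# what a multidimensional scaling analysis would embed in a low-dimensional
# coactivation map.
amygdala = [120, 150, 130, 160, 140]
pvn = [80, 100, 95, 110, 90]
coactivation = pearson(amygdala, pvn)
```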

Relevance:

90.00%

Abstract:

An overview is given of a user interaction monitoring and analysis framework called BaranC. Monitoring and analysing human-digital interaction is an essential part of developing a user model as the basis for investigating user experience. The primary human-digital interaction, such as on a laptop or smartphone, is best understood and modelled in the wider context of the user and their environment. The BaranC framework provides monitoring and analysis capabilities that not only record all user interaction with a digital device (e.g. a smartphone), but also collect all available context data (such as from sensors in the digital device itself, a fitness band or a smart appliance). The data collected by BaranC are recorded as a User Digital Imprint (UDI) which is, in effect, the user model and provides the basis for data analysis. BaranC provides functionality that is useful for user experience studies, user interface design evaluation, and user assistance services. An important concern for personal data is privacy, and the framework gives the user full control over the monitoring, storing and sharing of their data.
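A minimal sketch of what one entry of such a User Digital Imprint might look like: an interaction event bundled with the context data available at the time, plus the user's sharing preferences. The field names and schema here are illustrative assumptions, not BaranC's actual format:

```python
import json
import time

def make_udi_event(event_type, detail, context):
    """Build one hypothetical UDI record: event + context + privacy flags."""
    return {
        "timestamp": time.time(),
        "event": {"type": event_type, "detail": detail},
        "context": context,  # e.g. device sensors, fitness band readings
        # The user retains control over storing and sharing of each record.
        "consent": {"stored": True, "shared": False},
    }

event = make_udi_event(
    "touch",
    {"x": 120, "y": 348, "app": "browser"},
    {"ambient_light_lux": 180, "heart_rate_bpm": 72},
)
serialized = json.dumps(event)  # records accumulate into the UDI store
```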

Relevance:

90.00%

Abstract:

Big data are reshaping the way we interact with technology, fostering new applications that increase the safety assessment of foods. An extraordinary amount of information is analysed using machine learning approaches aimed at detecting the existence, or predicting the likelihood, of future risks. Food business operators have to share the results of these analyses when applying to place regulated products on the market, while agri-food safety agencies (including the European Food Safety Authority) are exploring new avenues to increase the accuracy of their evaluations by processing Big data. Such an informational endowment brings with it opportunities and risks correlated with the extraction of meaningful inferences from data. However, conflicting interests and tensions among the entities involved - the industry, food safety agencies, and consumers - hinder the adoption of shared methods to steer the processing of Big data in a sound, transparent and trustworthy way. A recent reform of the EU sectoral legislation, the lack of trust, and the presence of a considerable number of stakeholders highlight the need for ethical contributions aimed at steering the development and deployment of Big data applications. Moreover, the Artificial Intelligence guidelines and charters published by European Union institutions and Member States have to be discussed in light of applied contexts, including the one at stake here. This thesis aims to contribute to these goals by discussing what principles should be put forward when processing Big data in the context of agri-food safety risk assessment. The research focuses on two intertwined topics - data ownership and data governance - evaluating how the regulatory framework addresses the challenges raised by Big data analysis in these domains. The outcome of the project is a tentative Roadmap that identifies the principles to be observed when processing Big data in this domain and their possible implementations.