962 results for data complexity


Relevance: 30.00%

Abstract:

Analysis of previously published sets of DNA microarray gene expression data by singular value decomposition has uncovered underlying patterns or “characteristic modes” in their temporal profiles. These patterns contribute unequally to the structure of the expression profiles. Moreover, the essential features of a given set of expression profiles are captured using just a small number of characteristic modes. This leads to the striking conclusion that the transcriptional response of a genome is orchestrated in a few fundamental patterns of gene expression change. These patterns are both simple and robust, dominating the alterations in expression of genes throughout the genome. Moreover, the characteristic modes of gene expression change in response to environmental perturbations are similar in such distant organisms as yeast and human cells. This analysis reveals simple regularities in the seemingly complex transcriptional transitions of diverse cells to new states, and these provide insights into the operation of the underlying genetic networks.
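The decomposition described above can be sketched in a few lines of NumPy. The sketch below builds a hypothetical expression matrix (genes × time points) dominated by two underlying temporal patterns, applies singular value decomposition, and shows that a small number of "characteristic modes" capture most of the structure; the matrix and mode shapes are illustrative assumptions, not data from the study.

```python
import numpy as np

# Hypothetical expression matrix: rows = genes, columns = time points.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8)
# Construct profiles dominated by two underlying patterns plus noise.
modes = np.vstack([np.sin(2 * np.pi * t), np.exp(-3 * t)])
weights = rng.normal(size=(500, 2))
expression = weights @ modes + 0.1 * rng.normal(size=(500, 8))

# SVD: the rows of Vt are the characteristic modes of the temporal profiles.
U, s, Vt = np.linalg.svd(expression, full_matrices=False)

# Fraction of the data's structure captured by each mode.
variance_fraction = s**2 / np.sum(s**2)
print(variance_fraction)  # the first two modes dominate
```

Ranking the squared singular values in this way is what justifies the claim that the modes "contribute unequally" to the expression profiles.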

Relevance: 30.00%

Abstract:

DNA probes from the L6 rust resistance gene of flax (Linum usitatissimum) hybridize to resistance genes at the unlinked M locus, indicating sequence similarities between genes at the two loci. Genetic and molecular data indicate that the L locus is simple, containing a single gene with 13 alleles, whereas the M locus is complex, containing a tandem array of genes of similar sequence. Thus the evolution of these two related loci has been different. The consequences of the contrasting structures of the L and M loci for the evolution of different rust resistance specificities can now be investigated at the molecular level.

Relevance: 30.00%

Abstract:

Citizens demand more and more data for making decisions in their daily lives. Therefore, mechanisms that allow citizens to understand and analyze linked open data (LOD) in a user-friendly manner are highly required. To this aim, the concept of Open Business Intelligence (OpenBI) is introduced in this position paper. OpenBI enables non-expert users to (i) analyze and visualize LOD, thus generating actionable information by means of reporting, OLAP analysis, dashboards or data mining; and (ii) share the newly acquired information as LOD to be reused by anyone. One of the most challenging issues of OpenBI is related to data mining, since non-experts (such as citizens) need guidance during preprocessing and application of mining algorithms due to the complexity of the mining process and the low quality of the data sources. This is even worse when dealing with LOD, not only because of the different kinds of links among data, but also because of its high dimensionality. As a consequence, in this position paper we advocate that data mining for OpenBI requires data quality-aware mechanisms for guiding non-expert users in obtaining and sharing the most reliable knowledge from the available LOD.

Relevance: 30.00%

Abstract:

Complex systems in causal relationships are known to be circular rather than linear; this means that a particular result is not produced by a single cause, but rather that both positive and negative feedback processes are involved. However, although interpreting systemic interrelationships requires a language formed by circles, this has only been developed at the diagram level, and not from an axiomatic point of view. The first difficulty encountered when analysing any complex system is that usually the only data available relate to the various variables, so the first objective was to transform these data into cause-and-effect relationships. Once this initial step was taken, our discrete chaos theory could be applied by finding the causal circles that will form part of the system attractor and allow their behavior to be interpreted. As an application of the technique presented, we analyzed the system associated with the transcription factors of inflammatory diseases.
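The "causal circles" described above can be illustrated as cycles in a directed graph whose edges encode cause-and-effect relationships. The sketch below is a minimal depth-first enumeration of such circles on a hypothetical four-variable graph; it stands in for, and is not, the authors' axiomatic discrete chaos method.

```python
# Hypothetical causal graph: an edge u -> v means "u influences v".
causal_graph = {
    "A": ["B"],
    "B": ["C"],
    "C": ["A", "D"],  # A -> B -> C -> A forms a causal circle
    "D": [],
}

def find_cycles(graph):
    """Enumerate causal circles by depth-first search.
    Note: each circle is reported once per node it passes through."""
    cycles = []
    def dfs(node, path):
        for nxt in graph.get(node, []):
            if nxt in path:
                cycles.append(path[path.index(nxt):] + [nxt])
            else:
                dfs(nxt, path + [nxt])
    for start in graph:
        dfs(start, [start])
    return cycles

print(find_cycles(causal_graph))
```

In the paper's terms, the circles found this way would be the candidates for membership in the system attractor; here they are simply the rotations of A -> B -> C -> A.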

Relevance: 30.00%

Abstract:

PAS1192-2 (2013) outlines the "fundamental principles of Level 2 information modelling"; one of these principles is the use of what is commonly referred to as a Common Data Environment (CDE). A CDE can be described as an internet-enabled, cloud-hosted platform that gives all construction team members access to shared project information. For the construction sector to achieve increased productivity goals, the next generation of industry professionals will need to be educated in a way that provides them with an appreciation of Building Information Modelling (BIM) working methods, at all levels, including an understanding of how data in a CDE should be structured, managed, shared and published. This presents a challenge for educational institutions in terms of providing a CDE that addresses the requirements set out in PAS1192-2 and mirrors organisational and professional working practices, without causing confusion through over-complexity. This paper presents the findings of a two-year study undertaken at Ulster University comparing the use of a leading industry CDE platform with one derived from the in-house Virtual Learning Environment (VLE) for the delivery of a student BIM project. The research methodology employed was a qualitative case study analysis, focusing on observations from the academics involved and feedback from students. The results of the study show advantages for both CDE platforms, depending on the learning outcomes required.

Relevance: 30.00%

Abstract:

In a globalised world, knowledge of foreign languages is an important skill. Especially in Europe, with its 24 official languages and its countless regional and minority languages, foreign language skills are a key asset in the labour market. Earlier research shows that over half of the EU27 population is able to speak at least one foreign language, but there is substantial national variation. This study is devoted to a group of countries known as the Visegrad Four, which comprises the Czech Republic, Hungary, Poland and Slovakia. Although the supply of foreign language skills in these countries appears to be well-documented, less is known about the demand side. In this study, we therefore examine the demand for foreign language skills on the Visegrad labour markets, using information extracted from online job portals. We find that English is the most requested foreign language in the region, and the demand for English language skills appears to go up as occupations become increasingly complex. Despite the cultural, historical and economic ties with their German-speaking neighbours, German is the second-most-in-demand foreign language in the region. Interestingly, in this case there is no clear link with the complexity of an occupation. Other languages, such as French, Spanish and Russian, are hardly requested. These findings have important policy implications with regard to the education and training offered in schools, universities and job centres.

Relevance: 30.00%

Abstract:

Abrupt climate changes from 18 to 15 thousand years before present (kyr BP) associated with Heinrich Event 1 (HE1) had a strong impact on vegetation patterns not only at high latitudes of the Northern Hemisphere, but also in the tropical regions around the Atlantic Ocean. To gain a better understanding of the linkage between high and low latitudes, we used the University of Victoria (UVic) Earth System-Climate Model (ESCM) with dynamical vegetation and land surface components to simulate four scenarios of climate-vegetation interaction: the pre-industrial era, the Last Glacial Maximum (LGM), and a Heinrich-like event with two different climate backgrounds (interglacial and glacial). We calculated mega-biomes from the plant-functional types (PFTs) generated by the model to allow for a direct comparison between model results and palynological vegetation reconstructions. Our calculated mega-biomes for the pre-industrial period and the LGM corresponded well with biome reconstructions of the modern and LGM time slices, respectively, except that our pre-industrial simulation predicted the dominance of grassland in southern Europe and our LGM simulation resulted in more forest cover in tropical and sub-tropical South America. The HE1-like simulation with a glacial climate background produced sea-surface temperature patterns and enhanced inter-hemispheric thermal gradients in accordance with the "bipolar seesaw" hypothesis. We found that the cooling of the Northern Hemisphere caused a southward shift of those PFTs that are indicative of an increased desertification and a retreat of broadleaf forests in West Africa and northern South America. The mega-biomes from our HE1 simulation agreed well with paleovegetation data from tropical Africa and northern South America. 
Thus, according to our model-data comparison, the reconstructed vegetation changes for the tropical regions around the Atlantic Ocean were physically consistent with the remote effects of a Heinrich event under a glacial climate background.

Relevance: 30.00%

Abstract:

Children aged between 3 and 7 years were taught simple and dimension-abstracted oddity discrimination using learning-set training techniques, in which isomorphic problems with varying content were presented with verbal explanation and feedback. Following the training phase, simple oddity (SO), dimension-abstracted oddity with one or two irrelevant dimensions, and non-oddity (NO) tasks were presented (without feedback) to determine the basis of solution. Although dimension-abstracted oddity requires discrimination based on a stimulus that is different from the others, which are all the same as each other on the relevant dimension, this was not the major strategy. The data were more consistent with use of a simple oddity strategy by 3- to 4-year-olds, and a most different strategy by 6- to 7-year-olds. These strategies are interpreted as reducing task complexity. (C) 2002 Elsevier Science Inc. All rights reserved.

Relevance: 30.00%

Abstract:

In simultaneous analyses of multiple data partitions, the trees relevant when measuring support for a clade are the optimal tree, and the best tree lacking the clade (i.e., the most reasonable alternative). The parsimony-based method of partitioned branch support (PBS) forces each data set to arbitrate between the two relevant trees. This value is the amount each data set contributes to clade support in the combined analysis, and can be very different from the support apparent in separate analyses. The approach used in PBS can also be employed in likelihood: a simultaneous analysis of all data retrieves the maximum likelihood tree, and the best tree without the clade of interest is also found. Each data set is fitted to the two trees and the log-likelihood difference calculated, giving partitioned likelihood support (PLS) for each data set. These calculations can be performed regardless of the complexity of the ML model adopted. The significance of PLS can be evaluated using a variety of resampling methods, such as the Kishino-Hasegawa test, the Shimodaira-Hasegawa test, or likelihood weights, although the appropriateness and assumptions of these tests remain debated.
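Once the two relevant trees have been found, the PLS calculation itself reduces to a per-partition log-likelihood difference. The sketch below uses hypothetical log-likelihood values (the partition names and numbers are invented for illustration) to show how the partitioned values decompose the total support for a clade.

```python
# Hypothetical per-partition log-likelihoods under the two relevant trees:
# the combined-analysis ML tree, and the best tree lacking the clade of interest.
loglik_ml_tree = {"nuclear": -1520.4, "mitochondrial": -980.7, "morphology": -310.2}
loglik_without_clade = {"nuclear": -1528.9, "mitochondrial": -979.5, "morphology": -312.0}

# PLS for each partition: positive values support the clade, negative values conflict.
pls = {p: loglik_ml_tree[p] - loglik_without_clade[p] for p in loglik_ml_tree}

# The partitioned values sum to the total log-likelihood difference between the trees.
total_support = sum(pls.values())
print(pls, total_support)
```

In this made-up example the mitochondrial partition conflicts with the clade even though the combined analysis supports it, which is exactly the sense in which PLS "can be very different" from support seen in separate analyses.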

Relevance: 30.00%

Abstract:

Capturing the voices of women when the issue is of a sensitive nature has been a major concern of feminist researchers. It has often been argued that interpretive methods are the most appropriate way to collect such information, but there are other appropriate ways to approach the design of research. This article explores the use of a mixed-method approach to collect data on incontinence in older women and argues for the use of a variety of creative approaches to collect and analyze data.

Relevance: 30.00%

Abstract:

The data structure of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. This research develops a methodology for evaluating, ex ante, the relative desirability of alternative data structures for end user queries. This research theorizes that the data structure that yields the lowest weighted average complexity for a representative sample of information requests is the most desirable data structure for end user queries. The theory was tested in an experiment that compared queries from two different relational database schemas. As theorized, end users querying the data structure associated with the less complex queries performed better. Complexity was measured using three different Halstead metrics. Each of the three metrics provided excellent predictions of end user performance. This research supplies strong evidence that organizations can use complexity metrics to evaluate, ex ante, the desirability of alternative data structures. Organizations can use these evaluations to enhance the efficient and effective retrieval of information by creating data structures that minimize end user query complexity.
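Halstead metrics are computed from counts of distinct and total operators and operands. The sketch below applies the standard formulas to a toy SQL query; the naive tokenizer and the operator/operand split are assumptions for illustration only, since the abstract does not say how the study tokenized its queries.

```python
import math
import re

def halstead(operators, operands):
    """Compute basic Halstead metrics from token lists."""
    n1, n2 = len(set(operators)), len(set(operands))  # distinct counts
    N1, N2 = len(operators), len(operands)            # total counts
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)
    difficulty = (n1 / 2) * (N2 / n2)
    effort = difficulty * volume
    return {"volume": volume, "difficulty": difficulty, "effort": effort}

# Naive split of a query into SQL keywords/symbols (operators) and
# identifiers/literals (operands) -- a simplification for illustration.
query = "SELECT name FROM employees WHERE dept = 'sales'"
tokens = re.findall(r"\w+|'[^']*'|=", query)
keywords = {"SELECT", "FROM", "WHERE", "="}
ops = [t for t in tokens if t.upper() in keywords]
opnds = [t for t in tokens if t.upper() not in keywords]
metrics = halstead(ops, opnds)
print(metrics)
```

Comparing the weighted average of such scores over a representative sample of queries against each candidate schema is the ex ante evaluation the paper describes.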

Relevance: 30.00%

Abstract:

This paper provides an analysis of data from a state-wide survey of statutory child protection workers, adult mental health workers, and child mental health workers. Respondents provided details of their experience of collaboration on cases where a parent had mental health problems and there were serious child protection concerns. The survey was conducted as part of a large mixed-method research project on developing best practice at the intersection of child protection and mental health services. Descriptions of 300 cases were provided by 122 respondents. Analyses revealed that a great deal of collaboration occurred across a wide range of government and community-based agencies; that collaborative processes were often positive and rewarding for workers; and that collaboration was most difficult when the nature of the parental mental illness or the need for child protection intervention was contested. The difficulties experienced included communication, role clarity, competing primary focus, contested parental mental health needs, contested child protection needs, and resources. (C) 2004 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

New tools derived from advances in molecular biology have not been widely adopted in plant breeding for complex traits because of the inability to connect information at gene level to the phenotype in a manner that is useful for selection. In this study, we explored whether physiological dissection and integrative modelling of complex traits could link phenotype complexity to underlying genetic systems in a way that enhanced the power of molecular breeding strategies. A crop and breeding system simulation study on sorghum was used, which involved variation in four key adaptive traits (phenology, osmotic adjustment, transpiration efficiency, and stay-green) and a broad range of production environments in north-eastern Australia. The full matrix of simulated phenotypes, which consisted of 547 location-season combinations and 4235 genotypic expression states, was analysed for genetic and environmental effects. The analysis was conducted in stages assuming gradually increased understanding of gene-to-phenotype relationships, which would arise from physiological dissection and modelling. It was found that environmental characterisation and physiological knowledge helped to explain and unravel gene and environment context dependencies in the data. Based on the analyses of gene effects, a range of marker-assisted selection breeding strategies was simulated. It was shown that the inclusion of knowledge resulting from trait physiology and modelling generated an enhanced rate of yield advance over cycles of selection. This occurred because the knowledge associated with component trait physiology and extrapolation to the target population of environments by modelling removed confounding effects associated with environment and gene context dependencies for the markers used.
Developing and implementing this gene-to-phenotype capability in crop improvement requires enhanced attention to phenotyping, ecophysiological modelling, and validation studies to test the stability of candidate genetic regions.

Relevance: 30.00%

Abstract:

Objective: Recent data from Education Queensland has identified rising numbers of children receiving diagnoses of autistic spectrum disorder (ASD). Faced with funding-related diagnostic pressures, in clinical situations that are complex and inherently uncertain, it is possible that specialists err on the side of a positive diagnosis. This study examines the extent to which possible overinclusion of ASD diagnosis may exist in the presence of uncertainty, and factors potentially related to this practice in Queensland. Methods: Using anonymous self-report, all Queensland child psychiatrists and paediatricians who see paediatric patients with developmental/behavioural problems were surveyed and asked whether they had ever specified an ASD diagnosis in the presence of diagnostic uncertainty. Using logistic regression, elicited responses to the diagnostic uncertainty questions were related to other clinical- and practice-related characteristics. Results: Overall, 58% of surveyed psychiatrists and paediatricians indicated that, in the face of diagnostic uncertainty, they had erred on the side of providing an ASD diagnosis for educational ascertainment, and 36% of clinicians had provided an autism diagnosis for Carer's Allowance when Centrelink diagnostic specifications had not been met. Conclusion: In the absence of definitive biological markers, ASD remains a behavioural diagnosis that is often complex and uncertain. In response to systems that demand a categorical diagnostic response, specialists are providing ASD diagnoses even when uncertain. The motivation for this practice appears to be a clinical risk/benefit analysis of what will achieve the best outcomes for children. It is likely that these practices will continue unless systems change eligibility for funding to be based on functional impairment rather than medical diagnostic categories.

Relevance: 30.00%

Abstract:

Electricity market price forecasting is a challenging yet very important task for electricity market managers and participants. Due to the complexity and uncertainties in the power grid, electricity prices are highly volatile and normally carry spikes, which may be tens or even hundreds of times higher than the normal price. Such electricity price spikes are very difficult to predict. So far, most research on electricity price forecasting has been based on normal-range electricity prices. This paper proposes a data mining based electricity price forecast framework, which can predict the normal price as well as price spikes. The normal price can be predicted by a previously proposed wavelet and neural network based forecast model, while the spikes are forecast using a data mining approach. This paper focuses on spike prediction and explores the reasons for price spikes based on the measurement of a proposed composite supply-demand balance index (SDI) and a relative demand index (RDI). These indices are able to reflect the relationship among electricity demand, electricity supply and electricity reserve capacity. The proposed model is based on a mining database including market clearing price, trading hour, electricity demand, electricity supply and reserve. Bayesian classification and similarity searching techniques are used to mine the database and find the internal relationships between electricity price spikes and these proposed indices. The mining results are used to form the price spike forecast model, which is able to generate the forecasted price spike, the level of the spike, and an associated forecast confidence level. The model is tested with Queensland electricity market data with promising results. Crown Copyright (C) 2004 Published by Elsevier B.V. All rights reserved.
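The abstract does not give the exact formulas for SDI and RDI, so the sketch below assumes plausible forms (surplus capacity relative to demand, and demand relative to a recent average) and uses a simple threshold rule as a stand-in for the paper's Bayesian classification step; the index definitions, thresholds, and numbers are all illustrative assumptions.

```python
# Illustrative index definitions -- the abstract does not give the exact
# formulas, so these forms are assumptions for the sketch.
def supply_demand_index(supply, reserve, demand):
    """SDI (assumed form): surplus capacity relative to demand."""
    return (supply + reserve - demand) / demand

def relative_demand_index(demand, recent_demands):
    """RDI (assumed form): current demand relative to the recent average."""
    return demand / (sum(recent_demands) / len(recent_demands))

def spike_risk(sdi, rdi, sdi_threshold=0.05, rdi_threshold=1.2):
    """Flag hours where reserves are thin and demand is unusually high.
    A simple stand-in rule for the paper's Bayesian classification."""
    return sdi < sdi_threshold and rdi > rdi_threshold

# Hypothetical trading-hour data (MW).
demand, supply, reserve = 8200.0, 8300.0, 150.0
sdi = supply_demand_index(supply, reserve, demand)
rdi = relative_demand_index(demand, [6400.0, 6600.0, 6500.0])
print(sdi, rdi, spike_risk(sdi, rdi))
```

The intuition matches the abstract: spikes become likely when supply plus reserve barely covers demand (low SDI) while demand is high relative to its recent history (high RDI).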