995 results for Data curvature
Abstract:
Establish an internet platform where spatially referenced data can be viewed, entered and stored.
Abstract:
Development of an internet based spatial data delivery and reporting system for the Australian Cotton Industry.
Abstract:
The relationship between major depressive disorder (MDD) and bipolar disorder (BD) remains controversial. Previous research has reported differences and similarities in risk factors for MDD and BD, such as predisposing personality traits. For example, high neuroticism is related to both disorders, whereas openness to experience is specific for BD. This study examined the genetic association between personality and MDD and BD by applying polygenic scores for neuroticism, extraversion, openness to experience, agreeableness and conscientiousness to both disorders. Polygenic scores reflect the weighted sum of multiple single-nucleotide polymorphism alleles associated with the trait for an individual and were based on a meta-analysis of genome-wide association studies for personality traits including 13,835 subjects. Polygenic scores were tested for MDD in the combined Genetic Association Information Network (GAIN-MDD) and MDD2000+ samples (N=8921) and for BD in the combined Systematic Treatment Enhancement Program for Bipolar Disorder and Wellcome Trust Case-Control Consortium samples (N=6329) using logistic regression analyses. At the phenotypic level, personality dimensions were associated with MDD and BD. Polygenic neuroticism scores were significantly positively associated with MDD, whereas polygenic extraversion scores were significantly positively associated with BD. The explained variance of MDD and BD, approximately 0.1%, was highly comparable to the variance explained by the polygenic personality scores in the corresponding personality traits themselves (between 0.1 and 0.4%). This indicates that the proportions of variance explained in mood disorders are at the upper limit of what could have been expected. This study suggests shared genetic risk factors for neuroticism and MDD on the one hand and for extraversion and BD on the other.
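The abstract above defines a polygenic score as the weighted sum of an individual's trait-associated allele counts. A minimal sketch of that computation follows; the SNP identifiers and effect sizes are hypothetical illustrations, not values from the study.

```python
# Minimal sketch of a polygenic score: the weighted sum of an individual's
# risk-allele counts, using GWAS effect sizes as weights. The SNP names and
# betas below are hypothetical, not values from the study.

def polygenic_score(genotypes, weights):
    """genotypes: dict of SNP id -> risk-allele count (0, 1 or 2).
    weights: dict of SNP id -> GWAS effect size (beta)."""
    return sum(weights[snp] * count for snp, count in genotypes.items())

individual = {"rs0001": 2, "rs0002": 0, "rs0003": 1}            # hypothetical genotypes
betas = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}       # hypothetical betas

score = polygenic_score(individual, betas)
print(round(score, 2))  # 0.32
```

In the study the same idea is applied with thousands of SNPs, and the resulting score is entered into a logistic regression against case-control status.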
Abstract:
Many novel computer architectures, such as array processors and multiprocessors, which achieve high performance through the use of concurrency, exploit variations of the von Neumann model of computation. The effective utilization of these machines makes special demands on programmers and their programming languages, such as the structuring of data into vectors or the partitioning of programs into concurrent processes. In comparison, the data flow model of computation demands only that the principle of structured programming be followed. A data flow program, often represented as a data flow graph, is a program that expresses a computation by indicating the data dependencies among operators. A data flow computer is a machine designed to take advantage of concurrency in data flow graphs by executing data-independent operations in parallel. In this paper, we discuss the design of a high-level language (DFL: Data Flow Language) suitable for data flow computers. Some sample procedures in DFL are presented. Implementation aspects are not discussed in detail, since no new problems were encountered. The language DFL embodies the concepts of functional programming, but in appearance closely resembles Pascal. The language is a better vehicle than the data flow graph for expressing a parallel algorithm. The compiler has been implemented on a DEC 1090 system in Pascal.
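The firing rule described above (an operator executes as soon as all its operands are available, so data-independent operators can run in parallel) can be sketched with a toy interpreter. This is an illustration in Python, not DFL, and the graph representation is an assumption made for the example.

```python
# Toy illustration of the data flow firing rule: an operator fires as soon as
# all of its operands are available; operators found ready in the same pass
# are data-independent and could execute in parallel on a data flow machine.

def run_dataflow(graph, inputs):
    """graph: dict mapping node name -> (function, list of operand node names)."""
    values = dict(inputs)
    pending = dict(graph)
    while pending:
        # every operator whose operands are all available is ready to fire
        ready = [n for n, (_, deps) in pending.items() if all(d in values for d in deps)]
        if not ready:
            raise ValueError("cycle or missing input in data flow graph")
        for n in ready:
            fn, deps = pending.pop(n)
            values[n] = fn(*(values[d] for d in deps))
    return values

# compute (a + b) * (a - b); "sum" and "diff" are independent and fire together
graph = {
    "sum": (lambda x, y: x + y, ["a", "b"]),
    "diff": (lambda x, y: x - y, ["a", "b"]),
    "prod": (lambda x, y: x * y, ["sum", "diff"]),
}
print(run_dataflow(graph, {"a": 5, "b": 3})["prod"])  # 16
```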
Abstract:
Handedness refers to a consistent asymmetry in skill or preferential use between the hands and is related to lateralization within the brain of other functions such as language. Previous twin studies of handedness have yielded inconsistent results, owing to a general lack of statistical power to detect significant effects. Here we present analyses from a large international collaborative study of handedness (assessed by writing/drawing or self report) in Australian and Dutch twins and their siblings (54,270 individuals from 25,732 families). Maximum likelihood analyses incorporating the effects of known covariates (sex, year of birth and birth weight) revealed no evidence of hormonal transfer, mirror imaging or twin-specific effects. There were also no differences in prevalence between zygosity groups or between twins and their singleton siblings. Consistent with previous meta-analyses, additive genetic effects accounted for about a quarter (23.64%) of the variance (95% CI 20.17–27.09%), with the remainder accounted for by non-shared environmental influences. The implications of these findings for handedness, both as a primary phenotype and as a covariate in linkage and association analyses, are discussed.
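The variance decomposition reported above can be illustrated with the classic Falconer approach, a simpler relative of the maximum likelihood models such studies actually fit: additive genetic (A), shared environmental (C) and non-shared environmental (E) components are recovered from monozygotic and dizygotic twin correlations. The correlations below are hypothetical values chosen to mirror the reported ~24% heritability, not figures from the study.

```python
# Falconer-style sketch of how twin correlations partition phenotypic variance
# (an illustration of the general approach, not the study's maximum likelihood
# model). Input correlations are hypothetical.

def ace_from_twin_correlations(r_mz, r_dz):
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance (MZ share ~2x DZ genes)
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # non-shared environment (plus measurement error)
    return a2, c2, e2

a2, c2, e2 = ace_from_twin_correlations(r_mz=0.24, r_dz=0.12)
print(round(a2, 2), round(c2, 2), round(e2, 2))  # 0.24 0.0 0.76
```

With these inputs the shared-environment term vanishes, matching the abstract's conclusion that the non-genetic variance is non-shared.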
Abstract:
Many fisheries worldwide have adopted vessel monitoring systems (VMS) for compliance purposes. An added benefit of these systems is that they collect a large amount of data on vessel locations at very fine spatial and temporal scales. These data can provide a wealth of information for stock assessment, research, and management. However, since most VMS implementations record vessel location at set time intervals with no regard to vessel activity, some methodology is required to determine which data records correspond to fishing activity. This paper describes a probabilistic approach, based on hidden Markov models (HMMs), to determine vessel activity. An HMM provides a natural framework for the problem and, by definition, models the intrinsic temporal correlation of the data. The paper describes the general approach that was developed and presents an example of this approach applied to the Queensland trawl fishery off the coast of eastern Australia. Finally, a simulation experiment is presented that compares the misallocation rates of the HMM approach with other approaches.
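A minimal sketch of the HMM idea follows: two hidden activity states emit observed speed classes, and Viterbi decoding recovers the most likely activity sequence, exploiting the temporal correlation the abstract mentions. The states, speed classes and all probabilities are hypothetical illustrations, not the authors' fitted model.

```python
# Sketch of HMM decoding for vessel activity (hypothetical parameters, not the
# paper's model): hidden states emit observed speed classes; Viterbi finds the
# most likely state path given the whole track.
import math

states = ["fishing", "steaming"]
start = {"fishing": 0.5, "steaming": 0.5}
trans = {"fishing": {"fishing": 0.7, "steaming": 0.3},
         "steaming": {"fishing": 0.3, "steaming": 0.7}}
# fishing vessels mostly move slowly; steaming vessels mostly move fast
emit = {"fishing": {"slow": 0.7, "medium": 0.2, "fast": 0.1},
        "steaming": {"slow": 0.1, "medium": 0.2, "fast": 0.7}}

def viterbi(obs):
    # log-probabilities avoid numerical underflow on long tracks
    v = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: v[-1][p] + math.log(trans[p][s]))
            col[s] = v[-1][best] + math.log(trans[best][s]) + math.log(emit[s][o])
            ptr[s] = best
        v.append(col)
        back.append(ptr)
    # trace back the most likely state path
    path = [max(states, key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["fast", "slow", "slow", "slow", "fast"]))
# ['steaming', 'fishing', 'fishing', 'fishing', 'steaming']
```

The temporal smoothing is visible here: a single isolated speed reading is interpreted in the context of its neighbours rather than classified independently.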
Abstract:
Standardised time series of fishery catch rates require collations of fishing power data on vessel characteristics. Linear mixed models were used to quantify fishing power trends and study the effect of missing data encountered when relying on commercial logbooks. For this, Australian eastern king prawn (Melicertus plebejus) harvests were analysed with historical (from vessel surveys) and current (from commercial logbooks) vessel data. Between 1989 and 2010, fishing power increased up to 76%. To date, forward-filling missing vessel information in commercial logbooks and, alternatively, omitting records with missing information produce broadly similar fishing power increases and standardised catch rates, due to the strong influence of years with complete vessel data (16 out of 23 years of data). However, if gaps in vessel information had not originated randomly and skippers from the most efficient vessels were the most diligent at filling in logbooks, considerable errors would be introduced. Also, the buffering effect of complete years would be short-lived as years with missing data accumulate. Given ongoing changes in fleet profile, with high-catching vessels fishing proportionately more of the fleet’s effort, compliance with logbook completion, or alternatively ongoing vessel gear surveys, is required for generating accurate estimates of fishing power and standardised catch rates.
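The forward-filling strategy mentioned above can be sketched in a few lines: a missing vessel attribute in a logbook record is carried forward from that vessel's last reported value. The record layout and field names are hypothetical.

```python
# Minimal sketch of forward-filling missing logbook fields per vessel
# (hypothetical record layout; not the authors' data structures).

def forward_fill(records, field):
    last = {}  # vessel id -> last reported value of the field
    for rec in records:
        if rec.get(field) is None:
            rec[field] = last.get(rec["vessel"])  # carry forward, or stay None
        else:
            last[rec["vessel"]] = rec[field]
    return records

log = [
    {"vessel": "V1", "engine_kw": 300},
    {"vessel": "V1", "engine_kw": None},   # gap: filled from V1's 300 above
    {"vessel": "V2", "engine_kw": None},   # no prior record: stays unknown
    {"vessel": "V1", "engine_kw": 330},
]
print([r["engine_kw"] for r in forward_fill(log, "engine_kw")])  # [300, 300, None, 330]
```

As the abstract notes, this imputation is only safe when gaps arise at random; if diligent reporting correlates with vessel efficiency, carried-forward values bias the fishing power estimates.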
Abstract:
This paper presents a Chance-constraint Programming approach for constructing maximum-margin classifiers that are robust to interval-valued uncertainty in training examples. The methodology ensures that uncertain examples are classified correctly with high probability by employing chance-constraints. The main contribution of the paper is to pose the resultant optimization problem as a Second Order Cone Program by using large deviation inequalities due to Bernstein. Apart from the support and mean of the uncertain examples, these Bernstein-based relaxations make no further assumptions about the underlying uncertainty. Classifiers built using the proposed approach are less conservative, yield higher margins and hence are expected to generalize better than existing methods. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle interval-valued uncertainty than state-of-the-art methods.
Abstract:
A compilation of crystal structure data on deoxyribo- and ribonucleosides and their higher derivatives is presented. The aim of this paper is to highlight the flexibility of deoxyribose and ribose rings. So far, the conformational parameters of the ribose and deoxyribose constituents of nucleic acids have not been analysed separately. This paper aims to correlate the conformational parameters with the nature and puckering of the sugar. Deoxyribose puckering occurs in the C2′ endo region, while ribose puckering is observed in both the C3′ endo and C2′ endo regions. A few endocyclic and exocyclic bond angles depend on the puckering and the nature of the sugar. The majority of structures have an anti conformation about the glycosyl bond. There appears to be a puckering dependence of the torsion angle about the C4′–C5′ bond. Such stereochemical information is useful in model building studies of polynucleotides and nucleic acids.
Abstract:
During recent decades there has been a global shift in forest management from a focus solely on timber management to ecosystem management that endorses all aspects of forest functions: ecological, economic and social. This has resulted in a paradigm shift from sustained yield to sustained diversity of values, goods and benefits obtained at the same time, introducing new temporal and spatial scales into forest resource management. The purpose of the present dissertation was to develop methods that would enable spatial and temporal scales to be introduced into the storage, processing, access and utilization of forest resource data. The methods developed are based on a conceptual view of a forest as a hierarchically nested collection of objects that can have a dynamically changing set of attributes. The temporal aspect of the methods consists of lifetime management for the objects and their attributes and of a temporal succession linking the objects together. Development of the forest resource data processing method concentrated on the extensibility and configurability of the data content and model calculations, allowing a diverse set of processing operations to be executed within the same framework. The contribution of this dissertation to the utilisation of multi-scale forest resource data lies in the development of a reference data generation method to support forest inventory methods in approaching single-tree resolution.
Abstract:
Patterns of movement in aquatic animals reflect ecologically important behaviours. Cyclical changes in the abiotic environment influence these movements, but when multiple processes occur simultaneously, identifying which is responsible for the observed movement can be complex. Here we used acoustic telemetry and signal processing to define the abiotic processes responsible for movement patterns in freshwater whiprays (Himantura dalyensis). Acoustic transmitters were implanted into the whiprays and their movements detected over 12 months by an array of passive acoustic receivers, deployed throughout 64 km of the Wenlock River, Qld, Australia. The time of an individual's arrival and departure from each receiver detection field was used to estimate whipray location continuously throughout the study. This created a linear-movement-waveform for each whipray and signal processing revealed periodic components within the waveform. Correlation of movement periodograms with those from abiotic processes categorically illustrated that the diel cycle dominated the pattern of whipray movement during the wet season, whereas tidal and lunar cycles dominated during the dry season. The study methodology represents a valuable tool for objectively defining the relationship between abiotic processes and the movement patterns of free-ranging aquatic animals and is particularly expedient when periods of no detection exist within the animal location data.
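The signal-processing step described above (computing periodograms of the movement waveform and matching their peaks to abiotic cycles) can be sketched with a plain discrete Fourier transform. The synthetic "movement" series below, a dominant 24 h diel cycle plus a weaker 12 h harmonic, is an illustration, not data from the study.

```python
# Stdlib-only sketch of periodogram analysis: compute power at each DFT
# frequency of a location waveform and read off the dominant cycle.
# The synthetic series below is illustrative, not whipray data.
import math

def periodogram(x):
    """Power at each DFT frequency index k = 1 .. n//2 of a real series x."""
    n = len(x)
    powers = {}
    for k in range(1, n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        powers[k] = (re * re + im * im) / n
    return powers

hours = 96  # four days of hourly position estimates
signal = [3.0 * math.sin(2 * math.pi * t / 24)      # diel (24 h) component
          + 1.0 * math.sin(2 * math.pi * t / 12)    # weaker semidiurnal component
          for t in range(hours)]

power = periodogram(signal)
k_peak = max(power, key=power.get)   # strongest frequency index
print(hours / k_peak)                # dominant period in hours -> 24.0
```

In the study, peaks like this in the whipray movement periodograms were correlated with periodograms of tidal, lunar and diel processes to decide which cycle drives the movement in each season.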
Abstract:
Abstract of Macbeth, G. M., Broderick, D., Buckworth, R. & Ovenden, J. R. (In press, Feb 2013). Linkage disequilibrium estimation of effective population size with immigrants from divergent populations: a case study on Spanish mackerel (Scomberomorus commerson). G3: Genes, Genomes and Genetics. Estimates of genetic effective population size (Ne) using molecular markers are a potentially useful tool for the management of endangered through to commercially harvested species. However, pitfalls are predicted when the effective size is large, as estimates require large numbers of samples from wild populations for statistical validity. Our simulations showed that linkage disequilibrium estimates of Ne up to 10,000 with finite confidence limits can be achieved with sample sizes around 5000. This was deduced from empirical allele frequencies of seven polymorphic microsatellite loci in a commercially harvested fisheries species, the narrow-barred Spanish mackerel (Scomberomorus commerson). As expected, the smallest standard deviation of Ne estimates occurred when low frequency alleles were excluded. Additional simulations indicated that the linkage disequilibrium method was sensitive to small numbers of genotypes from cryptic species or conspecific immigrants. A correspondence analysis algorithm was developed to detect and remove outlier genotypes that could have been inadvertently sampled from cryptic species or from non-breeding immigrants from genetically separate populations. Simulations demonstrated the value of this approach in Spanish mackerel data. When putative immigrants were removed from the empirical data, 95% of the Ne estimates from jackknife resampling were above 24,000.
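The core relation behind linkage disequilibrium estimators of Ne can be sketched with the widely used Hill/Waples approximation (an illustration of the principle, not the authors' implementation): under random mating, the expected mean squared inter-locus correlation is roughly E[r²] ≈ 1/S + 1/(3·Ne), where S is the sample size, so Ne is recovered from the excess of observed r² over the sampling expectation.

```python
# Hedged sketch of the linkage disequilibrium principle behind Ne estimators
# (Hill/Waples approximation under random mating, not the paper's exact method):
#     E[r^2] ~ 1/S + 1/(3*Ne)   =>   Ne ~ 1 / (3 * (r2 - 1/S))

def ld_ne(r2_mean, sample_size):
    signal = r2_mean - 1.0 / sample_size  # LD in excess of sampling noise
    if signal <= 0:
        return float("inf")  # no detectable drift signal: Ne unbounded
    return 1.0 / (3.0 * signal)

# With ~5000 samples, even a tiny excess of r^2 over 1/S resolves a large Ne,
# which is why the abstract's simulations needed sample sizes of that order.
print(round(ld_ne(1 / 5000 + 1 / (3 * 10000), 5000)))  # recovers Ne = 10000
```

The sketch also shows the pitfall the abstract raises: when Ne is large, the drift signal 1/(3·Ne) is tiny relative to the sampling term 1/S, so small sample sizes (or a few immigrant genotypes inflating r²) swamp the estimate.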
Abstract:
Tetrapeptide sequences of the type Z-Pro-Y-X were obtained from the crystal structure data on 34 globular proteins, and used in an analysis of the positional preferences of the individual amino acid residues in the β-turn conformation. The effect of fixing proline as the second-position residue in the tetrapeptide sequence was studied by comparing the data obtained on the positional preferences with the corresponding data obtained by Chou and Fasman using the Z-R-Y-X sequence, where no particular residue was fixed in any of the four positions. While, in general, several amino acid residues with relatively high or low preferences for specific positions were found to be common to both the Z-Pro-Y-X and Z-R-Y-X sequences, many significant differences were found between the two sets of data; these are attributed to specific interactions arising from the presence of the proline residue.
Abstract:
Data flow computers are high-speed machines in which an instruction is executed as soon as all its operands are available. This paper describes the EXtended MANchester (EXMAN) data flow computer, which incorporates three major extensions to the basic Manchester machine. As extensions we provide a multiple matching units scheme, an efficient implementation of array data structures, and a facility to concurrently execute reentrant routines. A simulator for the EXMAN computer has been coded in the discrete event simulation language SIMULA 67 on the DEC 1090 system. Performance analysis studies have been conducted on the simulated EXMAN computer to study the effectiveness of the proposed extensions. The performance experiments have been carried out using three sample problems: matrix multiplication, Bresenham's line drawing algorithm, and the polygon scan-conversion algorithm.