973 results for Short Loadlength, Fast Algorithms


Abstract:

Post-release survival of line-caught pearl perch (Glaucosoma scapulare) was assessed via field experiments in which fish were angled using methods similar to those used by commercial, recreational and charter fishers. One hundred and eighty-three individuals were caught during four experiments, of which >91% survived up to three days post-capture. Hook location was the best predictor of survival, with the survival of throat- or stomach-hooked pearl perch significantly (P < 0.05) lower than that of fish hooked in either the mouth or lip. Post-release survival was similar for legal (≥35 cm) and sub-legal (<35 cm) pearl perch, while individuals showing no signs of barotrauma were more likely to survive in the short term. Examination of swim bladders in the laboratory, combined with observations in the field, revealed that the swim bladder ruptures during ascent from depth, allowing swim bladder gases to escape into the gut cavity. As angled fish approach the surface, the alimentary tract ruptures near the anus, allowing these gases to escape the gut cavity. As a result, very few pearl perch exhibit barotrauma symptoms, and no barotrauma mitigation strategies were recommended. The results of this study show that pearl perch are relatively resilient to catch-and-release, suggesting that post-release mortality would not contribute significantly to total fishing mortality. We recommend the use of circle hooks, fished actively on tight lines, combined with minimal handling, to maximise the post-release survival of pearl perch.

Abstract:

Spot measurements of methane emission rate (n = 18 700) by 24 Angus steers fed mixed rations from GrowSafe feeders were made over 3- to 6-min periods by a GreenFeed emission monitoring (GEM) unit. The data were analysed to estimate daily methane production (DMP; g/day) and derived methane yield (MY; g/kg dry matter intake (DMI)). A one-compartment dose model of spot emission rate v. time since the preceding meal was compared with the models of Wood (1967) and Dijkstra et al. (1997) and with the average of spot measures. Fitted values for DMP were calculated from the area under the curves. Two methods of relating methane and feed intakes were then studied: the classical calculation of MY as DMP/DMI (kg/day), and a novel method of estimating DMP from the time and size of preceding meals using either the data for only the two meals preceding a spot measurement, or all meals in the 3 days prior. Two approaches were also used to estimate DMP from spot measurements: fitting splines on a per-animal, per-day basis, and an alternative approach of modelling DMP after each feed event by least squares (using Solver), best-fitting a one-compartment model and summing, for each animal, the contributions from each feed event. Time since the preceding meal was of limited value in estimating DMP. Even when the meal sizes and time intervals between a spot measurement and all feeding events in the previous 72 h were assessed, only 16.9% of the variance in spot emission rate measured by GEM was explained by this feeding information. While using the preceding meal alone gave a biased (under-)estimate of DMP, allowing for a longer feed history removed this bias. A power analysis taking into account the sources of variation in DMP indicated that obtaining an estimate of DMP with a 95% confidence interval within 5% of the observed 64-day mean of spot measures would require 40 animals measured over 45 days (two spot measurements per day) or 30 animals measured over 55 days. These numbers suggest that spot measurements could be made in association with feed efficiency tests made over 70 days. Spot measurements of enteric emissions can be used to estimate DMP, but the numbers of animals and samples needed are larger than when day-long measures are made.
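
As an illustration of the kind of one-compartment fit described above, the sketch below sums an exponentially decaying emission pulse from each meal and fits the pulse parameters to spot measurements by least squares. The functional form, parameter names and synthetic data are assumptions for the example, not the GEM study's actual model or data.

# Illustrative sketch: each meal contributes an emission pulse that decays
# exponentially; spot measurements sample the summed contributions. The
# single-exponential form, parameters and data below are assumed for the example.
import numpy as np
from scipy.optimize import least_squares

def predicted_rate(t, meal_times, meal_sizes, a, k):
    """Predicted emission rate (g CH4/h) at time t (h): sum of decaying pulses from prior meals."""
    rate = 0.0
    for mt, ms in zip(meal_times, meal_sizes):
        dt = t - mt
        if dt >= 0:
            rate += a * ms * np.exp(-k * dt)
    return rate

# Synthetic example: meal times (h), meal sizes (kg DMI) and spot measurements (g CH4/h)
meal_times = np.array([0.0, 6.0, 12.0, 18.0])
meal_sizes = np.array([2.5, 1.5, 2.0, 1.0])
spot_times = np.array([1.0, 5.0, 8.0, 14.0, 20.0])
spot_rates = np.array([9.5, 4.0, 8.0, 7.5, 5.0])

def residuals(params):
    a, k = params
    pred = [predicted_rate(t, meal_times, meal_sizes, a, k) for t in spot_times]
    return np.asarray(pred) - spot_rates

fit = least_squares(residuals, x0=[4.0, 0.2], bounds=([0, 0], [np.inf, np.inf]))
a_hat, k_hat = fit.x

# Daily methane production (g/day) as the area under the fitted curve over 24 h
ts = np.linspace(0, 24, 2401)
rates = np.array([predicted_rate(t, meal_times, meal_sizes, a_hat, k_hat) for t in ts])
dmp = np.sum((rates[:-1] + rates[1:]) / 2 * np.diff(ts))
print(f"a = {a_hat:.2f}, k = {k_hat:.2f}, estimated DMP = {dmp:.1f} g/day")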

Abstract:

The rapid growth of wireless Internet access in recent years has led to a proliferation of wireless and mobile devices connecting to the Internet. This has made it possible for mobile devices equipped with multiple radio interfaces to connect to the Internet using any of several wireless access network technologies, such as GPRS, WLAN and WiMAX, in order to obtain the connectivity best suited to the application. These access networks are highly heterogeneous and vary widely in characteristics such as bandwidth, propagation delay and geographical coverage. The mechanism by which a mobile device switches between these access networks during an ongoing connection is referred to as a vertical handoff, and it often results in an abrupt and significant change in the access link characteristics. The most common Internet applications, such as Web browsing and e-mail, use the Transmission Control Protocol (TCP) as their transport protocol, and the behaviour of TCP depends on end-to-end path characteristics such as bandwidth and round-trip time (RTT). As the wireless access link is most likely the bottleneck of a TCP end-to-end path, the abrupt changes in link characteristics due to a vertical handoff may adversely affect TCP behaviour and degrade the performance of the application. The focus of this thesis is to study the effect of a vertical handoff on TCP behaviour and to propose algorithms that improve the handoff behaviour of TCP using cross-layer information about the changes in the access link characteristics. We begin by identifying the various problems of TCP due to a vertical handoff, based on extensive simulation experiments. We use this study as a basis to develop cross-layer assisted TCP algorithms in handoff scenarios involving GPRS and WLAN access networks. We then extend the scope of the study by developing cross-layer assisted TCP algorithms in a broader context, applicable to a wide range of bandwidth and delay changes during a handoff. Finally, the algorithms developed here are shown to be easily extendable to the multiple-TCP-flow scenario. We evaluate the proposed algorithms by comparison with standard TCP (TCP SACK) and show that they are effective in improving TCP behaviour in vertical handoffs involving a wide range of access network bandwidths and delays. Our algorithms are easy to implement in real systems and involve modifications to the TCP sender only. The proposed algorithms are conservative in nature and do not adversely affect the performance of TCP in the absence of cross-layer information.
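
As a rough illustration of the kind of cross-layer assistance discussed above, the sketch below reseeds a TCP sender's congestion-control and RTT state when the link layer signals a handoff together with estimates of the new access link's bandwidth and delay. The state variables and the reseeding policy are assumptions for the example; they are not the algorithms proposed in the thesis.

# Illustrative sketch only: reseed TCP sender state on a vertical-handoff
# notification carrying rough estimates of the new access link's bandwidth and
# delay. The reseeding policy below is an assumption, not the thesis's algorithm.
from dataclasses import dataclass

MSS = 1460  # maximum segment size in bytes

@dataclass
class TcpSenderState:
    cwnd: float      # congestion window (segments)
    ssthresh: float  # slow-start threshold (segments)
    srtt: float      # smoothed RTT (seconds)
    rttvar: float    # RTT variation (seconds)
    rto: float       # retransmission timeout (seconds)

def on_vertical_handoff(state, new_bw_bps, new_rtt_s):
    """Conservatively re-initialise sender state from cross-layer hints."""
    bdp_segments = (new_bw_bps * new_rtt_s) / (8 * MSS)  # bandwidth-delay product of the new link
    state.ssthresh = max(2.0, bdp_segments)              # aim slow start at the new BDP
    state.cwnd = min(state.cwnd, state.ssthresh)         # never grow cwnd on a hint alone
    state.srtt = new_rtt_s                               # restart RTT estimation from the hint
    state.rttvar = new_rtt_s / 2
    state.rto = max(1.0, state.srtt + 4 * state.rttvar)  # RFC 6298-style RTO with a 1 s floor
    return state

# Example: handoff from WLAN (high bandwidth, low delay) to GPRS (low bandwidth, high delay)
s = TcpSenderState(cwnd=40, ssthresh=64, srtt=0.02, rttvar=0.01, rto=1.0)
s = on_vertical_handoff(s, new_bw_bps=40_000, new_rtt_s=0.6)
print(s)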

Abstract:

The analysis of sequential data is required in many diverse areas such as telecommunications, stock market analysis, and bioinformatics. A basic problem related to the analysis of sequential data is the sequence segmentation problem. A sequence segmentation is a partition of the sequence into a number of non-overlapping segments that cover all data points, such that each segment is as homogeneous as possible. This problem can be solved optimally using a standard dynamic programming algorithm. In the first part of the thesis, we present a new approximation algorithm for the sequence segmentation problem. This algorithm has a smaller running time than the optimal dynamic programming algorithm, while having a bounded approximation ratio. The basic idea is to divide the input sequence into subsequences, solve the problem optimally in each subsequence, and then appropriately combine the solutions to the subproblems into one final solution. In the second part of the thesis, we study alternative segmentation models that are devised to better fit the data. More specifically, we focus on clustered segmentations and segmentations with rearrangements. While in the standard segmentation of a multidimensional sequence all dimensions share the same segment boundaries, in a clustered segmentation the multidimensional sequence is segmented in such a way that dimensions are allowed to form clusters. Each cluster of dimensions is then segmented separately. We formally define the problem of clustered segmentations and experimentally show that segmenting sequences using this model leads to solutions with smaller error for the same model cost. Segmentation with rearrangements is a novel variant of the segmentation problem: in addition to partitioning the sequence, we also seek to apply a limited amount of reordering so that the overall representation error is minimized. We formulate the problem of segmentation with rearrangements and show that it is NP-hard to solve or even to approximate. We devise effective algorithms for the proposed problem, combining ideas from dynamic programming and outlier detection algorithms for sequences. In the final part of the thesis, we discuss the problem of aggregating the results of segmentation algorithms on the same set of data points. In this case, we are interested in producing a partitioning of the data that agrees as much as possible with the input partitions. We show that this problem can be solved optimally in polynomial time using dynamic programming. Furthermore, we show that not all data points are candidates for segment boundaries in the optimal solution.
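
The standard dynamic programming algorithm mentioned above can be sketched as follows for the one-dimensional case with squared-error segment cost; the cost function and data are assumptions for the example, and the thesis treats the problem more generally. The sketch runs in O(n^2 k) time for n points and k segments.

# Optimal segmentation of x[0..n-1] into k contiguous segments, minimising the
# total squared error of each segment around its mean. Assumes 1-D data and
# squared-error cost; illustrative only.
import numpy as np

def optimal_segmentation(x, k):
    x = np.asarray(x, dtype=float)
    n = len(x)
    s1 = np.concatenate(([0.0], np.cumsum(x)))       # prefix sums for O(1) segment cost queries
    s2 = np.concatenate(([0.0], np.cumsum(x * x)))

    def seg_cost(i, j):  # squared error of segment x[i..j-1] around its mean
        m = j - i
        tot, tot2 = s1[j] - s1[i], s2[j] - s2[i]
        return tot2 - tot * tot / m

    INF = float("inf")
    dp = np.full((k + 1, n + 1), INF)                 # dp[p][j]: best cost of x[0..j-1] with p segments
    back = np.zeros((k + 1, n + 1), dtype=int)
    dp[0][0] = 0.0
    for p in range(1, k + 1):
        for j in range(p, n + 1):
            for i in range(p - 1, j):
                c = dp[p - 1][i] + seg_cost(i, j)
                if c < dp[p][j]:
                    dp[p][j], back[p][j] = c, i
    bounds, j = [], n                                 # recover segment boundaries
    for p in range(k, 0, -1):
        i = back[p][j]
        bounds.append((i, j))
        j = i
    return dp[k][n], bounds[::-1]

cost, segments = optimal_segmentation([1, 1, 1, 5, 5, 5, 9, 9], k=3)
print(cost, segments)  # expect cost 0.0 and boundaries at the level changes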

Abstract:

This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point-mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population; thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point-mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes, so it can be seen as a probabilistic model of recombinations and point-mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem where one is interested in whether the local similarities in two sequences are due to the sequences being related or merely due to chance. Similarity of sequences is measured by their best local alignment score, and from that a p-value is computed. This p-value is the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
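
The p-value defined above can, in principle, be estimated by brute force: repeatedly draw sequence pairs from the null model, compute their best local alignment score, and count how often it reaches the observed score. The sketch below does exactly that with a simple Smith-Waterman scorer and an i.i.d. null model; the scoring parameters and the null model are assumptions, and this kind of sampling is precisely what the thesis's framework is designed to avoid.

# Brute-force Monte Carlo estimate of the local-alignment p-value: the fraction
# of random sequence pairs (i.i.d. null model) whose best local alignment score
# is at least the observed score. Scoring scheme and null model are assumed.
import random

def smith_waterman(a, b, match=1, mismatch=-1, gap=-2):
    """Best local alignment score with linear gap penalties."""
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            sub = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            cur[j] = max(0, sub, prev[j] + gap, cur[j - 1] + gap)
            best = max(best, cur[j])
        prev = cur
    return best

def mc_pvalue(seq1, seq2, trials=1000, alphabet="ACGT", seed=0):
    rng = random.Random(seed)
    observed = smith_waterman(seq1, seq2)
    hits = 0
    for _ in range(trials):
        r1 = "".join(rng.choice(alphabet) for _ in range(len(seq1)))
        r2 = "".join(rng.choice(alphabet) for _ in range(len(seq2)))
        if smith_waterman(r1, r2) >= observed:
            hits += 1
    return observed, (hits + 1) / (trials + 1)  # add-one to avoid reporting exactly zero

score, p = mc_pvalue("ACGTACGTGG", "ACGTTCGTGA", trials=500)
print(f"score = {score}, Monte Carlo p-value ~ {p:.3f}")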

Abstract:

Matrix decompositions, where a given matrix is represented as a product of two other matrices, are regularly used in data mining. Most matrix decompositions have their roots in linear algebra, but the needs of data mining are not always those of linear algebra. In data mining one needs results that are interpretable, and what is considered interpretable in data mining can be very different from what is considered interpretable in linear algebra. The purpose of this thesis is to study matrix decompositions that directly address the issue of interpretability. An example is a decomposition of binary matrices where the factor matrices are assumed to be binary and the matrix multiplication is Boolean. The restriction to binary factor matrices increases interpretability, since the factor matrices are of the same type as the original matrix, and allows the use of Boolean matrix multiplication, which is often more intuitive than normal matrix multiplication with binary matrices. Several other decomposition methods are also described, and the computational complexity of computing them is studied together with the hardness of approximating the related optimization problems. Based on these studies, algorithms for constructing the decompositions are proposed. Constructing the decompositions turns out to be computationally hard, and the proposed algorithms are mostly based on various heuristics. Nevertheless, the algorithms are shown to be capable of finding good results in empirical experiments conducted with both synthetic and real-world data.
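
To make the distinction concrete, the sketch below computes the Boolean matrix product used in these binary decompositions: an entry of the product is 1 if any rank-1 factor covers it (a logical OR of ANDs) rather than an integer sum. The example matrices are invented for illustration.

# Boolean matrix multiplication of 0/1 matrices:
#   (B o C)[i, j] = OR_k ( B[i, k] AND C[k, j] )
import numpy as np

def boolean_matmul(B, C):
    B = np.asarray(B, dtype=bool)
    C = np.asarray(C, dtype=bool)
    return (B[:, :, None] & C[None, :, :]).any(axis=1).astype(int)

# A 3x4 binary matrix expressed exactly as the Boolean product of 3x2 and 2x4 factors
U = [[1, 0],
     [1, 1],
     [0, 1]]
V = [[1, 1, 0, 0],
     [0, 1, 1, 1]]
print(boolean_matmul(U, V))
# [[1 1 0 0]
#  [1 1 1 1]
#  [0 1 1 1]]
# Under ordinary matrix multiplication the middle row would be [1 2 1 1]; the
# Boolean product keeps every entry binary, which is what makes the factors
# the same type of object as the original matrix.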

Abstract:

The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite is equal to the sum of the fluxes that consume the same molecule. Thus the steady state imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network: the fluxes through cycles and alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, where a cell population is fed with labelled nutrients, such as glucose containing 13C atoms. The labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them; thus, these relative abundances contain information about the fluxes that cannot be uncovered from the balance constraints derived from the steady state. The field of research that estimates the fluxes using measurements of the relative abundances of different labelling patterns induced by 13C-labelled nutrients is called 13C metabolic flux analysis. There are two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are iteratively generated until they fit the measured abundances of different labelling patterns. In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of different labelling patterns of metabolites; thus, mathematically involved non-linear optimization methods that can get stuck in local optima are avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information. Furthermore, the optimization framework can easily be applied regardless of the labelling measurement technology and with all network topologies. In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes as possible from the 13C labelling measurements, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of external nutrients and the topology of the metabolic network. The presented framework is the first representative of the direct approach for 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites; the propagation is facilitated by a flow analysis of metabolite fragments in the network. New linear constraints on the fluxes are then derived from the propagated data by applying techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects sets of metabolites whose relative abundances of different labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools to process raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
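
The balance constraints mentioned above can be written as S v = 0, where S is the stoichiometric matrix of the internal metabolites and v is the flux vector. The toy network below (invented for illustration, not taken from the thesis) shows why these constraints alone leave fluxes through alternative routes undetermined: the null space of S has more than one dimension, so the split between the alternative reactions cannot be fixed without additional information such as labelling data.

# Steady-state balance constraints S @ v = 0 for a toy network:
#   r1: -> A,  r2: -> A,  r3: A -> B,  r4: B ->
# Rows of S are the internal metabolites A and B; columns are reactions r1..r4.
import numpy as np
from scipy.linalg import null_space

S = np.array([
    [1, 1, -1,  0],   # balance of A
    [0, 0,  1, -1],   # balance of B
], dtype=float)

N = null_space(S)                          # basis of all steady-state flux distributions
print("degrees of freedom:", N.shape[1])   # 2: total throughput and the r1/r2 split

v = N @ np.array([1.0, 0.5])               # any combination of basis vectors is a valid steady state
print("S @ v ~ 0:", np.allclose(S @ v, 0))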

Abstract:

This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating-dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating-dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs; these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating-dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
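
For orientation, task (iii) builds on the minimum dominating set problem, for which the classical greedy heuristic is sketched below; this baseline is not one of the thesis's local or approximation algorithms, and the example graph is invented.

# Greedy heuristic for minimum dominating set: repeatedly pick the node whose
# closed neighbourhood covers the most still-undominated nodes.
def greedy_dominating_set(adj):
    """adj: dict mapping each node to the set of its neighbours."""
    undominated = set(adj)
    chosen = set()
    while undominated:
        best = max(adj, key=lambda u: len((adj[u] | {u}) & undominated))
        chosen.add(best)
        undominated -= adj[best] | {best}
    return chosen

# Example: a small sensor-like graph
adj = {
    1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 5},
    4: {2, 6}, 5: {3, 6}, 6: {4, 5},
}
print(greedy_dominating_set(adj))  # {2, 5}: every node is in or adjacent to the chosen set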

Abstract:

Printed circuit board (PCB) layout design is one of the most important and time-consuming phases of the equipment design process in all electronic industries. This paper is concerned with the development and implementation of a computer-aided PCB design package. A set of programs has been developed that operates on a description of the circuit, supplied by the user in the form of a data file, and subsequently designs the layout of a double-sided PCB. The algorithms used for the design of the PCB optimise the board area and the length of copper tracks used for the interconnections. The output of the package is the layout drawing of the PCB, drawn on a CALCOMP hard-copy plotter and a Tektronix 4012 storage graphics display terminal. The routing density (the board area required for one component) achieved by this package is typically 0.8 sq. inch per IC. The package is implemented on a DEC 1090 system in Pascal and FORTRAN, and the SIGN(1) graphics package is used for display generation.
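
The sketch below is not code from the described package; it merely illustrates the kind of interconnection-length metric that such layout optimisation works against, using the standard half-perimeter wirelength estimate on a made-up placement and netlist.

# Half-perimeter wirelength (HPWL): for each net, the half-perimeter of the
# bounding box of its pins, summed over all nets; a common proxy for total
# copper track length in placement.
def hpwl(placement, nets):
    """placement: {component: (x, y)} in board units; nets: lists of connected components."""
    total = 0.0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

placement = {"IC1": (0.0, 0.0), "IC2": (1.2, 0.4), "IC3": (0.3, 1.5), "C1": (1.0, 1.0)}
nets = [["IC1", "IC2", "C1"], ["IC2", "IC3"], ["IC1", "IC3", "C1"]]
print(f"estimated wirelength: {hpwl(placement, nets):.2f} board units")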

Abstract:

The title-problem has been reduced to that of solving a Fredholm integral equation of the second kind. One end of the cylinder is assumed to be fixed, while the cylinder is deformed by an axial current. The vertical displacement on the upper flat end of the cylinder has been determined from an iterative solution of the Fredholm equation valid for large values of the length. The radial displacement of the curved boundary has also been determined at the middle of the cylinder, by using the iterative solution.
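
For reference, a Fredholm integral equation of the second kind and the successive-approximation (iterative) scheme alluded to above have the generic form below; the specific kernel K, forcing term g and parameter \lambda for this cylinder problem are not given in the abstract.

    f(x) = g(x) + \lambda \int_a^b K(x,t)\, f(t)\, \mathrm{d}t,
    f_{n+1}(x) = g(x) + \lambda \int_a^b K(x,t)\, f_n(t)\, \mathrm{d}t, \qquad f_0 = g,

with the iteration converging when the integral term is a contraction (the condition for the Neumann series); the abstract states that for this problem the resulting iterative solution is valid for large values of the length.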

Abstract:

This paper investigates the short-run effects of economic growth on carbon dioxide emissions from the combustion of fossil fuels and the manufacture of cement for 189 countries over the period 1961-2010. Contrary to what has previously been reported, we conclude that there is no strong evidence that the emissions-income elasticity is larger during individual years of economic expansion than during years of recession. Significant evidence of asymmetry emerges when effects over longer periods are considered. We find that economic growth tends to increase emissions not only in the same year but also in subsequent years. Delayed effects, especially noticeable in the road transport sector, mean that emissions tend to grow more quickly after booms and more slowly after recessions. Emissions are more sensitive to fluctuations in industrial value added than in agricultural value added, with services being an intermediate case. On the expenditure side, growth in consumption and growth in investment have similar implications for national emissions. External shocks have a relatively large emissions impact, and the short-run emissions-income elasticity does not appear to decline as incomes increase. Economic growth and emissions have been more tightly linked in fossil-fuel-rich countries.
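
For reference, the emissions-income elasticity discussed here is the proportional response of emissions to income. In the notation below (ours, not the paper's), E is emissions and Y is income:

    \varepsilon = \frac{\mathrm{d}\ln E}{\mathrm{d}\ln Y} \approx \frac{\%\,\Delta E}{\%\,\Delta Y},

so an elasticity of, say, 0.5 (an illustrative figure, not an estimate from the paper) would mean that a 1% change in income is associated with a 0.5% change in emissions in the same direction.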

Abstract:

Comparative studies on protein structures form an integral part of protein crystallography. Here, a fast method of comparing protein structures is presented. Protein structures are represented as a set of secondary structural elements. The method also provides information regarding preferred packing arrangements and evolutionary dynamics of secondary structural elements. This information is not easily obtained from previous methods. In contrast to those methods, the present one can be used only for proteins with some secondary structure. The method is illustrated with globin folds, cytochromes and dehydrogenases as examples.
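
As a toy illustration of representing proteins by their secondary structural elements (SSEs), the sketch below encodes each protein as a string with one letter per element (H for alpha-helix, E for beta-strand) and compares two such strings by edit distance. The encoding and the edit-distance scoring are assumptions for the example and are not the method of the paper, which also considers packing arrangements.

# Compare two proteins via their SSE strings using plain edit distance
# (insertion, deletion and substitution each cost 1). Illustrative only.
def sse_edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i] + [0] * len(b)
        for j, cb in enumerate(b, start=1):
            cur[j] = min(prev[j] + 1,               # delete an element from a
                         cur[j - 1] + 1,            # insert an element into a
                         prev[j - 1] + (ca != cb))  # substitute helix <-> strand
        prev = cur
    return prev[-1]

globin_like = "HHHHHHHH"   # an all-alpha fold, one letter per SSE
mixed_fold  = "EHEHEHEE"   # an alternating alpha/beta fold
print(sse_edit_distance(globin_like, mixed_fold))  # larger distance = less similar SSE order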

Abstract:

In terms of critical discourse, Liberty contributes to the ongoing aesthetic debate on ‘the sublime.’ Philosopher Immanuel Kant (1724–1804) defined the sublime as a failure of rationality in response to sensory overload: a state where the imagination is suspended, without definitive reference points—a state beyond unequivocal ‘knowing.’ I believe the events of September 11, 2001 eluded our understanding in much the same way, leaving us in a moment of suspension between awe and horror. It was an event that couldn’t be understood in terms of scope or scale. It was a moment of overload, which is so difficult to capture in art. With my work I attempt to rekindle that moment of suspension. Like the events of 9/11, Liberty defies definition. Its form is constantly changing; it is always presenting us with new layers of meaning. Nobody quite had a handle on the events that followed 9/11, because the implications were constantly shifting. In the same way, Liberty cannot be contained or defined at any moment in time. Like the events of 9/11, the full story cannot be told in a snapshot. One of the dictionary definitions for the word ‘sublime’ is the conversion of ‘a solid substance directly into a gas, without there being an intermediate liquid phase’. With this in mind, I would like to present Liberty as a work that is literally ‘sublime.’ But what’s really interesting to me about Liberty is that it presents the sublime on all levels: in its medium, in its subject matter (that moment of suspension), and in its formal (formless) presentation. On every level Liberty is sublime—subverting all tangible reference points and eluding capture entirely. Liberty is based on the Statue of Liberty in New York. However, unlike that statue which has stood in New York since 1886 and can be reasonably expected to stand for millennia, this work takes on diminishing proportions, carved as it is in carbon dioxide, a mysterious, previously unexplored medium—one which smokes, snows and dramatically vanishes into a harmless gas. Like the material this work is carved from, the civil liberties of the free world are diminishing fast, since 9/11 and before. This was my thought when I first conceived this work. Now it’s become evident that Liberty expresses a lot more than just this: it demonstrates the erosion of civil liberties, yes. However, it also presents the intangible, indefinable moments in the days and months that followed 9/11. The sculptural work will last for only a short time, and thereafter will exist only in documentation. During this time, the form is continually changing and self-refining, until it disappears entirely, to be inhaled, metabolised and literally taken to heart by viewers.