947 results for Algorithmic skeleton
Abstract:
A fundamental task of vision systems is to infer the state of the world given some form of visual observations. From a computational perspective, this often involves facing an ill-posed problem; e.g., information is lost via projection of the 3D world into a 2D image. Solving an ill-posed problem requires additional information, usually provided as a model of the underlying process. It is important that the model be both computationally feasible and theoretically well-founded. In this thesis, a probabilistic, nonlinear, supervised computational learning model is proposed: the Specialized Mappings Architecture (SMA). The SMA framework is demonstrated in a computer vision system that can estimate the articulated pose parameters of a human body or human hands, given images obtained via one or more uncalibrated cameras. The SMA consists of several specialized forward mapping functions that are estimated automatically from training data, and a possibly known feedback function. Each specialized function maps certain domains of the input space (e.g., image features) onto the output space (e.g., articulated body parameters). A probabilistic model for the architecture is first formalized. Solutions to key algorithmic problems are then derived: simultaneous learning of the specialized domains along with the mapping functions, as well as performing inference given inputs and a feedback function. The SMA employs a variant of the Expectation-Maximization algorithm and approximate inference. The approach allows the use of alternative conditional independence assumptions for learning and inference, which are derived from a forward model and a feedback model. Experimental validation of the proposed approach is conducted on the task of estimating articulated body pose from image silhouettes. The accuracy and stability of the SMA framework are tested using artificial data sets, as well as synthetic and real video sequences of human bodies and hands.
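To make the mapping-plus-feedback idea concrete, here is a minimal, hypothetical Python sketch: several linear specialized maps are fit with an EM-style loop of soft assignments and weighted least squares, and inference picks the hypothesis whose feedback projection best matches the input. The linear form of each map, the toy feedback function and all names are assumptions for illustration, not the thesis's actual model.

import numpy as np

# Minimal, hypothetical sketch of the specialized-mappings idea: several
# linear maps from feature space to pose space, with soft assignments of
# training pairs learned by an EM-style loop.
rng = np.random.default_rng(0)

def fit_specialized_maps(X, Y, n_maps=2, n_iters=25):
    n, d = X.shape
    R = rng.dirichlet(np.ones(n_maps), size=n)      # soft domain assignments
    W = [np.zeros((d, Y.shape[1])) for _ in range(n_maps)]
    for _ in range(n_iters):
        for k in range(n_maps):                     # M-step: weighted least squares
            w = R[:, k:k + 1]
            W[k] = np.linalg.lstsq(X * w, Y * w, rcond=None)[0]
        err = np.stack([((X @ Wk - Y) ** 2).sum(axis=1) for Wk in W], axis=1)
        R = np.exp(-(err - err.min(axis=1, keepdims=True)))   # E-step
        R /= R.sum(axis=1, keepdims=True)
    return W

def infer_pose(x, W, feedback):
    # Apply every specialized map, then use the feedback function (e.g. a
    # pose-to-features projection) to pick the most consistent hypothesis.
    hypotheses = [x @ Wk for Wk in W]
    return min(hypotheses, key=lambda y: np.linalg.norm(feedback(y) - x))

# Toy data: features and poses related by two different linear regimes.
X = rng.normal(size=(300, 5))
A, B = rng.normal(size=(5, 4)), rng.normal(size=(5, 4))
Y = np.where(X[:, :1] > 0, X @ A, X @ B)
maps = fit_specialized_maps(X, Y)
pose = infer_pose(X[0], maps, feedback=lambda y: y @ np.linalg.pinv(maps[0]))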
Abstract:
Overlay networks have emerged as a powerful and highly flexible method for delivering content. We study how to optimize throughput of large, multipoint transfers across richly connected overlay networks, focusing on the question of what to put in each transmitted packet. We first make the case for transmitting encoded content in this scenario, arguing for the digital fountain approach, which enables end-hosts to efficiently reconstruct the original content of size n from any subset of n symbols drawn from a large universe of encoded symbols. Such an approach affords reliability and a substantial degree of application-level flexibility, as it seamlessly tolerates packet loss, connection migration, and parallel transfers. However, since the sets of symbols acquired by peers are likely to overlap substantially, care must be taken to enable them to collaborate effectively. We provide a collection of useful algorithmic tools for efficient estimation, summarization, and approximate reconciliation of sets of symbols between pairs of collaborating peers, all of which keep messaging complexity and computation to a minimum. Through simulations and experiments on a prototype implementation, we demonstrate the performance benefits of our informed content delivery mechanisms and how they complement existing overlay network architectures.
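As a rough illustration of the summarization-and-reconciliation idea (not the paper's exact data structures), the Python sketch below has one peer summarize its set of encoded-symbol identifiers in a Bloom filter, and the other peer forward only symbols the filter does not appear to contain. The filter size, hash construction and integer symbol IDs are assumptions made for the example.

import hashlib

# Bloom-filter-based approximate reconciliation between two peers holding
# overlapping sets of encoded-symbol identifiers (illustrative sketch only).
class BloomFilter:
    def __init__(self, m=4096, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        digest = hashlib.sha256(repr(item).encode()).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

# Peer A summarizes the symbols it already holds in a compact filter...
symbols_a = set(range(0, 8000, 2))
summary_a = BloomFilter()
for s in symbols_a:
    summary_a.add(s)

# ...and peer B forwards only symbols the filter does not appear to contain.
# False positives only withhold a few useful symbols, which erasure coding
# tolerates; nothing peer A already holds is retransmitted.
symbols_b = set(range(0, 8000, 3))
to_send = [s for s in symbols_b if s not in summary_a]
print(len(symbols_b), "held,", len(to_send), "forwarded")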
Abstract:
Overlay networks have become popular in recent times for content distribution and end-system multicasting of media streams. In the latter case, the motivation is based on the lack of widespread deployment of IP multicast and the ability to perform end-host processing. However, constructing routes between various end-hosts, so that data can be streamed from content publishers to many thousands of subscribers, each having their own QoS constraints, is still a challenging problem. First, any routes between end-hosts using trees built on top of overlay networks can increase stress on the underlying physical network, due to multiple instances of the same data traversing a given physical link. Second, because overlay routes between end-hosts may traverse physical network links more than once, they increase the end-to-end latency compared to IP-level routing. Third, algorithms for constructing efficient, large-scale trees that reduce link stress and latency are typically more complex. This paper therefore compares various methods for constructing multicast trees between end-systems, which vary in their implementation costs and their ability to support per-subscriber QoS constraints. We describe several algorithms that make trade-offs between algorithmic complexity, physical link stress and latency. While no algorithm is best in all three respects, we show how it is possible to efficiently build trees for several thousand subscribers with latencies within a factor of two of the optimal, and link stresses comparable to, or better than, existing technologies.
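To make the link-stress and latency metrics concrete, here is a small, self-contained Python sketch (an illustration only, not one of the paper's algorithms): it builds a shortest-path overlay multicast tree from a publisher and counts how many tree edges traverse each underlying physical link. The topology and the mapping from overlay edges to physical links are invented for the example.

import heapq
from collections import defaultdict

# Build a shortest-path overlay multicast tree from a publisher, then count
# how many tree edges traverse each underlying physical link ("link stress").
def dijkstra(graph, src):
    dist, parent = {src: 0}, {src: None}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, parent

overlay = {                    # overlay edges with latencies
    "pub": [("a", 1), ("b", 2)],
    "a": [("pub", 1), ("b", 1), ("c", 2)],
    "b": [("pub", 2), ("a", 1), ("c", 1)],
    "c": [("a", 2), ("b", 1)],
}
physical_path = {              # physical links each overlay edge traverses
    ("a", "pub"): ["L1"], ("b", "pub"): ["L1", "L2"],
    ("a", "b"): ["L2"], ("a", "c"): ["L3"], ("b", "c"): ["L3", "L4"],
}

dist, parent = dijkstra(overlay, "pub")
tree_edges = set()
for sub in ["a", "b", "c"]:            # collect tree edges subscriber-to-root
    node = sub
    while parent[node] is not None:
        tree_edges.add(tuple(sorted((node, parent[node]))))
        node = parent[node]
stress = defaultdict(int)
for edge in tree_edges:
    for link in physical_path[edge]:
        stress[link] += 1
print("latencies:", {s: dist[s] for s in ["a", "b", "c"]}, "stress:", dict(stress))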
Abstract:
Trophoblasts of the placenta are the frontline cells involved in communication and exchange of materials between the mother and fetus. Within trophoblasts, calcium signalling proteins are richly expressed. Intracellular free calcium ions are a key second messenger, regulating various cellular activities. Transcellular Ca2+ transport through trophoblasts is essential in fetal skeleton formation. Ryanodine receptors (RyRs) are high-conductance cation channels that mediate Ca2+ release from intracellular stores to the cytoplasm. To date, the roles of RyRs in trophoblasts have not been reported. By use of reverse transcription PCR and western blotting, the current study revealed that RyRs are expressed in model trophoblast cell lines (BeWo and JEG-3) and in human first trimester and term placental villi. Immunohistochemistry of human placental sections indicated that both syncytiotrophoblast and cytotrophoblast cell layers were positively stained by antibodies recognising RyRs; likewise, expression of RyR isoforms was also revealed in BeWo and JEG-3 cells by immunofluorescence microscopy. In addition, changes in [Ca2+]i were observed in both BeWo and JEG-3 cells upon application of various RyR agonists and antagonists, using fura-2 fluorescent videomicroscopy. Furthermore, endogenous placental peptide hormones, namely angiotensin II, arginine vasopressin and endothelin 1, were demonstrated to increase [Ca2+]i in BeWo cells, and such increases were suppressed by RyR antagonists and by blockers of the corresponding peptide hormone receptors. These findings indicate that 1) multiple RyR subtypes are expressed in human trophoblasts; 2) functional RyRs in BeWo and JEG-3 cells respond to both RyR agonists and antagonists; 3) RyRs in BeWo cells mediate Ca2+ release from intracellular stores in response to indirect stimulation by endogenous peptides. These observations suggest that RyRs contribute to trophoblastic cellular Ca2+ homeostasis; trophoblastic RyRs are also involved in the functional regulation of the human placenta by coupling to endogenous placental peptide-induced signalling pathways.
Abstract:
Error correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are used ubiquitously in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from both algorithmic and architectural standpoints. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding; the proposed architecture is shown to outperform Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials evaluating to the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) for multihop wireless sensor network (WSN) applications.
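The evaluation view of RS encoding is simple to sketch: the codeword is just the message polynomial evaluated at n distinct field points. The minimal Python sketch below works over a small prime field GF(p) for readability; practical RS codecs typically work over GF(2^m) and pair this encoder with an interpolation- or syndrome-based decoder, and the field choice and helper names here are assumptions for illustration.

# Evaluation-based Reed-Solomon encoding over a small prime field GF(p).
P = 929  # a small prime, so arithmetic mod P forms a field

def poly_eval(coeffs, x, p=P):
    """Evaluate m(x) = c0 + c1*x + ... + c_{k-1}*x^(k-1) over GF(p) (Horner)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def rs_encode(message, n, p=P):
    """Codeword = evaluations of the message polynomial at n distinct points."""
    assert len(message) <= n <= p
    return [poly_eval(message, alpha, p) for alpha in range(n)]

# An (n=12, k=5) code: any 5 error-free evaluations determine the degree-4
# message polynomial by interpolation, so a suitable decoder corrects up to
# floor((n-k)/2) = 3 symbol errors.
print(rs_encode([3, 1, 4, 1, 5], n=12))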
Abstract:
Simulation of pedestrian evacuations of smart buildings in emergencies is a powerful tool for building analysis, dynamic evacuation planning and real-time response to the evolving state of an evacuation. Macroscopic pedestrian models are low-complexity models that are well suited to algorithmic analysis and planning, but are quite abstract. Microscopic simulation models allow for a high level of simulation detail but can be computationally intensive. By combining micro- and macro-models we can use each to overcome the shortcomings of the other and enable new capabilities and applications for pedestrian evacuation simulation that would not be possible with either alone. We develop the EvacSim multi-agent pedestrian simulator and procedurally generate macroscopic flow graph models of building space, integrating micro- and macroscopic approaches to simulation of the same emergency space. By "coupling" flow graph parameters to microscopic simulation results, the graph model captures some of the higher detail and fidelity of the complex microscopic simulation model. The coupled flow graph is used for analysis and prediction of the movement of pedestrians in the microscopic simulation, and we investigate the performance of dynamic evacuation planning in simulated emergencies using a variety of strategies for allocating macroscopic evacuation routes to microscopic pedestrian agents. The predictive capability of the coupled flow graph is exploited for the decomposition of microscopic simulation space into multiple future states in a scalable manner. By simulating multiple future states of the emergency in short time frames, this enables a sensing strategy based on simulation scenario pattern matching, which we show achieves fast scenario matching, enabling rich, real-time feedback in emergencies in buildings with meagre sensing capabilities.
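For intuition about what a macroscopic flow graph carries, here is a deliberately simple Python sketch: each corridor edge has a traversal time and a flow capacity that could, in principle, be re-estimated ("coupled") from microscopic simulation runs, and a crude bottleneck formula ranks candidate evacuation routes. The topology, the numbers and the formula are my own assumptions, not EvacSim's actual model.

# Illustrative macroscopic flow-graph view of a building.
graph = {   # edge -> (traversal time in seconds, capacity in pedestrians/s)
    "room": {"corridor": (10.0, 2.0)},
    "corridor": {"stairs": (15.0, 1.5), "exit_b": (40.0, 1.0)},
    "stairs": {"exit_a": (20.0, 1.0)},
    "exit_a": {},
    "exit_b": {},
}

def route_evacuation_time(route, pedestrians):
    """Free-flow traversal time plus queueing delay at the route's bottleneck."""
    edges = list(zip(route, route[1:]))
    travel = sum(graph[u][v][0] for u, v in edges)
    bottleneck = min(graph[u][v][1] for u, v in edges)
    return travel + pedestrians / bottleneck

routes = [["room", "corridor", "stairs", "exit_a"],
          ["room", "corridor", "exit_b"]]
for route in routes:                 # crude basis for allocating routes to agents
    print(route[-1], route_evacuation_time(route, pedestrians=120))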
Abstract:
In our continuing study of triterpene derivatives as potent anti-HIV agents, different C-3 conformationally restricted betulinic acid (BA, 1) derivatives were designed and synthesized in order to explore the conformational space of the C-3 pharmacophore. 3-O-Monomethylsuccinyl-betulinic acid (MSB) analogues were also designed to better understand the contribution of the C-3' dimethyl group of bevirimat (2), the first-in-class HIV maturation inhibitor, which is currently in phase IIb clinical trials. In addition, another triterpene skeleton, moronic acid (MA, 3), was also employed to study the influence of the backbone and the C-3 modification on the anti-HIV activity of this compound class. This study enabled us to better understand the structure-activity relationships (SAR) of triterpene-derived anti-HIV agents and led to the design and synthesis of compound 12 (EC50: 0.0006 microM), which displayed slightly better activity than 2 as an HIV-1 maturation inhibitor.
Abstract:
The skeleton is of fundamental importance in research in comparative vertebrate morphology, paleontology, biomechanics, developmental biology, and systematics. Motivated by research questions that require computational access to and comparative reasoning across the diverse skeletal phenotypes of vertebrates, we developed a module of anatomical concepts for the skeletal system, the Vertebrate Skeletal Anatomy Ontology (VSAO), to accommodate and unify the existing skeletal terminologies for the species-specific (mouse, the frog Xenopus, zebrafish) and multispecies (teleost, amphibian) vertebrate anatomy ontologies. Previous differences between these terminologies prevented even simple queries across databases pertaining to vertebrate morphology. This module of upper-level and specific skeletal terms currently includes 223 defined terms and 179 synonyms that integrate skeletal cells, tissues, biological processes, organs (skeletal elements such as bones and cartilages), and subdivisions of the skeletal system. The VSAO is designed to integrate with other ontologies, including the Common Anatomy Reference Ontology (CARO), Gene Ontology (GO), Uberon, and Cell Ontology (CL), and it is freely available to the community to be updated with additional terms required for research. Its structure accommodates anatomical variation among vertebrate species in development, structure, and composition. Annotation of diverse vertebrate phenotypes with this ontology will enable novel inquiries across the full spectrum of phenotypic diversity.
Abstract:
Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges of scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; and we design algorithms that strive to balance energy consumption and performance.
The technically interesting aspect of our work is the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.
The main contributions of the thesis can be placed in one of the following categories.
1. Classical Unrelated Machine Scheduling: We give the first poly-logarithmic approximation algorithms for minimizing the average flow-time and minimizing the maximum flow-time in the offline setting. In the online and non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces the iterated rounding technique for offline flow-time optimization, and gives the first framework to analyze non-clairvoyant algorithms for unrelated machines (a toy flow-time illustration follows this list).
2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the PSP and its variants for the objectives of minimizing the flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and the queuing-theoretic notion of stability and resource augmentation analysis.
3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing the total flow-time + energy in the online and resource augmentation model for the most general setting of unrelated machines.
4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with small price of anarchy. Our work gives the first linear/convex programming duality based framework to bound the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium.
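As the toy illustration referenced in item 1: the Python sketch below computes total flow time (completion minus release) on a single machine under a clairvoyant shortest-remaining-processing-time policy versus a non-clairvoyant round-robin policy, simulated in small time quanta. The job sizes, release times and quantum are my own example values, not the thesis's instances or algorithms.

# Toy flow-time comparison: clairvoyant SRPT vs non-clairvoyant round-robin.
jobs = [(0.0, 4.0), (0.0, 1.0), (2.0, 2.0), (3.0, 0.5)]   # (release, size)

def total_flow_time(policy, quantum=0.01):
    remaining = [None] * len(jobs)          # None = not yet released
    completion = [None] * len(jobs)
    t = 0.0
    while any(c is None for c in completion):
        for i, (release, size) in enumerate(jobs):
            if remaining[i] is None and release <= t:
                remaining[i] = size         # job arrives
        active = [i for i in range(len(jobs))
                  if remaining[i] is not None and completion[i] is None]
        if active:
            chosen = policy(active, remaining)
            for i in chosen:                # share the quantum among chosen jobs
                remaining[i] -= quantum / len(chosen)
                if remaining[i] <= 1e-9:
                    completion[i] = t + quantum
        t += quantum
    return sum(completion[i] - jobs[i][0] for i in range(len(jobs)))

srpt = lambda active, rem: [min(active, key=lambda i: rem[i])]   # clairvoyant
round_robin = lambda active, rem: active                         # non-clairvoyant
print("SRPT:", round(total_flow_time(srpt), 2),
      "round-robin:", round(total_flow_time(round_robin), 2))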
Abstract:
Estimation of the skeleton of a directed acyclic graph (DAG) is of great importance for understanding the underlying DAG, and causal effects can be assessed from the skeleton when the DAG is not identifiable. We propose a novel method named PenPC to estimate the skeleton of a high-dimensional DAG by a two-step approach. We first estimate the nonzero entries of a concentration matrix using penalized regression, and then fix the difference between the concentration matrix and the skeleton by evaluating a set of conditional independence hypotheses. For high-dimensional problems where the number of vertices p grows polynomially or exponentially with the sample size n, we study the asymptotic properties of PenPC on two types of graphs: traditional random graphs, where all the vertices have the same expected number of neighbors, and scale-free graphs, where a few vertices may have a large number of neighbors. As illustrated by extensive simulations and applications to gene expression data of cancer patients, PenPC has higher sensitivity and specificity than the state-of-the-art method, the PC-stable algorithm.
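A simplified two-step sketch in the same spirit (the actual PenPC penalized regressions and conditional-independence tests differ): step 1 takes candidate edges from the nonzero entries of a penalized precision matrix, step 2 drops edges whose endpoints look conditionally independent given some third variable. The toy chain DAG, the graphical-lasso penalty, and the thresholds are assumptions made for the example.

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
n, p = 500, 6
X = np.zeros((n, p))                 # toy chain DAG: X0 -> X1 -> ... -> X5
X[:, 0] = rng.normal(size=n)
for j in range(1, p):
    X[:, j] = 0.8 * X[:, j - 1] + rng.normal(size=n)

prec = GraphicalLasso(alpha=0.05).fit(X).precision_          # step 1
candidates = {(i, j) for i in range(p) for j in range(i + 1, p)
              if abs(prec[i, j]) > 1e-3}

def partial_corr(i, j, k, cov):
    """Partial correlation of variables i and j given k."""
    num = cov[i, j] - cov[i, k] * cov[j, k] / cov[k, k]
    den = np.sqrt((cov[i, i] - cov[i, k] ** 2 / cov[k, k]) *
                  (cov[j, j] - cov[j, k] ** 2 / cov[k, k]))
    return num / den

cov = np.cov(X, rowvar=False)
skeleton = {(i, j) for (i, j) in candidates                   # step 2
            if all(abs(partial_corr(i, j, k, cov)) > 0.1
                   for k in range(p) if k not in (i, j))}
print(sorted(skeleton))   # ideally the chain edges (0,1), (1,2), ..., (4,5)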
Abstract:
The skeleton is the first and most common site of distant relapse in breast and prostate carcinomas. Tumor bone disease is responsible for considerable morbidity, which also makes major demands on resources for healthcare provision. Increased bone resorption in tumor bone disease appears to be essentially mediated by osteoclasts, explaining why bisphosphonates have been successfully used for the treatment of malignant osteolysis. Hypercalcemia occurs in 10-20% of patients with advanced cancer, and the uncoupling between bone resorption and bone formation is easily demonstrated by the measurement of bone markers. The differential diagnosis between tumor-induced hypercalcemia and primary hyperparathyroidism is most often easy when using intact parathyroid hormone (PTH) assays; moreover, parathyroid hormone-related protein (PTHrP) determination can be useful in selected cases. The diagnosis of bone metastases is often easy when the patient is symptomatic. The diagnostic usefulness of bone markers is limited, and the available data indicate that bone markers are so far unsuitable for an early diagnosis of neoplastic skeletal involvement on an individual basis. However, by combining bone-specific alkaline phosphatase (BALP) or modern bone resorption markers with specific tumor markers, such as PSA or CA15.3, the diagnostic sensitivity of bone markers can be improved. Their degree of elevation correlates with the tumor burden and has been shown to be an independent prognostic factor for several tumors. On the other hand, biochemical markers of bone turnover have the unique potential to simplify and improve the monitoring of metastatic bone disease, which remains a continuous challenge for the oncologist. Peptide-bound cross-links could be quite useful to discriminate patients progressing early on treatment from those with longer disease control. Also, a 50% increase in these markers could serve as a diagnostically efficient indicator of imminent progression.
Abstract:
The most common parallelisation strategy for many Computational Mechanics (CM) codes (typified by Computational Fluid Dynamics (CFD) applications) which use structured meshes involves a 1D partition based upon slabs of cells. However, many CFD codes employ pipeline operations in their solution procedure. For parallelised versions of such codes to scale well they must employ two (or more) dimensional partitions. This paper describes an algorithmic approach to multi-dimensional mesh partitioning in code parallelisation, its implementation in a toolkit for almost automatically transforming scalar codes to parallel form, and its testing on a range of 'real-world' FORTRAN codes. The concept of multi-dimensional partitioning is straightforward, but non-trivial to represent as a sufficiently generic algorithm that can be embedded in a code transformation tool. The results of the tests on these real-world codes demonstrate clear improvements in parallel performance and scalability over a 1D partition. This is matched by a huge reduction in the time required to develop the parallel versions compared with hand coding: from weeks/months down to hours/days.
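To show what a 2D (block) partition of a structured mesh looks like relative to a 1D slab partition, here is a small Python sketch; it is an illustration of the concept, not the toolkit's algorithm, and the mesh sizes and processor-grid shape are example values.

# Split an Nx-by-Ny structured mesh into a Px-by-Py grid of blocks and return
# each processor's index ranges. A 1D "slab" partition is the Px = 1 special case.
def block_range(n_cells, n_parts, part):
    """Half-open index range [lo, hi) for one part, spreading any remainder."""
    base, extra = divmod(n_cells, n_parts)
    lo = part * base + min(part, extra)
    return lo, lo + base + (1 if part < extra else 0)

def partition_2d(nx, ny, px, py):
    return {(p, q): (block_range(nx, px, p), block_range(ny, py, q))
            for p in range(px) for q in range(py)}

# A 2D partition keeps sub-domains closer to square, so the halo (communication)
# surface per processor scales like nx/px + ny/py rather than the full ny of a slab.
print(partition_2d(nx=100, ny=80, px=4, py=2))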
Abstract:
A comprehensive solution of solidification/melting processes requires the simultaneous representation of free-surface fluid flow, heat transfer, phase change, nonlinear solid mechanics and, possibly, electromagnetics, together with their interactions, in what is now known as multiphysics simulation. Such simulations are computationally intensive, and the implementation of solution strategies for multiphysics calculations must embed their effective parallelization. For some years, together with our collaborators, we have been involved in the development of numerical software tools for multiphysics modeling on parallel cluster systems. This research has involved a combination of algorithmic procedures, parallel strategies and tools, plus the design of a computational modeling software environment and its deployment in a range of real-world applications. One output from this research is the three-dimensional parallel multiphysics code PHYSICA. In this paper we report on an assessment of its parallel scalability on a range of increasingly complex models drawn from actual industrial problems, on three contemporary parallel cluster systems.
Abstract:
Acantharian cysts were discovered in sediment trap samples from spring 2007 at 2000 m in the Iceland Basin. Although these single-celled organisms contribute to particulate organic matter flux in the upper mesopelagic, their contribution to bathypelagic particle flux has previously been found negligible. Four time-series sediment traps were deployed and all collected acantharian cysts, which are reproductive structures. Across all traps, cysts contributed on average 3-22% and 4-24% of particulate organic carbon and nitrogen (POC and PON) flux, respectively, during three separate collection intervals (the maximum contribution in any one trap was 48% for POC and 59% for PON). Strontium (Sr) flux during these 6 weeks reached 3 mg m⁻² d⁻¹. The acantharian celestite (SrSO4) skeleton clearly does not always dissolve in the mesopelagic as often thought, and their cysts can contribute significantly to particle flux at bathypelagic depths during specific flux events. Their large size (~1 mm) and mineral ballast result in a sinking rate of ~500 m d⁻¹; hence, they reach the bathypelagic before dissolving. Our findings are consistent with a vertical profile of salinity-normalized Sr concentration in the Iceland Basin, which shows a maximum at 1700 m. Profiles of salinity-normalized Sr concentration in the subarctic Pacific reach maxima at ≤1500 m, suggesting that Acantharia might contribute to the bathypelagic particle flux there as well. We hypothesize that Acantharia at high latitudes use rapid, deep sedimentation of reproductive cysts during phytoplankton blooms so that juveniles can exploit the large quantity of organic matter that sinks rapidly to the deep sea following a bloom.