829 results for Two Approaches
Abstract:
Mark Pagel, Andrew Meade (2004). A phylogenetic mixture model for detecting pattern-heterogeneity in gene sequence or character-state data. Systematic Biology, 53(4), 571-581.
Abstract:
Accurate measurement of network bandwidth is crucial for flexible Internet applications and protocols which actively manage and dynamically adapt to changing utilization of network resources. These applications must do so to perform tasks such as distributing and delivering high-bandwidth media, scheduling service requests and performing admission control. Extensive work has focused on two approaches to measuring bandwidth: measuring it hop-by-hop, and measuring it end-to-end along a path. Unfortunately, best-practice techniques for the former are inefficient and techniques for the latter are only able to observe bottlenecks visible at end-to-end scope. In this paper, we develop and simulate end-to-end probing methods which can measure bottleneck bandwidth along arbitrary, targeted subpaths of a path in the network, including subpaths shared by a set of flows. As another important contribution, we describe a number of practical applications which we foresee as standing to benefit from solutions to this problem, especially in emerging, flexible network architectures such as overlay networks, ad-hoc networks, peer-to-peer architectures and massively accessed content servers.
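For context, end-to-end capacity probing of the kind extended here typically builds on the packet-pair principle. The minimal Python sketch below illustrates that principle only; the function name and the median heuristic are assumptions for illustration, not the paper's subpath probing method.

```python
# A minimal sketch of the classic packet-pair estimator that end-to-end
# bandwidth measurement builds on; the paper's subpath probing method is
# more elaborate, so treat this only as background.

def packet_pair_estimate(packet_size_bytes, arrival_gaps_s):
    """Estimate bottleneck capacity in bits/s from inter-arrival gaps.

    Two back-to-back packets of size L are spread out by the bottleneck
    link; the arrival spacing dt satisfies C = L / dt. The median gap is
    used to damp cross-traffic noise.
    """
    gaps = sorted(arrival_gaps_s)
    median_gap = gaps[len(gaps) // 2]
    return packet_size_bytes * 8 / median_gap

# Example: 1500-byte pairs arriving ~1.2 ms apart -> ~10 Mbit/s bottleneck.
print(f"{packet_pair_estimate(1500, [0.00121, 0.00118, 0.00122]):.3e} bit/s")
```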
Abstract:
We revisit the problem of connection management for reliable transport. At one extreme, a pure soft-state (SS) approach (as in Delta-t [9]) safely removes the state of a connection at the sender and receiver once the state timers expire, without the need for explicit removal messages, and new connections are established without an explicit handshaking phase. At the other extreme, a hybrid hard-state/soft-state (HS+SS) approach (as in TCP) uses both explicit handshaking and timer-based management of the connection's state. In this paper, we consider the worst-case scenario of reliable single-message communication and develop a common analytical model that can be instantiated to capture either the SS approach or the HS+SS approach. We compare the two approaches in terms of goodput, message overhead and state overhead. We also use simulations to compare against other approaches, evaluating them in terms of correctness (with respect to data loss and duplication) and robustness to bad network conditions (high message loss rates and variable channel delays). Our results show that the SS approach is more robust and has lower message overhead. On the other hand, SS requires more memory to keep connection state, which reduces goodput. Given that memory is becoming larger and cheaper, SS presents the best choice over bandwidth-constrained, error-prone networks.
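The pure soft-state discipline is simple enough to sketch. The following illustrative Python fragment is assumed, not taken from the paper or from Delta-t (which derives its timers from bounds on packet lifetimes): state is created or refreshed whenever data arrives, and timer expiry is the only teardown mechanism.

```python
# A minimal sketch of pure soft-state (SS) connection management: state is
# created implicitly on first use and silently discarded when its timer
# expires, with no handshake or explicit removal messages. The timer value
# and API are illustrative assumptions only.

import time

STATE_LIFETIME_S = 30.0  # assumed lifetime; real systems derive it from packet lifetimes

class SoftStateTable:
    def __init__(self):
        self._expiry = {}  # connection id -> expiration time

    def touch(self, conn_id):
        """Create or refresh state; arriving data implicitly opens a connection."""
        self._expiry[conn_id] = time.monotonic() + STATE_LIFETIME_S

    def sweep(self):
        """Remove expired state -- the only teardown mechanism in pure SS."""
        now = time.monotonic()
        for conn_id in [c for c, t in self._expiry.items() if t <= now]:
            del self._expiry[conn_id]
```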
Abstract:
Integrated nanowire electrodes that permit direct, sensitive and rapid electrochemical detection of chemical and biological species are a powerful emerging class of sensor devices. As critical dimensions of the electrodes enter the nanoscale, radial analyte diffusion profiles to the electrode dominate, with a corresponding enhancement in mass transport, steady-state sigmoidal voltammograms, low depletion of target molecules and faster analysis. To optimise these sensors it is necessary to fully understand the factors that influence performance limits, including: electrode geometry, electrode dimensions, electrode separation distances (within nanowire arrays) and diffusional mass transport. Therefore, in this thesis, theoretical simulations of analyte diffusion occurring at a variety of electrode designs were undertaken using Comsol Multiphysics®. Sensor devices were fabricated and corresponding experiments were performed to challenge simulation results. Two approaches for the fabrication and integration of metal nanowire electrodes are presented: Template Electrodeposition and Electron-Beam Lithography. These approaches allow for the fabrication of nanowires which may subsequently be integrated on silicon chip substrates to form fully functional electrochemical devices. Simulated and experimental results were found to be in excellent agreement, validating the simulation model. The electrochemical characteristics exhibited by nanowire electrodes fabricated by electron-beam lithography were directly compared against the electrochemical performance of a commercial ultra-microdisc electrode. Steady-state cyclic voltammograms in ferrocenemonocarboxylic acid at single ultra-microdisc electrodes were observed at low to medium scan rates (≤ 500 mV s⁻¹). At nanowires, steady-state responses were observed at ultra-high scan rates (up to 50,000 mV s⁻¹), thus allowing for much faster analysis (20 ms). Approaches for elucidating the faradaic signal without the requirement for background subtraction were also developed. Furthermore, diffusional processes occurring at arrays with increasing inter-electrode distance and increasing numbers of nanowires were explored. Diffusion profiles existing at nanowire arrays were simulated with Comsol Multiphysics®. A range of scan rates was modelled, and experiments were undertaken at 5,000 mV s⁻¹, since this allows the rapid data capture required for, e.g., biomedical, environmental and pharmaceutical diagnostic applications.
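For reference, the ultra-microdisc benchmark mentioned above is conventionally characterized by the standard steady-state limiting-current expression, quoted here as a textbook relation rather than a result of the thesis:

```latex
% Steady-state limiting current at an inlaid ultra-microdisc electrode
% (standard textbook result; n = electrons transferred, F = Faraday
% constant, D = diffusion coefficient, C = bulk concentration,
% r = disc radius):
\[
  i_{\mathrm{ss}} = 4 n F D C r
\]
```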
Abstract:
Solar energy is a clean and abundant energy source that can help reduce reliance on fossil fuels, whose climate impact and long-term availability remain in question. Monolithic triple-junction solar cells are currently the state-of-the-art photovoltaic devices, with champion cell efficiencies exceeding 40%, but their ultimate efficiency is restricted by the current-matching constraint of series-connected cells. The objective of this thesis was to investigate the use of solar cells with lattice constants equal to that of InP in order to relax the current-matching constraint in multi-junction solar cells. This was addressed via two approaches. First, the formation of mechanically stacked solar cells (MSSC) was investigated through the addition of separate connections to the individual cells that make up a multi-junction device. An electrical and optical modelling approach identified separately connected InGaAs bottom cells stacked under dual-junction GaAs-based top cells as a route to high efficiency. An InGaAs solar cell was fabricated on an InP substrate with a measured 1-Sun conversion efficiency of 9.3%. A comparative study of adhesives found benzocyclobutene to be the most suitable for bonding component cells in a mechanically stacked configuration, owing to its higher thermal conductivity and refractive index compared with other candidate adhesives. A flip-chip process was developed to bond single-junction GaAs and InGaAs cells, with a measured 4-terminal MSSC efficiency of 25.2% under 1-Sun conditions. Second, a novel InAlAs solar cell was identified as an alternative to the well-established GaAs solar cell. As wide-bandgap InAlAs solar cells have not been extensively investigated for use in photovoltaics, single-junction cells were fabricated and their properties relevant to PV operation analysed. Minority carrier diffusion lengths in the micrometre range were extracted, confirming InAlAs as a suitable material for use in III-V solar cells, and a 1-Sun conversion efficiency of 6.6% was measured for cells with 800 nm thick absorber layers. Given the cost and small diameter of commercially available InP wafers, InGaAs and InAlAs solar cells were also fabricated on alternative substrates, namely GaAs. As a first demonstration, the lattice constant of a GaAs substrate was graded to that of InP using an InxGa1-xAs metamorphic buffer layer onto which cells were grown. This was the first demonstration of an InAlAs solar cell on an alternative substrate and an initial step towards fabricating these cells on Si. The results presented offer a route to developing multi-junction solar cell devices based on the InP lattice parameter, thus extending the range of available bandgaps for high-efficiency cells.
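The motivation for the 4-terminal stack can be seen with simple arithmetic. The Python sketch below uses hypothetical operating-point numbers (not taken from the thesis) to show why a series string is limited by its lowest subcell current while separately connected cells are not.

```python
# Illustrative numbers only: why the series (2-terminal) current-matching
# constraint motivates separately connected (4-terminal) stacks such as the
# MSSC above. A series string carries one current, set by its weakest
# subcell; independent terminals let each cell sit at its own max-power point.

subcell_currents = [14.0, 12.5, 13.2]   # mA/cm^2, hypothetical J_mp values
subcell_voltages = [1.8, 1.1, 0.7]      # V, hypothetical V_mp values

# 2-terminal: one series current, voltages add.
p_series = min(subcell_currents) * sum(subcell_voltages)

# 4-terminal: each junction delivers its own maximum power.
p_independent = sum(j * v for j, v in zip(subcell_currents, subcell_voltages))

print(f"series-constrained: {p_series:.1f} mW/cm^2")       # 45.0
print(f"independent:        {p_independent:.1f} mW/cm^2")  # 48.2
```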
Abstract:
We describe an active millimeter-wave holographic imaging system that uses compressive measurements for three-dimensional (3D) tomographic object estimation. Our system records a two-dimensional (2D) digitized Gabor hologram by translating a single-pixel incoherent receiver. Two approaches to compressive measurement are undertaken: nonlinear inversion of a full 2D Gabor hologram, and nonlinear inversion of a randomly subsampled Gabor hologram, in both cases for 3D object estimation. The estimation algorithm minimizes a convex quadratic objective with total variation (TV) regularization. We compare object reconstructions using linear backpropagation and TV minimization, and we present simulated and experimental reconstructions from both compressive measurement strategies. In contrast with backpropagation, which estimates the 3D electromagnetic field, TV minimization estimates the 3D object that produces the field. Despite undersampling, range resolution is consistent with the extent of the 3D object band volume.
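The TV-regularized inversion described above typically takes the following generic form; the notation is assumed for illustration and is not taken from the paper:

```latex
% Generic TV-regularized inversion objective: y is the (possibly
% subsampled) Gabor hologram, H the forward model, f the 3D object,
% and tau a regularization weight.
\[
  \hat{f} = \arg\min_{f} \; \tfrac{1}{2}\,\lVert y - H f \rVert_2^2
            + \tau \, \mathrm{TV}(f)
\]
```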
Abstract:
This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. Our first goal is to clarify when, and how, multiplicity correction happens automatically in Bayesian analysis, and to distinguish this correction from the Bayesian Ockham's-razor effect. Our second goal is to contrast empirical-Bayes and fully Bayesian approaches to variable selection through examples, theoretical results and simulations. Considerable differences between the two approaches are found. In particular, we prove a theorem that characterizes a surprising asymptotic discrepancy between fully Bayes and empirical Bayes. This discrepancy arises from a different source than the failure to account for hyperparameter uncertainty in the empirical-Bayes estimate. Indeed, even at the extreme, when the empirical-Bayes estimate converges asymptotically to the true variable-inclusion probability, the potential for a serious difference remains.
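For reference, the standard fully Bayes construction that produces this automatic multiplicity correction can be written as follows (notation assumed for illustration):

```latex
% Each of m candidate variables enters independently with probability p,
% and p itself receives a uniform prior:
\[
  \gamma_i \mid p \sim \mathrm{Bernoulli}(p), \qquad p \sim \mathrm{U}(0,1)
  \;\Longrightarrow\;
  \Pr(\text{a given model with } k \text{ variables}) = \frac{1}{m+1}\binom{m}{k}^{-1}
\]
% As m grows with k fixed, the prior odds against including a spurious
% variable grow -- the automatic correction; plugging in a point estimate
% of p (empirical Bayes) changes this behaviour.
```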
Abstract:
Nutrient availability profoundly influences gene expression. Many animal genes encode multiple transcript isoforms, yet the effect of nutrient availability on transcript isoform expression has not been studied in genome-wide fashion. When Caenorhabditis elegans larvae hatch without food, they arrest development in the first larval stage (L1 arrest). Starved larvae can survive L1 arrest for weeks, but growth and post-embryonic development are rapidly initiated in response to feeding. We used RNA-seq to characterize the transcriptome during L1 arrest and over time after feeding. Twenty-seven percent of detectable protein-coding genes were differentially expressed during recovery from L1 arrest, with the majority of changes initiating within the first hour, demonstrating widespread, acute effects of nutrient availability on gene expression. We used two independent approaches to track expression of individual exons and mRNA isoforms, and we connected changes in expression to functional consequences by mining a variety of databases. These two approaches identified an overlapping set of genes with alternative isoform expression, and they converged on common functional patterns. Genes affecting mRNA splicing and translation are regulated by alternative isoform expression, revealing post-transcriptional consequences of nutrient availability on gene regulation. We also found that phosphorylation sites are often alternatively expressed, revealing a common mode by which alternative isoform expression modifies protein function and signal transduction. Our results detail rich changes in C. elegans gene expression as larvae initiate growth and post-embryonic development, and they provide an excellent resource for ongoing investigation of transcriptional regulation and developmental physiology.
Abstract:
Thin-layer and high-performance thin-layer chromatography (TLC/HPTLC) methods for assaying compound(s) in a sample must be validated to ensure that they are fit for their intended purpose and, where applicable, meet the strict regulatory requirements for controlled products. Two validation approaches are identified in the literature: the classic approach and the alternative approach, which uses accuracy profiles. Detailed procedures for the two approaches are discussed based on the validation of methods for pharmaceutical analysis, an area considered to have particularly strict requirements. Estimation of the measurement uncertainty from the validation approach using accuracy profiles is also described. Examples of HPTLC methods, developed and validated to assay sulfamethoxazole and trimethoprim on the one hand, and lamivudine, stavudine and nevirapine on the other, in their fixed-dose combination tablets, are further elaborated.
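A simplified sketch of the accuracy-profile computation is given below. It assumes a single variance component, whereas the full procedure separates within- and between-series variance, so it is illustrative only; the function name and acceptance limit are assumptions.

```python
# At each concentration level, compute the relative bias and a
# beta-expectation tolerance interval of the recoveries, then check both
# against acceptance limits (e.g. +/-5%). Simplified: one variance component.

from math import sqrt
from statistics import mean, stdev
from scipy.stats import t

def accuracy_profile_level(measured, nominal, beta=0.95, limit_pct=5.0):
    n = len(measured)
    recoveries = [100.0 * m / nominal for m in measured]
    bias = mean(recoveries) - 100.0
    s = stdev(recoveries)
    half = t.ppf((1 + beta) / 2, n - 1) * s * sqrt(1 + 1 / n)
    lo, hi = bias - half, bias + half
    return bias, (lo, hi), (-limit_pct <= lo and hi <= limit_pct)

# Example: six replicate recoveries at a nominal level of 100 (units).
print(accuracy_profile_level([99.1, 100.4, 98.7, 101.2, 99.8, 100.1], 100.0))
```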
Abstract:
One of the fundamental questions of temporal ontology is what time is composed of. While the traditional time structure is based on a set of points, a notion prevalent in classical physics and mathematics, intervals have been widely adopted for expressing common-sense temporal knowledge, especially in the domain of artificial intelligence. However, there has been a longstanding debate on how intervals should be addressed, leading to two different approaches to their treatment. In the first, intervals are addressed as derived objects constructed from points, e.g., as sets of points or as pairs of points. In the second, intervals are taken as primitives themselves. This article provides a critical examination of these two approaches. By proposing a definition of intervals in terms of points and types, we demonstrate that, while the two approaches have been viewed as rivals in the literature, they are actually reducible to logically equivalent expressions under some requisite interpretations, and therefore they can also be viewed as allies.
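The two treatments can be contrasted in a few lines of code. The Python sketch below uses assumed names and shows only one Allen-style relation; it illustrates the distinction, not the article's point-and-type definition.

```python
# Approach 1: intervals as derived objects (pairs of points), so relations
# are computed from endpoints. Approach 2: intervals as primitives, so
# relations are asserted facts rather than computations.

from dataclasses import dataclass

@dataclass(frozen=True)
class DerivedInterval:          # Approach 1: an ordered pair of points
    start: float
    end: float
    def meets(self, other):
        return self.end == other.start

class PrimitiveInterval:        # Approach 2: no internal structure at all
    def __init__(self, name):
        self.name = name

MEETS = {("i1", "i2")}          # structure lives in asserted relations

i1, i2 = DerivedInterval(0.0, 1.0), DerivedInterval(1.0, 2.0)
print(i1.meets(i2), ("i1", "i2") in MEETS)  # the two views agree here
```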
Abstract:
The dispersion of a patch of the tracer sulfur hexafluoride (SF6) is used to assess the lateral diffusivity in the coastal waters of the western part of the Gulf of Lion (GoL), northwestern Mediterranean Sea, during the Latex10 experiment (September 2010). Immediately after the release, the spreading of the patch is associated with a strong decrease of the SF6 concentrations due to gas exchange from the ocean to the atmosphere. This loss has been accurately quantified, evidencing the impact of the strong wind conditions during the first days of the campaign. A few days after the release, as the atmospheric loss of SF6 decreased, the lateral diffusivity coefficient at spatial scales of 10 km was computed using two approaches. First, the evolution of the patch with time was combined with a diffusion-strain model to obtain estimates of the strain rate (γ = 2.5 × 10⁻⁶ s⁻¹) and of the lateral diffusivity coefficient (Kh = 23.2 m² s⁻¹). Second, a steady-state model was applied, yielding Kh values similar to those of the first method after an adjustment period of between 2 and 4.5 days. This implies that after such a period, our computation of Kh becomes insensitive to the inclusion of further straining of the patch. Analysis of sea surface temperature satellite imagery shows the presence of a strong front in the study area. The front clearly affected the dynamics within the region and thus the temporal evolution of the patch. Our results are consistent with previous studies in the open ocean and demonstrate the success and feasibility of these methods under the small-scale, rapidly evolving dynamics typical of coastal environments.
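One common way to write the diffusion-strain balance underlying the first approach is sketched below; the exact parameterization used in the study may differ, so this is illustrative only:

```latex
% Across the strain axis, lateral diffusion K_h spreads the patch while
% the strain rate gamma compresses it (one common form, assumed here):
\[
  \frac{d\sigma^2}{dt} = 2K_h - 2\gamma\,\sigma^2
  \quad\Longrightarrow\quad
  \sigma^2_{\infty} = \frac{K_h}{\gamma}
\]
% With the reported K_h = 23.2 m^2 s^-1 and gamma = 2.5e-6 s^-1, the
% steady-state width is sigma ~ 3 km, consistent with the quoted 10 km
% patch scales.
```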
Abstract:
We investigate the ability of the local density approximation (LDA) in density functional theory to predict the near-edge structure in electron energy-loss spectroscopy in the dipole approximation. We include screening of the core hole within the LDA using Slater's transition-state theory. We find that anion K-edge threshold energies are systematically overestimated by 4.22 ± 0.44 eV in twelve transition metal carbides and nitrides in the rock-salt (B1) structure. When we apply this 'universal' many-electron correction to energy-loss spectra calculated within the transition-state approximation to the LDA, we find quantitative agreement with experiment to within one or two eV for TiC, TiN and VN. We compare our calculations to a simpler approach using a projected Mulliken density, which honours the dipole selection rule, in place of the dipole matrix element itself. We find remarkably close agreement between these two approaches. Finally, we show an anomaly in the near-edge structure in CrN to be due to magnetic structure. In particular, we find that the N K edge in fact probes the magnetic moments and alignments of the Cr sublattice.
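For reference, Slater's transition-state prescription used above for core-hole screening is conventionally written as follows (notation assumed):

```latex
% The excitation energy is approximated by the orbital-eigenvalue
% difference evaluated self-consistently with half an electron promoted
% from core level i to conduction state f:
\[
  \Delta E \;\approx\; \varepsilon_f\!\left(n_i - \tfrac12,\, n_f + \tfrac12\right)
                     - \varepsilon_i\!\left(n_i - \tfrac12,\, n_f + \tfrac12\right)
\]
% This captures electronic relaxation around the core hole to second
% order in the occupation change.
```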
Abstract:
Incidence calculus is a mechanism for probabilistic reasoning in which sets of possible worlds, called incidences, are associated with axioms, and probabilities are then associated with these sets. Inference rules are used to deduce bounds on the incidence of formulae which are not axioms, and bounds for the probability of such a formula can then be obtained. In practice an assignment of probabilities directly to axioms may be given, and it is then necessary to find an assignment of incidences which will reproduce these probabilities. We show that this task of assigning incidences can be viewed as a tree-searching problem, and two techniques for performing this search are discussed. One of these is a new proposal involving a depth-first search, while the other incorporates a random element. A Prolog implementation of these methods has been developed. The two approaches are compared for efficiency, and the significance of their results is discussed. Finally, we discuss a new proposal for applying techniques from linear programming to incidence calculus.
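The depth-first branching can be sketched compactly. The paper's implementation is in Prolog; the Python fragment below is an assumed, simplified illustration for a single axiom, ignoring the logical constraints between axioms that the real system must also respect.

```python
# Depth-first search for an incidence set: given per-world probability
# masses, find a subset of worlds whose total mass matches an axiom's
# target probability (within a tolerance). Names and tolerance assumed.

def assign_incidence(masses, target, tol=1e-9, chosen=None, i=0, acc=0.0):
    chosen = [] if chosen is None else chosen
    if abs(acc - target) <= tol:
        return list(chosen)                      # found a matching incidence set
    if i == len(masses) or acc > target + tol:
        return None                              # dead branch: prune
    chosen.append(i)                             # branch 1: include world i
    found = assign_incidence(masses, target, tol, chosen, i + 1, acc + masses[i])
    if found is not None:
        return found
    chosen.pop()                                 # branch 2: exclude world i
    return assign_incidence(masses, target, tol, chosen, i + 1, acc)

# Ten equiprobable worlds; find an incidence set with probability 0.3.
print(assign_incidence([0.1] * 10, 0.3))  # e.g. [0, 1, 2]
```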
Abstract:
At different moments of his intellectual output, Augustine addressed the subject of music. In this article we show that Augustine used two distinct conceptual schemes to describe the phenomenon of practical music in its relation to the spiritual world, the scheme of the Liberal Arts and that of sign theory, and that by virtue of this, music came to be conceived in two different ways: as vestigium and as signum of the spiritual world, respectively. At the end of the article we analyse the difference between the two conceptions by considering three elements: the nature of the relation between the material and spiritual worlds, the spiritual content to which music refers, and the notion of beauty it implies.
Abstract:
A 2D isothermal finite element simulation of the injection stretch-blow molding (ISBM) process for polyethylene terephthalate (PET) containers has been developed with the commercial finite element package ABAQUS/Standard. In this work, the blowing air used to inflate the PET preform was modeled through two different approaches: a direct pressure input (as measured in the blowing machine) and a constant mass flow rate input (based on a pressure-volume-time relationship). The results from these two approaches were validated against free-blow and free stretch-blow experiments, which were instrumented and monitored through high-speed video. Results show that the simulation using a constant mass flow rate input gave a better prediction of the volume vs. time curve and of the preform shape evolution than the direct pressure approach, and hence is more appropriate for modeling the pre-blowing stage of the injection stretch-blow molding process.
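The constant mass-flow-rate input can be closed with an ideal-gas relation, as in the assumed sketch below; the actual work couples such an input to an ABAQUS membrane model, which is not reproduced here, and all numbers are illustrative.

```python
# Air mass grows linearly in time at a constant mass flow rate, and the
# cavity pressure follows from the current preform volume via the ideal
# gas law p = m R T / V (isothermal assumption, consistent with the 2D
# isothermal simulation described above).

R_AIR = 287.05   # J/(kg K), specific gas constant of air
T = 293.15       # K, assumed isothermal blowing temperature

def cavity_pressure(mdot_kg_s, t_s, volume_m3):
    """Absolute pressure (Pa) for constant mass inflow into volume V(t)."""
    mass = mdot_kg_s * t_s
    return mass * R_AIR * T / volume_m3

# Example: 2 g/s of air, 50 ms into inflation, preform volume 60 cm^3.
print(f"{cavity_pressure(0.002, 0.050, 60e-6) / 1e5:.2f} bar")
```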