980 results for Uniformly Convex


Relevance: 10.00%

Publisher:

Abstract:

This article describes a neural network model that addresses the acquisition of speaking skills by infants and the subsequent motor-equivalent production of speech sounds. The model learns two mappings during a babbling phase. A phonetic-to-orosensory mapping specifies a vocal tract target for each speech sound; these targets take the form of convex regions in orosensory coordinates defining the shape of the vocal tract. The babbling process wherein these convex region targets are formed explains how an infant can learn phoneme-specific and language-specific limits on acceptable variability of articulator movements. The model also learns an orosensory-to-articulatory mapping wherein cells coding desired movement directions in orosensory space learn articulator movements that achieve these orosensory movement directions. The resulting mapping provides a natural explanation for the formation of coordinative structures. This mapping also makes efficient use of redundancy in the articulator system, thereby providing the model with motor-equivalent capabilities. Simulations verify the model's ability to compensate, automatically and without new learning, for constraints or perturbations applied to the articulators, and to explain contextual variability seen in human speech production.
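
The idea of a convex-region target learned from babbling can be illustrated with a small sketch. The axis-aligned box (hyperrectangle) form, the class name, and the toy data below are assumptions made for illustration only; the model described in the abstract may form its convex regions differently.

```python
import numpy as np

class ConvexRegionTarget:
    """Toy convex-region target for one speech sound: an axis-aligned box
    in orosensory coordinates spanning the babbled examples. Only an
    illustration of how variability limits could be learned."""

    def __init__(self, n_dims):
        self.lo = np.full(n_dims, np.inf)
        self.hi = np.full(n_dims, -np.inf)

    def learn(self, orosensory_sample):
        # Expand the region to include a configuration produced during
        # babbling for this phoneme.
        self.lo = np.minimum(self.lo, orosensory_sample)
        self.hi = np.maximum(self.hi, orosensory_sample)

    def contains(self, orosensory_state):
        # Any state inside the region counts as an acceptable realisation
        # of the phoneme (phoneme-specific acceptable variability).
        return bool(np.all((orosensory_state >= self.lo) &
                           (orosensory_state <= self.hi)))

# Example: learn a target for one phoneme from 50 babbled samples.
rng = np.random.default_rng(0)
target = ConvexRegionTarget(n_dims=3)
for _ in range(50):
    target.learn(rng.normal(loc=[0.2, 0.5, 0.1], scale=0.05, size=3))
print(target.contains(np.array([0.21, 0.5, 0.1])))   # likely True
print(target.contains(np.array([0.9, 0.9, 0.9])))    # False
```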

Relevance: 10.00%

Publisher:

Abstract:

The class of all Exponential-Polynomial-Trigonometric (EPT) functions is classical and equal to the Euler-d’Alembert class of solutions of linear differential equations with constant coefficients. The class of non-negative EPT functions defined on [0, ∞) was discussed in Hanzon and Holland (2010), of which EPT probability density functions are an important subclass. EPT functions can be represented as c exp(Ax) b, where A is a square matrix, b a column vector and c a row vector, and the triple (A, b, c) is the minimal realization of the EPT function. The minimal triple is only unique up to a basis transformation. Here the class of 2-EPT probability density functions on R is defined and shown to be closed under a variety of operations. The class is also generalised to include mixtures with a point mass at zero. This class coincides with the class of probability density functions with rational characteristic functions. It is illustrated that the Variance Gamma density is a 2-EPT density under a parameter restriction. A discrete 2-EPT process is a process which has stochastically independent 2-EPT random variables as increments. It is shown that the distribution of the minimum and maximum of such a process is an EPT density mixed with a point mass at zero. The Laplace transforms of these distributions correspond to the discrete-time Wiener-Hopf factors of the discrete-time 2-EPT process. A distribution of daily log-returns, observed over the period 1931-2011 from a prominent US index, is approximated with a 2-EPT density function. Without the non-negativity condition, it is illustrated how this problem is transformed into a discrete-time rational approximation problem. The rational approximation software RARL2 is used to carry out this approximation. The non-negativity constraint is then imposed via a convex optimisation procedure after the unconstrained approximation. Necessary and sufficient conditions are derived to characterise infinitely divisible EPT and 2-EPT functions. Infinitely divisible 2-EPT density functions generate 2-EPT Lévy processes. An asset's log-returns can be modelled as a 2-EPT Lévy process. Closed-form pricing formulae are then derived for European options with specific times to maturity. Formulae for discretely monitored Lookback options and 2-period Bermudan options are also provided. Certain Greeks, including Delta and Gamma, of these options are also computed analytically. MATLAB scripts are provided for calculations involving 2-EPT functions. Numerical option pricing examples illustrate the effectiveness of the 2-EPT approach to financial modelling.
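
A minimal sketch of evaluating an EPT function from a realization triple (A, b, c), assuming the representation f(x) = c exp(Ax) b stated above. The particular triple is hypothetical, chosen so that the function reduces to exp(-x)·cos(2x), an exponential-times-trigonometric member of the class.

```python
import numpy as np
from scipy.linalg import expm

def ept(x, A, b, c):
    """Evaluate f(x) = c * exp(A x) * b, the EPT function with
    realization triple (A, b, c)."""
    return float(c @ expm(A * x) @ b)

# Hypothetical triple: this realization gives f(x) = exp(-x) * cos(2x).
A = np.array([[-1.0,  2.0],
              [-2.0, -1.0]])
b = np.array([1.0, 0.0])
c = np.array([1.0, 0.0])

for x in (0.0, 0.5, 1.0):
    # The two printed values should agree up to rounding.
    print(x, ept(x, A, b, c), np.exp(-x) * np.cos(2 * x))
```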

Relevance: 10.00%

Publisher:

Abstract:

Bacteriophages, viruses infecting bacteria, are uniformly present in any location where there are high numbers of bacteria, both in the external environment and the human body. Knowledge of their diversity is limited by the difficulty of culturing the host species and by the lack of a universal marker gene present in all viruses. Metagenomics is a powerful tool that can be used to analyse viral communities in their natural environments. The aim of this study was to investigate diverse populations of uncultured viruses from clinical (sputum from a patient with cystic fibrosis, CF) and environmental (sludge from a dairy food wastewater treatment plant) samples containing rich bacterial populations, using genetic and metagenomic analyses. Metagenomic sequencing of viruses obtained from these samples revealed that the majority of the metagenomic reads (97-99%) were novel when compared to the NCBI protein database using BLAST. A large proportion of assembled contigs were assignable as novel phages or uncharacterised prophages, the next largest assignable group being single-stranded eukaryotic virus genomes. Sputum from a cystic fibrosis patient contained DNA typical of phages of bacteria that are traditionally involved in CF lung infections and of other bacteria that are part of the normal oral flora. The only eukaryotic virus detected in the CF sputum was Torque Teno virus (TTV). A substantial number of assigned sequences from dairy wastewater could be affiliated with phages of bacteria that are typically found in soil and aquatic environments, including wastewater. Eukaryotic viral sequences were dominated by plant pathogens from the Geminiviridae and Nanoviridae families, and animal pathogens from the Circoviridae family. Antibiotic resistance genes were detected in both metagenomes, suggesting that phages could be a source of transmissible antimicrobial resistance. Overall, the diversity of viruses in the CF sputum was low, with 89 distinct viral genotypes predicted, and higher (409 genotypes) in the wastewater. Function-based screening of a metagenomic library constructed from DNA extracted from dairy food wastewater viruses revealed candidate promoter sequences able to drive expression of GFP in a promoter-trap vector in Escherichia coli. The majority of the cloned DNA sequences selected by the assay were related to ssDNA circular eukaryotic viruses and phages, which formed a minority of the metagenome assembly, and many lacked any significant homology to known database sequences. The natural diversity of bacteriophages in wastewater samples was also examined by PCR amplification of the major capsid protein sequences conserved within T4-type bacteriophages from the Myoviridae family. Phylogenetic analysis of capsid sequences revealed that dairy wastewater contained mainly diverse and uncharacterized phages, while some showed a high level of similarity with phages from geographically distant environments.

Relevance: 10.00%

Publisher:

Abstract:

In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions and each decision has a set of alternatives. Each alternative depends on the state of the world, and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (built on a measure of the distance between a point and a convex cone); the third makes use of matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves the efficiency.
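
As a baseline for the dominance tests discussed above, the following sketch checks plain Pareto dominance between utility vectors (maximization convention) and filters a set down to its maximal (undominated) elements. The trade-off-induced relation and the faster matrix-multiplication test from the thesis are not reproduced here; the utility vectors are made up.

```python
import numpy as np

def dominates(u, v):
    """Pareto dominance for maximisation: u dominates v if u is at least
    as good in every objective and strictly better in at least one."""
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u >= v) and np.any(u > v))

def maximal(vectors):
    """Keep only the undominated (Pareto-maximal) utility vectors."""
    return [u for u in vectors
            if not any(dominates(v, u) for v in vectors if v is not u)]

utilities = [(3, 1), (2, 2), (1, 3), (2, 1), (0, 0)]
print(maximal(utilities))   # [(3, 1), (2, 2), (1, 3)]
```

This pairwise filter is quadratic in the number of vectors, which is exactly the cost the thesis's bound-set reduction and faster dominance checks aim to tame.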

Relevance: 10.00%

Publisher:

Abstract:

By using Si(100) with different dopant types (n++-type (As) or p-type (B)), it is shown how metal-assisted chemically (MAC) etched silicon nanowires (Si NWs) can form with rough outer surfaces around a solid NW core for p-type NWs, and with a unique, well-defined mesoporous structure for highly doped n-type NWs. High-resolution electron microscopy techniques were used to characterise the roughening and mesoporous structure within the NWs and to show how such structures can form through a judicious choice of carrier concentration and dopant type. Control of roughness and internal mesoporosity is demonstrated during the formation of Si NWs from highly doped n-type Si(100) during electroless etching through a systematic investigation of etching parameters (etching time, AgNO3 concentration, %HF and temperature). Raman scattering measurements of the transverse optical phonon confirm quantum size effects and phonon scattering in mesoporous wires associated with the etching conditions, including quantum confinement effects for the Si nanocrystallites comprising the internal structure of the mesoporous NWs. Laser power heating of NWs confirms phonon confinement and scattering from internal mesoporosity, causing reduced thermal conductivity. The Li+ insertion and extraction characteristics at n-type and p-type Si(100) electrodes with different carrier density and doping type are investigated by cyclic voltammetry and constant-current measurements. The insertion and extraction potentials are shown to vary with cycling, and an activation effect is observed in n-type electrodes, where the charge capacity and voltammetric currents are found to be much higher than in p-type electrodes. X-ray photoelectron spectroscopy (XPS) and Raman scattering demonstrate that highly doped n-type Si(100) retains Li as a silicide and converts to an amorphous phase in a two-step phase conversion process. The findings show the distinct dependence of Li insertion and extraction processes on doping for uniformly doped Si(100) single crystals and how the doping type, through its effect on the semiconductor-solution interface, dominates Li insertion and extraction, composition, crystallinity changes and charge capacity. The effects of dopant, doping density and porosity of MAC-etched Si NWs are also investigated. The CV response is shown to change in area (current density) with increasing NW length and in profile shape with changing porosity of the Si NWs. The CV response also changes with scan rate, indicative of a transition from intercalation or alloying reactions to pseudocapacitive charge storage at higher scan rates and for p-type NWs. SEM and TEM show a change in the structure of the NWs after Li insertion and extraction due to expansion and contraction of the Si NWs. Galvanostatic measurements show the cycling behavior and Coulombic efficiency of the Si NWs in comparison to their bulk counterparts.

Relevance: 10.00%

Publisher:

Abstract:

Internal tandem duplication of the FMS-like tyrosine kinase 3 receptor (FLT3-ITD) has been associated with an aggressive AML phenotype. FLT3-ITD-expressing cell lines have been shown to generate increased levels of reactive oxygen species (ROS) and DNA double-strand breaks (DSBs). However, the molecular basis of how FLT3-ITD-driven ROS leads to the aggressive form of AML is not clearly understood. Herein, we observe that the majority of H2O2 in FLT3-ITD-expressing MV4-11 cells colocalises to the endoplasmic reticulum (ER). Furthermore, ER localisation of ROS in MV4-11 cells corresponds to the localisation of p22phox, a small membrane-bound subunit of the NOX complex. In addition, we show that 32D cells, a myeloblast-like cell line transfected with FLT3-ITD, possess higher steady-state protein levels of p22phox than their wild-type FLT3 (FLT3-WT)-expressing counterparts. Moreover, inhibition of FLT3-ITD using various FLT3 tyrosine kinase inhibitors uniformly results in a posttranslational downregulation of p22phox. We also show that depletion of NOX2, NOX4 and p22phox, but not NOX1, causes a reduction in endogenous H2O2 levels. We show that genomic instability induced by FLT3-ITD leads to an increase in nuclear levels of H2O2. The presence of H2O2 in the nucleus is largely reduced by inhibition of FLT3-ITD or NOX. Similar results are also observed following siRNA knockdown of p22phox or NOX4. We demonstrate that 32D cells transfected with FLT3-ITD have a higher level of DNA damage than 32D cells transfected with FLT3-WT. Additionally, inhibition of FLT3-ITD, as well as p22phox and NOX knockdowns, decreases the number of DNA DSBs. In summary, this study presents a novel mechanism of genomic instability generation in FLT3-ITD-expressing AML cells, whereby FLT3-ITD activates NOX complexes by stabilising p22phox. This in turn leads to elevated generation of ROS and DNA damage in these cells.

Relevance: 10.00%

Publisher:

Abstract:

We revisit the well-known problem of sorting under partial information: sort a finite set given the outcomes of comparisons between some pairs of elements. The input is a partially ordered set P, and solving the problem amounts to discovering an unknown linear extension of P, using pairwise comparisons. The information-theoretic lower bound on the number of comparisons needed in the worst case is log e(P), the binary logarithm of the number of linear extensions of P. In a breakthrough paper, Jeff Kahn and Jeong Han Kim (STOC 1992) showed that there exists a polynomial-time algorithm for the problem achieving this bound up to a constant factor. Their algorithm invokes the ellipsoid algorithm at each iteration for determining the next comparison, making it impractical. We develop efficient algorithms for sorting under partial information. Like Kahn and Kim, our approach relies on graph entropy. However, our algorithms differ in essential ways from theirs. Rather than resorting to convex programming for computing the entropy, we approximate the entropy, or make sure it is computed only once in a restricted class of graphs, permitting the use of a simpler algorithm. Specifically, we present: an O(n^2) algorithm performing O(log n · log e(P)) comparisons; an O(n^2.5) algorithm performing at most (1 + ε) log e(P) + O_ε(n) comparisons; and an O(n^2.5) algorithm performing O(log e(P)) comparisons. All our algorithms are simple to implement. © 2010 ACM.
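
The information-theoretic lower bound log e(P) can be made concrete with a small sketch that counts the linear extensions of a tiny poset by brute force and prints their binary logarithm. The poset and the exhaustive enumeration below are illustrative only; they are not the algorithms of the paper.

```python
from itertools import permutations
from math import ceil, log2

def count_linear_extensions(n, relations):
    """Count linear extensions of a poset on {0,...,n-1} given as a list
    of known comparisons (a, b) meaning a < b. Brute force, so only
    sensible for tiny n."""
    count = 0
    for perm in permutations(range(n)):
        pos = {x: i for i, x in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            count += 1
    return count

# Poset on 4 elements with the known comparisons 0 < 1, 0 < 2, 1 < 3.
e_P = count_linear_extensions(4, [(0, 1), (0, 2), (1, 3)])
print(e_P)                   # e(P) = 3 linear extensions remain possible
print(ceil(log2(e_P)))       # so at least 2 more comparisons are needed
```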

Relevance: 10.00%

Publisher:

Abstract:

This paper describes a methodology for detecting anomalies from sequentially observed and potentially noisy data. The proposed approach consists of two main elements: 1) filtering, or assigning a belief or likelihood to each successive measurement based upon our ability to predict it from previous noisy observations and 2) hedging, or flagging potential anomalies by comparing the current belief against a time-varying and data-adaptive threshold. The threshold is adjusted based on the available feedback from an end user. Our algorithms, which combine universal prediction with recent work on online convex programming, do not require computing posterior distributions given all current observations and involve simple primal-dual parameter updates. At the heart of the proposed approach lie exponential-family models which can be used in a wide variety of contexts and applications, and which yield methods that achieve sublinear per-round regret against both static and slowly varying product distributions with marginals drawn from the same exponential family. Moreover, the regret against static distributions coincides with the minimax value of the corresponding online strongly convex game. We also prove bounds on the number of mistakes made during the hedging step relative to the best offline choice of the threshold with access to all estimated beliefs and feedback signals. We validate the theory on synthetic data drawn from a time-varying distribution over binary vectors of high dimensionality, as well as on the Enron email dataset. © 1963-2012 IEEE.
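
The two elements of the approach can be caricatured in a short sketch: a running predictive model supplies a per-observation belief (filtering), and a feedback-nudged threshold flags anomalies (hedging). The Gaussian model, the update constants and the additive threshold rule below are assumptions made for illustration; this is not the exponential-family/online-convex-programming algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def detect(stream, feedback, eta=0.05, tau0=-4.0):
    """Toy filtering-and-hedging loop.
    stream   : iterable of scalar observations
    feedback : function(t, flagged) -> +1 (missed anomaly),
               -1 (false alarm) or 0 (no feedback available)
    Assumes a running-Gaussian predictive model; the threshold tau is
    nudged by user feedback (a crude stand-in for the paper's
    primal-dual threshold update).
    """
    mean, var, tau, flags = 0.0, 1.0, tau0, []
    for t, x in enumerate(stream):
        # Filtering: belief = log-likelihood of x under the current model.
        belief = -0.5 * np.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var)
        flagged = belief < tau
        flags.append(flagged)
        # Hedging: adjust the threshold using whatever feedback is given.
        tau += eta * feedback(t, flagged)
        # Update the predictive model with the new (possibly noisy) point.
        mean = 0.95 * mean + 0.05 * x
        var = 0.95 * var + 0.05 * (x - mean) ** 2
    return flags

data = np.concatenate([rng.normal(0, 1, 200), rng.normal(6, 1, 5)])
flags = detect(data, feedback=lambda t, flagged: 0)   # no-feedback case
print(sum(flags[-5:]), "of the last 5 (shifted) points were flagged")
```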

Relevance: 10.00%

Publisher:

Abstract:

Continuing our development of a mathematical theory of stochastic microlensing, we study the random shear and expected number of random lensed images of different types. In particular, we characterize the first three leading terms in the asymptotic expression of the joint probability density function (pdf) of the random shear tensor due to point masses in the limit of an infinite number of stars. Up to this order, the pdf depends on the magnitude of the shear tensor, the optical depth, and the mean number of stars through a combination of radial position and the star's mass. As a consequence, the pdf's of the shear components are seen to converge, in the limit of an infinite number of stars, to shifted Cauchy distributions, which shows that the shear components have heavy tails in that limit. The asymptotic pdf of the shear magnitude in the limit of an infinite number of stars is also presented. All the results on the random microlensing shear are given for a general point in the lens plane. Extending to the general random distributions (not necessarily uniform) of the lenses, we employ the Kac-Rice formula and Morse theory to deduce general formulas for the expected total number of images and the expected number of saddle images. We further generalize these results by considering random sources defined on a countable compact covering of the light source plane. This is done to introduce the notion of global expected number of positive parity images due to a general lensing map. Applying the result to microlensing, we calculate the asymptotic global expected number of minimum images in the limit of an infinite number of stars, where the stars are uniformly distributed. This global expectation is bounded, while the global expected number of images and the global expected number of saddle images diverge as the order of the number of stars. © 2009 American Institute of Physics.
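
For reference, a shifted (location-scale) Cauchy density has the standard textbook form

```latex
f(x) = \frac{1}{\pi}\,\frac{\gamma}{(x-\mu)^{2} + \gamma^{2}}
```

with location μ and scale γ; its tails decay only like 1/x², which is the heavy-tail behaviour referred to above. The paper's specific shift and scale parameters are not reproduced here.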

Relevance: 10.00%

Publisher:

Abstract:

Here we show that the configuration of a slender enclosure can be optimized such that the radiation heating of a stream of solid is performed with minimal fuel consumption at the global level. The solid moves longitudinally at constant rate through the enclosure. The enclosure is heated by gas burners that may be distributed in any manner along it; this distribution is to be determined. The total contact area for heat transfer between the hot enclosure and the cold solid is fixed. We find that minimal global fuel consumption is achieved when the longitudinal distribution of heaters is nonuniform, with more heaters near the exit than near the entrance. The reduction in fuel consumption relative to when the heaters are distributed uniformly is of order 10%. Tapering the plan view (the floor) of the heating area yields an additional reduction in overall fuel consumption. The best shape is a floor area in the form of a slender triangle, which the cold solid enters by crossing the base. These architectural features support the proposal to organize the flow of the solid as a dendritic design, which enters as several branches and exits as a single hot stream of prescribed temperature. The thermodynamics of heating is presented in modern terms (exergy destruction, entropy generation). The contribution is to show that optimizing "thermodynamically" is the same as reducing the consumption of fuel. © 2010 American Institute of Physics.

Relevance: 10.00%

Publisher:

Abstract:

We apply a coded aperture snapshot spectral imager (CASSI) to fluorescence microscopy. CASSI records a two-dimensional (2D) spectrally filtered projection of a three-dimensional (3D) spectral data cube. We minimize a convex quadratic function with total variation (TV) constraints for data cube estimation from the 2D snapshot. We adapt the TV minimization algorithm for direct fluorescent bead identification from CASSI measurements by incorporating a priori knowledge of the spectra associated with each bead type. Our proposed method creates a 2D bead identity image. Simulated fluorescence CASSI measurements are used to evaluate the behavior of the algorithm. We also record real CASSI measurements of a ten-bead-type fluorescence scene and create a 2D bead identity map. A baseline image from a filtered-array imaging system verifies CASSI's 2D bead identity map.
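
A toy analogue of the reconstruction step: the sketch below solves an unconstrained, smoothed-TV least-squares problem in 1D by plain gradient descent. The forward matrix, signal and all parameters are made up, and the smoothed penalty replaces the constrained convex quadratic formulation used with CASSI, so this is only an illustration of TV-regularized inversion from undersampled measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

def tv_reconstruct(y, H, lam=0.1, eps=1e-2, step=0.015, iters=4000):
    """Minimise ||y - H x||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps^2)
    by gradient descent (a smoothed 1D total-variation penalty)."""
    x = np.zeros(H.shape[1])
    for _ in range(iters):
        grad = 2 * H.T @ (H @ x - y)          # data-fit term
        d = np.diff(x)
        w = d / np.sqrt(d ** 2 + eps ** 2)    # derivative of smoothed |d|
        grad[:-1] -= lam * w
        grad[1:] += lam * w
        x -= step * grad
    return x

# Piecewise-constant test signal observed through a random projection
# (fewer measurements than unknowns, as in a compressive snapshot).
n, m = 60, 40
x_true = np.concatenate([np.zeros(20), np.ones(20), 0.3 * np.ones(20)])
H = rng.normal(size=(m, n)) / np.sqrt(m)
y = H @ x_true + 0.01 * rng.normal(size=m)
x_hat = tv_reconstruct(y, H)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The TV term favours piecewise-constant estimates, which is why it suits scenes made of a few distinct spectral regions such as labelled beads.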

Relevance: 10.00%

Publisher:

Abstract:

We describe an active millimeter-wave holographic imaging system that uses compressive measurements for three-dimensional (3D) tomographic object estimation. Our system records a two-dimensional (2D) digitized Gabor hologram by translating a single-pixel incoherent receiver. Two approaches to compressive measurement are undertaken: nonlinear inversion of a full 2D Gabor hologram and nonlinear inversion of a randomly subsampled Gabor hologram, both for 3D object estimation. The estimation algorithm solves a convex quadratic problem with total variation (TV) regularization. We compare object reconstructions using linear backpropagation and TV minimization, and we present simulated and experimental reconstructions from both compressive measurement strategies. In contrast with backpropagation, which estimates the 3D electromagnetic field, TV minimization estimates the 3D object that produces the field. Despite undersampling, range resolution is consistent with the extent of the 3D object band volume.

Relevance: 10.00%

Publisher:

Abstract:

We present a mathematical analysis of the asymptotic preserving scheme proposed in [M. Lemou and L. Mieussens, SIAM J. Sci. Comput., 31 (2008), pp. 334-368] for linear transport equations in kinetic and diffusive regimes. We prove that the scheme is uniformly stable and accurate with respect to the mean free path of the particles. This property is satisfied under an explicitly given CFL condition. This condition tends to a parabolic CFL condition for small mean free paths and is close to a convection CFL condition for large mean free paths. Our analysis is based on very simple energy estimates. © 2010 Society for Industrial and Applied Mathematics.
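
For orientation, the two limiting time-step restrictions mentioned have the generic textbook forms

```latex
\text{convection-type:}\quad \Delta t \le C\,\frac{\Delta x}{|v|},
\qquad
\text{parabolic-type:}\quad \Delta t \le C\,\frac{(\Delta x)^{2}}{\kappa}
```

with transport speed v, diffusion coefficient κ and an order-one constant C. The paper's explicit uniform CFL condition is its own and is not reproduced here.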

Relevance: 10.00%

Publisher:

Abstract:

In this paper, we propose a framework for robust optimization that relaxes the standard notion of robustness by allowing the decision maker to vary the protection level in a smooth way across the uncertainty set. We apply our approach to the problem of maximizing the expected value of a payoff function when the underlying distribution is ambiguous and therefore robustness is relevant. Our primary objective is to develop this framework and relate it to the standard notion of robustness, which deals with only a single guarantee across one uncertainty set. First, we show that our approach connects closely to the theory of convex risk measures. We show that the complexity of this approach is equivalent to that of solving a small number of standard robust problems. We then investigate the conservatism benefits and downside probability guarantees implied by this approach and compare them to those of the standard robust approach. Finally, we illustrate the methodology on an asset allocation example consisting of historical market data over a 25-year investment horizon, and find in every case we explore that relaxing standard robustness with soft robustness yields a seemingly favorable risk-return trade-off: each case results in a higher out-of-sample expected return for a relatively minor degradation of out-of-sample downside performance. © 2010 INFORMS.
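
The standard notion of robustness that the paper relaxes can be sketched as a worst-case expected payoff over a single ambiguity set. The example below uses a box around a nominal distribution and a linear program; the scenario payoffs and bounds are hypothetical, and the soft-robust relaxation itself is not implemented here.

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_expected_payoff(payoffs, p_lo, p_hi):
    """Standard robust evaluation: minimise E_p[payoff] over all
    probability vectors p with p_lo <= p <= p_hi and sum(p) = 1.
    (The soft-robust framework varies the protection level instead of
    using a single box like this.)"""
    n = len(payoffs)
    res = linprog(c=payoffs,                      # minimise p . payoffs
                  A_eq=np.ones((1, n)), b_eq=[1.0],
                  bounds=list(zip(p_lo, p_hi)),
                  method="highs")
    return res.fun, res.x

# Hypothetical scenario payoffs for one portfolio and a box around the
# nominal distribution (0.25 each) as the ambiguity set.
payoffs = np.array([0.08, 0.03, -0.02, -0.10])
lo = np.full(4, 0.15)
hi = np.full(4, 0.40)
value, worst_p = worst_case_expected_payoff(payoffs, lo, hi)
print("worst-case expected payoff:", value)
print("worst-case distribution:", worst_p)
```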

Relevance: 10.00%

Publisher:

Abstract:

The spatial variability of aerosol number and mass along roads was determined in different regions (urban, rural and coastal-marine) of the Netherlands. A condensation particle counter (CPC) and an optical aerosol spectrometer (LAS-X) were installed in a van along with a global positioning system (GPS). Concentrations were measured at high time resolution while driving, allowing investigations not possible with stationary equipment. In particular, this approach proves useful for identifying locations where number and mass concentrations attain high levels ('hot spots'). In general, number and mass concentrations of particulate matter increase with the degree of urbanisation, with number concentration being the more sensitive indicator. The lowest particle numbers and PM1 concentrations are encountered in a coastal and rural area: <5000 cm⁻³ and 6 μg m⁻³, respectively. The presence of sea-salt material along the North Sea coast enhances PM>1 concentrations compared to inland levels. High particle numbers are encountered on motorways, correlating with traffic intensity; the largest average number concentration is measured on the ring motorway around Amsterdam: about 160,000 cm⁻³ (traffic intensity 100,000 veh day⁻¹). Peak values occur in tunnels, where numbers exceed 10⁶ cm⁻³. Enhanced PM1 levels (i.e. larger than 9 μg m⁻³) exist on motorways, major traffic roads and in tunnels. The concentrations of PM>1 appear rather uniformly distributed (below 6 μg m⁻³ for most observations). On the urban scale, (large) spatial variations in concentration can be explained by varying traffic intensities and driving patterns. The highest particle numbers are measured while in traffic congestion or when behind a heavy diesel-driven vehicle (up to 600×10³ cm⁻³). Relatively high numbers are observed during passages of crossings and, at a decreasing rate, on main roads with much traffic, quiet streets and residential areas with limited traffic. The number concentration exhibits a larger variability than mass: the mass concentration on city roads with much traffic is 12% higher than in a residential area at the edge of the same city, while the number of particles changes by a factor of two, due to the presence of ultrafine particles (aerodynamic diameter <100 nm). It is further indicated that people residing some 100 m downwind of a major traffic source are still exposed to 40% more particles than those living in urban background areas. © 2004 Elsevier Ltd. All rights reserved.