Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second determining how the available unit is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
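As a rough illustration of the two-stage structure described above (a sketch, not the authors' code; all parameter values and the simplified subcompositional handling are assumptions), the following Python fragment first draws the incidence pattern from independent Bernoulli trials and then fills the non-zero parts with a logistic-normal subcomposition.

```python
# Stage 1: which parts are essentially zero (independent Bernoulli incidence).
# Stage 2: how the available unit is spread over the non-zero parts
# (additive logistic-normal composition). Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(0)

def simulate_composition(p_present, mu, sigma):
    """p_present: incidence probabilities for the D parts (stage 1);
    mu, sigma: mean and covariance of the log-ratio normal used to fill the
    non-zero subcomposition (stage 2, simplified relative to the paper)."""
    D = len(p_present)
    incidence = rng.random(D) < p_present          # stage 1: where the zeros are
    x = np.zeros(D)
    k = incidence.sum()                            # k == 0 is ignored in this sketch
    if k == 1:                                     # a single part takes the whole unit
        x[incidence] = 1.0
    elif k > 1:
        # draw (k-1) log-ratios, then apply the inverse additive log-ratio transform
        z = rng.multivariate_normal(mu[:k-1], sigma[:k-1, :k-1])
        w = np.append(np.exp(z), 1.0)
        x[incidence] = w / w.sum()                 # stage 2: conditional composition
    return incidence.astype(int), x

# Example: 4 parts, each present with probability 0.8 (assumed values)
inc, comp = simulate_composition(np.full(4, 0.8), np.zeros(3), np.eye(3))
print(inc, comp, comp.sum())
```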
Abstract:
Title: Data-Driven Text Generation using Neural Networks
Speaker: Pavlos Vougiouklis, University of Southampton
Abstract: Recent work on neural networks shows their great potential at tackling a wide variety of Natural Language Processing (NLP) tasks. This talk will focus on the Natural Language Generation (NLG) problem and, more specifically, on the extent to which neural network language models could be employed for context-sensitive and data-driven text generation. In addition, a neural network architecture for response generation in social media, along with the training methods that enable it to capture contextual information and effectively participate in public conversations, will be discussed.
Speaker Bio: Pavlos Vougiouklis obtained his 5-year Diploma in Electrical and Computer Engineering from the Aristotle University of Thessaloniki in 2013. He was awarded an MSc degree in Software Engineering from the University of Southampton in 2014. In 2015, he joined the Web and Internet Science (WAIS) research group of the University of Southampton, and he is currently working towards his PhD in the field of Neural Network Approaches for Natural Language Processing.
Title: Provenance is Complicated and Boring — Is there a solution?
Speaker: Darren Richardson, University of Southampton
Abstract: Paper trails, auditing, and accountability — arguably not the sexiest terms in computer science. But then you discover that you've possibly been eating horse-meat, and the importance of provenance becomes almost palpable. Having accepted that we should be creating provenance-enabled systems, the challenge of then communicating that provenance to casual users is not trivial: users should not have to have a detailed working knowledge of your system, and they certainly shouldn't be expected to understand the data model. So how, then, do you give users an insight into the provenance, without having to build a bespoke system for each and every different provenance installation?
Speaker Bio: Darren is a final-year Computer Science PhD student. He completed his undergraduate degree in Electronic Engineering at Southampton in 2012.
Abstract:
Combines fiction and non-fiction stories about pollution: how it affects our lives, how water, air and soil can be affected, and how toxic pollution problems are being tackled around the world. Open-ended questions encourage students to reflect on these issues, which affect people in different parts of the world, and to form their own opinions.
Abstract:
A resource for the Citizenship subject aimed at pupils aged eleven to sixteen. It is structured in four sections: the first two focus on self-knowledge and on the ability to manage emotions and relationships. The units in section three are designed to help pupils understand how to develop a healthy, safer lifestyle, to think about the alternatives when making decisions about personal health, and to consider the consequences of those decisions. Section four deals with understanding the world of work and financial capability.
Abstract:
A book aimed at teachers who have to deliver 'Citizenship education' alongside 'Personal, social and health education' at KS4 (Key Stage 4), secondary education. It offers planning suggestions and additional materials to accompany the pupil books 'Your life 4' and 'Your life 5'. It covers the following topics: developing as a citizen, personal wellbeing (understanding yourself and relating to others), personal wellbeing (staying healthy), and economic wellbeing and financial capability.
Abstract:
Geography research centres are typically producers of a large volume of Geographic Information (GI), generated both by funded projects and by individual research initiatives. The Centro de Estudos de Geografia e Planeamento Regional (e-GEO) has been involved in several projects at local, regional, national and international scales. Recently, two issues were the subject of debate. The first was that the spatial information obtained from these research projects has not had the visibility that was expected. Most of the time, the GI from these projects was not in a suitable format for researchers, or even the general public or interest groups, to search easily. The second issue was how to make these results accessible to everyone, everywhere, easily and at minimal cost to the Centre, given the current Portuguese economic context and the interests of e-GEO. Both issues are resolved by a single answer: deploying a WebGIS on an Open Source platform. This paper illustrates the production of a tool for disseminating geographic information on the World Wide Web using only free software and freeware. The tool allows all researchers at the Centre to publish their GI, which becomes fully accessible to any end user. Making this kind of information fully accessible should have a considerable impact, shortening the distance between the work done by academics and the end user. We believe it is an excellent way for the public to access and interpret spatial information. In conclusion, this platform should help to close the gap between producers and users of geographic information, allowing interaction between all parties as well as the upload of new data under a set of rules intended for quality control.
Abstract:
The characteristics of service independence and flexibility of ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of the network resources and the desired quality of service for higher layer applications. Window flow control mechanisms of traditional packet switched networks are not well suited to real time services at the speeds envisaged for future networks. In this work, the utilisation of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using PC is compared with QoS parameters in bufferless environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for connection admission control (CAC) in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well. Furthermore, this estimation is always conservative, allowing the retention of the network performance guarantees. Several experiments have been carried out and investigated to explain the deviation between the proposed method and simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments to bound the burst length with respect to the buffer size conclude that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay is a limit that cannot be dismissed for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under these premises, the convolution approach is the most accurate method used in bandwidth allocation. This method gives sufficient accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and a high number of accumulated calculations. To overcome these drawbacks, a new method of evaluation is analysed: the Enhanced Convolution Approach (ECA). In ECA, traffic is grouped into classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each class of traffic is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage requirements, especially in complex scenarios. Sorting is the dominant factor for the formula-based convolution, whereas cost evaluation is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj); an expression for evaluating CLRj is also presented. We can conclude that, by combining the ECA method with cut-off mechanisms, utilisation of ECA in real-time CAC environments as a single-level scheme is always possible.
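The class-based convolution idea can be sketched as follows (illustrative only; the traffic parameters, rate grid and the reduction of the per-class state to a binomial for simple on/off sources are assumptions rather than values from the thesis): each class of identical sources contributes a partial state, the partial states are convolved into a global rate distribution, and the bufferless CLR is estimated from the expected rate in excess of the link capacity.

```python
# Partial state per class of identical on/off sources (binomial number of
# active sources), multi-convolution into the global state, and a bufferless
# CLR estimate. All numeric parameters below are assumptions.
import numpy as np
from scipy.stats import binom

def class_state(n_sources, p_active, peak_rate, rate_step):
    """Aggregate-rate distribution of one class on a common rate grid.
    peak_rate is assumed to be a multiple of rate_step."""
    k = np.arange(n_sources + 1)
    probs = binom.pmf(k, n_sources, p_active)
    dist = np.zeros(int(n_sources * peak_rate / rate_step) + 1)
    dist[(k * peak_rate / rate_step).astype(int)] = probs
    return dist

def global_state(classes, rate_step):
    """Multi-convolution of the partial (per-class) states."""
    dist = np.array([1.0])
    for n, p, r in classes:
        dist = np.convolve(dist, class_state(n, p, r, rate_step))
    return dist

def clr_estimate(dist, rate_step, capacity):
    """Bufferless CLR: expected excess rate divided by expected offered rate."""
    rates = np.arange(len(dist)) * rate_step
    offered = np.sum(dist * rates)
    excess = np.sum(dist * np.maximum(rates - capacity, 0.0))
    return excess / offered if offered > 0 else 0.0

# Example: two classes of on/off sources (sources, activity, peak Mbit/s)
classes = [(50, 0.1, 2.0), (20, 0.3, 10.0)]
print(clr_estimate(global_state(classes, rate_step=2.0), 2.0, capacity=100.0))
```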
Abstract:
In this paper a cell by cell anisotropic adaptive mesh technique is added to an existing staggered mesh Lagrange plus remap finite element ALE code for the solution of the Euler equations. The quadrilateral finite elements may be subdivided isotropically or anisotropically and a hierarchical data structure is employed. An efficient computational method is proposed, which only solves on the finest level of resolution that exists for each part of the domain with disjoint or hanging nodes being used at resolution transitions. The Lagrangian, equipotential mesh relaxation and advection (solution remapping) steps are generalised so that they may be applied on the dynamic mesh. It is shown that for a radial Sod problem and a two-dimensional Riemann problem the anisotropic adaptive mesh method runs over eight times faster.
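As a rough sketch of the hierarchical cell structure described above (not the paper's implementation; the class layout and method names are invented for illustration), the following Python fragment lets a quadrilateral cell be refined either isotropically or anisotropically and gathers the finest cells that exist for each part of the domain, which is where the solution would be computed.

```python
# Hierarchical quadrilateral cells with isotropic (2x2) or anisotropic (2 in
# one direction) subdivision; the solver would visit only the leaf cells.
from dataclasses import dataclass, field

@dataclass
class Cell:
    x0: float; y0: float; x1: float; y1: float     # cell bounding box
    children: list = field(default_factory=list)

    def refine_isotropic(self):
        xm, ym = 0.5 * (self.x0 + self.x1), 0.5 * (self.y0 + self.y1)
        self.children = [Cell(self.x0, self.y0, xm, ym), Cell(xm, self.y0, self.x1, ym),
                         Cell(self.x0, ym, xm, self.y1), Cell(xm, ym, self.x1, self.y1)]

    def refine_anisotropic(self, direction):
        """Split in 'x' only or 'y' only, producing two anisotropic children."""
        if direction == 'x':
            xm = 0.5 * (self.x0 + self.x1)
            self.children = [Cell(self.x0, self.y0, xm, self.y1),
                             Cell(xm, self.y0, self.x1, self.y1)]
        else:
            ym = 0.5 * (self.y0 + self.y1)
            self.children = [Cell(self.x0, self.y0, self.x1, ym),
                             Cell(self.x0, ym, self.x1, self.y1)]

    def leaves(self):
        """The finest level of resolution that exists for this part of the domain."""
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

# Example: refine the root anisotropically in x, then one child isotropically
root = Cell(0.0, 0.0, 1.0, 1.0)
root.refine_anisotropic('x')
root.children[0].refine_isotropic()
print(len(root.leaves()))   # 5 leaf cells would be solved on
```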
Abstract:
We consider the imposition of Dirichlet boundary conditions in the finite element modelling of moving boundary problems in one and two dimensions for which the total mass is prescribed. A modification of the standard linear finite element test space allows the boundary conditions to be imposed strongly whilst simultaneously conserving a discrete mass. The validity of the technique is assessed for a specific moving mesh finite element method, although the approach is more general. Numerical comparisons are carried out for mass-conserving solutions of the porous medium equation with Dirichlet boundary conditions and for a moving boundary problem with a source term and time-varying mass.
Abstract:
The simulated annealing approach to structure solution from powder diffraction data, as implemented in the DASH program, is easily amenable to parallelization at the individual run level. Very large scale increases in speed of execution can therefore be achieved by distributing individual DASH runs over a network of computers. The GDASH program achieves this by packaging DASH in a form that enables it to run under the Univa UD Grid MP system, which harnesses networks of existing computing resources to perform calculations.
Abstract:
The simulated annealing approach to structure solution from powder diffraction data, as implemented in the DASH program, is easily amenable to parallelization at the individual run level. Modest increases in speed of execution can therefore be achieved by executing individual DASH runs on the individual cores of CPUs.
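The run-level parallelism described in these two abstracts can be sketched as follows (an illustration only, using a toy cost function rather than DASH's powder-diffraction figure of merit, and local worker processes rather than GDASH or the Univa UD Grid MP system): independent simulated-annealing runs with different seeds are distributed across workers and the best result is kept.

```python
# Independent simulated-annealing runs distributed over worker processes
# (one per available CPU core); the toy cost function is an assumption.
import math
import multiprocessing as mp
import random

def annealing_run(seed, n_steps=20000):
    """One independent SA run; returns (best_cost, best_x)."""
    rng = random.Random(seed)
    x = rng.uniform(-10.0, 10.0)
    cost = x * x + 10.0 * abs(x - 3.0)               # assumed toy cost function
    best = (cost, x)
    temperature = 1.0
    for _ in range(n_steps):
        cand = x + rng.gauss(0.0, 0.5)
        cand_cost = cand * cand + 10.0 * abs(cand - 3.0)
        # Metropolis acceptance: always take improvements, sometimes take worse moves
        if cand_cost < cost or rng.random() < math.exp(-(cand_cost - cost) / temperature):
            x, cost = cand, cand_cost
            best = min(best, (cost, x))
        temperature *= 0.9995                        # simple geometric cooling schedule
    return best

if __name__ == "__main__":
    with mp.Pool() as pool:                          # one worker per core
        results = pool.map(annealing_run, range(8))  # 8 independent runs
    print(min(results))                              # best solution over all runs
```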
Abstract:
Alternative meshes of the sphere and adaptive mesh refinement could be immensely beneficial for weather and climate forecasts, but it is not clear how mesh refinement should be achieved. A finite-volume model that solves the shallow-water equations on any mesh of the surface of the sphere is presented. The accuracy and cost effectiveness of four quasi-uniform meshes of the sphere are compared: a cubed sphere, reduced latitude–longitude, hexagonal–icosahedral, and triangular–icosahedral. On some standard shallow-water tests, the hexagonal–icosahedral mesh performs best and the reduced latitude–longitude mesh performs well only when the flow is aligned with the mesh. The inclusion of a refined mesh over a disc-shaped region is achieved using either gradual Delaunay, gradual Voronoi, or abrupt 2:1 block-structured refinement. These refined regions can actually degrade global accuracy, presumably because of changes in wave dispersion where the mesh is highly nonuniform. However, using gradual refinement to resolve a mountain in an otherwise coarse mesh can improve accuracy for the same cost. The model prognostic variables are height and momentum collocated at cell centers, and (to remove grid-scale oscillations of the A grid) the mass flux between cells is advanced from the old momentum using the momentum equation. Quadratic and upwind biased cubic differencing methods are used as explicit corrections to a fast implicit solution that uses linear differencing.
Abstract:
The OECD 14 d earthworm acute toxicity test was used to determine the toxicity of copper added as copper nitrate (Cu(NO3)2), copper sulphate (CuSO4) and malachite (Cu2(OH)2CO3) to Eisenia fetida Savigny. Cu(NO3)2 and CuSO4 were applied in both aqueous (aq) and solid (s) form; Cu2(OH)2CO3 was added as a solid. Soil solution was extracted by centrifugation and analysed for copper. Two extractants [0.01 M CaCl2 and 0.005 M diethylenetriaminepentaacetic acid (DTPA)] were used as a proxy for the bioavailable copper fraction in the soil. For bulk soil copper content the calculated copper toxicity decreased in the order nitrate > sulphate > carbonate, the same order as the decreasing solubility of the metal compounds. For Cu(NO3)2 and CuSO4, the LC50s obtained were not significantly different when the compound was added in solution or in solid form. There was a significant correlation between the soil solution copper concentration and the percentage earthworm mortality for all three copper compounds (P ≤ 0.05), indicating that the soil pore water copper concentration is important for determining copper availability and toxicity to E. fetida. In soil avoidance tests the earthworms avoided the soils treated with Cu(NO3)2 (aq and s) and CuSO4 (aq and s) at all concentrations used (110-8750 µg Cu g⁻¹ and 600-8750 µg Cu g⁻¹, respectively). In soils treated with Cu2(OH)2CO3, avoidance behaviour was exhibited at all concentrations ≥ 3500 µg Cu g⁻¹. There was no significant correlation between the copper extracted by either CaCl2 or DTPA and percentage mortality. These two extractants are therefore not useful indicators of copper availability and toxicity to E. fetida.
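As a generic illustration of how an LC50 can be derived from concentration-mortality data of this kind (the numbers below are made up, not the study's measurements, and the logistic fit is only one of several dose-response models in common use), the following sketch fits a logistic curve on the log-concentration scale and solves for the concentration giving 50% mortality.

```python
# Fit a two-parameter logistic dose-response on log10(concentration) and read
# off the LC50. Concentration and mortality values are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([110., 440., 1750., 3500., 8750.])      # µg Cu g-1 (assumed)
mortality = np.array([0.05, 0.10, 0.45, 0.80, 1.00])     # fraction dead (assumed)

def logistic(log_c, log_lc50, slope):
    """Logistic dose-response curve on the log-concentration scale."""
    return 1.0 / (1.0 + np.exp(-slope * (log_c - log_lc50)))

params, _ = curve_fit(logistic, np.log10(conc), mortality, p0=[3.0, 2.0])
lc50 = 10.0 ** params[0]                                  # back-transform to µg Cu g-1
print(f"estimated LC50 = {lc50:.0f} µg Cu g-1")
```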