941 results for user-defined function (UDF)


Relevance: 100.00%

Publisher:

Abstract:

In this paper, we bound the generalization error of a class of Radial Basis Function networks, for certain well-defined function learning tasks, in terms of the number of parameters and the number of examples. We show that the total generalization error is partly due to the insufficient representational capacity of the network (because of its finite size) and partly due to insufficient information about the target function (because of the finite number of samples). We make several observations about generalization error which are valid irrespective of the approximation scheme. Our result also sheds light on ways to choose an appropriate network architecture for a particular problem.
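The two error sources described above can be summarized schematically; the placeholder terms below stand for the paper's bounds (with n the network size, l the number of examples, and d the input dimension), whose exact rates and constants are not reproduced here:

```latex
% n : number of radial basis functions (network size)
% l : number of training examples, d : input dimension
% Total generalization error = approximation error + estimation error
\mathbb{E}\left[(f_0 - \hat{f}_{n,l})^2\right]
  \;\lesssim\;
  \underbrace{\varepsilon_{\mathrm{approx}}(n)}_{\text{finite network size}}
  \;+\;
  \underbrace{\varepsilon_{\mathrm{est}}(n, l, d)}_{\text{finite sample size}}
```

The first term shrinks as the network grows; the second grows with network size for a fixed sample, which is what makes the architecture choice problem-dependent.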


Type-omega DPLs (Denotational Proof Languages) are languages for proof presentation and search that offer strong soundness guarantees. LCF-type systems such as HOL offer similar guarantees, but their soundness relies heavily on static type systems. By contrast, DPLs ensure soundness dynamically, through their evaluation semantics; no type system is necessary. This is possible owing to a novel two-tier syntax that separates deductions from computations, and to the abstraction of assumption bases, which is factored into the semantics of the language and allows for sound evaluation. Every type-omega DPL properly contains a type-alpha DPL, which can be used to present proofs in a lucid and detailed form, exclusively in terms of primitive inference rules. Derived inference rules are expressed as user-defined methods, which are "proof recipes" that take arguments and dynamically perform appropriate deductions. Methods arise naturally via parametric abstraction over type-alpha proofs. In that light, the evaluation of a method call can be viewed as a computation that carries out a type-alpha deduction. The type-alpha proof "unwound" by such a method call is called the "certificate" of the call. Certificates can be checked by exceptionally simple type-alpha interpreters, and thus they are useful whenever we wish to minimize our trusted base. Methods are statically closed over lexical environments, but dynamically scoped over assumption bases. They can take other methods as arguments, they can iterate, and they can branch conditionally. These capabilities, in tandem with the bifurcated syntax of type-omega DPLs and their dynamic assumption-base semantics, allow the user to define methods in a style that is disciplined enough to ensure soundness yet fluid enough to permit succinct and perspicuous expression of arbitrarily sophisticated derived inference rules. 
We demonstrate every major feature of type-omega DPLs by defining and studying NDL-omega, a higher-order, lexically scoped, call-by-value type-omega DPL for classical zero-order natural deduction---a simple choice that allows us to focus on type-omega syntax and semantics rather than on the subtleties of the underlying logic. We start by illustrating how type-alpha DPLs naturally lead to type-omega DPLs by way of abstraction; present the formal syntax and semantics of NDL-omega; prove several results about it, including soundness; give numerous examples of methods; point out connections to the lambda-phi calculus, a very general framework for type-omega DPLs; introduce a notion of computational and deductive cost; define several instrumented interpreters for computing such costs and for generating certificates; explore the use of type-omega DPLs as general programming languages; show that DPLs do not have to be type-less by formulating a static Hindley-Milner polymorphic type system for NDL-omega; discuss some idiosyncrasies of type-omega DPLs such as the potential divergence of proof checking; and compare type-omega DPLs to other approaches to proof presentation and discovery. Finally, a complete implementation of NDL-omega in SML-NJ is given for users who want to run the examples and experiment with the language.
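As a loose illustration of the assumption-base idea (not NDL-omega syntax; the propositions, rule names, and the `chain` method below are hypothetical), a method can be viewed as a recipe that unwinds into primitive rule applications, each checked dynamically against the assumption base:

```python
# Hypothetical mini-sketch of assumption-base semantics.
# Propositions are strings or tuples; ("if", p, q) is an implication.

class Unsound(Exception):
    pass

def modus_ponens(imp, base):
    """Primitive rule: from p and ('if', p, q) in the base, conclude q.
    Soundness is enforced dynamically, at call time."""
    tag, p, q = imp
    if tag != "if" or imp not in base or p not in base:
        raise Unsound("modus ponens not applicable")
    return q

def chain(p, q, r, base):
    """A user-defined method: a 'proof recipe' that unwinds into two
    primitive modus-ponens steps, extending the base as it goes.
    The sequence of primitive calls is the method's 'certificate'."""
    s1 = modus_ponens(("if", p, q), base)          # derives q
    s2 = modus_ponens(("if", q, r), base | {s1})   # derives r
    return s2

base = {"A", ("if", "A", "B"), ("if", "B", "C")}
print(chain("A", "B", "C", base))  # -> C
```

Because each primitive rule inspects the assumption base when invoked, no result can be produced that does not follow from the base, mirroring how DPLs achieve soundness without a static type system.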


Recent efforts in the finite element modelling of delamination have concentrated on the development of cohesive interface elements. These are characterised by a bilinear constitutive law, with an initial high positive stiffness until a threshold stress level is reached, followed by a negative tangent stiffness representing softening (or damage evolution). Complete decohesion occurs when the work done per unit area of crack surface equals a critical strain energy release rate. It is difficult to achieve a stable, oscillation-free solution beyond the onset of damage using standard implicit quasi-static methods, unless a very refined mesh is used. In the present paper, a new solution strategy based on a pseudo-transient formulation is proposed and demonstrated through the modelling of a double cantilever beam undergoing Mode I delamination. A detailed analysis of the sensitivity to the user-defined parameters is also presented. Comparisons with other published solutions using a quasi-static formulation show that the pseudo-transient formulation gives improved accuracy and oscillation-free results with coarser meshes.
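The bilinear law described above can be sketched as follows; `K`, `s_max`, and `Gc` are illustrative values, not parameters from the paper:

```python
# Hedged sketch of a bilinear cohesive (traction-separation) law.

def bilinear_traction(delta, K=1e6, s_max=30.0, Gc=0.3):
    """Traction for opening displacement delta.
    K: initial (penalty) stiffness, s_max: damage-onset stress,
    Gc: critical strain energy release rate (area under the curve)."""
    d0 = s_max / K             # separation at damage onset
    df = 2.0 * Gc / s_max      # separation at complete decohesion
    if delta <= d0:
        return K * delta                         # initial high positive stiffness
    if delta < df:
        return s_max * (df - delta) / (df - d0)  # softening (negative tangent)
    return 0.0                                   # fully decohered

# The triangle under the curve encloses exactly Gc of work per unit area:
d0, df = 30.0 / 1e6, 2.0 * 0.3 / 30.0
assert abs(0.5 * 30.0 * df - 0.3) < 1e-12
```

Choosing the failure separation as 2·Gc/s_max is what ties complete decohesion to the critical strain energy release rate.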


From a review of the technical literature, it was not apparent whether the Lagrangian or the Eulerian dispersed-phase modeling approach is more valid for simulating dilute erosive slurry flow. In this study, both modeling approaches were employed and a comparative analysis of their performance and accuracy was carried out. Because it was not possible to define, for the Eulerian model already implemented in FLUENT, a set of boundary conditions consistent with the Lagrangian impulsive equations, an Eulerian dispersed-phase model was integrated into the FLUENT code using subroutines and user-defined scalar equations. Numerical predictions obtained from the two approaches for two-phase flow in a sudden expansion were compared with measured data. Excellent agreement was attained between the predicted and observed fluid and particle velocities in the axial direction and for the kinetic energy. Erosion profiles in a sudden expansion computed using the Lagrangian scheme yielded good qualitative agreement with measured data and predicted a maximum impact angle of 29 deg at the fluid reattachment point. The Eulerian model was adversely affected by the reattachment of the fluid phase to the wall, and the simulated erosion profiles agreed with neither the Lagrangian results nor the measured data. Furthermore, the Eulerian model under-predicted the Lagrangian impact angle at all locations except the reattachment point. © 2010 American Society of Mechanical Engineers.
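A minimal sketch of the Lagrangian viewpoint, under stated assumptions: the impact angle follows from the particle velocity components at the wall, and the erosion correlation below is a common ductile-material (Finnie-style) form, not the model used in the study:

```python
import math

def impact_angle(v_normal, v_tangent):
    """Angle between the particle path and the wall, in degrees."""
    return math.degrees(math.atan2(abs(v_normal), abs(v_tangent)))

def erosion_per_impact(speed, theta_deg, C=1e-9, n=2.6):
    """Illustrative erosion correlation E = C * V**n * f(theta).
    f(theta) is the classic ductile-material angle function; C and n
    are placeholder constants, not fitted values from the paper."""
    t = math.radians(theta_deg)
    if theta_deg <= math.degrees(math.atan(1.0 / 3.0)):  # ~18.43 deg
        f = math.sin(2.0 * t) - 3.0 * math.sin(t) ** 2
    else:
        f = math.cos(t) ** 2 / 3.0
    return C * speed ** n * f
```

In a Lagrangian scheme each tracked particle reports such an angle and speed at every wall hit, which is why it resolves features like the 29 deg maximum at the reattachment point.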


The term fatigue loads on the Oyster Oscillating Wave Surge Converter (OWSC) is used to describe hydrostatic loads due to water surface elevation with quasi-static changes of state. A procedure to implement hydrostatic pressure distributions in finite element analysis of the structure is therefore desired. Currently available experimental methods enable the measurement of time-variant water surface elevation at discrete locations either on or around the body of the scale model during tank tests. This paper discusses the development of a finite element analysis procedure to implement time-variant, spatially distributed hydrostatic pressure derived from discretely measured water surface elevation. The developed method can process input data of differing temporal and spatial resolution and approximate the elevation over the flap faces with user-defined properties. The structural loads, namely the forces and moments on the body, can then be investigated by post-processing the numerical results. The method can also process surface elevation or hydrostatic pressure data from computational fluid dynamics simulations and can thus be seen as a first step towards a fluid-structure interaction model.
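The elevation-to-pressure step described above can be sketched as follows, assuming linear interpolation between gauges and fresh-water density; the gauge positions and values are hypothetical:

```python
RHO, G = 1000.0, 9.81  # fresh-water density and gravity (illustrative)

def surface_elevation(x, gauges):
    """Linearly interpolate water surface elevation between discretely
    measured gauges, given as a list of (x_position, elevation) pairs."""
    gauges = sorted(gauges)
    for (x0, e0), (x1, e1) in zip(gauges, gauges[1:]):
        if x0 <= x <= x1:
            return e0 + (e1 - e0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside gauge range")

def hydrostatic_pressure(x, z, gauges):
    """Quasi-static pressure on the flap face at height z:
    rho * g * head below the surface, zero above it."""
    head = surface_elevation(x, gauges) - z
    return RHO * G * head if head > 0.0 else 0.0
```

Evaluating this at each face node for every measured time step yields the time-variant, spatially distributed pressure field to be applied in the finite element model.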


Detailed here is a novel, low-cost experimental method for high-throughput automated fluid-sample irradiation. The sample is delivered via syringe pump to a nozzle, where it is expressed as a hanging droplet into the path of a beam of ionising radiation. The dose delivery is controlled by an upstream lead shutter, which allows the beam to reach the droplet for a user-defined period of time. After irradiation, the droplet is expressed further until it falls into one well of a standard microplate. The entire system is automated and can be operated remotely using software designed in-house, allowing use in environments deemed unsafe for the user (synchrotron beamlines, for example). Depending on the number of wells in the microplate, several droplets can be irradiated before any human interaction is necessary, and the user may choose up to 10 samples per microplate using an array of identical syringe pumps, the design of which is described here. The nozzles consistently produce droplets of 25.1 ± 0.5 μl.
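The dose-by-shutter-time relationship can be sketched as below; the function names and the per-well planning scheme are illustrative, not the in-house software's API:

```python
def shutter_time(target_dose_gy, dose_rate_gy_per_s):
    """The delivered dose is set purely by how long the upstream
    lead shutter stays open at a known beam dose rate."""
    return target_dose_gy / dose_rate_gy_per_s

def irradiation_plan(doses, dose_rate):
    """One (well_index, shutter_open_seconds) entry per droplet,
    filling consecutive wells of the microplate."""
    return [(well, shutter_time(d, dose_rate)) for well, d in enumerate(doses)]

plan = irradiation_plan([10.0, 20.0, 40.0], dose_rate=2.0)
```

For each planned entry the controller would express a droplet, hold the shutter open for the computed time, then express the droplet into the corresponding well before moving on.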


The technique of externally bonding fiber-reinforced polymer (FRP) composites has become very popular worldwide for retrofitting existing reinforced concrete (RC) structures. Debonding of FRP from the concrete substrate is a typical failure mode in such strengthened structures. The bond behavior between FRP and concrete thus plays a crucial role in these structures. The FRP-to-concrete bond behavior has been extensively investigated experimentally, commonly using a single or double shear test of the FRP-to-concrete bonded joint. Comparatively, much less research has been concerned with numerical simulation, chiefly due to difficulties in the accurate modeling of the complex behavior of concrete. This paper presents a simple but robust finite-element (FE) model for simulating the bond behavior in the entire debonding process for the single shear test. A concrete damage plasticity model is proposed to capture the concrete-to-FRP bond behavior. Numerical results are in close agreement with test data, validating the model. In addition to accuracy, the model has two further advantages: it only requires the basic material parameters (i.e., no arbitrary user-defined parameter such as the shear retention factor is required) and it can be directly implemented in the FE software ABAQUS.


With the availability of a wide range of cloud Virtual Machines (VMs), it is difficult to determine which VMs can maximise the performance of an application. Benchmarking is commonly used to this end to capture the performance of VMs. Most cloud benchmarking techniques are heavyweight: time-consuming processes that must benchmark the entire VM in order to obtain accurate benchmark data. Such benchmarks cannot be used in real-time on the cloud and incur extra costs even before an application is deployed.

In this paper, we present lightweight cloud benchmarking techniques that execute quickly and can be used in near real-time on the cloud. The exploration of lightweight benchmarking techniques is facilitated by the development of DocLite - Docker Container-based Lightweight Benchmarking. DocLite is built on the Docker container technology, which allows a user-defined portion (such as the memory size and the number of CPU cores) of the VM to be benchmarked. DocLite operates in two modes. In the first mode, containers are used to benchmark a small portion of the VM to generate performance ranks. In the second mode, historic benchmark data is used along with the first mode, as a hybrid, to generate VM ranks. The generated ranks are evaluated against three scientific high-performance computing applications. The proposed techniques are up to 91 times faster than a heavyweight technique which benchmarks the entire VM. It is observed that the first mode can generate ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, respectively. The hybrid mode improves the correlation slightly, but the first mode is sufficient for benchmarking cloud VMs.
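The two ranking modes can be sketched as follows; the VM names, scores, and the 50/50 blending weight are illustrative, not DocLite's actual metrics:

```python
# Sketch of the two ranking modes described above.

def rank(scores):
    """Rank VM names by benchmark score, best (highest) first."""
    return sorted(scores, key=scores.get, reverse=True)

def hybrid_scores(container_scores, historic_scores, w=0.5):
    """Second mode: blend container benchmark scores with historic data
    (w is an assumed blending weight)."""
    return {vm: w * container_scores[vm] + (1.0 - w) * historic_scores[vm]
            for vm in container_scores}

container = {"m1.small": 0.61, "m1.medium": 0.78, "m1.large": 0.92}
historic  = {"m1.small": 0.58, "m1.medium": 0.81, "m1.large": 0.90}
print(rank(hybrid_scores(container, historic)))
# -> ['m1.large', 'm1.medium', 'm1.small']
```

The container scores would come from benchmarks run inside a resource-limited container on each VM; the first mode is simply `rank(container)` without the historic blend.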


Existing benchmarking methods are time-consuming processes, as they typically benchmark the entire Virtual Machine (VM) in order to generate accurate performance data, making them less suitable for real-time analytics. The research in this paper aims to surmount this challenge by presenting DocLite - Docker Container-based Lightweight benchmarking tool. DocLite explores lightweight cloud benchmarking methods for rapidly executing benchmarks in near real-time. DocLite is built on the Docker container technology, which allows a user-defined memory size and number of CPU cores of the VM to be benchmarked. The tool incorporates two benchmarking methods: the first, referred to as the native method, employs containers to benchmark a small portion of the VM and generate performance ranks; the second uses historic benchmark data along with the native method, as a hybrid, to generate VM ranks. The proposed methods are evaluated on three use-cases and are observed to be up to 91 times faster than benchmarking the entire VM. In both methods, small containers provide the same quality of rankings as a large container. The native method generates ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, compared against benchmarking the whole VM. The hybrid method did not improve the quality of the rankings significantly.


Introduction: Abundant evidence shows that regular physical activity (PA) is an effective strategy for preventing obesity in people of diverse socioeconomic status (SES) and racial groups. The proportion of PA performed in parks and how this differs by proximate neighborhood SES has not been thoroughly investigated. The present project analyzes online public web data feeds to assess differences in outdoor PA by neighborhood SES in St. Louis, MO, USA.
Methods: First, running and walking routes submitted by users of the website MapMyRun.com were downloaded. The website enables participants to plan, map, record, and share their exercise routes and outdoor activities like runs, walks, and hikes in an online database. Next, the routes were visually illustrated using geographic information systems. Thereafter, using park data and 2010 Missouri census poverty data, the odds of running and walking routes traversing a low-SES neighborhood, and traversing a park in a low-SES neighborhood were examined in comparison to the odds of routes traversing higher-SES neighborhoods and higher-SES parks.
Results: Results show that a majority of running and walking routes occur in or at least traverse through a park. However, this finding does not hold when comparing low-SES neighborhoods to higher-SES neighborhoods in St. Louis. The odds of running in a park in a low-SES neighborhood were 54% lower than running in a park in a higher-SES neighborhood (OR = 0.46, CI = 0.17-1.23). The odds of walking in a park in a low-SES neighborhood were 17% lower than walking in a park in a higher-SES neighborhood (OR = 0.83, CI = 0.26-2.61).
Conclusion: The novel methods of this study include the use of inexpensive, unobtrusive, and publicly available web data feeds to examine PA in parks and differences by neighborhood SES. Emerging technologies like MapMyRun.com present significant advantages to enhance tracking of user-defined PA across large geographic and temporal settings.
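The reported odds ratios can be reproduced from a 2x2 contingency table; the counts used in the example below are hypothetical, since the raw counts are not given here:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald confidence interval for a 2x2 table:
    a = low-SES routes through a park,    b = low-SES routes not in a park,
    c = higher-SES routes through a park, d = higher-SES routes not in a park."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = odds_ratio * math.exp(-z * se_log_or)
    hi = odds_ratio * math.exp(z * se_log_or)
    return odds_ratio, lo, hi
```

With the paper's data this calculation yields OR = 0.46 (CI 0.17-1.23) for running and OR = 0.83 (CI 0.26-2.61) for walking; note that both intervals include 1, so the differences are not statistically significant at the 95% level.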


Home automation is an area of great interest with much room for exploration, aiming at the automatic and autonomous management of household resources and providing greater comfort to users. Moreover, economic and environmental benefits are increasingly being included in this concept in order to ensure a sustainable future. Water heating (by electric means) is one of the largest contributors to the total energy consumption of a household. In this context arises the topic "low-complexity intelligent algorithms", originating from a partnership between the Department of Electronics, Telecommunications and Informatics (DETI) of the Universidade de Aveiro and Bosch Termotecnologia SA, which aims at the development of so-called "intelligent" algorithms, that is, algorithms with some capacity for learning and autonomous operation. The algorithms must be adapted to 8-bit processing units so that they can equip small domestic appliances, more specifically electric water-heating tanks. Part of the challenge is therefore related to the computational restrictions of 8-bit microcontrollers. In the specific case of this work, water-temperature sensors in the tank were established as the only source of information external to the algorithms, together with user-defined parameters that set the maximum and minimum water-temperature thresholds. On this basis, the developed algorithms rely on the hot-water consumption profile observed over each week to try to predict future water draws and, consequently, act appropriately, bringing forward or postponing the heating of the tank water. The goal is to achieve an advantageous trade-off between energy savings and user comfort (hot water), without any need for direct intervention by the end user.
The envisaged solution also includes the development of a simulator that allows the performance of the developed algorithms to be observed, evaluated, and compared.
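A minimal sketch of the weekly-profile idea, assuming one demand bin per hour of the week and an exponentially weighted update; the threshold and update rate are illustrative, not the thesis's algorithms:

```python
# Hedged sketch: learn a weekly hot-water demand profile and decide
# whether to pre-heat; cheap enough in principle for an 8-bit MCU.

HOURS = 7 * 24                 # one bin per hour of the week
profile = [0.0] * HOURS        # learned demand per bin (litres)

def observe_draw(hour_of_week, litres, alpha=0.25):
    """Exponentially weighted update of the bin's expected demand."""
    profile[hour_of_week] += alpha * (litres - profile[hour_of_week])

def should_preheat(hour_of_week, threshold=5.0):
    """Heat in advance if meaningful demand is expected in the next hour."""
    return profile[(hour_of_week + 1) % HOURS] >= threshold
```

Within the user-defined temperature thresholds, the controller would heat ahead of predicted draws and let the tank idle otherwise, trading energy savings against the risk of a cold draw.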


With the increasing complexity of current networks, the need for Self-Organizing Networks (SON), which aim to automate most of the associated radio planning and optimization tasks, has become evident. Within SON, this paper aims to optimize the Neighbour Cell List (NCL) for Long Term Evolution (LTE) evolved NodeBs (eNBs). An algorithm composed of three decision methods was developed: distance-based, Radio Frequency (RF) measurement-based, and Handover (HO) statistics-based. The distance-based decision proposes a new NCL taking into account the eNB location and interference tiers, based on the quadrant method. The other two decisions consider signal strength measurements and HO statistics, respectively; they also define a ranking for each eNB and decide neighbour-relation additions/removals based on user-defined constraints. The algorithms were developed and implemented on top of an existing professional radio network optimization tool. Several case studies were produced using real data from a Portuguese LTE mobile operator. © 2014 IEEE.
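A simplified sketch of the distance-based decision, assuming the quadrant method reduces to keeping the nearest eNBs in each quadrant around the serving site (the paper's interference-tier rules are more elaborate):

```python
import math

def quadrant(dx, dy):
    """Which of the four quadrants around the serving eNB a site falls in."""
    return (dx >= 0.0, dy >= 0.0)

def propose_ncl(serving, candidates, per_quadrant=2):
    """Propose a neighbour cell list: the per_quadrant nearest candidate
    eNBs in each quadrant around the serving eNB's location."""
    sx, sy = serving
    buckets = {}
    for name, (x, y) in candidates.items():
        buckets.setdefault(quadrant(x - sx, y - sy), []).append(
            (math.hypot(x - sx, y - sy), name))
    ncl = []
    for entries in buckets.values():
        ncl += [name for _, name in sorted(entries)[:per_quadrant]]
    return sorted(ncl)
```

Splitting by quadrant keeps the proposed list spatially balanced rather than dominated by one dense direction; the RF and HO decisions would then add or remove relations from this baseline.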


Automatic Vehicle Location (AVL) systems are nowadays part of the daily operations of many companies. This technology has evolved significantly over the last decade, becoming more accessible and easier to use. This work consists of the development of a vehicle location system for Android smartphones. To this end, two applications were developed: a location application for Android smartphones and a monitoring web application. The location application collects GPS location data and establishes a Bluetooth piconet, thus allowing simultaneous communication with a vehicle's engine control unit (ECU) through an OBD-II/Bluetooth adapter and with up to seven Bluetooth sensors/devices that can be installed in the vehicle. The data collected by the Android application are sent periodically (at a user-defined time interval) to a web server. The monitoring web application allows a fleet manager to monitor the vehicles registered in or circulating in the system, viewing their geographic position on an interactive map (Google Maps), vehicle (OBD-II) data, and Bluetooth sensor/device readings for each location sent by the Android application. The developed system works as expected. The Android application was tested numerous times and at different vehicle speeds, and it can operate in two distinct modes, data logger and data pusher, depending on the state of the smartphone's Internet connection. Smartphone-based location systems have advantages over conventional systems, namely portability, ease of installation, and low cost.
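The two operating modes (data pusher when online, data logger when offline) can be sketched as follows; `send` and the record format are placeholders, not the application's API:

```python
# Hedged sketch of the data-logger / data-pusher mode switch.

class Tracker:
    def __init__(self, send):
        self.send = send   # uploads one record; returns True on success
        self.log = []      # local buffer used while offline

    def report(self, record, online):
        if online:
            # data-pusher mode: flush any offline backlog first,
            # then upload the new record
            while self.log and self.send(self.log[0]):
                self.log.pop(0)
            if not self.log and self.send(record):
                return
        # data-logger mode (no connection, or an upload failed)
        self.log.append(record)
```

Flushing the backlog before the new record preserves the chronological order of positions seen by the monitoring web application.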


Thy-1 is an abundant neuronal glycoprotein of poorly defined function. We recently provided evidence indicating that Thy-1 clusters a beta3-containing integrin in astrocytes to induce tyrosine phosphorylation, RhoA activation, and the formation of focal adhesions and stress fibers. To date, the alpha subunit partner of beta3 integrin in DI TNC1 astrocytes is unknown. Similarly, the ability of neuronal, membrane-bound Thy-1 to trigger astrocyte signaling via integrin engagement remains speculative. Here, evidence was obtained that alphav forms an alphavbeta3 heterodimer in DI TNC1 astrocytes. In neuron-astrocyte association assays, the presence of either anti-alphav or anti-beta3 integrin antibodies reduced cell-cell interaction, demonstrating the requirement of both integrin subunits for this association. Moreover, anti-Thy-1 antibodies blocked stimulation of astrocytes by neurons but not the binding of the two cell types. Thus, neuron-astrocyte association involves binding between molecular components in addition to the Thy-1-integrin pair; however, the signaling events leading to focal adhesion formation in astrocytes depend exclusively on the latter interaction. Additionally, wild-type (RLD) but not mutated (RLE) Thy-1 was shown by Surface Plasmon Resonance analysis to interact directly with alphavbeta3 integrin. This interaction was promoted by divalent cations and was species-independent. Together, these results demonstrate that the alphavbeta3 integrin heterodimer interacts directly with Thy-1 present on neuronal cells to stimulate astrocytes.


Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During the recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed in respect of its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the wanted global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process where the solution candidates are distributed programs. The objective functions rate how close these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. 
This way, a description of the global behavior of a distributed system is translated automatically to programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations designed by us, called Rule-based Genetic Programming (RBGP, eRBGP). We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and, in most cases, was superior to the other representations.
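The evolutionary loop described above can be sketched as follows, shrunk to fixed-length genomes and a toy objective for brevity (real Genetic Programming evolves variable-shape programs and rates them in randomized network simulations):

```python
import random

def evolve(fitness, length=16, pop_size=30, generations=60, seed=1):
    """Minimal elitist evolutionary loop: selection among the most
    promising candidates, one-point crossover, and point mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:2]                        # elitism: keep the best two
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:10], 2)   # select among the top candidates
            cut = rng.randrange(1, length)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(length)] ^= 1  # point mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve(sum)   # toy objective: maximise the number of ones
```

In the thesis's setting, `fitness` would instead run the candidate program on every node of several randomized network simulations and score how closely the emergent global behavior matches the specification.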