942 results for acceleration purpose


Relevance:

20.00%

Publisher:

Abstract:

We consider a second-order variational problem depending on the covariant acceleration, which is related to the notion of Riemannian cubic polynomials. This problem and the corresponding optimal control problem are described in the context of higher order tangent bundles using geometric tools. The main tool, a presymplectic variant of Pontryagin’s maximum principle, allows us to study the dynamics of the control problem.
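For reference, a standard formulation of this kind of second-order functional and the Riemannian cubic equation it leads to, stated here only as background and up to the paper's normalisation and curvature sign conventions:

```latex
% Second-order functional underlying Riemannian cubic polynomials
J(\gamma) = \frac{1}{2}\int_{0}^{T}
  \Big\langle \frac{D^{2}\gamma}{dt^{2}},\, \frac{D^{2}\gamma}{dt^{2}} \Big\rangle \, dt,
\qquad
\frac{D^{2}\gamma}{dt^{2}} = \nabla_{\dot\gamma}\dot\gamma,
% whose extremals satisfy the Riemannian cubic equation
\nabla_{\dot\gamma}^{3}\dot\gamma \;+\; R\big(\nabla_{\dot\gamma}\dot\gamma,\, \dot\gamma\big)\dot\gamma \;=\; 0 .
```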

Relevance:

20.00%

Publisher:

Abstract:

The main goal of the LISA Pathfinder (LPF) mission is to estimate the acceleration noise models of the overall LISA Technology Package (LTP) experiment on board. This will be of crucial importance for future space-based gravitational-wave (GW) detectors such as eLISA. Here, we present the Bayesian analysis framework used to process the planned system identification experiments designed for that purpose. In particular, we focus on the analysis strategies to predict the accuracy of the parameters that describe the system in all degrees of freedom. The data sets were generated during the latest operational simulations organised by the data analysis team, and this work is part of the LTPDA MATLAB toolbox.
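As general background on this kind of Bayesian system identification (a textbook sketch, not the specific LTPDA implementation): given data d, a parametrised system response h(θ), and noise covariance C, the posterior and the usual Fisher-matrix estimate of parameter accuracy read

```latex
p(\theta \mid d) \;\propto\; p(d \mid \theta)\, p(\theta), \qquad
\log p(d \mid \theta) = -\tfrac{1}{2}\,\big(d - h(\theta)\big)^{\mathsf T} C^{-1} \big(d - h(\theta)\big) + \mathrm{const},
% expected parameter accuracy from the Fisher information matrix
\Gamma_{ij} = \big(\partial_{\theta_i} h\big)^{\mathsf T} C^{-1} \big(\partial_{\theta_j} h\big), \qquad
\sigma_{\theta_i} \approx \sqrt{\big(\Gamma^{-1}\big)_{ii}} .
```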

Relevance:

20.00%

Publisher:

Abstract:

The report of the proceedings of the New Delhi workshop on the SSF Guidelines (Voluntary Guidelines for Securing Sustainable Small-scale Fisheries in the Context of Food Security and Poverty Eradication). The workshop brought together 95 participants from 13 states, representing civil society organizations, governments, FAO, and fishworker organizations from both the marine and inland fisheries sectors. This report will be useful for fishworker organizations, researchers, policy makers, members of civil society, and anyone interested in small-scale fisheries, tenure rights, social development, livelihoods, post-harvest and trade, and disasters and climate change.

Relevance:

20.00%

Publisher:

Abstract:

The use of the ‘commission-accession’ principle as a mechanism for sustainable collecting in public museums and galleries has been significantly under-researched, only recently attracting attention from national funding bodies in the United Kingdom (UK). This research has assessed an unfolding situation and provided a body of current evaluative evidence for commission-based acquisitions, together with a model for curators to use in future contemporary art purchases. ‘Commission-accession’ is a practice increasingly used by European and American museums, yet it has seen little uptake in the UK. Very recent examples demonstrate that new works produced via commissioning and then entered into permanent collections have significant financial and audience benefits that UK museums could harness by drawing on the expertise of local and national commissioning organisations. Very little evaluative information is available on inter-institutional precedents in the United States (US) or on ‘achat par commande’ in France, and there is as yet no literature that investigates the ambition for and viability of such models in the UK. This thesis addresses both of these areas and provides evaluative case studies that will be of particular value to curators who seek sustainable ways to build their contemporary art collections. It draws on a survey of 82 museums and galleries across the UK conducted for this research, which provides a picture of where and how ‘commission-accession’ has been applied and demonstrates its impact as a strategy. In addition, interviews were undertaken with artists and curators in the UK, US, and France on the social, economic, and cultural implications of ‘commission-accession’ processes. These have shed new light on issues inherent to the commissioning of contemporary art, such as communication, trust, and risk, as well as drawing attention to the benefits and challenges involved in commissioning as-yet-unmade works of art.

Relevance:

20.00%

Publisher:

Abstract:

Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification, which is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques, where models are used to replicate the behaviour of the actual system. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to:

• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (Section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains can be made from multicore parallelism (Section 5.3.2).
• Show that a simulator using this approach outperforms an existing commercial simulator on a standard workstation (Section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (Section 5.3.5).

To evaluate ZSIM, two types of test circuits were used:

1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators.
2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and it allowed testing of a range of very large circuits, larger than those for which it was possible to obtain open-source files.

The experimental results show that, with SIMD acceleration and multicore parallelism, ZSIM achieved a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much larger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. The ZSIM simulator running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost, and the results show that the Xeon Phi is competitive with simulation on GPUs while handling much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit treatment of it in the architecture of the simulator itself, whereas targeting GPUs requires explicit cache management in the program, which increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both the AMD and Xeon Phi machines, and the architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
To conclude, the two main achievements are restated as follows. The primary achievement of this work was showing that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that goes beyond the scale range previously publicly available, based on prior work showing that the synthesis technique is valid.
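To illustrate why a flat, lock-free data layout lends itself to data-parallel gate evaluation, here is a deliberately simplified word-parallel sketch in Java; it is not the ZSIM code (which relies on SIMD gather instructions on the Xeon Phi), and all names are illustrative:

```java
// Simplified word-parallel logic simulation sketch (illustrative only; not the ZSIM code).
// Gates are stored in flat arrays (structure of arrays) and evaluated in topological
// (levelized) order, so no locks are needed: each gate writes only its own output slot.
// Packing 64 input patterns into one long gives 64-way bit-level parallelism per gate.
public final class WordParallelSim {
    static final int AND = 0, OR = 1, XOR = 2, NOT = 3;

    final int[] op;      // gate type per gate
    final int[] inA;     // index of first input signal
    final int[] inB;     // index of second input signal (unused for NOT)
    final long[] value;  // packed signal values: bit k = value of the signal in pattern k

    WordParallelSim(int[] op, int[] inA, int[] inB, int numSignals) {
        this.op = op; this.inA = inA; this.inB = inB;
        this.value = new long[numSignals];
    }

    // Evaluate all gates once, assuming gate g's output is signal (primaryInputs + g)
    // and gates are already ordered so inputs are computed before they are read.
    void evaluate(int primaryInputs) {
        for (int g = 0; g < op.length; g++) {
            long a = value[inA[g]];
            long b = (op[g] == NOT) ? 0L : value[inB[g]];
            long out;
            switch (op[g]) {
                case AND: out = a & b; break;
                case OR:  out = a | b; break;
                case XOR: out = a ^ b; break;
                default:  out = ~a;    break; // NOT
            }
            value[primaryInputs + g] = out;   // sole writer of this slot: lock-free
        }
    }
}
```

Because each gate writes only its own output slot and gates are processed in topological order, gates within one level are independent and can be split across cores or vector lanes without synchronisation, which is the property the thesis exploits.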

Relevance:

20.00%

Publisher:

Abstract:

Call Level Interfaces (CLI) play a key role in the business tiers of relational and some NoSQL database applications whenever fine-tuned control between application tiers and the host databases is a key requirement. Unfortunately, in spite of this significant advantage, CLI are low-level APIs and therefore do not address high-level architectural requirements. Among the examples we emphasize two situations: a) the choice of whether or not to decouple the development process of business tiers from that of application tiers, and b) the need to automatically adapt business tiers to new business and/or security needs at runtime. To tackle these CLI drawbacks, and simultaneously keep their advantages, this paper proposes an architecture relying on CLI from which multi-purpose business tier components are built, herein referred to as Adaptable Business Tier Components (ABTC). Beyond the reference architecture, this paper presents a proof of concept based on Java and Java Database Connectivity (an example of CLI).
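For readers unfamiliar with CLI-level access, a minimal JDBC sketch of the kind of fine-grained control the paper refers to; the connection string, credentials, and table are placeholders:

```java
// Minimal JDBC (CLI) usage sketch: the application manages statements and result
// sets directly, which gives fine-grained control but exposes low-level detail.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CliExample {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:postgresql://localhost/demo"; // placeholder connection string
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT id, name FROM customers WHERE region = ?")) {
            ps.setString(1, "EU");                        // bind parameter at the CLI level
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}
```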

Relevance:

20.00%

Publisher:

Abstract:

Call Level Interfaces (CLI) are low-level APIs that play a key role in database applications whenever fine-tuned control between application tiers and the host databases is a key requirement. Unfortunately, in spite of this significant advantage, CLI were not designed to address organizational requirements or contextual runtime requirements. Among the examples we emphasize the choice of whether or not to decouple the development process of business tiers from that of application tiers, and the need to automatically adapt to new business and/or security needs at runtime. To tackle these CLI drawbacks, and simultaneously keep their advantages, this paper proposes an architecture relying on CLI from which multi-purpose business tier components are built, herein referred to as Adaptable Business Tier Components (ABTC). This paper presents the reference architecture for those components and a proof of concept based on Java and Java Database Connectivity (an example of CLI).
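The paper's ABTC design is not reproduced here; the following is only a hypothetical Java illustration of the general idea of hiding CLI access behind a business-tier component whose behaviour (here, an access policy) can be swapped at runtime. All names are invented for the example:

```java
// Hypothetical illustration only (not the paper's ABTC design): a business-tier
// component wraps CLI (JDBC) access behind an interface, and a runtime-replaceable
// policy decides which operations are currently allowed.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

interface AccessPolicy {
    boolean allows(String operation);           // e.g. "read:customers"
}

final class CustomerComponent {
    private final Connection con;               // CLI connection, hidden from callers
    private volatile AccessPolicy policy;       // swappable at runtime

    CustomerComponent(Connection con, AccessPolicy initialPolicy) {
        this.con = con;
        this.policy = initialPolicy;
    }

    // Adapt the component to new business/security needs without redeploying callers.
    void updatePolicy(AccessPolicy newPolicy) {
        this.policy = newPolicy;
    }

    String findName(int id) throws SQLException {
        if (!policy.allows("read:customers")) {
            throw new SQLException("operation not permitted by current policy");
        }
        try (PreparedStatement ps =
                     con.prepareStatement("SELECT name FROM customers WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}
```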

Relevance:

20.00%

Publisher:

Abstract:

Purpose. To investigate the influence of diadenosine polyphosphates on the rate of corneal epithelial cell migration. Methods. Primary corneal epithelial cell cultures were obtained from New Zealand White rabbits. For immunocytochemistry, cells were fixed with 4% paraformaldehyde (PFA), incubated with a cytokeratin 3 primary antibody followed by an FITC-labeled mouse secondary IgG antibody, and observed under confocal microscopy. Migration studies were performed on confluent monolayers that were wounded with a pipette tip and challenged with different di- and mononucleotides, with or without a P2 antagonist (n = 8 per treatment). For concentration–response analysis, compounds were tested at concentrations ranging from 10⁻⁸ to 10⁻³ M (n = 8). The stability of the dinucleotides was assayed by HPLC using an isocratic method (n = 4). Results. The cells under study were confirmed as corneal epithelial cells by immunocytochemical analysis. Cell migration experiments showed that Ap4A, UTP, and ATP accelerated the rate of healing (5, 2.75, and 3 hours, respectively; P < 0.05; P < 0.001), whereas Ap3A, Ap5A, and UDP delayed it (6.5, 10, and 2 hours, respectively; P < 0.05). ADP did not modify the rate of migration. Antagonist experiments demonstrated that Ap4A and Ap3A activate different P2Y receptors mediating corneal wound-healing acceleration and delay. Concerning the possible degradation of the dinucleotides, it was almost impossible to detect any products resulting from their cleavage. Conclusions. Based on the pharmacological profile of all the compounds tested, the two main P2Y receptors present in these corneal cells are a P2Y2 receptor that accelerates the rate of healing and a P2Y6 receptor that delays it.

Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: The goal of the present study was to use a three-dimensional (3D) gradient echo volume in combination with a fat-selective excitation as a 3D motion navigator (3D FatNav) for retrospective correction of microscopic head motion during high-resolution 3D structural scans of extended duration. The fat excitation leads to a 3D image that is itself sparse, allowing high parallel imaging acceleration factors, with the additional advantage of minimal disturbance of the water signal used for the host sequence. METHODS: A 3D FatNav was inserted into two structural protocols: an inversion-prepared gradient echo at 0.33 × 0.33 × 1.00 mm resolution and a turbo spin echo at 600 μm isotropic resolution. RESULTS: Motion estimation was possible with high precision, allowing retrospective motion correction to yield clear improvements in image quality, especially in the conspicuity of very small blood vessels. CONCLUSION: The highly accelerated 3D FatNav allowed motion correction with noticeable improvements in image quality, even for head motion which was small compared with the voxel dimensions of the host sequence. Magn Reson Med 75:1030-1039, 2016. © 2015 Wiley Periodicals, Inc.
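As background on why navigator-estimated rigid-body motion can be corrected retrospectively (a general property of Fourier imaging, not a description of this paper's specific pipeline): a translation of the object by Δr only adds a linear phase to the acquired k-space samples,

```latex
S_{\text{moved}}(\mathbf{k}) \;=\; S(\mathbf{k})\, e^{-\,i\,2\pi\,\mathbf{k}\cdot\Delta\mathbf{r}}
\quad \text{(up to the Fourier sign convention)},
```

so it can be undone by multiplying each sample by the opposite phase, while a rotation of the object rotates the sampled k-space coordinates by the same angle; the navigator supplies the estimates of these six rigid-body parameters.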

Relevance:

20.00%

Publisher:

Abstract:

We show that a wide-angle converging wave may be transformed into a shape-preserving accelerating beam with a beam width near the diffraction limit. For that purpose, we followed a strategy specifically conceived for the acceleration of nonparaxial laser beams, in contrast to the well-known method by Siviloglou et al (2007 Phys. Rev. Lett. 99 213901). The concept of optical near-field shaping is applied to the design of non-flat ultra-narrow diffractive optical elements. The engineered curvilinear caustic can be set up by the beam emerging from a dynamic assembly of elementary gratings, which make it possible to modify the effective refractive index of the metamaterial by arranging them in controlled orientations. This light-shaping process, besides being of theoretical interest, is expected to open up a wide range of broadband applications.
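For context, the paraxial accelerating (Airy) beam of Siviloglou et al, which the paper contrasts with, is the solution of the paraxial wave equation

```latex
i\,\frac{\partial \phi}{\partial \xi} + \frac{1}{2}\,\frac{\partial^{2} \phi}{\partial s^{2}} = 0,
\qquad
\phi(s,\xi) = \mathrm{Ai}\!\left(s - \tfrac{\xi^{2}}{4}\right)
\exp\!\left(i\,\tfrac{s\,\xi}{2} - i\,\tfrac{\xi^{3}}{12}\right),
```

whose intensity profile follows the parabolic trajectory s = ξ²/4 in normalised coordinates; the nonparaxial approach discussed here instead engineers the curved caustic directly through near-field shaping.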

Relevance:

20.00%

Publisher:

Abstract:

Fine-scale differences in behaviour and habitat use have important ecological implications, but have rarely been examined in marine gastropods. We used tri-axial accelerometer loggers to estimate activity levels and movement patterns of the juvenile queen conch Lobatus gigas (n = 11) in 2 habitat types in Eleuthera, The Bahamas. In 2 manipulations in nearshore areas, queen conchs were equipped with accelerometers and released in adjacent coral rubble or seagrass habitats. Queen conchs were located approximately every 6 h during daylight by snorkeling to measure individual differences in linear distance moved, and after 24 h they were relocated to the alternate habitat (24 h in each habitat). We found significant inter-individual variability in activity levels, but more consistent levels of activity between the 2 habitat types within individual queen conchs. Four (36%) of the individuals placed in seagrass moved back to the adjacent coral rubble habitat, suggesting selectivity for coral rubble. Individuals showed variable behavioural responses when relocated to the less preferable seagrass habitat, which may be related to differing stress-coping styles. Our results suggest that behavioural variability between individuals may be an important factor driving movement and habitat use in queen conch and, potentially, their susceptibility to human stressors. This study provides evidence of diverse behavioural (activity) patterns and habitat selectivity in a marine gastropod and highlights the utility of accelerometer biologgers for continuously monitoring animal behaviour in the wild.