941 results for fixed point method


Relevance:

30.00%

Publisher:

Abstract:

The global increase in the penetration of renewable energy is pushing electrical power systems into uncharted territory, especially in terms of transient and dynamic stability. In particular, the growing penetration of wind generation in European power networks is, at times, displacing a significant capacity of conventional synchronous generation with fixed-speed induction generation and, now more commonly, doubly-fed induction generators. The impact of such changes in the generation mix requires careful monitoring to assess the effect on transient and dynamic stability. This paper presents a measurement-based method for the early detection of power system oscillations, with attention to mode damping, in order to raise alarms and develop strategies to actively improve power system dynamic stability and security. A method is developed based on the wavelet transform and support vector data description (SVDD) to detect oscillation modes in wind farm output power, which may excite dynamic instabilities in the wider system. The wavelet transform is used as a filter to identify oscillations in different frequency bands, while SVDD is used to extract dominant features from different scales and generate an assessment boundary according to the extracted features. Poorly damped, large-magnitude, or resonant oscillations can be flagged to the system operator to reduce the risk of system instability. Method evaluation is exemplified using real data from a chosen wind farm.
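A minimal sketch of the two-stage screening described above, assuming uniformly sampled wind farm output power. PyWavelets provides the band-wise decomposition, and a one-class SVM (scikit-learn) stands in for the SVDD boundary, since the two techniques are closely related; the window length, wavelet choice, feature set and kernel parameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pywt
from sklearn.svm import OneClassSVM

def band_features(power, wavelet="db4", level=5):
    """Decompose a window of output power and summarise each frequency band."""
    coeffs = pywt.wavedec(power, wavelet, level=level)
    # One energy and one peak-amplitude feature per scale (illustrative choice).
    return np.array([[np.sum(c**2), np.max(np.abs(c))] for c in coeffs]).ravel()

def fit_boundary(normal_windows):
    """Learn an assessment boundary from windows recorded during normal operation."""
    X = np.vstack([band_features(w) for w in normal_windows])
    return OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X)

def raise_alarm(model, window):
    """Return True if the window falls outside the learned boundary."""
    return model.predict(band_features(window).reshape(1, -1))[0] == -1

# Example with synthetic data: normal noise vs. a poorly damped 0.5 Hz oscillation.
rng = np.random.default_rng(0)
fs, n = 50.0, 1024                      # assumed 50 Hz sampling, ~20 s windows
t = np.arange(n) / fs
normal = [rng.normal(size=n) for _ in range(200)]
model = fit_boundary(normal)
oscillating = rng.normal(size=n) + 2.0 * np.exp(0.05 * t) * np.sin(2 * np.pi * 0.5 * t)
print(raise_alarm(model, oscillating))  # the growing oscillation should fall outside the boundary
```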

Relevance:

30.00%

Publisher:

Abstract:

We introduce a method for measuring the full stress tensor in a crystal utilising the properties of individual point defects. By measuring the perturbation to the electronic states of three point defects with C3v symmetry in a cubic crystal, sufficient information is obtained to construct all six independent components of the symmetric stress tensor. We demonstrate the method using photoluminescence from nitrogen-vacancy colour centres in diamond. The method breaks the inverse relationship between spatial resolution and sensitivity that is inherent to existing bulk strain measurement techniques, and thus offers a route to nanoscale strain mapping in diamond and other materials in which individual point defects can be interrogated.
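Once each measured spectral shift is modelled as linear in the six stress components, the reconstruction reduces to a linear inverse problem. The sketch below shows only that generic least-squares step; the response matrix entries are placeholders, not the spin-stress coupling constants of the nitrogen-vacancy centre.

```python
import numpy as np

# Six independent components of the symmetric stress tensor, ordered as
# sigma = (s_xx, s_yy, s_zz, s_yz, s_xz, s_xy).

# Assumed linear response: each measured shift_i = R[i, :] @ sigma.
# Three defect orientations with two observables each give six equations;
# the coupling values below are placeholders, not NV spin-stress constants.
R = np.eye(6)
R[0, 1] = R[1, 2] = R[2, 0] = 0.3
R[3, 4] = R[4, 5] = R[5, 3] = 0.2

def recover_stress(shifts, response=R):
    """Least-squares recovery of the six stress components from measured shifts."""
    sigma, *_ = np.linalg.lstsq(response, shifts, rcond=None)
    return sigma

# Round-trip check with a synthetic stress state.
true_sigma = np.array([1.0, -0.5, 0.2, 0.1, 0.0, 0.3])
print(np.allclose(recover_stress(R @ true_sigma), true_sigma))  # True
```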

Relevance:

30.00%

Publisher:

Abstract:

This paper reports the detailed description and validation of a fully automated, computer-controlled analytical method to spatially probe the gas composition and thermal characteristics in packed bed systems. This method has been designed to limit the invasiveness of the probe, a characteristic assessed using CFD. The thermocouple is aligned with the sampling holes to enable simultaneous recording of the gas composition and temperature profiles. This analysis technique has been validated by studying CO oxidation over a 1% Pt/Al2O3 catalyst. The resultant profiles have been compared with a micro-kinetic model to further assess the strength of the technique.

Relevance:

30.00%

Publisher:

Abstract:

Methods for accounting for heterogeneity have advanced significantly in statistical research, primarily in the framework of stochastic analysis and the development of multiple-point statistics (MPS). Among MPS techniques, the direct sampling (DS) method is tested to determine its ability to delineate heterogeneity from aerial magnetics data in a regional sandstone aquifer intruded by low-permeability volcanic dykes in Northern Ireland, UK. The use of two two-dimensional bivariate training images aids in creating spatial probability distributions of heterogeneities of hydrogeological interest, despite relatively ‘noisy’ magnetics data (i.e. including hydrogeologically irrelevant urban noise and regional geologic effects). These distributions are incorporated into a hierarchical system where previously published density function and upscaling methods are applied to derive regional distributions of the equivalent hydraulic conductivity tensor K. Several K models, as determined by several stochastic realisations of MPS dyke locations, are computed within groundwater flow models and evaluated by comparing modelled heads with field observations. Results show a significant improvement in model calibration when compared to a simplistic homogeneous and isotropic aquifer model that does not account for the dyke occurrence evidenced by airborne magnetic data. The best model is obtained when normal and reverse polarity dykes are computed separately within MPS simulations and when a probability threshold of 0.7 is applied. The presented stochastic approach also provides improvement when compared to a previously published deterministic anisotropic model based on the unprocessed (i.e. noisy) airborne magnetics. This demonstrates the potential of coupling MPS to airborne geophysical data for regional groundwater modelling.
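A hedged sketch of the last two steps described above: thresholding an MPS-derived dyke probability map (the best case in the study used a 0.7 threshold) to build a conductivity field, and scoring realisations against observed heads. The conductivity values, grid, and head comparison are illustrative placeholders; the actual study used upscaled equivalent K tensors within a full groundwater flow model.

```python
import numpy as np

def build_k_field(dyke_probability, threshold=0.7, k_host=1e-4, k_dyke=1e-8):
    """Assign low-permeability dyke cells where the MPS probability exceeds the threshold
    (k values in m/s are placeholders)."""
    return np.where(dyke_probability >= threshold, k_dyke, k_host)

def rmse(modelled_heads, observed_heads):
    """Root-mean-square head misfit used to compare calibrations."""
    return float(np.sqrt(np.mean((modelled_heads - observed_heads) ** 2)))

# Illustrative selection loop over several stochastic realisations of dyke locations.
rng = np.random.default_rng(1)
observed = rng.normal(50.0, 2.0, size=30)          # placeholder head observations (m)
best = None
for realisation in range(10):
    prob = rng.random((100, 100))                  # stand-in for an MPS probability map
    k_field = build_k_field(prob)
    # A real workflow would run a groundwater flow model on k_field here;
    # a dummy prediction is substituted purely to show the selection loop.
    modelled = observed + rng.normal(0.0, 1.0, size=30)
    score = rmse(modelled, observed)
    if best is None or score < best[0]:
        best = (score, realisation)
print(best)
```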

Relevance:

30.00%

Publisher:

Abstract:

The water activity (a(w)) of microbial substrates, biological samples, and foods and drinks is usually determined by direct measurement of the equilibrium relative humidity above a sample. However, these materials can contain ethanol, which disrupts the operation of humidity sensors. Previously, an indirect and problematic technique based on freezing-point depression measurements was needed to calculate the a(w) when ethanol was present. We now describe a rapid and accurate method to determine the a(w) of ethanol-containing samples at ambient temperatures. Disruption of sensor measurements was minimized by using a newly developed, alcohol-resistant humidity sensor fitted with an alcohol filter. Linear equations were derived from a(w) measurements of standard ethanol-water mixtures, and from Norrish's equation, to correct sensor measurements. To our knowledge, this is the first time that electronic sensors have been used to determine the a(w) of ethanol-containing samples.
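A minimal sketch of the correction idea, assuming the usual form of Norrish's equation, a(w) = x_water · exp(K · x_solute²), with mole fractions of water and ethanol. The Norrish constant and the linear correction coefficients below are placeholders to be fitted against standard ethanol-water mixtures, not the values derived in the paper.

```python
import math

def norrish_aw(x_water, x_ethanol, k=-1.0):
    """Norrish-type estimate of water activity for a binary water-ethanol mixture.
    k is a placeholder solute constant; its sign and magnitude must be fitted."""
    return x_water * math.exp(k * x_ethanol ** 2)

def corrected_aw(sensor_reading, slope=1.0, intercept=0.0):
    """Linear correction of the alcohol-resistant sensor reading.
    slope and intercept would be obtained by regressing sensor readings
    against reference a(w) values of standard ethanol-water mixtures."""
    return slope * sensor_reading + intercept

# Illustrative use: a mixture with 10 mol% ethanol.
print(round(norrish_aw(0.90, 0.10), 3))
print(round(corrected_aw(0.885, slope=1.02, intercept=0.005), 3))
```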

Relevance:

30.00%

Publisher:

Abstract:

Re-imagining the aerial transportation system has become increasingly important as the need for significant environmental and economic efficiency gains has become ever more prevalent. A number of studies have highlighted the benefits of adopting air-to-air refuelling (AAR) within civil aviation. However, it also opens up the potential for increased flexibility in operations through smaller aircraft, shifting emphasis away from the traditional hub-and-spoke method of operation towards more flexible point-to-point operations. It is proposed here that one technology can act as an enabler for the other, realising benefits that neither can achieve as a standalone. The impact of an AAR-enabled point-to-point system is discussed, and the effect on economic and environmental cost metrics relative to traditional operations evaluated. An idealised airport configuration study shows the difference in fuel burn for point-to-point networks to vary from -23% to +28% relative to hub-and-spoke, depending on the configuration. The sensitive nature of the concepts is further explored in a second study based on real airport configurations. The complex effect of the choice of a point-to-point or hub-and-spoke system on fuel burn, operating cost and revenue potential is highlighted. Fuel burn savings of 15% can be achieved with AAR over traditional refuelling operations, with point-to-point networks increasing the available seat miles (by approximately 20%) without a proportional increase in operating cost or fuel.

Relevance:

30.00%

Publisher:

Abstract:

An SVD processor system is presented in which each processing element is implemented using a simple CORDIC unit. The internal recursive loop within the CORDIC module is exploited, with pipelining being used to multiplex the two independent micro-rotations onto a single CORDIC processor. This leads to a high-performance and efficient hardware architecture. In addition, a novel method for scale factor correction is presented which need only be applied once, at the end of the computation. This also reduces the computation time. The net result is an SVD architecture based on a conventional CORDIC approach, which combines high performance with high silicon area efficiency.
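A software sketch of the CORDIC behaviour described above, assuming standard circular-mode micro-rotations, with the scale-factor correction deferred to a single multiplication at the end rather than folded into each iteration. The iteration count and the fixed-point word lengths of the hardware are not reproduced here.

```python
import math

def cordic_rotate(x, y, angle, iterations=16):
    """Rotate (x, y) by `angle` radians using shift-add micro-rotations;
    the accumulated CORDIC gain is corrected once at the end."""
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    # Single deferred scale-factor correction: 1/K = prod(1/sqrt(1 + 2^-2i)).
    k = 1.0
    for i in range(iterations):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    return x * k, y * k

print(cordic_rotate(1.0, 0.0, math.pi / 4))  # approximately (0.7071, 0.7071)
```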

Relevance:

30.00%

Publisher:

Abstract:

The Magellanic Clouds are uniquely placed to study the stellar contribution to dust emission. Individual stars can be resolved in these systems even in the mid-infrared, and they are close enough to allow detection of infrared excess caused by dust. We have searched the Spitzer Space Telescope data archive for all Infrared Spectrograph (IRS) staring-mode observations of the Small Magellanic Cloud (SMC) and found that 209 Infrared Array Camera (IRAC) point sources within the footprint of the Surveying the Agents of Galaxy Evolution in the Small Magellanic Cloud (SAGE-SMC) Spitzer Legacy programme were targeted, within a total of 311 staring-mode observations. We classify these point sources using a decision tree method of object classification, based on infrared spectral features, continuum and spectral energy distribution shape, bolometric luminosity, cluster membership and variability information. We find 58 asymptotic giant branch (AGB) stars, 51 young stellar objects, 4 post-AGB objects, 22 red supergiants, 27 stars (of which 23 are dusty OB stars), 24 planetary nebulae (PNe), 10 Wolf-Rayet stars, 3 H II regions, 3 R Coronae Borealis stars, 1 Blue Supergiant and 6 other objects, including 2 foreground AGB stars. We use these classifications to evaluate the success of photometric classification methods reported in the literature.
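The classification step lends itself to a simple rule-based illustration. The sketch below is not the paper's decision tree; the handful of features (dust spectral features, bolometric luminosity, variability, cluster membership) and the thresholds are assumptions chosen only to show how such a cascade of checks can be encoded.

```python
def classify_point_source(has_silicate_emission, has_carbon_dust_feature,
                          l_bol_lsun, is_variable, in_cluster):
    """Toy decision cascade; branches and thresholds are illustrative only."""
    if l_bol_lsun > 1e5:
        return "red supergiant" if has_silicate_emission else "OB star"
    if has_carbon_dust_feature:
        return "carbon-rich AGB star" if is_variable else "post-AGB candidate"
    if has_silicate_emission:
        return "oxygen-rich AGB star" if is_variable else "young stellar object"
    return "cluster member, unclassified" if in_cluster else "unclassified"

print(classify_point_source(True, False, 2e4, True, False))  # oxygen-rich AGB star
```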

Relevance:

30.00%

Publisher:

Abstract:

This paper explores the theme of exhibiting architectural research through a particular example, the development of the Irish pavilion for the 14th architectural biennale, Venice 2014. Responding to Rem Koolhaas’s call to investigate the international absorption of modernity, the Irish pavilion became a research project that engaged with the development of the architectures of infrastructure in Ireland in the twentieth and twenty-first centuries. Central to this proposition was that infrastructure is simultaneously a technological and cultural construct, one that for Ireland occupied a critical position in the building of a new, independent post-colonial nation state, after 1921.

Presupposing infrastructure as consisting of both visible and invisible networks, the idea of a matrix became a central conceptual and visual tool in the curatorial and design process for the exhibition and pavilion. To begin with, this was a two-dimensional grid used to identify and order what became described as a series of ten ‘infrastructural episodes’. These were determined chronologically across the decades between 1914 and 2014 and their spatial manifestations articulated in terms of scale: micro, meso and macro. At this point ten academics were approached as researchers. Their purpose was twofold: to establish the broader narratives around which the infrastructures developed, and to scrutinise relevant archives for compelling visual material. Defining the meso scale as that of the building, the media unearthed was further filtered and edited according to a range of categories – filmic/image, territory, building detail, and model – which sought to communicate the relationship between the pieces of architecture and the larger systems to which they connect. New drawings realised by the design team further iterated these relationships, filling in gaps in the narrative by providing composite, strategic or detailed drawings.

Conceived as an open-ended and extendable matrix, the pavilion was influenced by a series of academic writings, curatorial practices, artworks and other installations including: Frederick Kiesler’s City in Space (1925), Eduardo Persico and Marcello Nizzoli’s Medaglia d’Oro room (1934), Sol LeWitt’s Incomplete Open Cubes (1974) and Rosalind Krauss’s seminal text ‘Grids’ (1979). A modular frame whose structural bays would each hold and present an ‘episode’, the pavilion became both a visual analogue of the unseen networks embodying infrastructural systems and a reflection on the predominance of framed structures within the buildings exhibited. Sharing the aspiration of adaptability of many of these schemes, its white-painted timber components are connected by easily dismantled steel fixings. These and its modularity allow the structure to be both taken down and re-erected subsequently in different iterations. The pavilion itself is, therefore, imagined as essentially provisional and – as with infrastructure – as having no fixed form. Presenting archives and other material over time, the transparent nature of the space allowed these to overlap visually, conveying the nested nature of infrastructural production. Pursuing a means to evoke the qualities of infrastructural space while conveying a historical narrative, the exhibition’s termination in the present is designed to provoke in the visitor a perceptual extension of the matrix to engage with the future.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we introduce a novel approach to face recognition which simultaneously tackles three combined challenges: 1) uneven illumination; 2) partial occlusion; and 3) limited training data. The new approach performs lighting normalization, occlusion de-emphasis and finally face recognition, based on finding the largest matching area (LMA) at each point on the face, as opposed to traditional fixed-size local area-based approaches. Robustness is achieved with novel approaches for feature extraction, LMA-based face image comparison and unseen data modeling. On the Extended Yale B and AR face databases for face identification, our method using only a single training image per person outperforms other methods using a single training image, and matches or exceeds methods which require multiple training images. On the Labeled Faces in the Wild face verification database, our method outperforms comparable unsupervised methods. We also show that the new method performs competitively even when the training images are corrupted.
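A rough sketch of the "largest matching area" idea, assuming aligned grayscale images: at a given point, the comparison region is grown while it still matches the gallery image well, instead of being fixed in size. The similarity measure, growth rule and threshold are assumptions for illustration; the paper's feature extraction, lighting normalization and occlusion de-emphasis are not reproduced.

```python
import numpy as np

def normalised_correlation(a, b):
    """Zero-mean normalised correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def largest_matching_area(probe, gallery, row, col, max_half=32, threshold=0.8):
    """Grow a square patch centred at (row, col) while probe and gallery still agree."""
    best = 0
    for half in range(2, max_half + 1):
        r0, r1 = row - half, row + half + 1
        c0, c1 = col - half, col + half + 1
        if r0 < 0 or c0 < 0 or r1 > probe.shape[0] or c1 > probe.shape[1]:
            break
        if normalised_correlation(probe[r0:r1, c0:c1], gallery[r0:r1, c0:c1]) < threshold:
            break
        best = (2 * half + 1) ** 2   # area of the largest patch that still matches
    return best

# Illustrative use on synthetic images that agree except in one occluded corner.
rng = np.random.default_rng(0)
gallery = rng.random((64, 64))
probe = gallery.copy()
probe[:16, :16] = rng.random((16, 16))   # simulated occlusion: unrelated content
print(largest_matching_area(probe, gallery, 40, 40))   # large matching area
print(largest_matching_area(probe, gallery, 10, 10))   # no matching area under occlusion
```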

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2015

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a method for analysing the operational complexity in supply chains by using an entropic measure based on information theory. The proposed approach estimates the operational complexity at each stage of the supply chain and analyses the changes between stages. In this paper a stage is identified by the exchange of data and/or material. Through this analysis the method identifies how operational complexity is generated and propagated at each stage (exported, imported, generated or absorbed). Central to the method is the identification of a reference point within the supply chain. This is where the operational complexity is at a local minimum along the data transfer stages. Such a point can be thought of as a ‘sink’ for turbulence generated in the supply chain. Where it exists, it has the merit of stabilising the supply chain by attenuating uncertainty. However, the location of the reference point is also a matter of choice. If the preferred location is other than the current one, this is a trigger for management action. The analysis can help decide appropriate remedial action. More generally, the approach can assist logistics management by highlighting problem areas. An industrial application is presented to demonstrate the applicability of the method.
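A small sketch of the entropic measure, assuming categorical state observations (e.g. schedule adherence) are available at each data-transfer stage: Shannon entropy in bits is computed per stage, the stage-to-stage change indicates whether complexity is exported or absorbed, and the reference point is taken at the local minimum. The stage names, states and data are illustrative, not from the industrial application in the paper.

```python
import math
from collections import Counter

def entropy_bits(observations):
    """Shannon entropy (bits) of the observed states at one supply chain stage."""
    counts = Counter(observations)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Illustrative state observations ('on time' / 'late' / 'early') per stage.
stages = {
    "supplier dispatch":  ["on time"] * 6 + ["late"] * 3 + ["early"] * 1,
    "inbound logistics":  ["on time"] * 8 + ["late"] * 2,
    "assembly":           ["on time"] * 10,               # candidate reference point
    "outbound logistics": ["on time"] * 7 + ["late"] * 3,
}

h = {name: entropy_bits(obs) for name, obs in stages.items()}
names = list(h)
for upstream, downstream in zip(names, names[1:]):
    change = h[downstream] - h[upstream]
    verdict = "exports complexity" if change > 0 else "absorbs complexity"
    print(f"{upstream} -> {downstream}: {change:+.2f} bits ({verdict})")

reference = min(h, key=h.get)   # stage with the locally minimal entropy
print("reference point:", reference)
```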

Relevance:

30.00%

Publisher:

Abstract:

In the Sparse Point Representation (SPR) method the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from the information in its SPR content is performed in two steps. The first is a refinement procedure to extend the SPR by the inclusion of new interpolated point values in a security zone. Then, for points in the refined grid, such derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If the required neighbouring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated in terms of a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids having appropriate structures. This implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken concerning the grid structure in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
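A compact sketch of the SPR selection step on a dyadic 1D grid, using the four-point (cubic) interpolating subdivision rule: the detail (wavelet) coefficients are the interpolation errors at the odd points of the finer level, and points are retained where that error exceeds a threshold. Boundary handling is simplified by clamping, and the security-zone refinement and adaptive finite differences from the paper are omitted.

```python
import numpy as np

def detail_coefficients(f_fine):
    """Interpolation errors at the odd points of a dyadic grid, using the
    4-point (cubic) interpolating subdivision prediction from the even points."""
    coarse = f_fine[::2]
    details = np.zeros(len(f_fine) // 2)
    for j in range(len(details)):
        # Neighbouring coarse values (clamped at the boundaries for simplicity).
        fm1 = coarse[max(j - 1, 0)]
        f0, f1 = coarse[j], coarse[min(j + 1, len(coarse) - 1)]
        f2 = coarse[min(j + 2, len(coarse) - 1)]
        predicted = (-fm1 + 9.0 * f0 + 9.0 * f1 - f2) / 16.0
        details[j] = f_fine[2 * j + 1] - predicted
    return details

def sparse_point_mask(f_fine, eps=1e-3):
    """Keep the even points plus the odd points whose detail exceeds the threshold."""
    mask = np.zeros(len(f_fine), dtype=bool)
    mask[::2] = True
    mask[1::2] = np.abs(detail_coefficients(f_fine)) > eps
    return mask

# Illustrative use: a smooth profile with a sharp local feature around x = 0.5.
x = np.linspace(0.0, 1.0, 257)
f = np.tanh(200.0 * (x - 0.5))
mask = sparse_point_mask(f, eps=1e-3)
print(mask.sum(), "of", mask.size, "points retained")   # coarse away from the front
```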

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a contrastive approach to three different ways of building concepts, after demonstrating the similar syntactic possibilities that coexist in terms. From the semantic point of view, however, we can see that each language family has a different distribution of meaning. The most important point we try to show is that the differences found in the psychological process when communicating concepts should guide the translator and the terminologist in target text production and in the terminology planning process. Differences between languages in the information transmission process are due to the different roles played by the different types of knowledge. We distinguish here analytic-descriptive knowledge and analogical knowledge, among others. We also state that none of them is the best when determining the correctness of a term; rather, there have to be adequacy criteria in the selection process. This success in concept building or term building is important when looking at the linguistic map of the information society.

Relevance:

30.00%

Publisher:

Abstract:

A Work Project, presented as part of the requirements for the Award of a Master’s Double Degree in Finance from Maastricht University and NOVA – School of Business and Economics