958 results for Non-Local Model
Abstract:
Nonparametric belief propagation (NBP) is a well-known particle-based method for distributed inference in wireless networks. NBP has a large number of applications, including cooperative localization. However, in loopy networks NBP suffers from problems similar to those of standard BP, such as over-confident beliefs and possible nonconvergence. Tree-reweighted NBP (TRW-NBP) can mitigate these problems, but does not easily lead to a distributed implementation due to the non-local nature of the required so-called edge appearance probabilities. In this paper, we propose a variation of TRW-NBP suitable for cooperative localization in wireless networks. Our algorithm uses a fixed edge appearance probability for every edge, and can outperform standard NBP in dense wireless networks.
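The role of a fixed edge appearance probability can be illustrated with a discrete-variable sketch of tree-reweighted belief propagation. The paper's algorithm is particle-based NBP; this simplified numpy version, with made-up potentials, only shows how a uniform ρ enters the message update (ρ = 1 recovers standard BP):

```python
import numpy as np

def trw_bp(psi_node, psi_edge, edges, rho=0.8, iters=50):
    """Discrete tree-reweighted BP with a uniform edge appearance
    probability rho; rho = 1.0 recovers standard loopy BP."""
    n, k = psi_node.shape
    nbrs = {i: [] for i in range(n)}
    for s, t in edges:
        nbrs[s].append(t)
        nbrs[t].append(s)
    # directed messages, initialised uniform
    msgs = {}
    for s, t in edges:
        msgs[(s, t)] = np.full(k, 1.0 / k)
        msgs[(t, s)] = np.full(k, 1.0 / k)
    for _ in range(iters):
        new = {}
        for (t, s) in msgs:  # update the message sent from t to s
            prod = psi_node[t].astype(float).copy()
            for u in nbrs[t]:
                if u != s:
                    prod *= msgs[(u, t)] ** rho   # incoming messages ^ rho
            prod /= msgs[(s, t)] ** (1.0 - rho)   # reverse message ^ (1 - rho)
            # pairwise potential oriented as [x_s, x_t], raised to 1/rho
            A = psi_edge[(s, t)] if (s, t) in psi_edge else psi_edge[(t, s)].T
            m = (A ** (1.0 / rho)) @ prod
            new[(t, s)] = m / m.sum()
        msgs = new
    beliefs = psi_node.astype(float).copy()
    for i in range(n):
        for u in nbrs[i]:
            beliefs[i] = beliefs[i] * msgs[(u, i)] ** rho
    return beliefs / beliefs.sum(axis=1, keepdims=True)
```

On a tree (where ρ = 1 is optimal) this reduces to exact BP; on loopy graphs a ρ below 1 tempers the over-confident beliefs mentioned above.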
Abstract:
There is now an emerging need for an efficient modeling strategy to develop a new generation of monitoring systems. One way of approaching the modeling of complex processes is to obtain a global model that captures the basic or general behavior of the system, by means of a linear or quadratic regression, and then to superimpose on it a local model that captures the localized nonlinearities of the system. In this paper, a novel method based on a hybrid incremental modeling approach is designed and applied to tool wear detection in turning processes. It involves a two-step iterative process that combines a global model with a local model to take advantage of their underlying, complementary capacities. Thus, the first step constructs a global model using least squares regression. A local model using the fuzzy k-nearest-neighbors smoothing algorithm is obtained in the second step. A comparative study then demonstrates that the hybrid incremental model provides better error-based performance indices for detecting tool wear than a transductive neurofuzzy model and an inductive neurofuzzy model.
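The two-step scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the neighbourhood size k, the fuzzifier m and the inverse-distance memberships are assumptions.

```python
import numpy as np

def fit_hybrid(X, y, k=5, m=2.0):
    """Step 1: global least-squares model; step 2: a local fuzzy
    k-nearest-neighbours model fitted on the global residuals."""
    A = np.column_stack([X, np.ones(len(X))])      # linear design matrix
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # global (linear) fit
    resid = y - A @ coef                           # localized nonlinearities

    def predict(Xq):
        Aq = np.column_stack([Xq, np.ones(len(Xq))])
        out = Aq @ coef                            # global prediction
        for i, xq in enumerate(Xq):
            d = np.linalg.norm(X - xq, axis=1)
            idx = np.argsort(d)[:k]                # k nearest neighbours
            # fuzzy inverse-distance memberships with fuzzifier m
            w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1.0))
            out[i] += np.sum(w * resid[idx]) / w.sum()   # local correction
        return out

    return predict
```

The global fit carries the general trend while the residual smoother absorbs the localized nonlinearities, which is the complementarity the abstract points to.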
Abstract:
In this work the spectrally resolved, multigroup and mean radiative opacities of carbon plasmas are calculated for a wide range of plasma conditions, covering situations in which corona, local thermodynamic equilibrium and non-local thermodynamic equilibrium regimes are found. The influence of the thermodynamic regime on these quantities is also analyzed by comparing the results obtained from collisional-radiative, corona and Saha–Boltzmann equations. All the calculations presented in this work were performed using the ABAKO/RAPCAL code.
Abstract:
In this paper we present a global overview of the recent study carried out in Spain for the new hazard map, whose final goal is the revision of the Building Code in our country (NCSE-02). The study was carried out by a working group bringing together experts from the Instituto Geografico Nacional (IGN) and the Technical University of Madrid (UPM), with the different phases of the work supervised by a committee of national experts from public institutions involved in seismic hazard. The PSHA method (Probabilistic Seismic Hazard Assessment) has been followed, quantifying the epistemic uncertainties through a logic tree and the aleatory ones, linked to the variability of parameters, by means of probability density functions and Monte Carlo simulations. In a first phase, the inputs were prepared, essentially: 1) updating the project catalogue and homogenizing it to Mw; 2) proposing zoning models and source characterization; 3) calibrating Ground Motion Prediction Equations (GMPEs) with actual data and developing a local model with data collected in Spain for Mw < 5.5. In a second phase, a sensitivity analysis of the different input options on the hazard results was carried out in order to establish criteria for defining the branches of the logic tree and their weights. Finally, the hazard estimation was done with the logic tree shown in figure 1, including nodes for quantifying uncertainties corresponding to: 1) the method for estimating the hazard (zoning and zoneless); 2) the zoning models; 3) the GMPE combinations used; and 4) the regression method for estimating source parameters.
In addition, the aleatory uncertainties corresponding to the magnitude of the events, the recurrence parameters and the maximum magnitude for each zone have also been considered through probability density functions and Monte Carlo simulations. The main conclusions of the study are presented here, together with the results obtained in terms of PGA and other spectral accelerations SA(T) for return periods of 475, 975 and 2475 years. Maps of the coefficient of variation (COV) are also presented to give an idea of the zones where the dispersion among results is highest and the zones where the results are robust.
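The combination of logic-tree (epistemic) and Monte Carlo (aleatory) sampling can be sketched as follows. The branch weights, GMPE coefficients, sigma and magnitude bounds below are invented for the sketch and bear no relation to the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Epistemic branches: (weight, (a, b)) for a toy GMPE ln PGA = a + b*M.
branches = [(0.5, (-4.0, 0.9)), (0.3, (-3.5, 0.8)), (0.2, (-4.5, 1.0))]
weights = np.array([w for w, _ in branches])

n = 20_000
picks = rng.choice(len(branches), size=n, p=weights)   # logic-tree sampling

# Aleatory magnitude: truncated exponential (Gutenberg-Richter-like)
beta, mmin, mmax = 1.0, 4.0, 7.0
u = rng.random(n)
M = mmin - np.log(1.0 - u * (1.0 - np.exp(-beta * (mmax - mmin)))) / beta

a = np.array([branches[i][1][0] for i in picks])
b = np.array([branches[i][1][1] for i in picks])
ln_pga = a + b * M + rng.normal(0.0, 0.5, size=n)      # GMPE + aleatory sigma
pga = np.exp(ln_pga)

cov = pga.std() / pga.mean()   # coefficient of variation of the results
```

Each simulation draws one logic-tree branch by its weight and one set of aleatory values, so the spread of `pga`, summarised by `cov`, mixes both uncertainty types in the way the study's COV maps do.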
Abstract:
Stream mining is defined as a set of cutting-edge techniques designed to process streams of data in real time in order to extract knowledge. In the particular case of classification, stream mining has to adapt its behaviour to volatile underlying data distributions, a phenomenon known as concept drift. Moreover, concept drift may lead to situations where predictive models become invalid and therefore have to be updated to represent the actual concepts in the data. In this context, there is a specific type of concept drift, known as recurrent concept drift, where the concepts represented by the data have already appeared in the past. In those cases the learning process could be avoided, or at least minimized, by applying a previously trained model. This could be extremely useful in ubiquitous environments characterized by resource-constrained devices. To deal with this scenario, meta-models can be used to enhance the drift detection mechanisms of data stream algorithms, by representing and predicting when the change will occur. There are real-world situations where a concept reappears, as in the case of intrusion detection systems (IDS), where the same incidents, or adaptations of them, usually reappear over time. In these environments the early prediction of drift, by means of a better knowledge of past models, can help to anticipate the change, thus improving the efficiency of the model with regard to the training instances needed. Using meta-models as a recurrent drift detection mechanism also opens the possibility of sharing concept representations among different data mining processes. Such exchanges could improve the accuracy of the resulting local model, as it may benefit from patterns similar to the local concept that were observed in other scenarios but not yet locally.
This would also improve the efficiency with which training instances are used during the classification process, since the exchange of models would aid in the application of already trained recurrent models previously seen by any of the collaborating devices; that is to say, the scope of recurrence detection and representation is broadened. In fact, the detection, representation and exchange of concept drift patterns would be extremely useful for law enforcement activities fighting cyber crime. Since information exchange is one of the main pillars of cooperation, national units would benefit from the experience and knowledge gained by third parties. Moreover, in the specific scope of critical infrastructure protection it is crucial to have information exchange mechanisms, at both the strategic and the technical level. The exchange of concept drift detection schemes in cyber security environments would aid in preventing, detecting and effectively responding to threats in cyberspace. Furthermore, as a complement to meta-models, a mechanism to assess the similarity between classification models is also needed when dealing with recurrent concepts. In this context, when reusing a previously trained model, a rough comparison between concepts is usually made, applying boolean logic. Introducing fuzzy logic comparisons between models could lead to a more efficient reuse of previously seen concepts, by applying not just equal models but also similar ones. This work addresses the aforementioned open issues by means of: the MMPRec system, which integrates a meta-model mechanism and a fuzzy similarity function; a collaborative environment to share meta-models between different devices; and a recurrent drift generator that allows the usefulness of recurrent drift systems, such as MMPRec, to be tested. Moreover, this thesis presents an experimental validation of the proposed contributions using synthetic and real datasets.
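A fuzzy model-similarity check of the kind described above might be sketched as follows. The agreement-based similarity measure and the threshold are assumptions for illustration, not the MMPRec definitions:

```python
import numpy as np

def fuzzy_similarity(model_a, model_b, X_ref):
    """Graded similarity in [0, 1]: the fraction of a reference window
    on which two classifiers agree. A boolean comparison would demand
    exact agreement; a fuzzy one also accepts merely similar models."""
    return float(np.mean(model_a(X_ref) == model_b(X_ref)))

def best_recurrent_model(current, stored, X_ref, threshold=0.9):
    """Pick the stored model most similar to the current concept,
    or None if no candidate clears the fuzzy threshold."""
    scored = [(fuzzy_similarity(current, m, X_ref), m) for m in stored]
    sim, model = max(scored, key=lambda p: p[0])
    return model if sim >= threshold else None
```

A repository of stored models, possibly received from collaborating devices, can then be searched whenever a recurrent drift is suspected, reusing a similar model instead of retraining from scratch.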
Abstract:
This work presents the position and orientation control of a nonlinear model of a six-degree-of-freedom Stewart Platform built in the ADAMS® multibody systems environment developed by Mechanical Dynamics, Inc. The nonlinear model is exported to the SIMULINK® environment developed by MathWorks, Inc., where position and orientation control is performed by linearizing the model and applying a state-feedback tracking system. SIMULINK® is also used to implement the dynamics of a servovalve and hydraulic cylinder system with pressure servocontrol, and thus to simulate the dynamic behaviour of a hydraulically actuated flight simulator. The use of these commercial packages aims to save time and effort in the modeling of complex mechanical systems and in the programming required to obtain the time response of the system, besides facilitating the analysis of several Stewart Platform configurations.
Abstract:
Ab initio quantum transport calculations show that short NiO chains suspended in Ni nanocontacts present a very strong spin polarization of the conductance. The generalized gradient approximation we use here predicts a polarization of the conductance similar to the one previously computed with non-local exchange, confirming the robustness of the result. Their use as nanoscopic spin valves is proposed.
Abstract:
Context. There is growing evidence that a treatment of binarity amongst OB stars is essential for a full theory of stellar evolution. However, the binary properties of massive stars – frequency, mass ratio and orbital separation – are still poorly constrained. Aims. In order to address this shortcoming we have undertaken a multi-epoch spectroscopic study of the stellar population of the young massive cluster Westerlund 1. In this paper we present an investigation into the nature of the dusty Wolf-Rayet star and candidate binary W239. Methods. To accomplish this we have utilised our spectroscopic data in conjunction with multi-year optical and near-IR photometric observations in order to search for binary signatures. Comparison of these data to synthetic non-LTE model atmosphere spectra was used to derive the fundamental properties of the WC9 primary. Results. We found W239 to have an orbital period of only ~5.05 days, making it one of the most compact WC binaries yet identified. Analysis of the long-term near-IR lightcurve reveals a significant flare between 2004 and 2006. We interpret this as evidence for a third massive stellar component in the system in a long-period (>6 yr), eccentric orbit, with dust production occurring at periastron leading to the flare. The presence of a near-IR excess characteristic of hot (~1300 K) dust at every epoch is consistent with the expectation that the subset of persistent dust-forming WC stars are short (<1 yr) period binaries, although confirmation will require further observations. Non-LTE model atmosphere analysis of the spectrum reveals the physical properties of the WC9 component to be fully consistent with other Galactic examples. Conclusions. The simultaneous presence of both short-period Wolf-Rayet binaries and cool hypergiants within Wd 1 provides compelling evidence for a bifurcation in the post-Main Sequence evolution of massive stars due to binarity.
Short period O+OB binaries will evolve directly to the Wolf-Rayet phase, either due to an episode of binary mediated mass loss – likely via case A mass transfer or a contact configuration – or via chemically homogenous evolution. Conversely, long period binaries and single stars will instead undergo a red loop across the HR diagram via a cool hypergiant phase. Future analysis of the full spectroscopic dataset for Wd 1 will constrain the proportion of massive stars experiencing each pathway; hence quantifying the importance of binarity in massive stellar evolution up to and beyond supernova and the resultant production of relativistic remnants.
Abstract:
Aims. Despite their importance to a number of astrophysical fields, the lifecycles of very massive stars are still poorly defined. In order to address this shortcoming, we present a detailed quantitative study of the physical properties of four early-B hypergiants (BHGs) of spectral type B1-4 Ia+: Cyg OB2 #12, ζ1 Sco, HD 190603 and BP Cru. These are combined with an analysis of their long-term spectroscopic and photometric behaviour in order to determine their evolutionary status. Methods. Quantitative analysis of UV–radio photometric and spectroscopic datasets was undertaken with a non-LTE model atmosphere code in order to derive physical parameters for comparison with apparently closely related objects, such as B supergiants (BSGs) and luminous blue variables (LBVs), and with theoretical evolutionary predictions. Results. The long-term photometric and spectroscopic datasets compiled for the BHGs revealed that they are remarkably stable over long periods (≥40 yr), with the possible exception of ζ1 Sco prior to the 20th century; this is in contrast to the typical excursions that characterise LBVs. Quantitative analysis of ζ1 Sco, HD 190603 and BP Cru yielded physical properties intermediate between those of BSGs and LBVs; we therefore suggest that BHGs are the immediate descendants and progenitors (respectively) of such stars, for initial masses in the range ~30−60 M⊙. Comparison of the properties of ζ1 Sco with the stellar population of its host cluster/association NGC 6231/Sco OB1 provides further support for such an evolutionary scenario. In contrast, while the wind properties of Cyg OB2 #12 are consistent with this hypothesis, the combination of extreme luminosity and spectroscopic mass (~110 M⊙) with a comparatively low temperature means that it cannot be accommodated in such a scheme.
Likewise, despite its co-location with several LBVs above the Humphreys-Davidson (HD) limit, the lack of long term variability and its unevolved chemistry apparently excludes such an identification. Since such massive stars are not expected to evolve to such cool temperatures, instead traversing an O4-6Ia → O4-6Ia+ → WN7-9ha pathway, the properties of Cyg OB2 #12 are therefore difficult to understand under current evolutionary paradigms. Finally, we note that as with AG Car in its cool phase, despite exceeding the HD limit, the properties of Cyg OB2 #12 imply that it lies below the Eddington limit – thus we conclude that the HD limit does not define a region of the HR diagram inherently inimical to the presence of massive stars.
Abstract:
Context. The first soft gamma-ray repeater was discovered over three decades ago, and was subsequently identified as a magnetar, a class of highly magnetised neutron star. It has been hypothesised that these stars power some of the brightest supernovae known, and that they may form the central engines of some long-duration gamma-ray bursts. However, there is currently no consensus on the formation channel(s) of these objects. Aims. The presence of a magnetar in the starburst cluster Westerlund 1 implies a progenitor with a mass ≥40 M⊙, which favours its formation in a binary that was disrupted at supernova. To test this hypothesis we conducted a search for the putative pre-SN companion. Methods. This was accomplished via a radial velocity survey to identify high-velocity runaways, with subsequent non-LTE model atmosphere analysis of the resultant candidate, Wd1-5. Results. Wd1-5 closely resembles the primaries in the short-period binaries Wd1-13 and 44, suggesting a similar evolutionary history, although it currently appears single. It is overluminous for its spectroscopic mass and we find evidence of He- and N-enrichment, O-depletion and, critically, C-enrichment, a combination of properties that is difficult to explain under single-star evolutionary paradigms. We infer a pre-SN history for Wd1-5 which supposes an initial close binary comprising two stars of comparable (~41 M⊙ + 35 M⊙) masses. Efficient mass transfer from the initially more massive component leads to the mass-gainer evolving more rapidly, initiating luminous blue variable/common envelope evolution. Reverse, wind-driven mass transfer during its subsequent WC Wolf-Rayet phase leads to the carbon pollution of Wd1-5, before a type Ibc supernova disrupts the binary system.
Under the assumption of a physical association between Wd1-5 and J1647-45, the secondary is identified as the magnetar progenitor; its common envelope evolutionary phase prevents spin-down of its core prior to SN and the seed magnetic field for the magnetar forms either in this phase or during the earlier episode of mass transfer in which it was spun-up. Conclusions. Our results suggest that binarity is a key ingredient in the formation of at least a subset of magnetars by preventing spin-down via core-coupling and potentially generating a seed magnetic field. The apparent formation of a magnetar in a Type Ibc supernova is consistent with recent suggestions that superluminous Type Ibc supernovae are powered by the rapid spin-down of these objects.
Abstract:
Excavations were carried out in a Late Palaeolithic site in the community of Bad Buchau-Kappel between 2003 and 2007. Archaeological investigations covered a total of more than 200 m². This site is the product of what likely were multiple occupations that occurred during the Late Glacial on the Federsee shore in this location. The site is situated on a mineral ridge that projected into the former Late Glacial lake Federsee. This beach ridge consists of deposits of fine to coarse gravel and sand and was surrounded by open water, except for a connection to the solid shore on the south. A lagoon lay between the hook-shaped ridge and the shore of the Federsee. This exposed location provided optimal access to the water of the lake. In addition, the small lagoon may have served as a natural harbor for landing boats or canoes. Sedimentological and palynological investigations document the dynamic history of the location between 14,500 and 11,600 years before present (cal BP). Evidence of the deposition of sands, gravels and muds since the Bølling Interstadial is provided by stratigraphic and palynological analyses. The major occupation occurred in the second half of the Younger Dryas period. Most of the finds were located on or in the sediments of the ridge; fewer finds occurred in the surrounding mud, which was also deposited during the Younger Dryas. Direct dates on some bone fragments, however, demonstrate that intermittent sporadic occupations also took place during the two millennia of the Meiendorf, Bølling, and Allerød Interstadials. These bones were reworked during the Younger Dryas and redeposited in the mud. A 14C date from one bone of 11,600 years ago (cal BP) places the Late Palaeolithic occupation of the ridge at the very end of the Younger Dryas, which is in agreement with stratigraphic observations. Stone artifacts, numbering 3,281, comprise the majority of finds from the site.
These include typical artifacts of the Late Palaeolithic, such as backed points, short scrapers, and small burins. There are no bipointes or Malaurie points, which is in accord with the absolute date of the occupation. A majority of the artifacts are made from a brown chert that is obtainable a few kilometers north of the site in sediments of the Graupensandrinne. Other raw materials include red and green radiolarite that occur in the fluvioglacial gravels of Oberschwaben, as well as quartzite and lydite. The only non-local material present is a few artifacts of tabular chert from the region near Kelheim in Bavaria. A unique find consists of two fragments of a double-barbed harpoon made of red deer antler, which was found in the Younger Dryas mud. It is likely, but not certain, that this find belongs to the same assemblage as the numerous stone artifacts. Although not numerous, animal bones were also found in the excavations. Most of them lay in sediments of the Younger Dryas, but several 14C dates place some of these bones in earlier periods, including the Meiendorf, Bølling, and Allerød Interstadials. These bones were reworked by water and redeposited in mud sediments during the Younger Dryas. As a result, it is difficult to attribute individual bones to particular chronological positions without exact dates. Species that could be identified include wild horse (Equus spec.), moose or elk (Alces alces), red deer (Cervus elaphus), roe deer (Capreolus capreolus), aurochs or bison (Bos spec.), wild boar (Sus scrofa), as well as birds and fish, including pike (Esox lucius).
Abstract:
We report a method using variation in the chloroplast genome (cpDNA) to test whether oak stands of unknown provenance are of native and/or local origin. As an example, a sample of test oaks, of mostly unknown status in relation to nativeness and localness, were surveyed for cpDNA type. The sample comprised 126 selected trees, derived from 16 British seed stands, and 75 trees selected for their superior phenotype (201 tree samples in total). To establish whether these two test groups are native and local, their cpDNA type was compared with that of material of known autochthonous origin (results of a previous study which examined variation in 1076 trees from 224 populations distributed across Great Britain). In the previous survey of autochthonous material, four cpDNA types were identified as native; thus if a test sample possessed a new haplotype it could be classed as non-native. Every one of the 201 test samples possessed one of the four cpDNA types found within the autochthonous sample. Therefore none could be proven to be introduced and, on this basis, all were considered likely to be native. The previous study of autochthonous material also found that cpDNA variation was highly structured geographically; therefore, if the cpDNA type of a test sample did not match that of neighbouring autochthonous trees it could be considered non-local. A high proportion of the seed stand group (44.2 per cent) and of the phenotypically superior trees (58.7 per cent) possessed a cpDNA haplotype which matched that of the neighbouring autochthonous trees and, therefore, can be considered local, or at least cannot be proven to be introduced. The remainder of the test sample could be divided into those which did not grow in an area of overall dominance (18.7 per cent of seed stand trees and 28 per cent of phenotypically superior trees) and those which failed to match the neighbouring autochthonous haplotype (37.1 per cent and 13.3 per cent, respectively).
Most of the non-matching test samples were located within 50 km of an area dominated by a matching autochthonous haplotype (96.0 per cent and 93.5 per cent, respectively), which potentially indicates only local transfer. Whilst such genetic fingerprinting tests have proven useful for assessing the origin of stands of unknown provenance, there are potential limitations to using a marker from the chloroplast genome (mostly adaptively neutral) for classifying seed material into categories which have adaptive implications. These limitations are discussed, particularly within the context of selecting adaptively superior material for restocking native forests.
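The classification logic of the survey can be sketched as follows; the haplotype labels H1–H4 are placeholders for illustration, not the actual cpDNA types reported:

```python
# The four cpDNA types found in the autochthonous survey (labels are
# hypothetical placeholders).
NATIVE_HAPLOTYPES = {"H1", "H2", "H3", "H4"}

def classify_stand(haplotype, neighbour_haplotype):
    """Decision logic of the test: a haplotype outside the native set
    would prove introduction; a match with the neighbouring
    autochthonous trees supports a local origin."""
    if haplotype not in NATIVE_HAPLOTYPES:
        return "non-native"
    if neighbour_haplotype is None:
        return "native, no area of dominance"   # localness untestable
    if haplotype == neighbour_haplotype:
        return "native, likely local"
    return "native, non-local"
```

Note the asymmetry the abstract emphasises: the test can prove a sample non-native or non-local, but a match only fails to disprove native or local origin.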
Abstract:
We present a new version of non-local density functional theory (NL-DFT) adapted to the description of vapor adsorption isotherms on amorphous materials such as non-porous silica. The novel feature of this approach is that it accounts for the roughness of the adsorbent surface. The solid–fluid interaction is described in the same framework as the fluid–fluid interactions, using the Weeks–Chandler–Andersen (WCA) scheme and the Carnahan–Starling (CS) equation for the attractive and repulsive parts of the Helmholtz free energy, respectively. Application to nitrogen and argon adsorption isotherms on non-porous silica LiChrospher Si-1000 at their boiling points, recently published by Jaroniec and co-workers, has shown an excellent correlative ability of our approach over the complete range of pressures, which suggests that surface roughness is largely the reason for the observed behavior of the adsorption isotherms. From the analysis of these data, we found that in the case of nitrogen adsorption, short-range interactions between oxygen atoms on the silica surface and the quadrupole of nitrogen molecules play an important role. The approach presented in this paper may be further used in the quantitative analysis of adsorption and desorption isotherms in cylindrical pores such as those of MCM-41 and carbon nanotubes.
Abstract:
Adsorption of pure nitrogen, argon, acetone, chloroform and an acetone-chloroform mixture on graphitized thermal carbon black is considered at sub-critical conditions by means of molecular layer structure theory (MLST). In the present version of the MLST, an adsorbed fluid is considered as a sequence of 2D molecular layers whose Helmholtz free energies are obtained directly from the analysis of the experimental adsorption isotherms of the pure components. The interaction of neighbouring layers is accounted for in the framework of a mean-field approximation. This approach allows quantitative correlation of the experimental nitrogen and argon adsorption isotherms both in the monolayer region and in the range of multi-layer coverage up to 10 molecular layers. In the case of acetone and chloroform the approach also leads to excellent quantitative correlation of the adsorption isotherms, while molecular approaches such as non-local density functional theory (NLDFT) fail to describe those isotherms. We extend our new method to calculate the Helmholtz free energy of an adsorbed mixture using a simple mixing rule, which allows us to predict mixture adsorption isotherms from pure-component adsorption isotherms. The approach, which accounts for the difference in composition between molecular layers, is tested against experimental data for acetone-chloroform mixture (a non-ideal mixture) adsorption on graphitized thermal carbon black at 50 °C.
Abstract:
It is well known that one of the obstacles to effective forecasting of exchange rates is heteroscedasticity (non-stationary conditional variance). The autoregressive conditional heteroscedastic (ARCH) model and its variants have been used to estimate a time dependent variance for many financial time series. However, such models are essentially linear in form and we can ask whether a non-linear model for variance can improve results just as non-linear models (such as neural networks) for the mean have done. In this paper we consider two neural network models for variance estimation. Mixture Density Networks (Bishop 1994, Nix and Weigend 1994) combine a Multi-Layer Perceptron (MLP) and a mixture model to estimate the conditional data density. They are trained using a maximum likelihood approach. However, it is known that maximum likelihood estimates are biased and lead to a systematic under-estimate of variance. More recently, a Bayesian approach to parameter estimation has been developed (Bishop and Qazaz 1996) that shows promise in removing the maximum likelihood bias. However, up to now, this model has not been used for time series prediction. Here we compare these algorithms with two other models to provide benchmark results: a linear model (from the ARIMA family), and a conventional neural network trained with a sum-of-squares error function (which estimates the conditional mean of the time series with a constant variance noise model). This comparison is carried out on daily exchange rate data for five currencies.
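The maximum-likelihood variance bias mentioned above is easy to verify numerically: the ML estimator divides by n rather than n − 1, so on average it under-estimates the true variance by a factor of (n − 1)/n, which is what motivates the Bayesian correction the abstract cites.

```python
import numpy as np

# Monte Carlo check of the maximum-likelihood variance bias for small
# samples drawn from a Gaussian with known variance sigma2.
rng = np.random.default_rng(1)
sigma2, n, trials = 4.0, 5, 200_000
samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
mu_hat = samples.mean(axis=1, keepdims=True)     # per-trial sample mean
ml_var = ((samples - mu_hat) ** 2).mean(axis=1)  # ML estimator (divides by n)
bias_ratio = ml_var.mean() / sigma2              # ≈ (n - 1)/n = 0.8
```

The same systematic shrinkage appears in conditional-variance networks trained by maximum likelihood, which is the bias the Bayesian approach of Bishop and Qazaz aims to remove.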