948 results for Fault location algorithms


Relevance: 20.00%

Publisher:

Abstract:

Models are presented for the optimal location of hubs in airline networks that take congestion effects into consideration. Hubs, which are the most congested airports, are modeled as M/D/c queuing systems, that is, Poisson arrivals, deterministic service times, and $c$ servers. A formula is derived for the probability of a given number of customers in the system, which is later used to propose a probabilistic constraint. This constraint limits the probability of having $b$ airplanes in queue to be less than a value $\alpha$. Owing to the computational complexity of the formulation, the model is solved using a metaheuristic based on tabu search. Computational experience is presented.
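
As an illustration, a chance constraint of the kind the abstract describes could be written as follows (the notation here is ours, not necessarily the paper's):

\[
\Pr\{ N_k \ge c_k + b \} \le \alpha \qquad \text{for every hub } k,
\]

where $N_k$ is the number of airplanes at hub $k$ (in service or waiting) under the M/D/$c_k$ queuing model, so that the event $N_k \ge c_k + b$ means at least $b$ airplanes are waiting in queue.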

Relevance: 20.00%

Publisher:

Abstract:

In this paper we address the issue of locating hierarchical facilities in the presence of congestion. Two hierarchical models are presented, in which lower-level servers attend to requests first and some of the served customers are then referred to higher-level servers. In the first model, the objective is to find the minimum number of servers, and their locations, that cover a given region within a distance or time standard. The second model is cast as a maximal covering location formulation. A heuristic procedure is then presented, together with computational experience. Finally, some extensions of these models that address other types of spatial configurations are offered.
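
For orientation, the first model's objective resembles the classical set covering location problem; a minimal generic statement (not the paper's exact hierarchical formulation, which adds congestion and referral flows) is:

\[
\min \sum_{j \in J} y_j
\quad \text{s.t.} \quad
\sum_{j \in J \,:\, d_{ij} \le S} y_j \ge 1 \;\; \forall i \in I,
\qquad y_j \in \{0,1\},
\]

where $y_j = 1$ if a server is opened at candidate site $j$, $d_{ij}$ is the distance or travel time from demand point $i$ to site $j$, and $S$ is the coverage standard.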

Relevance: 20.00%

Publisher:

Abstract:

We propose a model, and solution methods, for locating a fixed number of multiple-server, congestible common service centers or congestible public facilities. Locations are chosen so as to minimize consumers' congestion (or queuing) and travel costs, considering that all of the demand must be served. Customers choose the facilities to which they travel in order to receive service at minimum travel and congestion cost. As a proxy for this criterion, total travel and waiting costs are minimized. The travel cost is a general function of the origin and destination of the demand, while the congestion cost is a general function of the number of customers in queue at the facilities.
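
The objective described above can be made concrete with a small sketch. The snippet below evaluates total travel plus congestion cost for a given customer-to-facility assignment; the M/M/1 mean queue length is used purely as a stand-in for the paper's general (unspecified) congestion function, and all names are illustrative:

    # Illustrative sketch only: the M/M/1 queue-length term is an assumption
    # standing in for the paper's general congestion cost function.
    def total_cost(assignment, travel_cost, demand, service_rate, wait_weight=1.0):
        # assignment[i] = facility chosen by customer i
        load = {}
        for i, j in assignment.items():
            load[j] = load.get(j, 0.0) + demand[i]
        # travel component: demand-weighted origin-destination costs
        cost = sum(demand[i] * travel_cost[i, j] for i, j in assignment.items())
        # congestion component: expected queue length at each facility
        for j, lam in load.items():
            rho = lam / service_rate[j]
            assert rho < 1.0, f"facility {j} is overloaded"
            cost += wait_weight * rho * rho / (1.0 - rho)  # M/M/1 mean queue length
        return cost

    # tiny example: two customers served by one facility
    print(total_cost({"a": 0, "b": 0},
                     {("a", 0): 1.0, ("b", 0): 2.0},
                     {"a": 0.3, "b": 0.4},
                     {0: 1.0}))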

Relevance: 20.00%

Publisher:

Abstract:

PRECON S.A. is a manufacturing company dedicated to producing prefabricated concrete parts for several industries, such as rail transportation and agriculture. Recently, PRECON signed a contract with RENFE, the Spanish national rail transportation company, to manufacture pre-stressed concrete sleepers for sidings of the new high-speed train (AVE) railways. The scheduling problem associated with the manufacturing process of the sleepers is very complex, since it involves several constraints and objectives. The constraints relate to production capacity, the quantity of available moulds, demand satisfaction, and other operational requirements. The two main objectives are maximizing the usage of the manufacturing resources and minimizing mould movements. We developed a deterministic crowding genetic algorithm for this multiobjective problem. The algorithm has proved to be a powerful and flexible tool for solving the large-scale instance of this complex real-world scheduling problem.
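
Deterministic crowding is a standard niching replacement rule for genetic algorithms. The sketch below shows the rule in isolation; it is not the paper's implementation, and the fitness, crossover, mutation, and distance functions are placeholders:

    import random

    def deterministic_crowding_step(pop, fitness, crossover, mutate, distance):
        """One generation of deterministic crowding: each child competes
        only against its most similar parent (assumes even population size)."""
        random.shuffle(pop)
        next_pop = []
        for p1, p2 in zip(pop[0::2], pop[1::2]):
            c1, c2 = crossover(p1, p2)
            c1, c2 = mutate(c1), mutate(c2)
            # pair children with parents so total parent-child distance is minimal
            if distance(p1, c1) + distance(p2, c2) <= distance(p1, c2) + distance(p2, c1):
                pairs = [(p1, c1), (p2, c2)]
            else:
                pairs = [(p1, c2), (p2, c1)]
            # a child replaces its paired parent only if it is at least as fit
            next_pop.extend(child if fitness(child) >= fitness(parent) else parent
                            for parent, child in pairs)
        return next_pop

Because replacement is restricted to the most similar parent, the rule preserves multiple niches in the population, which suits multiobjective problems like the one described.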

Relevance: 20.00%

Publisher:

Abstract:

The results of applying geophysical electromagnetic prospecting methods to the problem of spatially locating the Quaternary travertine formations of the Banyoles depression are presented.

Relevance: 20.00%

Publisher:

Abstract:

Recently, several anonymization algorithms have appeared for privacy preservation on graphs. Some of them are based on randomization techniques and some on k-anonymity concepts. Both can be used to obtain an anonymized graph with a given k-anonymity value. In this paper we compare algorithms based on both techniques for obtaining an anonymized graph with a desired k-anonymity value. We analyze the complexity of these methods in generating anonymized graphs and the quality of the resulting graphs.
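
One common instantiation of k-anonymity for graphs is k-degree anonymity: every degree value must be shared by at least k nodes. A minimal check (a generic illustration, not code from the paper) is:

    from collections import Counter

    def k_degree_anonymity(degrees):
        """Return the largest k such that the degree sequence is k-anonymous,
        i.e. every distinct degree value occurs at least k times."""
        counts = Counter(degrees)
        return min(counts.values())

    # Example: a star graph on 5 nodes has degrees [4, 1, 1, 1, 1],
    # so it is only 1-degree-anonymous (the hub's degree is unique).
    print(k_degree_anonymity([4, 1, 1, 1, 1]))  # -> 1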

Relevance: 20.00%

Publisher:

Abstract:

P130: A High-Resolution 2D/3D Seismic Study of a Thrust Fault Zone in Lake Geneva, Switzerland

M. Scheidhauer, M. Beres, D. Dupuy, and F. Marillier, Institute of Geophysics, University of Lausanne, 1015 Lausanne, Switzerland

Summary: A high-resolution three-dimensional (3D) seismic reflection survey has been conducted in Lake Geneva near the city of Lausanne, Switzerland, where the faulted Molasse basement (Tertiary sandstones) is overlain by complex Quaternary sedimentary structures. Using a single 48-channel streamer, an area of 1200 m x 600 m was surveyed in 10 days. With a 5-m shot spacing and a receiver spacing of 2.5 m in the inline direction and 7.5 m in the crossline direction, 12-fold data coverage was achieved. A maximum penetration depth of ~150 m was achieved with a 15 cu. in. water gun operated at 140 bars. The multi-channel data allow the determination of an accurate velocity field for 3D processing, and they show particularly clean images of the fault zone and the overlying sediments in horizontal and vertical sections. In order to compare different sources, inline 55 was repeated with a 30/30 and a 15/15 cu. in. double-chamber air gun (Mini GI) operated at 100 and 80 bars, respectively. A maximum penetration depth of ~450 m was achieved with this source.
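
As a quick consistency check on the quoted acquisition parameters, the standard 2D CMP fold formula (assuming all 48 channels are live, inline geometry) reproduces the stated coverage:

\[
\text{fold} = \frac{N\,\Delta g}{2\,\Delta s} = \frac{48 \times 2.5\ \mathrm{m}}{2 \times 5\ \mathrm{m}} = 12,
\]

where $N$ is the channel count, $\Delta g$ the inline receiver spacing, and $\Delta s$ the shot spacing.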

Relevance: 20.00%

Publisher:

Abstract:

Recent theoretical work in economic geography has shown that agglomeration forces can mitigate 'race-to-the-bottom' tax competition, by partly or fully offsetting firms' sensitivity to tax differentials. We test this proposition using data on firm births across Swiss municipalities. We find that corporate taxes deter firm births less in more spatially concentrated sectors. Firms in sectors with an agglomeration intensity in the top quintile are less than half as responsive to differences in corporate tax burdens as firms in sectors with an agglomeration intensity in the bottom quintile. Hence, agglomeration economies do appear to attenuate the impact of tax differentials on firms' location choices.

Relevance: 20.00%

Publisher:

Abstract:

Lesions of anatomical brain networks result in functional disturbances of brain systems and behavior which depend sensitively, often unpredictably, on the lesion site. The availability of whole-brain maps of structural connections within the human cerebrum and our increased understanding of the physiology and large-scale dynamics of cortical networks allow us to investigate the functional consequences of focal brain lesions in a computational model. We simulate the dynamic effects of lesions placed in different regions of the cerebral cortex by recording changes in the pattern of endogenous ("resting-state") neural activity. We find that lesions produce specific patterns of altered functional connectivity among distant regions of cortex, often affecting both cortical hemispheres. The magnitude of these dynamic effects depends on the lesion location and is partly predicted by structural network properties of the lesion site. In the model, lesions along the cortical midline and in the vicinity of the temporo-parietal junction result in large and widely distributed changes in functional connectivity, while lesions of primary sensory or motor regions remain more localized. The model suggests that dynamic lesion effects can be predicted on the basis of specific network measures of structural brain networks and that these effects may be related to known behavioral and cognitive consequences of brain lesions.

Relevance: 20.00%

Publisher:

Abstract:

This manual captures the experience of practitioners in the Iowa Department of Transportation's (Iowa DOT's) Office of Location and Environment (OLE). It also documents the need for coordinated project development efforts during the highway project planning (location study) phase and engineering design. The location study phase establishes:

* The definition of, and need for, the highway improvement project
* The range of alternatives and many key attributes of the project's design
* The recommended alternative, its impacts, and the agreed-to conditions for project approval

The location study process involves developing engineering alternatives, collecting engineering and environmental data, and completing design refinements to accomplish functional designs. The items above also embody the basic content required for projects compliant with the National Environmental Policy Act (NEPA) of 1969, which directs federal agencies to use a systematic, interdisciplinary approach during the planning process whenever proposed actions (or "projects") have the potential for environmental impacts. In doing so, NEPA requires coordination with stakeholders, review, comment, and public disclosure. Are location studies and environmental studies more about the process or the documents? If properly conducted, they concern both: unbiased and reasonable processes with quality and timely documents. In essence, every project is a story that needs to be told. Engineering and environmental regulations and guidance, as documented in this manual, will help project staff and managers become better storytellers.

Relevance: 20.00%

Publisher:

Abstract:

The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to ensure optimal characterization and performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods are governed by the same equations over a large range of frequencies, allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundred kilometers in depth. Unfortunately, they suffer from a significant loss of resolution with depth due to the diffusive nature of electromagnetic fields. Therefore, subsurface model estimations that use these methods should incorporate a priori information to better constrain the models, and should provide appropriate measures of model uncertainty. During my thesis, I developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.

In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints on the expected ranges of the changes using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work improves the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models.

In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters, because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion.

Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected saline tracer plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected plume, together with its location, are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.

The methodology is also applied to a saline and acid tracer injection test performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the tracer plume thanks to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically plausible based on present-day understanding. This issue may be related to the limited quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful for characterizing and monitoring the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and in terms of posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods and offer great flexibility to incorporate additional information when available.
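
For readers unfamiliar with the machinery, a bare-bones Metropolis sampler of the kind underlying pixel-based probabilistic inversion looks like the following; the forward model, noise level, and starting model are placeholders, and the thesis's actual algorithms are far more elaborate (regularized priors, joint data sets, model reduction):

    import numpy as np

    rng = np.random.default_rng(0)

    def metropolis(forward, d_obs, sigma_d, m0, step, n_iter=10000):
        """Sample p(m | d_obs) ∝ exp(-||forward(m) - d_obs||^2 / (2 sigma_d^2))
        using a symmetric Gaussian proposal (flat prior, for brevity)."""
        def loglik(m):
            r = forward(m) - d_obs
            return -0.5 * np.sum(r * r) / sigma_d**2
        m, ll = m0.copy(), None
        ll = loglik(m)
        samples = []
        for _ in range(n_iter):
            m_prop = m + step * rng.standard_normal(m.shape)
            ll_prop = loglik(m_prop)
            if np.log(rng.random()) < ll_prop - ll:  # Metropolis acceptance rule
                m, ll = m_prop, ll_prop
            samples.append(m.copy())
        return np.array(samples)

    # toy usage: recover a 3-parameter model from noisy linear data
    G = rng.standard_normal((20, 3))
    m_true = np.array([1.0, -2.0, 0.5])
    d_obs = G @ m_true + 0.1 * rng.standard_normal(20)
    chain = metropolis(lambda m: G @ m, d_obs, 0.1, np.zeros(3), step=0.05)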

Relevance: 20.00%

Publisher:

Abstract:

ABSTRACT: BACKGROUND: Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have shown that a patient's antibody reaction in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score, Innogenetics) provides information on the duration of infection. Here, we sought to further investigate the diagnostic specificity of various Inno-Lia algorithms and to identify factors affecting it. METHODS: Plasma samples of 714 selected patients of the Swiss HIV Cohort Study, infected for longer than 12 months and representing all viral clades and stages of chronic HIV-1 infection, were tested blindly by Inno-Lia and classified as either incident (up to 12 months) or older infection by 24 different algorithms. Of the total, 524 patients received HAART, 308 had HIV-1 RNA below 50 copies/mL, and 620 were infected by an HIV-1 non-B clade. Using logistic regression analysis, we evaluated factors that might affect the specificity of these algorithms. RESULTS: HIV-1 RNA <50 copies/mL was associated with significantly lower reactivity to all five HIV-1 antigens of the Inno-Lia and impaired specificity of most algorithms. Among 412 patients either untreated or with HIV-1 RNA ≥50 copies/mL despite HAART, the median specificity of the algorithms was 96.5% (range 92.0-100%). The only factor that significantly promoted false-incident results in this group was age, with false-incident results increasing by a few percent per additional year. HIV-1 clade, HIV-1 RNA, CD4 percentage, sex, disease stage, and testing modalities had no significant effect. Results were similar among 190 untreated patients. CONCLUSIONS: The specificity of most Inno-Lia algorithms was high and not affected by HIV-1 variability, advanced disease, and other factors promoting false-recent results in other STARHS. Specificity should be good in any group of untreated HIV-1 patients.

Relevance: 20.00%

Publisher:

Abstract:

Using a head-mounted eye tracker, we assessed spatial recognition abilities (e.g., reactions to object permutation, removal, or replacement with a new object) in participants with intellectual disabilities. The "Intellectual Disabilities (ID)" group (n=40) achieved a 93.7% success rate, whereas the "Normal Control" group (n=40) scored 55.6% and took longer to fix their attention on the displaced object. The participants with an intellectual disability thus had a more accurate perception of spatial changes than the controls. Interestingly, the ID participants were more reactive to object displacement than to object removal. In the specific test of novelty detection, however, the scores were similar, with both groups approaching 100% detection. Analysis of the strategies used by the ID group revealed that they engaged in more systematic object checking and were more sensitive than the control group to changes in the structure of the environment. Indeed, during the familiarisation phase, the ID group explored the collection of objects more slowly and fixed their gaze for longer on a significantly smaller number of fixation points during visual sweeping.

Relevance: 20.00%

Publisher:

Abstract:

The objective of this paper is to explore the relative importance of each of Marshall's agglomeration mechanisms by examining the location of new manufacturing firms in Spain. In particular, we estimate the count of new firms by industry and location as a function of (pre-determined) local employment levels in industries that: 1) use similar workers (labor market pooling); 2) have a customer-supplier relationship (input sharing); and 3) use similar technologies (knowledge spillovers). We examine the variation in the creation of new firms across cities and across municipalities within large cities to shed light on the geographical scope of each of the three agglomeration mechanisms. We find evidence of all three agglomeration mechanisms, although their incidence differs depending on the geographical scale of the analysis.
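
Count models of this kind are typically estimated as Poisson regressions. A schematic version with statsmodels follows; the data and variable names are illustrative stand-ins, not the paper's actual dataset or specification:

    import pandas as pd
    import statsmodels.api as sm

    # df: one row per industry-location cell, with the count of new firm
    # births and pre-determined local employment proxying each mechanism.
    df = pd.DataFrame({
        "births":    [3, 0, 5, 1, 2, 7, 0, 4],
        "emp_labor": [120, 40, 300, 80, 150, 420, 30, 260],  # similar workers
        "emp_io":    [200, 10, 150, 60, 90, 310, 20, 180],   # customer-supplier links
        "emp_tech":  [90, 25, 220, 35, 60, 280, 15, 170],    # similar technologies
    })
    X = sm.add_constant(df[["emp_labor", "emp_io", "emp_tech"]])
    model = sm.GLM(df["births"], X, family=sm.families.Poisson()).fit()
    print(model.summary())

Comparing such estimates at the city level versus the within-city municipality level is one way to probe the geographical scope of each mechanism, as the abstract describes.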