948 results for Fault location algorithms
Abstract:
The protective effect of cations, especially Ca and Mg, against aluminum (Al) rhizotoxicity has been extensively investigated in recent decades. The mechanisms by which this protection occurs, however, are only beginning to be elucidated. Six experiments were carried out here to characterize the protective effect of Mg application in relation to timing, location, and crop specificity: Experiment 1 - Protective effect of Mg compared to Ca; Experiment 2 - Protective effect of Mg on distinct root classes of 15 soybean genotypes; Experiment 3 - Effect of timing of Mg supply on the response of soybean cultivars to Al; Experiment 4 - Investigating whether the Mg protective effect is apoplastic or symplastic using a split-root system; Experiment 5 - Protective effect of Mg supplied in solution or by foliar spraying; and Experiment 6 - Protective effect of Mg on Al rhizotoxicity in other crops. The addition of 50 mmol L-1 Mg to solutions containing toxic Al increased Al tolerance in 15 soybean cultivars, causing cultivars known to be Al-sensitive to behave as if they were tolerant. The protective action of Mg appears to require a constant Mg supply in the external medium. Supplying Mg up to 6 h after root exposure to Al was sufficient to maintain normal soybean root growth, but root growth was not recovered when Mg was added 12 h after Al treatment. Mg applied to the half of the root system not exposed to Al did not prevent Al toxicity in the other half, which was exposed to Al without Mg in the rooting medium, indicating an external protection mechanism of Mg. Foliar spraying with Mg also failed to decrease Al toxicity, again indicating a possible apoplastic role of Mg. The protective effect of Mg appeared to be soybean-specific, since Mg supply did not substantially improve root elongation in sorghum, wheat, corn, cotton, rice, or snap bean grown in the presence of toxic Al concentrations.
Abstract:
This paper analyses empirically how differences in local taxes affect the intraregional location of new manufacturing plants. These effects are examined within the random profit maximization framework while accounting for the presence of different types of agglomeration economies (localization, urbanization, and Jacobs' economies) at the municipal level. We look at the location decisions of more than 10,000 establishments locating between 1996 and 2003 across more than 400 municipalities in Catalonia, a Spanish region. It is necessary to restrict the choice set to the local labor market and, above all, to control for agglomeration economies in order to identify the effects of taxes on the location of new establishments.
Abstract:
The objective of this paper is to explore the relative importance of each of Marshall's agglomeration mechanisms by examining the location of new manufacturing firms in Spain. In particular, we estimate the count of new firms by industry and location as a function of (pre-determined) local employment levels in industries that: 1) use similar workers (labor market pooling); 2) have a customer-supplier relationship (input sharing); and 3) use similar technologies (knowledge spillovers). We examine the variation in the creation of new firms across cities and across municipalities within large cities to shed light on the geographical scope of each of the three agglomeration mechanisms. We find evidence of all three agglomeration mechanisms, although their incidence differs depending on the geographical scale of the analysis.
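A minimal sketch of the kind of count regression described in this abstract: new-firm counts modeled as a function of local employment in labor-pooling, input-sharing, and knowledge-spillover industries. This is not the authors' specification; the column names and data file are hypothetical.

```python
# Hedged sketch: Poisson count regression of new firms on Marshallian
# agglomeration proxies. Column names and the data file are hypothetical,
# not taken from the paper.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("firm_births_by_location.csv")  # hypothetical input data

# Log local employment in industries sharing workers, inputs, and technology
X = df[["log_emp_labor_pooling", "log_emp_input_sharing", "log_emp_knowledge"]]
X = sm.add_constant(X)
y = df["new_firms"]  # count of new firms per industry-location cell

model = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(model.summary())
```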
Abstract:
During the nineteenth century, the Spanish economy went through the early stages of industrialization. This process took place in parallel with the integration of the domestic market for goods and factors, at a time when liberal reforms and the construction of the railway network, among other factors, brought about a substantial fall in transport costs. While this progressive integration of the Spanish domestic market was under way, significant changes occurred in the pattern of industrial location. On the one hand, there was a considerable increase in the spatial concentration of industry from the mid-nineteenth century until the Civil War; on the other, regional specialization increased. What forces generated these changes? From a theoretical standpoint, the Heckscher-Ohlin model suggests that the spatial distribution of economic activity is determined by the comparative advantage of territories as a function of their relative factor endowments. In turn, New Economic Geography (NEG) models show the existence of a bell-shaped relationship between the process of economic integration and the degree of geographical concentration of industrial activity. This article empirically examines the determinants of industrial location in Spain between 1856 and 1929 by estimating a model that combines Heckscher-Ohlin elements with the factors highlighted by NEG, with the aim of testing the relative strength of the arguments associated with these two interpretations in shaping the location of industry in Spain. The results show that both factor endowments and NEG-type mechanisms were decisive elements in explaining the geographical distribution of industry from the nineteenth century onward, although their relative strength varied over time.
Abstract:
In the last five years, Deep Brain Stimulation (DBS) has become the most popular and effective surgical technique for the treatment of Parkinson's disease (PD). The Subthalamic Nucleus (STN) is the usual target for DBS. Unfortunately, the STN is generally not visible in common medical imaging modalities, so atlas-based segmentation is commonly used to locate it in the images. In this paper, we propose a scheme that allows both a comparison between different registration algorithms and an evaluation of their ability to locate the STN automatically. Using this scheme, we can evaluate the expert variability against the error of the algorithms, and we demonstrate that automatic STN location is possible and as accurate as the methods currently used.
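A minimal sketch of the kind of evaluation described above: comparing the localization error of an automatic (atlas-based) STN estimate against inter-expert variability, using Euclidean distances between 3D target coordinates. The coordinates and function names below are illustrative, not taken from the paper.

```python
# Hedged sketch: compare automatic STN localization error with expert
# variability using Euclidean distances in millimetres. All coordinates
# below are invented for illustration.
import numpy as np

def localization_error(estimate, reference):
    """Euclidean distance between two 3D points (e.g., in mm)."""
    return float(np.linalg.norm(np.asarray(estimate) - np.asarray(reference)))

expert_targets = np.array([[12.1, -2.3, -4.0],   # expert 1
                           [11.8, -2.6, -3.7],   # expert 2
                           [12.4, -2.1, -4.2]])  # expert 3
automatic_target = np.array([12.0, -2.5, -4.1])  # atlas-based estimate

consensus = expert_targets.mean(axis=0)
expert_variability = [localization_error(t, consensus) for t in expert_targets]
auto_error = localization_error(automatic_target, consensus)

print("Inter-expert variability (mm):", np.round(expert_variability, 2))
print("Automatic localization error (mm):", round(auto_error, 2))
```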
Abstract:
PURPOSE: To determine the lower limit of dose reduction with hybrid and fully iterative reconstruction algorithms in the detection of endoleaks and in-stent thrombus of the thoracic aorta with computed tomographic (CT) angiography, by applying protocols with different tube energies and automated tube current modulation. MATERIALS AND METHODS: The calcification insert of an anthropomorphic cardiac phantom was replaced with an aortic aneurysm model containing a stent, simulated endoleaks, and an intraluminal thrombus. CT was performed at tube energies of 120, 100, and 80 kVp with incrementally increasing noise indexes (NIs) of 16, 25, 34, 43, 52, 61, and 70 and a 2.5-mm section thickness. The NI directly controls radiation exposure; a higher NI allows greater image noise and decreases radiation. Images were reconstructed with filtered back projection (FBP) and with hybrid and fully iterative algorithms. Five radiologists independently analyzed lesion conspicuity to assess sensitivity and specificity. Mean attenuation (in Hounsfield units) and its standard deviation were measured in the aorta to calculate the signal-to-noise ratio (SNR). Attenuation and SNR of the different protocols and algorithms were analyzed with analysis of variance or the Welch test, depending on the data distribution. RESULTS: Both sensitivity and specificity were 100% for simulated lesions on images with 2.5-mm section thickness and an NI of 25 (3.45 mGy), 34 (1.83 mGy), or 43 (1.16 mGy) at 120 kVp; an NI of 34 (1.98 mGy), 43 (1.23 mGy), or 61 (0.61 mGy) at 100 kVp; and an NI of 43 (1.46 mGy) or 70 (0.54 mGy) at 80 kVp. SNR values showed similar results. With the fully iterative algorithm, mean attenuation of the aorta decreased significantly in reduced-dose protocols compared with control protocols at 100 kVp (311 HU at NI 16 vs 290 HU at NI 70, P ≤ .0011) and 80 kVp (400 HU at NI 16 vs 369 HU at NI 70, P ≤ .0007). CONCLUSION: Endoleaks and in-stent thrombus of the thoracic aorta were detectable down to 1.46 mGy (80 kVp) with FBP, 1.23 mGy (100 kVp) with the hybrid algorithm, and 0.54 mGy (80 kVp) with the fully iterative algorithm.
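The SNR measurement described above is simply the mean aortic attenuation divided by its standard deviation within a region of interest. A minimal sketch follows; the HU samples are illustrative, not the study's data.

```python
# Hedged sketch: SNR from a region of interest (ROI) in the aorta,
# computed as mean attenuation (HU) divided by its standard deviation.
# The HU samples below are invented for illustration.
import numpy as np

roi_hu = np.array([305, 298, 312, 321, 290, 307, 315, 301])  # HU samples in ROI

mean_attenuation = roi_hu.mean()
noise = roi_hu.std(ddof=1)          # standard deviation used as image noise
snr = mean_attenuation / noise

print(f"Mean attenuation: {mean_attenuation:.1f} HU")
print(f"Noise (SD): {noise:.1f} HU")
print(f"SNR: {snr:.1f}")
```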
Abstract:
A mobile ad hoc network (MANET) is a decentralized and infrastructure-less network. This thesis aims to provide system-level support for developers of applications or protocols in such networks. To this end, we propose contributions in both the algorithmic realm and the practical realm. In the algorithmic realm, we contribute to the field by proposing different context-aware broadcast and multicast algorithms for MANETs, namely six-shot broadcast, six-shot multicast, PLAN-B, and a generic algorithmic approach to optimize the power consumption of existing algorithms. We compare each proposed algorithm to existing algorithms that are either probabilistic or context-aware, and we evaluate their performance through simulations. We demonstrate that in some cases context-aware information, such as location or signal strength, can improve efficiency. In the practical realm, we propose a testbed framework, ManetLab, to implement and deploy MANET-specific protocols and to evaluate their performance. This testbed framework aims to increase the accuracy of performance evaluation compared with simulations, while keeping the ease of use offered by simulators to reproduce a performance evaluation. By evaluating the performance of different probabilistic algorithms with ManetLab, we observe that simulations and testbeds should be used in a complementary way. In addition to these original contributions, we provide two surveys of system-level support for ad hoc communications in order to establish the state of the art: the first covers existing broadcast algorithms, and the second covers existing middleware solutions and the way they deal with privacy, especially location privacy.
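For context, a generic probabilistic (gossip-style) rebroadcast decision of the kind the thesis uses as a comparison baseline is sketched below. This is not an implementation of six-shot broadcast or PLAN-B; the forwarding probability and callbacks are arbitrary illustrations.

```python
# Hedged sketch: a generic probabilistic (gossip) rebroadcast decision,
# of the kind used as a baseline against context-aware broadcast algorithms.
# This is NOT six-shot broadcast or PLAN-B; parameters are arbitrary.
import random

class GossipNode:
    def __init__(self, node_id, forward_probability=0.65):
        self.node_id = node_id
        self.p = forward_probability
        self.seen = set()  # message ids already handled

    def on_receive(self, message_id, payload, send):
        """Handle an incoming broadcast; rebroadcast with probability p."""
        if message_id in self.seen:
            return  # duplicate: never rebroadcast twice
        self.seen.add(message_id)
        if random.random() < self.p:
            send(message_id, payload)  # forward to neighbors

# Usage example with a dummy send callback
node = GossipNode("n1")
node.on_receive("msg-42", b"hello", send=lambda mid, data: print("rebroadcast", mid))
```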
Abstract:
The role of rural demand-responsive transit is changing, and with that change comes an increasing need for technology. As long as rural transit was limited to a type of social service transportation for a specific set of clients who primarily traveled in groups to common meal sites, work centers for the disabled, or clinics in larger communities, a preset calendar augmented by notes on a yellow legal pad was sufficient to develop schedules. Any individual trips were arranged at least 24 to 48 hours ahead of time and were carefully scheduled the night before, in half-hour or twenty-minute windows, by a dispatcher who knew every lane in the service area. Since it took hours to build the schedule, any last-minute changes could wreak havoc with the plans and raise the stress level in the dispatch office. Nevertheless, given these parameters, a manual scheduling system worked for a small demand-responsive operation.
Abstract:
Report produced by the Iowa Department of Transportation about Iowa Safety with Tools and Aggregation.
Abstract:
The ability to determine the location and relative strength of all transcription-factor binding sites in a genome is important both for a comprehensive understanding of gene regulation and for effective promoter engineering in biotechnological applications. Here we present a bioinformatically driven experimental method to accurately define the DNA-binding sequence specificity of transcription factors. A generalized profile was used as a predictive quantitative model for binding sites, and its parameters were estimated from in vitro-selected ligands using standard hidden Markov model training algorithms. Computer simulations showed that several thousand low- to medium-affinity sequences are required to generate a profile of desired accuracy. To produce data on this scale, we applied high-throughput genomics methods to the biochemical problem addressed here. A method combining systematic evolution of ligands by exponential enrichment (SELEX) and serial analysis of gene expression (SAGE) protocols was coupled to an automated quality-controlled sequence extraction procedure based on Phred quality scores. This allowed the sequencing of a database of more than 10,000 potential DNA ligands for the CTF/NFI transcription factor. The resulting binding-site model defines the sequence specificity of this protein with a high degree of accuracy not achieved earlier and thereby makes it possible to identify previously unknown regulatory sequences in genomic DNA. A covariance analysis of the selected sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism.
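A minimal illustration of the "generalized profile" idea described above: estimating a position weight matrix (a simplified, gap-free profile) from selected binding sites and scoring a candidate sequence. The sites and pseudocount are invented; this is not the authors' trained CTF/NFI model.

```python
# Hedged sketch: build a simple position weight matrix (PWM) from selected
# binding sites and score a candidate sequence. A PWM is a simplified,
# gap-free stand-in for a generalized profile; the sites below are invented.
import math
from collections import Counter

BASES = "ACGT"

def build_pwm(sites, pseudocount=0.5, background=0.25):
    """Log-odds matrix: one dict per position, estimated from aligned sites."""
    length = len(sites[0])
    pwm = []
    for i in range(length):
        counts = Counter(site[i] for site in sites)
        total = len(sites) + 4 * pseudocount
        pwm.append({b: math.log2(((counts[b] + pseudocount) / total) / background)
                    for b in BASES})
    return pwm

def score(pwm, sequence):
    """Sum of per-position log-odds scores for a sequence of matching length."""
    return sum(pwm[i][base] for i, base in enumerate(sequence))

sites = ["TTGGCA", "TTGGCT", "TTGGCA", "TAGGCA"]  # invented aligned sites
pwm = build_pwm(sites)
print(round(score(pwm, "TTGGCA"), 2), round(score(pwm, "ACGTAC"), 2))
```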
Abstract:
For the last two decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Given the growing popularity of supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the widely used supermatrix approach). In this study, seven of the most commonly used supertree methods are investigated using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees with the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree, and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test whether performance was affected by the heuristic searches rather than by the algorithms themselves. Based on our results, two main groups of supertree methods were identified: on the one hand, the matrix representation with parsimony (MRP), MinFlip, and MinCut methods performed well according to our criteria, whereas the average consensus, split fit, and most similar supertree methods showed poorer performance or at least did not behave in the same way as the total evidence tree. Results for the super distance matrix, the most recent approach tested here, were promising, with at least one derived method performing as well as MRP, MinFlip, and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting correct behavior of the heuristic searches and relatively low sensitivity of the algorithms to data set size and missing data. Results also showed that MRP analyses can reach a high level of quality even with a simple heuristic search strategy, with the exception of MRP with the Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and in increased computing power to handle large data sets. The latter would prove particularly useful for promising approaches such as the maximum quartet fit method, which still requires substantial computing power.
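One common way to quantify the "similarity of the supertrees with the input trees" criterion is a Robinson-Foulds-style comparison of the clades (splits) each tree implies. A minimal sketch over hand-written clade sets follows; the taxa and trees are invented, and a real analysis would extract splits from Newick trees with a phylogenetics library rather than list them by hand.

```python
# Hedged sketch: a Robinson-Foulds-style distance computed on clade sets.
# Trees are represented directly as sets of clades (frozensets of taxa);
# the example trees are invented and not from the Sapindaceae data set.
def rf_distance(clades_a, clades_b):
    """Symmetric difference of the two clade sets (unnormalized RF distance)."""
    return len(clades_a ^ clades_b)

def normalized_rf(clades_a, clades_b):
    """RF distance scaled to [0, 1] by the total number of clades compared."""
    total = len(clades_a) + len(clades_b)
    return rf_distance(clades_a, clades_b) / total if total else 0.0

supertree_clades = {frozenset("AB"), frozenset("ABC"), frozenset("DE")}
input_tree_clades = {frozenset("AB"), frozenset("ABD"), frozenset("DE")}

print(rf_distance(supertree_clades, input_tree_clades))               # 2
print(round(normalized_rf(supertree_clades, input_tree_clades), 2))   # 0.33
```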
Abstract:
The Monte Perdido thrust fault (southern Pyrenees) consists of a 6-m-thick interval of intensely deformed clay-bearing rocks. The fault zone is affected by pervasive pressure solution seams and numerous shear surfaces. Calcite extensional-shear veins are present along the shear surfaces. The angular relationships between the two structures indicate that the shear surfaces developed at a high angle (70°) to the local maximum principal stress axis σ1. Two main stages of deformation are present. The first stage corresponds to the development of calcite shear veins by a combination of shear surface reactivation and extensional mode I rupture. The second stage corresponds to chlorite precipitation along the previously reactivated shear surfaces. The pore fluid factor λ computed for the two deformation episodes indicates high fluid pressures during Monte Perdido thrust activity. During the first stage of deformation, reactivation of the shear surfaces was facilitated by a suprahydrostatic fluid pressure, with a pore fluid factor λv equal to 0.89. For the second stage, the fluid pressure remained high (with a λ value ranging between 0.77 and 0.84) even with the presence of weak chlorite along the shear surfaces. Furthermore, evidence of hydrostatic fluid pressure during calcite cement precipitation supports the interpretation that incremental shear surface reactivations are correlated with cyclic fluid pressure fluctuations consistent with a fault-valve model.
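For reference, the pore fluid factor quoted above follows the standard fault-valve convention; the definition below is the usual one, and the hydrostatic reference value is an approximation for typical crustal densities, not a value taken from this study.

```latex
% Standard definition of the pore fluid factor (fault-valve literature);
% the hydrostatic estimate assumes typical water and rock densities.
\[
  \lambda_v = \frac{P_f}{\sigma_v} = \frac{P_f}{\rho_{\mathrm{rock}}\, g\, z},
  \qquad
  \lambda_v^{\mathrm{hydrostatic}} \approx \frac{\rho_{\mathrm{water}}}{\rho_{\mathrm{rock}}} \approx 0.4,
  \qquad
  \lambda_v = 1 \ \text{at lithostatic fluid pressure.}
\]
```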