Abstract:
As traffic congestion worsens and new roadway construction is severely constrained by the limited availability of land, the high cost of land acquisition, and community opposition to the building of major roads, new solutions have to be sought to either make roadway use more efficient or reduce travel demand. There is general agreement that travel demand is affected by land use patterns. However, traditional aggregate four-step models, which are the prevailing modeling approach at present, assume when estimating trip generation that traffic conditions do not affect people's decisions on whether to make a trip. Existing survey data indicate, however, that trip rates differ across geographic areas. The reasons for such differences have not been carefully studied, and attempts to quantify the influence of land use on travel demand beyond employment, households, and their characteristics have been too limited to be useful in the traditional four-step models. There may be a number of reasons for this: the representation of the influence of land use on travel demand is aggregated rather than explicit, and land use variables such as density and mix, as well as accessibility as measured by travel time and congestion, have not been adequately considered. This research employs the artificial neural network (ANN) technique to investigate the potential effects of land use and accessibility on trip production. Sixty-two variables that may potentially influence trip production are studied, including demographic, socioeconomic, land use, and accessibility variables. Different ANN architectures are tested. Sensitivity analysis of the models shows that land use does have an effect on trip production, as does traffic condition. The ANN models are compared with linear regression models and cross-classification models using the same data. The results show that the ANN models outperform the linear regression and cross-classification models in terms of RMSE. Future work may focus on finding a representation of traffic condition based on existing network and population data that would be available when the variables are needed for prediction.
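A minimal sketch of this kind of comparison, assuming scikit-learn and synthetic stand-in data (the features here are illustrative placeholders, not the study's actual 62 survey variables):

```python
# Hedged sketch: compare an ANN (MLP) against linear regression for
# trip-production estimation, scoring both by RMSE as in the study.
# The synthetic data below is a stand-in for the 62 survey variables.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 62))              # demographic, land use, accessibility, ...
y = 2.0 + X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=n)  # trips/household

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "linear": LinearRegression(),
    "ann": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: RMSE = {rmse:.3f}")
```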
Abstract:
Correct specification of the simple location quotients used in regionalizing the national direct requirements table is essential to the accuracy of regional input-output multipliers. The purpose of this research is to examine the relative accuracy of these multipliers when earnings, employment, number of establishments, and payroll data specify the simple location quotients. For each specification type, I derive a column of total output multipliers and a column of total income multipliers. These multipliers are based on the 1987 benchmark input-output accounts of the U.S. economy and 1988-1992 state of Florida data. Error sign tests and Standardized Mean Absolute Deviation (SMAD) statistics indicate that the output multiplier estimates overestimate the output multipliers published by the Department of Commerce-Bureau of Economic Analysis (BEA) for the state of Florida. In contrast, the income multiplier estimates underestimate the BEA's income multipliers. For a given multiplier type, Spearman rank correlation analysis shows that the multiplier estimates and the BEA multipliers have statistically different rank orderings of row elements. The above tests also find no significant differences, in either size or ranking distributions, among the vectors of multiplier estimates.
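For reference, a textbook form of the simple location quotient under an employment specification (notation assumed here for illustration; earnings, establishment counts, or payroll substitute analogously):

```latex
% Simple location quotient for industry $i$ in region $r$, employment version.
\[
  \mathrm{SLQ}_i^r \;=\; \frac{e_i^r / e^r}{E_i / E},
\]
% where $e_i^r$ is regional employment in industry $i$, $e^r$ total regional
% employment, $E_i$ national employment in industry $i$, and $E$ total national
% employment. Regionalization typically scales the national direct requirements
% coefficients by $\min(\mathrm{SLQ}_i^r,\, 1)$.
```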
Abstract:
A compilation of basal dates of peatland initiation across the northern high latitudes, with associated metadata including location, age, raw and calibrated radiocarbon ages, and associated references. It includes previously published datasets from the sources below as well as 365 new data points.
Abstract:
Authigenic carbonate deposits have been sampled with the remotely operated vehicle 'MARUM-QUEST 4000 m' from five methane seeps between 731 and 1823 m water depth along the convergent Makran continental margin, offshore Pakistan (northern Arabian Sea). Two seeps on the upper slope are located within the oxygen minimum zone (OMZ; ca. 100 to 1100 m water depth); the other sites are situated in oxygenated water below the OMZ (below 1100 m water depth). The carbonate deposits vary with regard to their spatial extent, sedimentary fabrics, and associated seep fauna: within the OMZ, carbonates are spatially restricted and associated with microbial mats, whereas in the oxygenated zone below the OMZ extensive carbonate crusts are exposed on the seafloor with abundant metazoans (bathymodiolin mussels, tube worms, galatheid crabs). Aragonite and Mg-calcite are the dominant carbonate minerals, forming common early diagenetic microcrystalline cement and clotted to radial-fibrous cement. The delta18O values of the carbonates range from 1.3 to 4.2 per mil V-PDB, indicating carbonate precipitation at ambient bottom-water temperature at shallow sediment depth. Extremely low delta13C values of the carbonates (as low as -54.6 per mil V-PDB) point to anaerobic oxidation of methane (AOM) as the trigger for carbonate precipitation, with biogenic methane as the dominant carbon source. The prevalence of biogenic methane in the seepage gas is corroborated by delta13C methane values ranging from -70.3 to -66.7 per mil V-PDB, and also by back-calculations considering the delta13C values of the carbonate and incorporated lipid biomarkers.
Abstract:
Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer possible: in the most general case, analyzing the effect of an operation on the system requires full state tomography, for which the resources scale exponentially with system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which further increases the number of parameters that need to be controlled. To optimize an experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables with which to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.
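A toy illustration of the underlying idea, assuming a single qubit and numpy (the paper's actual procedure for a multi-qubit encoding is more involved): an unknown but constant phase can be estimated from a few expectation values rather than full tomography.

```python
# Toy sketch: estimate an unknown but constant phase shift phi on a single
# qubit from measurement statistics, without full state tomography.
# For the state (|0> + e^{i phi}|1>)/sqrt(2), <X> = cos(phi) and
# <Y> = sin(phi), so phi is recoverable from two expectation values.
import numpy as np

rng = np.random.default_rng(1)
phi_true = 0.73          # unknown constant phase (radians)
shots = 5000

# Probability of outcome +1 when measuring X or Y on that state
p_x = (1 + np.cos(phi_true)) / 2
p_y = (1 + np.sin(phi_true)) / 2

# Simulated shot noise from a finite number of measurements
x_mean = 2 * rng.binomial(shots, p_x) / shots - 1   # estimate of <X>
y_mean = 2 * rng.binomial(shots, p_y) / shots - 1   # estimate of <Y>

phi_est = np.arctan2(y_mean, x_mean)
print(f"true phase {phi_true:.3f}, estimated {phi_est:.3f}")
```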
Abstract:
Heating, ventilation, and air conditioning (HVAC) systems are significant consumers of energy; however, building management systems do not typically operate them in accordance with occupant movements. Due to the delayed response of HVAC systems, prediction of occupant locations is necessary to maximize energy efficiency. We present an approach to occupant location prediction based on association rule mining, allowing prediction based on historical occupant locations. Association rule mining is a machine learning technique designed to find any correlations which exist in a given dataset. Occupant location datasets have a number of properties which differentiate them from the market-basket datasets that association rule mining was originally designed for. This thesis adapts the approach to suit such datasets, focusing the rule mining process on patterns which are useful for location prediction. This approach, named OccApriori, allows for the prediction of occupants' next locations as well as their locations further in the future, and can take into account any available data, for example the day of the week, the recent movements of the occupant, and timetable data. By integrating an existing extension of association rule mining into the approach, it is able to make predictions based on general classes of locations as well as specific locations.
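A hedged sketch of association-rule-based next-location prediction in the spirit of OccApriori (the thesis method is more elaborate; the room names, thresholds, and helper names here are illustrative):

```python
# Mine rules of the form (recent location sequence) -> next location from
# movement traces, then predict by the most frequent consequent whose
# support clears a minimum threshold.
from collections import Counter, defaultdict

# Toy movement traces: each list is one occupant-day of visited rooms.
traces = [
    ["office", "kitchen", "office", "meeting"],
    ["office", "kitchen", "office", "office"],
    ["lab", "office", "kitchen", "office"],
    ["office", "kitchen", "office", "meeting"],
]

order = 2            # length of the antecedent (recent locations)
min_support = 2      # minimum number of occurrences for a usable rule
rule_counts = defaultdict(Counter)

for trace in traces:
    for i in range(len(trace) - order):
        antecedent = tuple(trace[i:i + order])
        rule_counts[antecedent][trace[i + order]] += 1

def predict_next(recent):
    """Return the most confident next location for a recent-location tuple."""
    counts = rule_counts.get(tuple(recent))
    if not counts:
        return None
    loc, n = counts.most_common(1)[0]
    return loc if n >= min_support else None

print(predict_next(["kitchen", "office"]))  # -> 'meeting'
```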
Abstract:
Methane hydrate is an ice-like substance that is stable at high pressure and low temperature in continental margin sediments. Since the discovery of a large number of gas flares at the landward termination of the gas hydrate stability zone off Svalbard, there has been concern that warming bottom waters have started to dissociate large amounts of gas hydrate and that the resulting methane release may accelerate global warming. Here, we corroborate that hydrates play a role in the observed gas seepage, but we present evidence that seepage off Svalbard has been ongoing for at least three thousand years and that seasonal fluctuations of 1-2°C in the bottom-water temperature cause periodic gas hydrate formation and dissociation, which focuses seepage at the observed sites.
Abstract:
Strontium isotopic compositions of acetic acid (HOAc) leachate fractions of eight manganese oxide deposits from the modern seafloor, and of twenty-one buried manganese nodules from Cretaceous to Recent sediments in DSDP/ODP cores, were measured. The 87Sr/86Sr ratios of HOAc leachates in all modern seafloor manganese oxides of various origins are identical with those of present seawater. The ratios of the HOAc leachates of buried nodules from DSDP/ODP cores are significantly lower than those of nodules from the modern seafloor and are mostly identical with coeval seawater values estimated from the age of associated sediments. It is suggested that the buried nodules in DSDP/ODP cores are not artifacts transported from the present seafloor during the drilling process, but are in situ fossil deposits from the past deep-sea floor dating from the Cretaceous to the Quaternary. The formation of deep-sea fossil nodules prior to the formation of Antarctic Bottom Water (AABW) indicates that the circulation of oxygenated deep seawater has actively deposited manganese oxides since the Eocene Epoch, or earlier.
Abstract:
This study presents aggradation rates supplemented for the first time by carbonate accumulation rates from Mediterranean cold-water coral sites, considering three different regional and geomorphological settings: (i) a cold-water coral ridge (eastern Melilla coral province, Alboran Sea), (ii) a cold-water coral rubble talus deposit at the base of a submarine cliff (Urania Bank, Strait of Sicily) and (iii) a cold-water coral deposit rooted on a pre-defined topographic high overgrown by cold-water corals (Santa Maria di Leuca coral province, Ionian Sea). The mean aggradation rates of the respective cold-water coral deposits vary between 10 and 530 cm kyr⁻¹, and the mean carbonate accumulation rates range between 8 and 396 g cm⁻² kyr⁻¹, with a maximum of 503 g cm⁻² kyr⁻¹ reached in the eastern Melilla coral province. Compared to other deep-water depositional environments, the Mediterranean cold-water coral sites reveal significantly higher carbonate accumulation rates, even in the range of the most productive shallow-water Mediterranean carbonate factories (e.g. Cladocora caespitosa coral reefs). Focusing exclusively on cold-water coral occurrences, the carbonate accumulation rates of the Mediterranean cold-water coral sites are in the lower range of those obtained for the prolific Norwegian coral occurrences, but are much higher than those of the cold-water coral mounds off Ireland. This study clearly indicates that cold-water corals have the potential to act as important carbonate factories and regional carbonate sinks within the Mediterranean Sea. Moreover, the data highlight the potential of cold-water corals to store carbonate at rates in the range of tropical shallow-water reefs. In order to evaluate the contribution of the cold-water coral carbonate factory to the regional or global carbonate/carbon cycle, an improved understanding of the temporal and spatial variability in aggradation and carbonate accumulation rates, together with areal estimates of the respective regions, is needed.
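A standard way such carbonate accumulation rates are derived from aggradation rates is sketched below (assumed here for illustration; the abstract does not spell out the study's exact computation):

```latex
\[
  \mathrm{CAR} \;=\; \mathrm{AR} \times \rho_{\mathrm{dry}} \times f_{\mathrm{CaCO_3}},
\]
% where CAR is the carbonate accumulation rate (g\,cm^{-2}\,kyr^{-1}),
% AR the aggradation rate (cm\,kyr^{-1}), $\rho_{\mathrm{dry}}$ the dry bulk
% density (g\,cm^{-3}), and $f_{\mathrm{CaCO_3}}$ the carbonate weight
% fraction of the sediment.
```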
Abstract:
We consider how three firms compete in a Salop location model and how cooperation in location choice by two of these firms affects the outcomes. We consider the classical case of linear transportation costs as a two-stage game in which the firms first select a location on a unit circle along which consumers are dispersed evenly, followed by the competitive selection of a price. Standard analysis restricts itself to purely competitive selection of location; instead, we focus on the situation in which two firms collectively decide about location, but price their products competitively after the location choice has been effectuated. We show that such partial coordination of location is beneficial to all firms, since it significantly reduces the number of equilibria and, thereby, the resulting coordination problem. Subsequently, we show that the case of quadratic transportation costs changes the main conclusions only marginally.
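The standard Salop setup behind this analysis, written out (notation assumed for illustration): a consumer at point $x$ on the unit circle buys from the firm offering the lowest delivered price.

```latex
% Firm $i$ is located at $x_i$ and charges $p_i$; $d(x, x_i)$ is arc distance.
\[
  \min_{i \in \{1,2,3\}} \; p_i + t \, d(x, x_i)
  \qquad \text{(linear transportation costs)},
\]
\[
  \min_{i \in \{1,2,3\}} \; p_i + t \, d(x, x_i)^2
  \qquad \text{(quadratic variant).}
\]
% Stage 1: locations $x_i$ are chosen (here, two firms choose jointly);
% stage 2: prices $p_i$ are set competitively given the locations.
```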
Abstract:
In Germany the upscaling algorithm is currently the standard approach for evaluating the PV power produced in a region. This method involves spatially interpolating the normalized power of a set of reference PV plants to estimate the power production of another set of unknown plants. As little information on the performance of this method could be found in the literature, the first goal of this thesis is to analyze the uncertainty associated with it. It was found that this method can lead to large errors when the set of reference plants has different characteristics or weather conditions than the set of unknown plants, and when the set of reference plants is small. Based on these preliminary findings, an alternative method is proposed for calculating the aggregate power production of a set of PV plants. A probabilistic approach has been chosen, by which a power production is calculated at each PV plant from corresponding weather data. The probabilistic approach consists of evaluating the power for each frequently occurring value of the parameters and estimating the most probable value by averaging these power values weighted by their frequency of occurrence. The most frequent parameter sets (e.g. module azimuth and tilt angle) and their frequencies of occurrence have been assessed on the basis of a statistical analysis of the parameters of approx. 35,000 PV plants. It has been found that the plant parameters are statistically dependent on the size and location of the PV plants. Accordingly, separate statistical values have been assessed for 14 classes of nominal capacity and 95 regions in Germany (two-digit zip-code areas). The performance of the upscaling and probabilistic approaches has been compared on the basis of 15 min power measurements from 715 PV plants provided by the German distribution system operator LEW Verteilnetz. It was found that the error of the probabilistic method is smaller than that of the upscaling method when the number of reference plants is sufficiently large (>100 reference plants in the case study considered in this chapter). When the number of reference plants is limited (<50 reference plants for the considered case study), it was found that the proposed approach provides a noticeable gain in accuracy with respect to the upscaling method.
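The probabilistic estimate described above, written out (notation assumed here for illustration, not taken from the thesis):

```latex
% \theta_k: a frequently occurring parameter set (e.g. azimuth/tilt class)
% with relative frequency w_k; P(\theta_k, m): power modeled from weather
% data m under parameters \theta_k.
\[
  \hat{P}(m) \;=\; \sum_{k} w_k \, P(\theta_k, m),
  \qquad \sum_{k} w_k = 1 .
\]
% The aggregate regional production is then the sum of \hat{P} over plants.
```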
Abstract:
Over the past few years, the number of wireless network users has been increasing. Until now, Radio-Frequency (RF) has been the dominant technology. However, the electromagnetic spectrum in this region is becoming saturated, creating demand for alternative wireless technologies. Recently, with the growing market for LED lighting, Visible Light Communications (VLC) has been drawing attention from the research community. First, the LED is an efficient device for illumination. Second, it is easy to modulate and offers high bandwidth. Finally, it can combine illumination and communication in the same device; in other words, it allows highly efficient wireless communication systems to be implemented. One of the most important aspects of a communication system is its reliability over noisy channels, where the received data can be affected by errors. To ensure proper system operation, a channel encoder is usually employed. Its function is to encode the data to be transmitted so as to increase system performance. It commonly uses error-correcting codes (ECC), which append redundant information to the original data. At the receiver side, the redundant information is used to recover the erroneous data. This dissertation presents the implementation steps of a channel encoder for VLC. Several techniques were considered, such as Reed-Solomon and convolutional codes, block and convolutional interleaving, CRC, and puncturing. A detailed analysis of the characteristics of each technique was made in order to choose the most appropriate ones. Simulink models were created to simulate how different codes behave in different scenarios. Later, the models were implemented on an FPGA and simulations were performed. Hardware co-simulations were also implemented to speed up the simulations. In the end, different techniques were combined to create a complete channel encoder capable of detecting and correcting random and burst errors, owing to the use of an RS(255,213) code with a block interleaver. Furthermore, after the decoding process, the proposed system can identify uncorrectable errors in the decoded data thanks to the CRC-32 algorithm.
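A small Python sketch of two of the building blocks named above, block interleaving and a CRC-32 integrity check (the dissertation's actual implementation targets Simulink/FPGA; the RS(255,213) coding stage is assumed to be provided by a codec library and is omitted here):

```python
# Block interleaver (spreads burst errors so an RS code sees them as
# scattered symbol errors) plus a CRC-32 check to flag uncorrectable frames.
import zlib

def interleave(data: bytes, rows: int, cols: int) -> bytes:
    """Write row-by-row into a rows x cols block, read column-by-column."""
    assert len(data) == rows * cols
    return bytes(data[r * cols + c] for c in range(cols) for r in range(rows))

def deinterleave(data: bytes, rows: int, cols: int) -> bytes:
    """Inverse of interleave: write columns, read rows."""
    assert len(data) == rows * cols
    return bytes(data[c * rows + r] for r in range(rows) for c in range(cols))

payload = bytes(range(20))                                 # toy frame
framed = payload + zlib.crc32(payload).to_bytes(4, "big")  # append CRC-32
sent = interleave(framed, rows=4, cols=6)                  # 24 bytes = 4 x 6

received = deinterleave(sent, rows=4, cols=6)  # channel + RS decode omitted
data, crc = received[:-4], received[-4:]
print("frame ok:", zlib.crc32(data).to_bytes(4, "big") == crc)
```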
Abstract:
Excess nutrient loads carried by streams and rivers are a great concern for environmental resource managers. In agricultural regions, excess loads are transported downstream to receiving water bodies, potentially causing algal blooms, which can lead to numerous ecological problems. To better understand nutrient load transport, and to develop appropriate water management plans, it is important to have accurate estimates of annual nutrient loads. This study used a Monte Carlo sub-sampling method and error-corrected statistical models to estimate annual nitrate-N loads from two watersheds in central Illinois. The performance of three load estimation methods (the seven-parameter log-linear model, the ratio estimator, and the flow-weighted averaging estimator) applied at one-, two-, four-, six-, and eight-week sampling frequencies was compared. Five error correction techniques (the existing composite method and four new techniques developed in this study) were applied to each combination of sampling frequency and load estimation method. On average, the most accurate error correction technique (proportional rectangular) produced load estimates 15% and 30% more accurate than the most accurate uncorrected load estimation method (the ratio estimator) for the two watersheds. Using error correction methods, it is possible to design more cost-effective monitoring plans by achieving the same load estimation accuracy with fewer observations. Finally, the optimum combinations of monitoring threshold and sampling frequency that minimize the number of samples required to achieve specified levels of accuracy in load estimation were determined. For one- to three-week sampling frequencies, combined threshold/fixed-interval monitoring approaches produced the best outcomes, while fixed-interval-only approaches produced the most accurate results for four- to eight-week sampling frequencies.
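A compact sketch of the ratio estimator named above, in a common simple form (the study's exact variant, e.g. Beale's bias-corrected estimator, may differ):

```python
# Flow-ratio load estimator: scale the mean load of the sampled days by the
# ratio of mean flow over all days to mean flow on sampled days.
# Units: flow in m^3/s, concentration in mg/L (= g/m^3) -> load in g/s.
import numpy as np

def ratio_estimator(conc_sampled, flow_sampled, flow_all):
    """Mean-load estimate from sparse concentration samples."""
    loads = conc_sampled * flow_sampled   # instantaneous loads on sampled days
    return loads.mean() * flow_all.mean() / flow_sampled.mean()

rng = np.random.default_rng(2)
flow_all = rng.lognormal(mean=2.0, sigma=0.6, size=365)  # daily flows, one year
sample_idx = np.arange(0, 365, 14)                       # two-week sampling
conc = rng.normal(8.0, 1.5, size=sample_idx.size)        # nitrate-N, mg/L

mean_load = ratio_estimator(conc, flow_all[sample_idx], flow_all)
print(f"estimated mean load: {mean_load:.1f} g/s")
```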
Abstract:
Global Network for the Molecular Surveillance of Tuberculosis 2010: A. Miranda (Tuberculosis Laboratory of the National Institute of Health, Porto, Portugal)
Abstract:
The proliferation of new mobile communication devices, such as smartphones and tablets, has led to an exponential growth in network traffic. The demand for supporting fast-growing consumer data rates urges wireless service providers and researchers to seek a new, efficient radio access technology beyond what current 4G LTE can provide: the so-called 5G technology. On the other hand, ubiquitous RFID tags, sensors, actuators, mobile phones, and the like cut across many areas of modern-day living, offering the ability to measure, infer, and understand environmental indicators. The proliferation of these devices has given rise to the term Internet of Things (IoT). For researchers and engineers in the field of wireless communication, exploring new, effective techniques to support 5G communication and the IoT has become an urgent task, one that not only leads to fruitful research but also enhances the quality of our everyday life. Massive MIMO, which has shown great potential for improving the achievable rate with a very large number of antennas, has become a popular candidate. However, deploying a large number of antennas at the base station may not be feasible in indoor scenarios. Does there exist a good alternative that can achieve system performance similar to massive MIMO in indoor environments? In this dissertation, we address this question by proposing the time-reversal (TR) technique as a counterpart of massive MIMO in indoor scenarios with a massive multipath effect. It is well known that radio signals experience many multipaths due to reflection from various scatterers, especially in indoor environments. The traditional TR waveform is able to create a focusing effect at the intended receiver with very low transmitter complexity in a severe multipath channel. TR's focusing effect is in essence a spatial-temporal resonance effect that brings all the multipaths to arrive at a particular location at a specific moment. We show that by using time-reversal signal processing with a sufficiently large bandwidth, one can harvest the massive multipaths naturally existing in a rich-scattering environment to form a large number of virtual antennas and achieve the desired massive multipath effect with a single antenna. Further, we explore the optimal bandwidth for a TR system to achieve maximal spectral efficiency. Through evaluating the spectral efficiency, the optimal bandwidth for a TR system is found to be determined by system parameters, e.g., the number of users and the backoff factor, rather than by the waveform type. Moreover, we investigate the tradeoff between complexity and performance by establishing a generalized relationship between system performance and waveform quantization in a practical communication system. It is shown that 4-bit quantized waveforms achieve a bit-error rate similar to that of a TR system with perfect-precision waveforms. Besides 5G technology, the Internet of Things (IoT) is another area that has recently attracted growing attention from both academia and industry. In the second part of this dissertation, the heterogeneity issue within the IoT is explored. One significant form of heterogeneity, given the massive number of devices in the IoT, is device heterogeneity, i.e., heterogeneous bandwidths and associated radio-frequency (RF) components.
Traditional middleware techniques result in fragmentation of the whole network, hampering object interoperability and slowing the development of a unified reference model for the IoT. We propose a novel TR-based heterogeneous system that can address the bandwidth heterogeneity while maintaining the benefit of TR. The increase in complexity in the proposed system lies in the digital processing at the access point (AP) rather than at the devices' end, where it can easily be handled with a more powerful digital signal processor (DSP). Meanwhile, the complexity of the terminal devices (TDs) stays low, satisfying the low-complexity and scalability requirements of the IoT. Since there is no middleware in the proposed scheme and the additional physical-layer complexity is concentrated at the AP side, the proposed heterogeneous TR system better satisfies the low-complexity and energy-efficiency requirements of terminal devices than the middleware approach.
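A toy numerical illustration of the TR focusing effect described above, assuming numpy (the parameters are illustrative, not the dissertation's system):

```python
# Time-reversal (TR) focusing: the transmitter uses the time-reversed,
# conjugated channel impulse response as its waveform, so the many multipath
# taps add coherently at a single tap (a virtual-antenna-array effect).
import numpy as np

rng = np.random.default_rng(3)
L = 64                                   # number of multipath taps
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) * np.exp(-np.arange(L) / 20)

g = np.conj(h[::-1])                     # TR waveform: time-reversed conjugate
y = np.convolve(h, g)                    # received signal: channel * waveform

peak = np.abs(y).max()                   # energy focused at the center tap
sidelobe = np.sort(np.abs(y))[-2]        # strongest non-peak component
print(f"focusing gain (peak/sidelobe): {peak / sidelobe:.1f}x")
```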