986 results for Square-law nonlinearity symbol timing estimation


Relevance:

30.00%

Publisher:

Abstract:

This dissertation aimed to improve travel time estimation for transportation planning by developing a travel time estimation method that incorporates the effects of signal timing plans, which are difficult to consider in planning models. For this purpose, an analytical model was developed. The model parameters were calibrated with data from CORSIM microscopic simulation, with signal timing plans optimized using the TRANSYT-7F software. The independent variables in the model are link length, free-flow speed, and traffic volumes from the competing turning movements. The developed model has three advantages over traditional link-based or node-based models. First, the model considers the influence of signal timing plans for a variety of traffic volume combinations without requiring signal timing information as input. Second, the model describes the non-uniform spatial distribution of delay along a link, making it possible to estimate the impact of queues at different locations upstream of an intersection and to attribute delays to the subject link and the upstream link. Third, the model shows promise of improving the accuracy of travel time prediction. The mean absolute percentage error (MAPE) of the model is 13% for a set of field data from the Minnesota Department of Transportation (MDOT); this is close to the MAPE of the uniform delay in the HCM 2000 method (11%). The HCM is the industry-accepted analytical model in the existing literature, but it requires signal timing information as input for calculating delays. The developed model also outperforms the HCM 2000 method for a set of Miami-Dade County data representing congested traffic conditions, with a MAPE of 29%, compared with 31% for the HCM 2000 method. These advantages make the proposed model feasible for application to a large network without the burden of signal timing input, while improving the accuracy of travel time estimation. An assignment model with the developed travel time estimation method has been implemented in a South Florida planning model, where it improved the assignment results.
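
As a point of reference for the error metric quoted above, the following minimal sketch shows how MAPE is typically computed. It is illustrative only; the variable names and data are hypothetical, not from the dissertation.

import numpy as np

def mape(observed, estimated):
    """Mean absolute percentage error, in percent."""
    observed = np.asarray(observed, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return 100.0 * np.mean(np.abs((observed - estimated) / observed))

# Hypothetical example: field-measured vs. model-estimated link travel times (s)
field = [62.0, 85.0, 47.0, 120.0]
model = [55.0, 90.0, 52.0, 110.0]
print(f"MAPE = {mape(field, model):.1f}%")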

Relevance:

30.00%

Publisher:

Abstract:

This dissertation aims to improve the performance of existing assignment-based dynamic origin-destination (O-D) matrix estimation models in order to successfully apply Intelligent Transportation Systems (ITS) strategies for traffic congestion relief and dynamic traffic assignment (DTA) in transportation network modeling. The methodology framework has two advantages over existing assignment-based dynamic O-D matrix estimation models. First, it incorporates an initial O-D estimation model into the estimation process to provide a high-confidence initial input for the dynamic O-D estimation model, which has the potential to improve the final estimation results and reduce the associated computation time. Second, the proposed methodology framework can automatically convert traffic volume deviation to traffic density deviation in the objective function under congested traffic conditions. Traffic density is a better indicator of traffic demand than traffic volume under congested conditions, so the conversion can contribute to improving the estimation performance. The proposed method shows better performance than a typical assignment-based estimation model (Zhou et al., 2003) in several case studies. In the case study for I-95 in Miami-Dade County, Florida, the proposed method produces a good result in seven iterations, with a root mean square percentage error (RMSPE) of 0.010 for traffic volume and an RMSPE of 0.283 for speed. In contrast, Zhou's model requires 50 iterations to obtain an RMSPE of 0.023 for volume and an RMSPE of 0.285 for speed. In the case study for Jacksonville, Florida, the proposed method reaches a convergent solution in 16 iterations with an RMSPE of 0.045 for volume and an RMSPE of 0.110 for speed, while Zhou's model needs 10 iterations to obtain its best solution, with an RMSPE of 0.168 for volume and an RMSPE of 0.179 for speed. The successful application of the proposed methodology framework to real road networks demonstrates its ability to provide results with satisfactory accuracy within a reasonable time, establishing its potential usefulness for supporting dynamic traffic assignment modeling, ITS systems, and other strategies.
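
For reference, the RMSPE figures quoted above are typically computed as the square root of the mean squared relative error. A minimal sketch follows, with hypothetical data rather than the study's:

import numpy as np

def rmspe(observed, estimated):
    """Root mean square percentage error, expressed as a fraction."""
    observed = np.asarray(observed, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return np.sqrt(np.mean(((observed - estimated) / observed) ** 2))

# Hypothetical example: detector counts vs. assigned link volumes (veh/h)
counts = [1200.0, 950.0, 1800.0, 640.0]
assigned = [1180.0, 1020.0, 1750.0, 660.0]
print(f"RMSPE = {rmspe(counts, assigned):.3f}")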

Relevance:

30.00%

Publisher:

Abstract:

In response to a crime epidemic afflicting Latin America since the early 1990s, several countries in the region have resorted to using heavy-force police or military units to physically retake territories de facto controlled by non-State criminal or insurgent groups. After a period of territorial control, the heavy forces hand law enforcement functions in the retaken territories over to regular police officers, with the hope that the territories and their populations will remain under the control of the state. With varying degrees of intensity and consistency, Brazil, Colombia, Mexico, and Jamaica have adopted such policies since the mid-1990s. During such operations, governments need to pursue two interrelated objectives: to better establish the state's physical presence and to realign the allegiance of the population in those areas toward the state and away from the non-State criminal entities. From the perspective of law enforcement, such operations entail several critical decisions and junctures, such as whether or not to announce the force insertion in advance. This decision trades off the element of surprise and the ability to capture key leaders of the criminal organizations against the ability to minimize civilian casualties and force levels; the latter choice, however, may allow criminals to go to ground and escape capture. Governments thus must decide whether they merely seek to displace criminal groups to other areas or to maximize their decapitation capacity. Intelligence flows rarely come from the population; often, rival criminal groups are the best source of intelligence. However, cooperation between the State and such groups that goes beyond using vetted intelligence provided by the groups, such as State tolerance for militias, compromises the rule-of-law integrity of the State and can ultimately eviscerate even public safety gains. Sustaining security after initial clearing operations is at times even more challenging than conducting the initial operations. Although traditional police forces, unlike the heavy forces, have the capacity to develop the community's trust and ultimately focus on crime prevention, especially if designed as community police, developing such trust often takes a long time. To develop the community's trust, regular police forces need to conduct frequent on-foot patrols with intensive, nonthreatening interactions with the population and to minimize the use of force. Moreover, sufficiently robust patrol units need to be placed in designated beats for a substantial amount of time, often at least a year. Establishing oversight mechanisms, including joint police-citizen boards, further facilitates building trust in the police among the community. After disruption of the established criminal order, street crime often rises significantly, and both the heavy-force and community-police units often struggle to contain it. The increase in street crime alienates the population of the retaken territory from the State; thus, developing a capacity to address street crime is critical. Moreover, the community police units tend to be vulnerable, especially initially, to efforts by displaced criminals to reoccupy the cleared territories. Losing a cleared territory back to criminal groups is extremely costly in terms of the trust already established and the ability to recover it.
Rather than operating on an a priori determined handover schedule, a careful assessment of the relative strength of the regular police and the criminal groups after the clearing operations is likely to be a better guide for timing the handover from heavy forces to regular police units. Cleared territories often experience not only a peace dividend but also a peace deficit: a rise in new serious crime (in addition to street crime). Newly valuable land and other previously inaccessible resources can lead to land speculation and forced displacement, and various other forms of new crime can also rise significantly. Community police forces often struggle to cope with such crime, especially as it is frequently linked to legal businesses. Such new crime often receives little to no attention in the design of operations to retake territories from criminal groups. But without developing an effective response to it, the public safety gains of the clearing operations can be altogether lost.

Relevance:

30.00%

Publisher:

Abstract:

In the wake of the "9-11" terrorist attacks, the U.S. Government turned to information technology (IT) to address the lack of information sharing among law enforcement agencies. This research determined whether and how information-sharing technology helps law enforcement by examining differences in the perceived value of IT between law enforcement officers who have access to automated regional information sharing and those who do not. It also examined the effect of potential intervening variables, such as user characteristics, training, and experience, on the officers' evaluation of IT. The sample was limited to 588 officers from two sheriff's offices; one (the study group) uses information-sharing technology, while the other (the comparison group) does not. Triangulated methodologies included surveys, interviews, direct observation, and a review of agency records. Data analysis involved the following statistical methods: descriptive statistics, Chi-square, factor analysis, principal component analysis, Cronbach's alpha, Mann-Whitney tests, analysis of variance (ANOVA), and Scheffé post hoc analysis. Results indicated a significant difference between groups: the study group perceived information-sharing technology as a greater factor in solving crime and in increasing officer productivity, and it was more satisfied with the data available to it. As to the number of arrests made, information-sharing technology did not make a difference. Analysis of the potential intervening variables revealed several notable results. The presence of a strong performance management imperative (in the comparison sheriff's office) appeared to be a factor in case clearances and arrests, technology notwithstanding. As to the influence of user characteristics, level of education did not influence a user's satisfaction with technology, but user-satisfaction scores differed significantly with years of experience as a law enforcement officer and with the amount of computer training, suggesting a significant but weak relationship. This study therefore finds that information-sharing technology assists law enforcement officers in doing their jobs. It also suggests that other variables, such as computer training, experience, and management climate, should be accounted for when assessing the impact of information technology.
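
As an illustration of the kind of two-group comparison listed among the methods above (hypothetical scores, not the study's data or code), a Mann-Whitney U test on user-satisfaction ratings might look like this:

from scipy.stats import mannwhitneyu

# Hypothetical user-satisfaction scores (e.g., 1-5 Likert ratings)
study_group = [4, 5, 4, 3, 5, 4, 4, 5]        # officers with regional information sharing
comparison_group = [3, 3, 4, 2, 3, 4, 3, 2]   # officers without it

u_stat, p_value = mannwhitneyu(study_group, comparison_group, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")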

Relevance:

30.00%

Publisher:

Abstract:

Variants of adaptive Bayesian procedures for estimating the 5% point on a psychometric function were studied by simulation. Bias and standard error were the criteria used to evaluate performance. The results indicated the superiority of (a) uniform priors, (b) model likelihood functions that are odd-symmetric about threshold and that have parameter values larger than their counterparts in the psychometric function, (c) stimulus placement at the prior mean, and (d) estimates defined as the posterior mean. Unbiasedness is reached within only 10 trials, and 20 trials are enough for the standard error to become constant. The standard error of the estimates equals 0.617 times the inverse of the square root of the number of trials. Other variants yielded bias and larger standard errors.
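
Restating the reported relationship as a formula, with 0.617 being the constant quoted above, the standard error after N trials is approximately SE(N) ≈ 0.617 / sqrt(N); for example, 100 trials would give SE ≈ 0.062.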

Relevance:

30.00%

Publisher:

Abstract:

Li-ion batteries have been widely used in electric vehicles, and battery internal state estimation plays an important role in the battery management system. It is technically challenging, however, particularly for the estimation of the battery internal temperature and state-of-charge (SOC), two key state variables affecting battery performance. In this paper, a novel method is proposed for real-time simultaneous estimation of these two internal states, leading to a significantly improved battery model for real-time SOC estimation. To achieve this, a simplified battery thermoelectric model is first built, coupling a thermal submodel and an electrical submodel. The interactions between the battery thermal and electrical behaviours are captured, offering a comprehensive description of the battery's thermal and electrical behaviour. To achieve more accurate internal state estimates, the model is trained by the simulation error minimization method, and the model parameters are optimized by a hybrid optimization method combining a meta-heuristic algorithm and the least-squares approach. Further, time-varying model parameters under different heat dissipation conditions are considered, and a joint extended Kalman filter is used to simultaneously estimate both the battery internal states and the time-varying model parameters in real time. Experimental results based on testing data from LiFePO4 batteries confirm the efficacy of the proposed method.
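
To make the joint estimation idea concrete, below is a minimal sketch of a joint extended Kalman filter whose state is augmented with a slowly varying parameter. The battery model here is a deliberately simplified stand-in (the capacity, thermal constants, and linear OCV curve are assumptions for illustration), not the thermoelectric model of the paper.

import numpy as np

# State x = [SOC, internal temperature T (degC), internal resistance R0 (ohm)].
# R0 plays the role of a time-varying model parameter estimated jointly with the states.

def f(x, i_load, dt):
    """Toy process model: SOC depletes with current; temperature relaxes toward
    ambient while heated by I^2*R losses; R0 follows a random walk."""
    soc, T, R0 = x
    Q_cap = 2.3 * 3600.0                  # assumed cell capacity [A*s]
    C_th, R_th, T_amb = 50.0, 2.0, 25.0   # assumed thermal capacity, resistance, ambient
    soc_next = soc - i_load * dt / Q_cap
    T_next = T + dt * (i_load**2 * R0 / C_th - (T - T_amb) / (R_th * C_th))
    return np.array([soc_next, T_next, R0])

def h(x, i_load):
    """Toy measurement model: terminal voltage and surface temperature."""
    soc, T, R0 = x
    ocv = 3.0 + 0.7 * soc                 # assumed linear OCV(SOC), illustration only
    return np.array([ocv - i_load * R0, T])

def num_jacobian(func, x, *args, eps=1e-6):
    """Numerical Jacobian, so the sketch stays model-agnostic."""
    y0 = func(x, *args)
    J = np.zeros((y0.size, x.size))
    for k in range(x.size):
        dx = np.zeros_like(x)
        dx[k] = eps
        J[:, k] = (func(x + dx, *args) - y0) / eps
    return J

def ekf_step(x, P, z, i_load, dt, Q, R):
    """One predict/update cycle of the joint EKF."""
    F = num_jacobian(f, x, i_load, dt)
    x_pred = f(x, i_load, dt)
    P_pred = F @ P @ F.T + Q
    H = num_jacobian(h, x_pred, i_load)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred, i_load))
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new

# Hypothetical usage with one measurement sample:
x = np.array([0.8, 26.0, 0.02])           # initial SOC, temperature, resistance guess
P = np.diag([0.05, 1.0, 1e-4])
Q = np.diag([1e-7, 1e-3, 1e-8])           # process noise (tuning assumptions)
R = np.diag([1e-4, 0.25])                 # measurement noise (voltage, temperature)
z = np.array([3.52, 26.4])                # measured terminal voltage and surface temperature
x, P = ekf_step(x, P, z, i_load=2.0, dt=1.0, Q=Q, R=R)
print("SOC, T, R0 estimate:", x)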

Relevance:

30.00%

Publisher:

Abstract:

Optical communication systems with advanced modulation formats are currently one of the most important research topics in the field of optical communications. This research is driven by the demand for higher data transmission rates. In this thesis, we examine efficient techniques for advanced modulation with coherent detection, as well as orthogonal frequency-division multiplexing (OFDM) and discrete multitone (DMT) modulation for direct detection and coherent detection, in order to improve the performance of optical networks. In the first part, we examine digital-filter backpropagation (DFBP) as a simple technique for mitigating semiconductor optical amplifier (SOA) nonlinearity in coherent detection systems. For the first time, we experimentally demonstrate the effectiveness of DFBP in compensating SOA-induced nonlinearities in a single-carrier 16-QAM coherent detection system. We compare the performance of DFBP with the fourth-order Runge-Kutta method and examine the sensitivity of DFBP performance to its parameters. We then propose a new parameter estimation method for DFBP. Finally, we demonstrate the transmission of 16-QAM signals at 22 Gbaud over 80 km of optical fibre using the proposed parameter estimation technique for DFBP. In the second part, we focus on techniques for improving the performance of optical OFDM systems, examining both coherent optical OFDM (CO-OFDM) and direct-detection optical OFDM (DDO-OFDM). First, we propose a combination of clipping and predistortion to compensate for the nonlinear distortions of the CO-OFDM transmitter. We use piecewise linear interpolation (PLI) to characterize the transmitter nonlinearity, and in the transmitter we apply the inverse of the PLI estimate to compensate for the nonlinearities induced in the CO-OFDM transmitter. Second, we design optimized irregular constellations for short-reach DDO-OFDM systems, considering two channel noise models. We experimentally demonstrate 100 Gb/s+ OFDM/DMT with direct detection using the optimized QAM constellations. In the third part, we propose a passive optical network (PON) architecture with DDO-OFDM for the downlink and CO-OFDM for the uplink. We examine two scenarios for frequency allocation and signal modulation format, identify the main limiting impairment of the bidirectional PON, and offer solutions to minimize its effects.
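
To illustrate the OFDM signal structure discussed above, the sketch below builds one OFDM symbol by mapping bits onto 16-QAM subcarriers and taking an inverse FFT with a cyclic prefix. This is a generic textbook construction with assumed subcarrier count and prefix length, not the thesis's experimental setup.

import numpy as np

N_SC = 64          # number of subcarriers (assumed)
CP_LEN = 16        # cyclic prefix length (assumed)
rng = np.random.default_rng(0)

# Simple 16-QAM mapping (not Gray-coded): two 4-level axes, unit average power
levels = np.array([-3, -1, 1, 3]) / np.sqrt(10)
bits = rng.integers(0, 2, size=(N_SC, 4))
sym_i = levels[bits[:, 0] * 2 + bits[:, 1]]
sym_q = levels[bits[:, 2] * 2 + bits[:, 3]]
qam = sym_i + 1j * sym_q                      # one 16-QAM symbol per subcarrier

# OFDM modulation: IFFT across subcarriers, then prepend the cyclic prefix
time_domain = np.fft.ifft(qam) * np.sqrt(N_SC)
ofdm_symbol = np.concatenate([time_domain[-CP_LEN:], time_domain])

# Receiver side (ideal channel): strip the CP and FFT back to the subcarriers
rx = np.fft.fft(ofdm_symbol[CP_LEN:]) / np.sqrt(N_SC)
print("max reconstruction error:", np.max(np.abs(rx - qam)))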

Relevance:

30.00%

Publisher:

Abstract:

Due to the increasing integration density and operating frequency of today's high-performance processors, the temperature of a typical chip can easily exceed 100 degrees Celsius. However, the runtime thermal state of a chip is very hard to predict and manage due to the random nature of computing workloads, as well as process, voltage, and ambient temperature variability (together called PVT variability). The uneven nature (both in time and space) of the chip's heat dissipation can lead to severe reliability issues and error-prone chip behavior (e.g., timing errors). Many dynamic power/thermal management techniques have been proposed to address this issue, such as dynamic voltage and frequency scaling (DVFS) and clock gating. However, most of these techniques require accurate knowledge of the runtime thermal state of the chip to make efficient and effective control decisions. In this work we address the problem of tracking and managing the temperature of microprocessors, which includes the following sub-problems: (1) how to design an efficient sensor-based thermal tracking system, on a given design, that provides accurate real-time temperature feedback; (2) what statistical techniques can be used to estimate the full-chip thermal profile from very limited (and possibly noise-corrupted) sensor observations; and (3) how to adapt to changes in the underlying system's behavior, since such changes can impact the accuracy of the thermal estimation. The thermal tracking methodology proposed in this work is enabled by on-chip sensors, which are already implemented in many modern processors. We first investigate the underlying relationship between heat distribution and power consumption, then introduce an accurate thermal model for the chip system. Based on this model, we characterize the temperature correlation that exists among different chip modules and explore statistical approaches (such as those based on the Kalman filter) that utilize this correlation to estimate accurate chip-level thermal profiles in real time. Such estimation is performed based on limited sensor information because sensors are usually resource-constrained and noise-corrupted. We also take a further step and extend the standard Kalman filter approach to account for (1) nonlinear effects such as the leakage-temperature interdependency and (2) varying statistical characteristics in the underlying system model. The proposed thermal tracking infrastructure and estimation algorithms consistently generate accurate thermal estimates even when the system is switching among workloads that have very distinct characteristics. Through experiments, our approaches have demonstrated promising results with much higher accuracy compared to existing approaches. Such results can be used to ensure thermal reliability and improve the effectiveness of dynamic thermal management techniques.
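
As a simplified illustration of how inter-module temperature correlation can be exploited for full-chip estimation, the sketch below estimates the temperatures of unsensed modules from a few noisy sensor readings via the Gaussian conditional mean (linear MMSE). The covariance model, module layout, and readings are assumptions for illustration, not the paper's algorithm or data.

import numpy as np

# Assumed spatial covariance between 6 chip modules: correlation decays with distance
n_mod = 6
dist = np.abs(np.subtract.outer(np.arange(n_mod), np.arange(n_mod)))
cov = 4.0 * np.exp(-dist / 2.0)          # degC^2, toy correlation model
mean = np.full(n_mod, 70.0)              # assumed mean operating temperature (degC)

sensed = [0, 3]                          # modules that actually have sensors
unsensed = [1, 2, 4, 5]
noise_var = 0.5                          # sensor noise variance (degC^2)

# Noisy readings from the sensed modules (hypothetical values)
z = np.array([74.2, 68.9])

# Gaussian conditional mean of the unsensed temperatures given the readings
C_ss = cov[np.ix_(sensed, sensed)] + noise_var * np.eye(len(sensed))
C_us = cov[np.ix_(unsensed, sensed)]
gain = C_us @ np.linalg.inv(C_ss)
t_unsensed = mean[unsensed] + gain @ (z - mean[sensed])
print("estimated temperatures of unsensed modules:", np.round(t_unsensed, 2))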

Relevance:

30.00%

Publisher:

Abstract:

Pesticide residues in food and the environment pose serious health risks to human beings. Plant protection laws, among other things, regulate the misuse of agricultural pesticides, and compliance with such laws consequently reduces the risks posed by pesticide residues in food and the environment. Studies were conducted to assess compliance with plant protection laws among tomato farmers in Mvomero District, Morogoro Region, Tanzania. Compliance was assessed by examining pesticide use practices that are regulated by the Tanzanian Plant Protection Act (PPA) of 1997. A total of 91 tomato farmers were interviewed using a structured questionnaire. Purposive sampling was used to select at least 30 respondent farmers from each of the three villages of Msufini, Mlali, and Doma in Mvomero District, Morogoro Region, and simple random sampling was used to obtain respondents from the sampling frame. Individual and social factors were examined for their effect on the pesticide use practices regulated by the law. Descriptive statistics, mainly frequencies, were used to analyze the data, while associations between variables were determined using Chi-square tests and a logistic regression model. The results showed that respondents were generally aware of the existence of laws on agriculture, the environment, and consumer health, although none of them could name a specific Act. The results revealed further that 94.5% of the farmers always read the instructions on the label before using a pesticide product; however, only 21% used the correct doses of pesticides, 40.7% stored pesticides in special stores, and 68.1% used protective gear. Training influenced pesticide application rates (p < 0.001), while awareness of agricultural laws significantly influenced farmers' tendency to read the information on the labels (p < 0.001). The results showed further that education significantly influenced the use of protective gear by farmers (p = 0.042) and the manner in which farmers stored pesticide-applying equipment (p = 0.024). Furthermore, farmers' awareness of environmental laws significantly (p = 0.03) affected their disposal of empty pesticide containers. The results of this study suggest the need for express provisions on the safe use and handling of pesticides, and related offences, in the Act, and that compliance should be achieved through education rather than coercion. The results also suggest establishing pesticide disposal mechanisms and structures to reduce unsafe disposal of pesticide containers. It is recommended that farmers be educated and trained in the proper use of pesticides, and that farmers' awareness of laws affecting food, the environment, and agriculture be improved.
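
As an illustration of the association tests mentioned above (hypothetical counts, not the survey data), a Chi-square test on a 2x2 table of training status versus correct dosing might look like this:

from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table:
# rows = trained / not trained, columns = used correct dose / did not
table = [[15, 20],
         [4, 52]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")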

Relevance:

30.00%

Publisher:

Abstract:

The loss of prestressing force over time influences the long-term deflection of a prestressed concrete element. Prestress losses are inherently complex due to the interaction of concrete creep, concrete shrinkage, and steel relaxation. Implementing advanced materials such as ultra-high performance concrete (UHPC) further complicates the estimation of prestress losses because the material models depend on the curing regime. Past research shows that compressive creep is "locked in" when UHPC cylinders are subjected to thermal treatment before being loaded in compression. However, the current precasting manufacturing process typically loads the element (through prestressing strand release from the prestressing bed) before the element is taken to the curing facility, and members of various ages are stored until curing can be applied to all of them at once. This research was conducted to determine the impact of variable curing times for UHPC on the prestress losses, and hence on deflections. Three UHPC beams (a rectangular section, a modified bulb-tee section, and a pi-girder) were assessed for losses and deflections using an incremental time-step approach and material models specific to UHPC, based on compressive creep and shrinkage testing. Results show that although it is important for prestressed UHPC beams to be thermally treated, to "lock in" material properties, the timing of the thermal treatment leads to negligible differences in long-term deflections. Results also show that for UHPC elements that are thermally treated, changes in deflection are caused only by external loads, because prestress losses are "locked in" following thermal treatment.
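
To give a sense of what an incremental time-step loss calculation looks like in its simplest form, the toy sketch below accumulates creep and shrinkage strain increments and converts them to a prestress loss; steel relaxation and elastic effects are omitted for brevity. Every constant and material function here is an assumed placeholder for illustration, not one of the calibrated UHPC models used in the study. In the actual method, the curing-dependent creep and shrinkage models, relaxation, and the resulting curvature changes used to compute deflections would all be updated at each step.

import numpy as np

E_p = 196_000.0   # strand modulus of elasticity [MPa] (assumed)
E_c = 50_000.0    # UHPC modulus of elasticity [MPa] (assumed)
f_cgp = 20.0      # concrete stress at the strand centroid at transfer [MPa] (assumed)

def creep_coeff(t):       # placeholder creep coefficient phi(t), t in days
    return 0.8 * t / (t + 30.0)

def shrinkage_strain(t):  # placeholder shrinkage strain eps_sh(t), t in days
    return 400e-6 * t / (t + 55.0)

days = np.arange(0.0, 1001.0)
loss = 0.0
for t0, t1 in zip(days[:-1], days[1:]):
    d_eps_creep = (f_cgp / E_c) * (creep_coeff(t1) - creep_coeff(t0))
    d_eps_shrink = shrinkage_strain(t1) - shrinkage_strain(t0)
    loss += E_p * (d_eps_creep + d_eps_shrink)   # strain increments -> stress loss

print(f"estimated long-term prestress loss ~ {loss:.0f} MPa")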

Relevance:

30.00%

Publisher:

Abstract:

Pitch estimation, also known as fundamental frequency (F0) estimation, has been a popular research topic for many years and is still investigated nowadays. The goal of pitch estimation is to find the pitch, or fundamental frequency, of a digital recording of speech or musical notes. It plays an important role because it is the key to identifying which notes are being played and at what time. Pitch estimation of real instruments is a very hard task to address: each instrument has its own physical characteristics, which are reflected in different spectral characteristics. Furthermore, recording conditions can vary from studio to studio, and background noise must be considered. This dissertation presents a novel approach to the problem of pitch estimation using Cartesian Genetic Programming (CGP). We take advantage of evolutionary algorithms, in particular CGP, to explore and evolve complex mathematical functions that act as classifiers. These classifiers are used to identify the pitches of piano notes in an audio signal. To help with the codification of the problem, we built a highly flexible CGP toolbox, generic enough to encode different kinds of programs. The encoded evolutionary algorithm is the one known as 1 + λ, and the value of λ can be chosen. The toolbox is very simple to use, and settings such as the mutation probability and the number of runs and generations are configurable. The Cartesian representation of CGP can take multiple forms and is able to encode function parameters. The toolbox can handle different types of fitness functions, minimization of f(x) and maximization of f(x), and has a useful system of callbacks. We trained 61 classifiers corresponding to 61 piano notes. A training set of audio signals was used for each classifier: half were signals with the same pitch as the classifier (true positive signals), and the other half were signals with different pitches (true negative signals). The F-measure was used as the fitness function. Signals with the same pitch as the classifier that were correctly identified count as true positives; signals with the same pitch that were not identified count as false negatives; signals with a different pitch that were not identified count as true negatives; and signals with a different pitch that were identified count as false positives. Our first approach was to evolve classifiers for identifying artificial signals created by mathematical functions: sine, sawtooth, and square waves. Our function set is basically composed of filtering operations on vectors and arithmetic operations with constants and vectors. All the classifiers correctly identified true positive signals and did not identify true negative signals. We then moved to real audio recordings. For testing the classifiers, we picked audio signals different from the ones used during the training phase. For a first approach, the results obtained were very promising but could be improved. After slight changes to our approach, the number of false positives was reduced by 33% compared with the first approach. We then applied the evolved classifiers to polyphonic audio signals, and the results indicate that our approach is a good starting point for addressing the problem of pitch estimation.
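
For reference, the F-measure used as the fitness function combines precision and recall from the counts defined above. A minimal sketch with hypothetical confusion counts:

def f_measure(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts for one evolved note classifier
print(f"F-measure = {f_measure(tp=45, fp=5, fn=10):.3f}")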

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this study was to correlate the pre-operative imaging, the vascularity of the proximal pole, and the histology of the proximal pole bone in established scaphoid fracture nonunion. This was a prospective, non-controlled experimental study. Patients were evaluated pre-operatively for necrosis of the proximal scaphoid fragment by radiography, computed tomography (CT), and magnetic resonance imaging (MRI). The vascular status of the proximal scaphoid was determined intra-operatively by noting the presence or absence of punctate bone bleeding. Samples were harvested from the proximal scaphoid fragment and sent for pathological examination. We determined the association between the imaging and intra-operative examinations and the histological findings. We evaluated 19 male patients diagnosed with scaphoid nonunion. CT evaluation showed no correlation with necrosis of the scaphoid proximal fragment. MRI showed markedly low signal intensity on T1-weighted images, which confirmed the histological diagnosis of necrosis in the proximal scaphoid fragment in all patients. Intra-operative assessment showed that 90% of bones had an absence of intra-operative punctate bone bleeding, which was confirmed as necrosis by microscopic examination. In scaphoid nonunion, markedly low signal intensity on T1-weighted MRI images and the absence of intra-operative punctate bone bleeding are strong indicators of osteonecrosis of the proximal fragment.

Relevance:

20.00%

Publisher:

Abstract:

The effects of ionic strength on ions in aqueous solutions are quite relevant, especially for biochemical systems in which proteins and amino acids are involved. The teaching of this topic, and more specifically of the Debye-Hückel limiting law, is central to undergraduate chemistry courses. In this work, we present a description of an experimental procedure based on the color change of aqueous solutions of bromocresol green (BCG) driven by the addition of an electrolyte. The contribution of the charge product (z+|z-|) to the Debye-Hückel limiting law is demonstrated by comparing the effects of NaCl and Na2SO4 on the color of BCG solutions.
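
For reference, the Debye-Hückel limiting law referred to above can be written as

    log10(γ±) = -A · z+|z-| · √I,

where γ± is the mean ionic activity coefficient, I is the ionic strength, and A ≈ 0.509 (kg/mol)^(1/2) for water at 25 °C. The charge product z+|z-| equals 1 for NaCl and 2 for Na2SO4, which is the contrast the described experiment exploits.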

Relevance:

20.00%

Publisher:

Abstract:

This paper reintroduces the discussion about stress-timing in Brazilian Portuguese (BP). It begins by surveying some phonetic and phonological issues raised by the syllable- vs. stress-timed dichotomy, which culminated in the emergence of the p-center notion. Strict considerations of the timing of V-V units and stress groups are taken into account to analyze the long-term coupling of two basic oscillators (vowel flow and stress flow). This coupling allows a two-parameter characterization of language rhythms (coupling strength and speech rate), revealing that BP utterances present a high degree of syllable-timing. A comparison with other languages, including European Portuguese, is also presented. The results analyzed indicate that Major's arguments for considering Portuguese (sic) as stress-timed are misleading.