969 results for "frequency scaling factors"


Relevance: 90.00%

Abstract:

This paper proposes a theoretical explanation of the variations of the sediment delivery ratio (SDR) versus catchment area relationships and of the complex patterns in the behavior of sediment transfer processes at the catchment scale. Taking into account the effects of erosion source types, deposition, and hydrological controls, we propose a simple conceptual model that consists of two linear stores arranged in series: a hillslope store that addresses transport to the nearest streams and a channel store that addresses sediment routing in the channel network. The model identifies four dimensionless scaling factors, which enable us to analyze a variety of effects on SDR estimation, including (1) interacting processes of erosion sources and deposition, (2) different temporal averaging windows, and (3) catchment runoff response. We show that the interactions between storm duration and hillslope/channel travel times are the major controls of peak-value-based sediment delivery and its spatial variations. The interplay between depositional timescales and the travel/residence times determines the spatial variations of total-volume-based SDR. In practical terms, this parsimonious, minimal-complexity model could provide a sound physical basis for diagnosing catchment-to-catchment variability of sediment transport if the proposed scaling factors can be quantified using climatic and catchment properties.
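The two-stores-in-series idea can be sketched numerically. The code below is an illustrative toy, not the authors' model; the residence times and the input pulse are hypothetical:

```python
import math

def route_through_store(inflow, k, dt=1.0):
    """Route a flux series through one first-order linear store,
    dS/dt = I - S/k, using an exact exponential update per step."""
    storage, outflow = 0.0, []
    decay = math.exp(-dt / k)
    for i in inflow:
        # within the step, storage relaxes toward its equilibrium i * k
        storage = i * k + (storage - i * k) * decay
        outflow.append(storage / k)
    return outflow

# Hypothetical residence times: a fast hillslope store feeding a slower channel store
pulse = [10.0, 0.0, 0.0, 0.0, 0.0, 0.0]
hillslope_out = route_through_store(pulse, k=1.5)
channel_out = route_through_store(hillslope_out, k=3.0)

# Total-volume-based delivery ratio over this finite averaging window
sdr = sum(channel_out) / sum(pulse)
```

Because each store holds back part of the pulse beyond the averaging window, the volume-based SDR comes out below one, illustrating how the temporal window interacts with the residence times.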

Relevance: 90.00%

Abstract:

The observation that performance in many visual tasks can be made independent of eccentricity by increasing the size of peripheral stimuli according to the cortical magnification factor has dominated studies of peripheral vision for many years. However, it has become evident that the cortical magnification factor cannot be successfully applied to all tasks. To find out why, several tasks were studied using spatial scaling, a method which requires no pre-determined scaling factors (such as those predicted from cortical magnification) to magnify the stimulus at any eccentricity. Instead, thresholds are measured at the fovea and in the periphery using a series of stimuli, all of which are simply magnified versions of one another. Analysis of the data obtained in this way reveals the value of the parameter E2, the eccentricity at which foveal stimulus size must double in order to maintain performance equivalent to that at the fovea. The tasks investigated include hyperacuities (vernier acuity, bisection acuity, spatial interval discrimination, referenced displacement detection, and orientation discrimination), unreferenced instantaneous and gradual movement, flicker sensitivity, and face discrimination. In all cases, tasks obeyed the principle of spatial scaling, since performance in the periphery could be equated to that at the fovea by appropriate magnification. However, E2 values found for different spatial tasks varied over a 200-fold range. In spatial tasks (e.g. bisection acuity and spatial interval discrimination) E2 values were low, reaching about 0.075 deg, whereas in movement tasks the values could be as high as 16 deg. Using the method of spatial scaling, it has been possible to equate foveal and peripheral performance in many diverse visual tasks. The rate at which peripheral stimulus size had to be increased as a function of eccentricity was dependent upon the stimulus conditions and the task itself. Possible reasons for these findings are discussed.
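The linear scaling rule implied by E2 can be written as S(E) = S0 · (1 + E/E2). A minimal sketch, using the E2 values from the two ends of the reported 200-fold range and a hypothetical foveal threshold:

```python
def scaled_size(foveal_size_deg, eccentricity_deg, e2_deg):
    """Stimulus size giving peripheral performance equal to the fovea,
    under the linear spatial-scaling rule S(E) = S0 * (1 + E / E2)."""
    return foveal_size_deg * (1.0 + eccentricity_deg / e2_deg)

# Hypothetical foveal threshold of 0.1 deg, evaluated at 10 deg eccentricity
bisection_size = scaled_size(0.1, 10.0, e2_deg=0.075)  # spatial task, low E2
movement_size = scaled_size(0.1, 10.0, e2_deg=16.0)    # movement task, high E2
```

At the same eccentricity, the low-E2 spatial task demands roughly 80 times more magnification than the high-E2 movement task, which is why a single cortical magnification factor cannot serve all tasks.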

Relevance: 90.00%

Abstract:

2002 Mathematics Subject Classification: 62P35, 62P30.

Relevance: 90.00%

Abstract:

Fueled by increasing human appetite for high computing performance, semiconductor technology has now marched into the deep sub-micron era. As transistor size keeps shrinking, more and more transistors are integrated into a single chip. This has tremendously increased the power consumption and heat generation of IC chips. The rapidly growing heat dissipation greatly increases the packaging/cooling costs and adversely affects the performance and reliability of a computing system. In addition, it also reduces the processor's life span and may even crash the entire computing system. Therefore, dynamic thermal management (DTM) is becoming a critical problem in modern computer system design. Extensive theoretical research has been conducted to study the DTM problem. However, most of these studies are based on theoretically idealized assumptions or simplified models. While these models and assumptions help to greatly simplify a complex problem and make it theoretically manageable, practical computer systems and applications must deal with many practical factors and details beyond these models or assumptions. The goal of our research was to develop a test platform that can be used to validate theoretical results on DTM under well-controlled conditions, to identify the limitations of existing theoretical results, and also to develop new and practical DTM techniques. This dissertation details the background and our research efforts in this endeavor. Specifically, in our research, we first developed a customized test platform based on an Intel desktop. We then tested a number of related theoretical works and examined their limitations under the practical hardware environment. With these limitations in mind, we developed a new reactive thermal management algorithm for single-core computing systems to optimize the throughput under a peak temperature constraint.
We further extended our research to a multicore platform and developed an effective proactive DTM technique for throughput maximization on multicore processors, based on task migration and dynamic voltage and frequency scaling (DVFS). The significance of our research lies in the fact that it complements the current extensive theoretical research in dealing with increasingly critical thermal problems and enabling the continuous evolution of high performance computing systems.
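As a hedged illustration of the reactive throttle-on-threshold idea (not the dissertation's algorithm), the sketch below slows a core whenever a toy lumped thermal model crosses a peak-temperature constraint; all constants are made up:

```python
def reactive_dtm(heat_per_step, steps, t_max, t_ambient=40.0, cool_rate=0.2):
    """Toy reactive thermal manager: run at full speed until the modeled
    temperature reaches the peak constraint, throttle to half speed until it
    cools back through a hysteresis band. Returns (work done, peak temp)."""
    temp, work, throttled, peak = t_ambient, 0.0, False, t_ambient
    for _ in range(steps):
        if temp >= t_max:
            throttled = True
        elif temp <= t_max - 10.0:      # hysteresis before resuming full speed
            throttled = False
        speed = 0.5 if throttled else 1.0
        work += speed
        # lumped model: heating scales with speed, Newtonian cooling to ambient
        temp += heat_per_step * speed - cool_rate * (temp - t_ambient)
        peak = max(peak, temp)
    return work, peak

work, peak = reactive_dtm(heat_per_step=10.0, steps=200, t_max=80.0)
```

The trade-off the dissertation studies is visible even here: the peak temperature stays near the constraint, at the cost of throughput lost during the throttled intervals.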

Relevance: 90.00%

Abstract:

Many-core systems are emerging from the need for more computational power and better power efficiency. However, many issues still surround many-core systems: they need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to a 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, those few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed across the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values.
This necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, in modern many-core systems cores have support for dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach for thermal sensors is also proposed, to calibrate thermal sensors across a range of voltage levels.
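Dynamic balancing across cores of unequal speed can be sketched with a simple greedy scheme; this is an illustration of the general idea, not the SCC implementation, and the job costs and core speeds are hypothetical:

```python
import heapq

def balance(job_costs, core_speeds):
    """Greedy dynamic balancing: dispatch the largest remaining job to the
    core that becomes free first, mimicking a runtime work queue on cores
    running at different effective speeds. Returns the makespan."""
    free_at = [(0.0, cid) for cid in range(len(core_speeds))]
    heapq.heapify(free_at)
    for cost in sorted(job_costs, reverse=True):
        t, cid = heapq.heappop(free_at)
        heapq.heappush(free_at, (t + cost / core_speeds[cid], cid))
    return max(t for t, _ in free_at)

# Hypothetical fault-simulation job costs on two fast and two half-speed cores
jobs = [8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]
makespan = balance(jobs, core_speeds=[1.0, 1.0, 0.5, 0.5])
speedup = sum(jobs) / makespan   # vs. running everything on one fast core
```

Because placement happens at dispatch time rather than by a fixed partition, slow or congested cores simply receive less work, which is the essence of the runtime approach described above.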

Relevance: 90.00%

Abstract:

This study aimed to estimate the frequency, associated factors, and molecular characterisation of Entamoeba histolytica, Entamoeba dispar, Entamoeba moshkovskii, and Entamoeba hartmanni infections. We performed a survey (n = 213 subjects) to obtain parasitological, sanitation, and sociodemographic data. Faecal samples were processed through flotation and centrifugation methods. E. histolytica, E. dispar, E. moshkovskii, and E. hartmanni were identified by nested polymerase chain reaction (PCR). The overall prevalence of infection was 22/213 (10.3%). The infection rate among subjects who drink rainwater collected from roofs in tanks was higher than the rate in subjects who drink desalinated water pumped from wells; similarly, the infection rate among subjects who practice open defecation was significantly higher than that of subjects with latrines. Of the 22 samples positive for morphologically indistinguishable Entamoeba species, differentiation by PCR was successful for 21. The species distribution was as follows: 57.1% to E. dispar, 23.8% to E. histolytica, 14.3% to E. histolytica and E. dispar, and 4.8% to E. dispar and E. hartmanni. These data suggest a high prevalence of asymptomatic infection by the morphologically indistinguishable E. histolytica/E. dispar/E. moshkovskii complex and E. hartmanni. In this context of water scarcity, the sanitary and socioenvironmental characteristics of the region appear to favour transmission.

Relevance: 80.00%

Abstract:

With progressing CMOS technology miniaturization, leakage power consumption is starting to dominate dynamic power consumption. Recent technology trends have equipped modern embedded processors with several sleep states and reduced the overhead (energy/time) of sleep transitions. The potential of dynamic voltage and frequency scaling (DVFS) to save energy is diminishing due to efficient (low-overhead) sleep states and increased static (leakage) power consumption. The state-of-the-art research on static power reduction at the system level is based on assumptions that cannot easily be integrated into practical systems. We propose a novel enhanced race-to-halt approach (ERTH) to reduce the overall system energy consumption. Exhaustive simulations demonstrate the effectiveness of our approach, showing an improvement of up to 8% over an existing work.
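The diminishing DVFS advantage can be seen in a back-of-envelope energy model. This is a sketch under an assumed cubic dynamic-power law and invented power numbers, not ERTH itself:

```python
def energy_dvfs(cycles, window, p_leak, c_eff):
    """Stretch the work across the whole window at the lowest frequency that
    still meets it; dynamic power modeled as c_eff * f^3, leakage always on."""
    f = cycles / window
    return (c_eff * f ** 3 + p_leak) * window

def energy_race_to_halt(cycles, window, f_max, p_leak, c_eff, p_sleep):
    """Run flat out at f_max, then drop into a low-power sleep state."""
    t_busy = cycles / f_max
    return (c_eff * f_max ** 3 + p_leak) * t_busy + p_sleep * (window - t_busy)

# Hypothetical core: 10 W dynamic at f_max = 1 GHz, efficient 0.1 W sleep state
C = 10.0 / 1e9 ** 3
lo_leak = (energy_dvfs(1e9, 2.0, 2.0, C),
           energy_race_to_halt(1e9, 2.0, 1e9, 2.0, C, 0.1))
hi_leak = (energy_dvfs(1e9, 2.0, 20.0, C),
           energy_race_to_halt(1e9, 2.0, 1e9, 20.0, C, 0.1))
```

With low leakage the slow-and-stretch DVFS schedule wins; once leakage dominates and the sleep state is cheap, racing to halt uses less total energy, which is the regime the abstract describes.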

Relevance: 80.00%

Abstract:

A search for Higgs boson production in association with a W or Z boson, in the H → WW* decay channel, is performed with a data sample collected with the ATLAS detector at the LHC in proton-proton collisions at centre-of-mass energies √s = 7 TeV and 8 TeV, corresponding to integrated luminosities of 4.5 fb⁻¹ and 20.3 fb⁻¹, respectively. The WH production mode is studied in two-lepton and three-lepton final states, while two-lepton and four-lepton final states are used to search for the ZH production mode. The observed significance, for the combined WH and ZH production, is 2.5 standard deviations, while a significance of 0.9 standard deviations is expected under the Standard Model Higgs boson hypothesis. The ratio of the combined WH and ZH signal yield to the Standard Model expectation, μ_VH, is found to be μ_VH = 3.0 +1.3 −1.1 (stat.) +1.0 −0.7 (sys.) for a Higgs boson mass of 125.36 GeV. The WH and ZH production modes are also combined with the gluon fusion and vector boson fusion production modes studied in the H → WW* → ℓνℓν decay channel, resulting in an overall observed significance of 6.5 standard deviations and μ_ggF+VBF+VH = 1.16 +0.16 −0.15 (stat.) +0.18 −0.15 (sys.). The results are interpreted in terms of scaling factors of the Higgs boson couplings to vector bosons (κ_V) and fermions (κ_F); the combined results are |κ_V| = 1.06 ± 0.10 and |κ_F| = 0.85 +0.26 −0.20.

Relevance: 80.00%

Abstract:

The study of pod corn still seems of much importance from different points of view. The phylogenetic importance of the tunicate factor as a wild-type relic gene has recently been discussed in much detail by MANGELSDORF and REEVES (1939) and by BRIEGER (1943, 1944a and b). Selection experiments have shown that the pleiotropic effect of the Tu factor can be modified very extensively (BRIEGER 1944a), and some of the forms thus obtained permit comparison of male and female inflorescences in corn and related grasses. A detailed discussion of the botanical aspect will be given shortly. The genetic aspect, finally, is the subject of the present publication. Pod corn has been obtained twice: São Paulo Pod Corn and Bolivia Pod Corn. The former came from one half ear left in our laboratory by a student and belongs to the type of corn cultivated in the State of São Paulo, while the other belongs to the Andean group and was received both through Dr. CARDENAS, President of the University at Cochabamba, Bolivia, and through Dr. H. C. CUTLER, Harvard University, who collected material in the Andes. The results of the studies may be summarized as follows: 1) In both cases, pod corn is characterized by the presence of a dominant Tu factor, localized in the fourth chromosome and linked with su1. The crossover value differs somewhat from the mean value of 29% given by EMERSON, BEADLE and FRASER (1935), and was 25% in 1217 plants for São Paulo Pod Corn and 36.5% in 345 plants for Bolivia Pod Corn. However, not much importance should be attributed to these quantitative differences. 2) Segregation was completely normal in Bolivia Pod Corn, while São Paulo Pod Corn proved to be heterozygous for a new gametophyte factor.
3) Using BRIEGER'S formulas (1930, 1937a, 1937b), the following determinations were made: a) The elimination of ga4 pollen tubes may be strong or weak; in the former case only about 8%, and in the latter 37%, of ga4 pollen tubes function, instead of the 50% expected in normal heterozygotes. b) There is about 30.4% crossing-over between su1 and ga4, and 5.3% between Tu and ga4, the order of the factors being Su1 - Tu - Ga4. 4) The new gametophyte factor differs from the two other factors in the same chromosome that cause competition between pollen tubes. The factor Ga1 occupies another locus, considerably to the left of Su1 (EMERSON, BEADLE and FRASER, 1935). The gene sp1 occupies another locus and causes a difference in the size of the pollen grains, besides an elimination of pollen tubes, while no such differences were observed in the case of the new factor Ga4. 5) It may be mentioned, without entering into a detailed discussion, that it seems remarkable that three of the few gametophyte factors so far studied in detail are localized in chromosome four. Actually a few more are known (BRIEGER, TIDBURY and TSENG, 1938), but only one other has been localized so far: Ga2, in chromosome five between bt1 and pr1 (BRIEGER, 1935). 6) The fourth chromosome of corn seems to contain still other peculiarities. MANGELSDORF and REEVES (1939) concluded that it carries two translocations from Tripsacum chromosomes, and BRIEGER (1944b) suggested that the tu allele may have been introduced from a tripsacoid ancestor in substitution for the wild-type gene Tu at the beginning of domestication. Serious disturbances in the segregation of fourth-chromosome factors have been observed (BRIEGER, unpublished) in hybrids of Brazilian corn and Mexican teosinte, caused by gametophytic and possibly zygotic elimination.
Future studies must show whether there is any relation between the frequency of factors causing gametophyte elimination and the presence of chromosome regions transferred either from Tripsacum or a related species by translocation or crossing-over.

Relevance: 80.00%

Abstract:

Energy consumption is an increasingly important aspect of microprocessor design. This work experiments with a power-management technique, dynamic voltage and frequency scaling (DVFS), to determine how effective it is when executing programs with different workloads, whether compute-intensive or memory-intensive. In addition, the experiments were extended to several execution cores, making it possible to assess to what extent the characteristics of execution on a multicore architecture affect the performance of this technique.

Relevance: 80.00%

Abstract:

This thesis introduced Android as a hardware and application platform, and described how the user interface of an Android game can be kept consistent across different displays by means of scaling factors and anchoring. The second part of the work discussed simple ways in which the performance of game applications can be improved. Of these, a low-resolution draw buffer and the culling of off-screen objects were selected for closer measurement. In the measurements, the selected methods affected the performance of the demo application considerably. The work was restricted to Android programming in Java without external libraries, so its results can easily be applied in as many different use cases as possible.
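The scaling-factor and anchoring idea can be sketched in a few lines; a language-neutral Python illustration (the thesis itself works in Java), with a hypothetical design resolution:

```python
def scale_factor(design_w, design_h, screen_w, screen_h):
    """Uniform scale factor that fits a fixed design resolution onto any
    screen without distorting the aspect ratio (longer axis letterboxed)."""
    return min(screen_w / design_w, screen_h / design_h)

def anchored_x(anchor, offset_px, screen_w, scale):
    """Horizontal position measured from an anchor edge rather than in
    absolute pixels, so the layout holds together across resolutions."""
    base = {"left": 0.0, "center": screen_w / 2.0, "right": float(screen_w)}[anchor]
    return base + offset_px * scale

s = scale_factor(800, 480, 1920, 1080)   # hypothetical 800x480 design resolution
x = anchored_x("right", -100, 1920, s)   # 100 design-px in from the right edge
```

Because positions are expressed as anchor-plus-scaled-offset, the same layout code produces a consistent interface on any display size.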

Relevance: 80.00%

Abstract:

This work presents a synopsis of efficient strategies used in power management for achieving the most economical power and energy consumption in multicore systems, FPGAs, and NoC platforms. A practical approach was taken in an effort to validate the significance of the Adaptive Power Management Algorithm (APMA) proposed for the system developed for this thesis project. The system comprises an arithmetic and logic unit, up and down counters, an adder, a state machine, and a multiplexer. The purpose of the project was, first, to develop a system to be used for this power management work; second, to perform area and power synopses of the system on several scalable technology platforms (UMC 90 nm at 1.2 V, UMC 90 nm at 1.32 V, and UMC 0.18 μm at 1.80 V) in order to examine the differences in the system's area and power consumption across the platforms; and third, to explore strategies that can be used to reduce the system's power consumption and to propose an adaptive power management algorithm for doing so. The strategies introduced in this work comprise dynamic voltage and frequency scaling (DVFS) and task parallelism. After development, the system was run on an FPGA board, basically NoC platforms, and on the various technology platforms listed above; the system synthesis was successfully accomplished, the simulated result analysis shows that the system meets all functional requirements, and the power consumption and area utilization were recorded and analyzed in chapter 7 of this work.
This work also extensively reviewed various strategies for managing power consumption, drawn from quantitative research by many researchers and companies; it is a mixture of analytical study and experimental lab work, and it condenses and presents the basic concepts of power management strategy from quality technical papers.

Relevance: 80.00%

Abstract:

In recent years, the unpredictable growth of the Internet has further highlighted the congestion problem, one of the problems that have historically affected the network. This paper deals with the design and evaluation of a congestion control algorithm which adopts a Fuzzy Controller. The analogy between Proportional Integral (PI) regulators and Fuzzy controllers is discussed, and a method to determine the scaling factors of the Fuzzy controller is presented. It is shown that the Fuzzy controller outperforms the PI under traffic conditions which are different from those related to the operating point considered in the design.
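The PI/fuzzy analogy can be sketched via the textbook linear-rule-base equivalence; this is an illustration of that general correspondence, not the authors' specific controller, and the gains are hypothetical:

```python
def pi_increment(e, de, kp, ki, dt):
    """Velocity-form PI regulator: du = Kp * de + Ki * e * dt."""
    return kp * de + ki * e * dt

def fuzzy_pi_increment(e, de, g_e, g_de, g_u):
    """With symmetric triangular membership functions and a linear rule base,
    a fuzzy PI controller reduces to du = Gu * (sat(Ge*e) + sat(Gde*de)).
    Inside the unsaturated region it matches the PI exactly when the scaling
    factors satisfy Ge*Gu = Ki*dt and Gde*Gu = Kp."""
    sat = lambda x: max(-1.0, min(1.0, x))
    return g_u * (sat(g_e * e) + sat(g_de * de))

kp, ki, dt = 0.4, 0.2, 1.0
g_u = 1.0
g_e, g_de = ki * dt / g_u, kp / g_u   # scaling factors derived from the PI gains
small = (pi_increment(0.5, 0.25, kp, ki, dt),
         fuzzy_pi_increment(0.5, 0.25, g_e, g_de, g_u))
large = (pi_increment(10.0, 0.25, kp, ki, dt),
         fuzzy_pi_increment(10.0, 0.25, g_e, g_de, g_u))
```

Near the operating point the two controllers coincide; far from it the fuzzy controller's saturation (and, in a real design, its nonlinear rule base) makes it behave differently, which is where the paper finds its advantage.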

Relevance: 80.00%

Abstract:

To optimise the placement of small wind turbines in urban areas, a detailed understanding of the spatial variability of the wind resource is required. At present, due to a lack of observations, the NOABL wind speed database is frequently used to estimate the wind resource at a potential site. However, recent work has shown that this tends to overestimate the wind speed in urban areas. This paper suggests a method for adjusting the predictions of the NOABL in urban areas by considering the impact of the underlying surface on a neighbourhood scale, in which the nature of the surface is characterised at a 1 km² resolution using an urban morphology database. The model was then used to estimate the variability of the annual mean wind speed across Greater London at a height typical of current small wind turbine installations. Initial validation of the results suggests that the predicted wind speeds are considerably more accurate than the NOABL values. The derived wind map therefore currently provides the best opportunity to identify the neighbourhoods in Greater London at which small wind turbines would yield their highest energy production. The model does not consider street-scale processes; however, previously derived scaling factors can be applied to relate the neighbourhood wind speed to a value at a specific rooftop site. The results showed that the wind speed predicted across London is relatively low, exceeding 4 m s⁻¹ at only 27% of the neighbourhoods in the city. Of these sites, less than 10% are within 10 km of the city centre, with the majority over 20 km from the city centre. Consequently, it is predicted that small wind turbines tend to perform better towards the outskirts of the city; therefore, for cities which fit the Burgess concentric ring model, such as Greater London, distance from the city centre is a useful parameter for siting small wind turbines.
However, there are a number of neighbourhoods close to the city centre at which the wind speed is relatively high, and these sites can only be identified with a detailed representation of the urban surface, such as that developed in this study.

Relevance: 80.00%

Abstract:

Four new diorganotin(IV) complexes have been prepared from R₂SnCl₂ (R = Me, Ph) with the ligands 5-hydroxy-3-methyl-5-phenyl-1-(S-benzyldithiocarbazate)-pyrazoline (H₂L¹) and 5-hydroxy-3-methyl-5-phenyl-1-(2-thiophenecarboxylic)-pyrazoline (H₂L²). The complexes were characterized by elemental analysis, IR, ¹H, ¹³C and ¹¹⁹Sn NMR, and Mössbauer spectroscopy. The complexes [Me₂SnL¹], [Ph₂SnL¹] and [Me₂SnL²] were also studied by single-crystal X-ray diffraction, and the results showed that the Sn(IV) central atom of the complexes adopts a distorted trigonal bipyramidal (TBP) geometry, with the N atom of the ONX-tridentate (X = O and S) ligand and two organic groups occupying equatorial sites. The C-Sn-C angles for [Me₂Sn(L¹)] and [Ph₂Sn(L¹)] were calculated using a correlation between ¹¹⁹Sn Mössbauer and X-ray crystallographic data based on the point-charge model. Theoretical calculations were performed with the B3LYP density functional employing the 3-21G(*) and DZVP all-electron basis sets, showing good agreement with experimental findings. General and Sn(IV)-specific IR harmonic frequency scale factors for both basis sets were obtained from comparison with selected experimental frequencies.
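The standard least-squares recipe for such harmonic frequency scale factors can be illustrated briefly; the frequencies below are invented for illustration and are not taken from the paper:

```python
def frequency_scale_factor(calc, obs):
    """Least-squares scale factor minimising sum((lam*nu_calc - nu_obs)^2),
    which gives lam = sum(nu_calc * nu_obs) / sum(nu_calc^2)."""
    return sum(c * o for c, o in zip(calc, obs)) / sum(c * c for c in calc)

# Hypothetical calculated harmonic vs. observed fundamental frequencies, cm^-1;
# harmonic values typically overshoot, so the factor comes out below one
calc = [3200.0, 1750.0, 1100.0]
obs = [3050.0, 1700.0, 1080.0]
lam = frequency_scale_factor(calc, obs)
```

A single multiplicative factor of this kind, fitted against selected experimental frequencies, is what "general and Sn(IV)-specific scale factors for both basis sets" refers to above.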