934 results for Optimizing Compilation
Abstract:
Dry seeding of aman rice can facilitate timely crop establishment and early harvest, and thus help to alleviate the monga (hunger) period in the High Ganges Flood Plain of Bangladesh. Dry seeding also offers many other potential benefits, including reduced cost of crop establishment and improved soil structure for crops grown in rotation with rice. However, the optimum time for seeding in areas where farmers have access to water for supplementary irrigation has not been determined. We hypothesized that earlier sowing is safer, and that increasing the seed rate mitigates the adverse effects of significant rain after sowing on establishment and crop performance. To test these hypotheses, we analyzed long-term rainfall data and conducted field experiments on the effects of sowing date (target dates of 25 May, 10 June, 25 June, and 10 July) and seed rate (20, 40, and 60 kg ha−1) on crop establishment, growth, and yield of dry-seeded Binadhan-7 (short duration, 110–120 d) during the 2012 and 2013 rainy seasons. Wet soil as a result of untimely rainfall usually prevented sowing on the last two target dates in both years, but not on the first two. Rainfall analysis also suggested a high probability of being able to dry seed in late May/early June, and a low probability in late June/early July. Delaying sowing from 25 May/10 June to late June/early July usually resulted in 20–25% lower plant density and lower uniformity of the plant stand as a result of rain shortly after sowing. Delaying sowing also reduced crop duration, and tillering or biomass production when a low seed rate was used. For the late June/early July sowings, there was a strong positive relationship between plant density and yield, but this was not the case for earlier sowings. Thus, increasing the seed rate compensated for the adverse effect of untimely rains after sowing on plant density and for the shorter growth duration of the late-sown crops.
The results indicate that in this region, the optimum date for sowing dry seeded rice is late May to early June with a seed rate of 40 kg ha−1. Planting can be delayed to late June/early July with no yield loss using a seed rate of 60 kg ha−1, but in many years, the soil is simply too wet to be able to dry seed at this time due to rainfall.
Abstract:
Mechanical hill direct seeding of hybrid rice could solve the problems of high seeding rates and uneven plant establishment now faced in direct-seeded rice; however, it is not clear what the optimum hill seeding density should be for high-yielding hybrid rice in the single-season rice production system. Experiments were conducted in 2010 and 2011 to determine the effects of hill seeding density (25 cm × 15 cm, 25 cm × 17 cm, 25 cm × 19 cm, 25 cm × 21 cm, and 25 cm × 23 cm; three to five seeds per hill) on plant growth and grain yield of a hybrid variety, Nei2you6, in two fields with different fertility (soil fertility 1 and 2). In addition, in 2012 and 2013, comparisons among mechanical hill seeding, broadcasting, and transplanting were conducted with three hybrid varieties to evaluate the optimum seeding density. With increases in seeding spacing from 25 cm × 15 cm to 25 cm × 23 cm, productive tillers per hill increased by 34.2% and 50.0% in soil fertility 1 and 2, respectively. Panicles per m2 declined with increases in seeding spacing in soil fertility 1. In soil fertility 2, no difference in panicles per m2 was found at spacings ranging from 25 cm × 17 cm to 25 cm × 23 cm, while decreases in the area of the top three leaves and aboveground dry weight per shoot at flowering were observed. Grain yield was highest at the 25 cm × 17 cm spacing in both fields. Our results suggest that a seeding density of 25 cm × 17 cm is suitable for high-yielding hybrid rice. These results were verified through on-farm demonstration experiments, in which mechanical hill-seeded rice at this density had equal or higher grain yield than transplanted rice.
Abstract:
Standard mechanism inhibitors are attractive design templates for engineering reversible serine protease inhibitors. When optimizing interactions between the inhibitor and target protease, many studies focus on the nonprimed segment of the inhibitor's binding loop (encompassing the contact β-strand). However, there are currently few methods for screening residues on the primed segment. Here, we designed a synthetic inhibitor library (based on sunflower trypsin inhibitor-1) for characterizing the P2′ specificity of various serine proteases. Screening the library against 13 different proteases revealed unique P2′ preferences for trypsin, chymotrypsin, matriptase, plasmin, thrombin, four kallikrein-related peptidases, and several clotting factors. Using this information to modify existing engineered inhibitors yielded new variants that showed considerably improved selectivity, reaching up to 7000-fold selectivity over certain off-target proteases. Our study demonstrates the importance of the P2′ residue in standard mechanism inhibition and unveils a new approach for screening P2′ substitutions that will benefit future inhibitor engineering studies.
Abstract:
An estimate of the groundwater budget at the catchment scale is extremely important for the sustainable management of available water resources. Water resources are generally subjected to over-exploitation for agricultural and domestic purposes in agrarian economies like India. The double water-table fluctuation method is a reliable method for calculating the water budget in semi-arid crystalline rock areas. Extensive measurements of water levels from a dense network before and after the monsoon rainfall were made in a 53 km² watershed in southern India, and various components of the water balance were then calculated. Later, the water level data underwent geostatistical analyses to determine the priority and/or redundancy of each measurement point using a cross-validation method. An optimal network evolved from these analyses. The network was then used to re-calculate the water-balance components. It was established that such an optimized network provides far fewer measurement points without considerably changing the conclusions regarding the groundwater budget. This exercise is helpful in reducing the time and expenditure involved in exhaustive piezometric surveys, and also in determining the water budget for large watersheds (greater than 50 km²).
Abstract:
XVIII IUFRO World Congress, Ljubljana 1986.
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, but is now being increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control flow dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control flow dominated regions to the CPU and the data parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. In order to ensure required data movement for dependencies across basic blocks, we propose a data flow analysis and edge splitting strategy. Thus, our compiler automatically handles composition of kernels, mapping of kernels to CPU and GPU, scheduling, and insertion of required data transfer. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data parallel benchmarks over native execution of MATLAB.
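The statement-to-kernel composition step that the abstract formulates as constrained graph clustering can be loosely illustrated with a toy greedy pass. This is not the paper's algorithm, just a minimal sketch of the idea of grouping consecutive data-parallel statements into GPU kernels while scalar statements stay on the CPU; the statement strings and parallel/scalar flags are invented:

```python
def compose_kernels(statements):
    """Greedy sketch: group maximal runs of consecutive data-parallel
    statements into GPU kernels; a scalar statement breaks the run and
    is mapped to the CPU.  Each statement is (text, is_data_parallel)."""
    kernels, current = [], []
    for text, parallel in statements:
        if parallel:
            current.append(text)
        else:
            if current:
                kernels.append(("GPU", current))
                current = []
            kernels.append(("CPU", [text]))
    if current:
        kernels.append(("GPU", current))
    return kernels

# Hypothetical MATLAB-like program fragment.
prog = [("A = B .* C", True), ("D = A + 1", True),
        ("if s > 0", False), ("E = D .^ 2", True)]
print(compose_kernels(prog))
# → [('GPU', ['A = B .* C', 'D = A + 1']), ('CPU', ['if s > 0']), ('GPU', ['E = D .^ 2'])]
```

The real clustering in MEGHA additionally weighs dependences and data-transfer cost when deciding cluster boundaries; the sketch above captures only the grouping structure.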
Abstract:
In this paper, we model a scenario where a ship uses decoys to evade a hostile torpedo. We address the problem of enhancing ship survivability against enemy torpedoes using single and multiple decoy deployments. We incorporate deterministic ship maneuvers and realistic constraints on turn rates, field of view, etc., into the model. We formulate an objective function that quantifies and maximizes the survivability of the ship in terms of maximizing the intercept time. We introduce the concepts of optimal deployment regions, same-side deployment, and zig-zag deployment strategies. Finally, we present simulation results.
Abstract:
The resolution of the digital signal path has a crucial impact on the design, performance, and power dissipation of the radio receiver data path downstream from the ADC. The ADC quantization noise has traditionally been included with the front-end receiver noise in calculating the SNR as well as the BER for the receiver. Using IEEE 802.15.4 as an example, we show that this approach leads to over-design of the ADC and the digital signal path, resulting in higher power consumption. More accurate specifications for the front-end design can be obtained by making the required SNR (SNRreq) a function of signal resolution. We show that lower-resolution signals provide adequate performance and that quantization noise alone does not produce any bit errors. We find that a tight bandpass filter preceding the ADC can relax the resolution requirement, and a 1-bit ADC degrades SNR by only 1.35 dB compared to an 8-bit ADC. Signal resolution has a larger impact on synchronization, and a 1-bit ADC costs about 5 dB in SNR to maintain the same level of performance as an 8-bit ADC.
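The resolution trade-off quantified in this abstract rests on the standard quantization-noise model. A minimal sketch measuring the empirical signal-to-quantization-noise ratio of a uniform mid-rise quantizer illustrates the familiar ~6 dB-per-bit behaviour; the quantizer and test signal here are generic illustrations, not the 802.15.4 receiver chain itself:

```python
import math

def sqnr_db(bits, n=4096):
    """Empirical SQNR (dB) for a full-scale sinusoid through a uniform
    mid-rise quantizer with the given bit width."""
    levels = 2 ** bits
    step = 2.0 / levels                            # full scale spans [-1, 1)
    sig_pow = noise_pow = 0.0
    for k in range(n):
        x = math.sin(2.0 * math.pi * 31 * k / n)   # coherent sampling
        q = (math.floor(x / step) + 0.5) * step    # mid-rise quantization
        q = max(-1.0 + step / 2, min(1.0 - step / 2, q))  # clamp to range
        sig_pow += x * x
        noise_pow += (q - x) ** 2
    return 10.0 * math.log10(sig_pow / noise_pow)

for b in (1, 4, 8):
    print(b, "bits:", round(sqnr_db(b), 1), "dB")
```

The ideal-quantizer gap between 1 and 8 bits is far larger than the 1.35 dB system-level penalty the abstract reports, which is exactly its point: in a receiver, quantization noise is only one contributor to the overall SNR budget.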
Abstract:
In this paper, we study how TCP and UDP flows interact with each other when the end system is a CPU-resource-constrained thin client. The problem addressed is twofold: 1) the throughput of TCP flows degrades severely in the presence of heavily loaded UDP flows, and 2) fairness and minimum QoS requirements of UDP flows are not maintained. First, we identify the factors affecting TCP throughput through an in-depth analysis of end-to-end delay and packet loss variations. The results from the first part lead us to our second contribution: we propose and study an algorithm that ensures fairness across flows. The algorithm improves the performance of TCP flows in the presence of multiple UDP flows admitted under an admission algorithm, while maintaining the minimum QoS requirements of the UDP flows. The advantage of the algorithm is that it requires no changes to the TCP/IP stack; control is achieved through receiver window control.
Abstract:
We consider the classical problem of sequential detection of change in a distribution (from hypothesis 0 to hypothesis 1), where the fusion centre receives vectors of periodic measurements, with the measurements being i.i.d. over time and across the vector components under each of the two hypotheses. In our problem, the sensor devices ("motes") that generate the measurements constitute an ad hoc wireless network. The motes contend using a random access protocol (such as CSMA/CA) to transmit their measurement packets to the fusion centre. The fusion centre waits for vectors of measurements to accumulate before taking decisions. We formulate the optimal detection problem, taking into account the network delay experienced by the vectors of measurements, and find that, under periodic sampling, the detection delay decouples into network delay and decision delay. We obtain a lower bound on the network delay and propose a censoring scheme, where lagging sensors drop their delayed observations in order to mitigate network delay. We show that this scheme can achieve the lower bound. This approach is explored via simulation. We also use numerical evaluation and simulation to study issues such as the optimal sampling rate for a given number of sensors, and the optimal number of sensors for a given measurement rate.
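The decision-delay half of this problem rests on classical sequential change detection. A minimal CUSUM sketch shows the mechanics for a mean shift in i.i.d. Gaussian samples; CUSUM is a standard tool for this setting rather than necessarily the authors' exact test, and all parameters and the noise-free step input below are illustrative:

```python
def cusum_detect(samples, mu0, mu1, sigma, threshold):
    """Page's CUSUM test for a mean shift in i.i.d. Gaussian samples.

    Accumulates the log-likelihood ratio of "mean mu1" versus "mean mu0",
    clipped at zero; a change is declared when the statistic crosses
    `threshold`.  Returns the 1-based detection index, or None if no
    change is declared.
    """
    stat = 0.0
    for n, x in enumerate(samples, start=1):
        llr = ((mu1 - mu0) / sigma ** 2) * (x - (mu0 + mu1) / 2.0)
        stat = max(0.0, stat + llr)
        if stat > threshold:
            return n
    return None

# Noise-free step: 100 pre-change samples at 0, then post-change samples at 1.
# Pre-change the LLR is -0.5 per sample (the statistic stays at 0); post-change
# it is +0.5, so the threshold of 8 is crossed 17 samples after the change.
samples = [0.0] * 100 + [1.0] * 100
print(cusum_detect(samples, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0))  # → 117
```

In the networked setting of the abstract, the 17-sample lag here corresponds to the decision-delay component; the network delay of getting each vector to the fusion centre adds on top of it.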
Abstract:
In a dense multi-hop network of mobile nodes capable of adaptive power control, we consider the problem of finding the optimal hop distance that maximizes a certain throughput measure in bit-metres/sec, subject to average network power constraints. The mobility of each node is restricted to a circular area centered at its nominal location. We incorporate only the randomly varying path-loss characteristics of the channel gain due to the random motion of nodes, excluding multi-path fading and shadowing effects. Computing the throughput metric in this scenario requires the probability density function of the random distance between points in two circles. Using numerical analysis, we find that choosing the nearest node as the next hop is not always optimal; optimal throughput can also be attained at non-trivial hop distances, depending on the available average network power.
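The key geometric quantity here, the distribution of the distance between uniform random points in two circles, is easy to probe by Monte Carlo. A small sketch estimating the mean inter-point distance (circle radii, separation, and sample count are illustrative, not values from the paper):

```python
import math
import random

def point_in_disc(cx, cy, r, rng):
    """Uniform point in a disc; radius r*sqrt(U) avoids clustering at the centre."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    rad = r * math.sqrt(rng.uniform(0.0, 1.0))
    return cx + rad * math.cos(theta), cy + rad * math.sin(theta)

def mean_inter_disc_distance(c1, c2, r, n=100_000, seed=1):
    """Monte Carlo estimate of the mean distance between uniform points
    in two discs of radius r centred at c1 and c2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x1, y1 = point_in_disc(c1[0], c1[1], r, rng)
        x2, y2 = point_in_disc(c2[0], c2[1], r, rng)
        total += math.hypot(x1 - x2, y1 - y2)
    return total / n

# Nominal node locations 10 units apart, each jittered within radius 1:
# the mean hop distance comes out slightly above the nominal 10.
print(mean_inter_disc_distance((0.0, 0.0), (10.0, 0.0), 1.0))
```

Collecting a histogram of the sampled distances instead of their mean gives a numerical stand-in for the probability density function used in the throughput computation.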
Abstract:
The β-phase of polyvinylidene fluoride (PVDF) is well known for its piezoelectric properties. PVDF films were developed using a solvent-cast method; the films thus produced are in the α-phase. The α-phase is transformed into the piezoelectric β-phase when the film is hot-stretched at various stretching factors and temperatures. The films are then characterized in terms of their mechanical properties and the surface morphological changes during the α-to-β transformation using X-ray diffraction, differential scanning calorimetry, Raman spectroscopy, infrared spectroscopy, tensile testing, and scanning electron microscopy. The films showed increased crystallinity with stretching at temperatures up to 80°C. The optimum conditions to achieve the β-phase are discussed in detail. The fabricated PVDF sensors were tested for free vibration and impact on a plate structure, and their response is compared with that of a conventional piezoelectric wafer-type sensor. The resonant and antiresonant peaks in the frequency response of the PVDF sensor match well with those of lead zirconate titanate wafer sensors. Effective piezoelectric properties and the variations in the frequency response spectra under free vibration and impact loading conditions are reported. POLYM. ENG. SCI., 2012. © 2012 Society of Plastics Engineers.