969 results for High level architecture


Relevance: 90.00%

Abstract:

An experiment using herds of ~20 cows (farmlets) assessed the effects of high stocking rates on the production and profitability of feeding systems based on dryland and irrigated perennial ryegrass-based pastures in a Mediterranean environment in South Australia over 4 years. A target level of milk production of 7000 L/cow.year was set, based on predicted intakes of 2.7 t DM/cow.year as concentrates, pasture intakes from 1.5 to 2.7 t/cow.year, and purchased fodder. In years 1 and 2, up to 1.5 t DM/cow.year of purchased fodder was used, and in years 3 and 4 the amounts were increased if necessary to enable milk production per cow to be maintained at target levels. Cows in dryland farmlets calved in March to May inclusive and were stocked at 2.5, 2.9, 3.3, 3.6 and 4.1 cows/ha, while those in irrigated farmlets calved in August to October inclusive and were stocked at 4.1, 5.2, 6.3 and 7.4 cows/ha. In the first 2 years, when inputs of purchased fodder were limited, milk production per cow was reduced at higher stocking rates (P < 0.01), but in years 3 and 4 there were no differences. Mean production was 7149 kg/cow.year in years 1 and 2, and 8162 kg/cow.year in years 3 and 4. Production per hectare was very closely related to stocking rate in all years (P < 0.01), increasing from 18 to 34 t milk/ha.year for dryland farmlets (1300 to 2200 kg milk solids/ha) and from 30 to 60 t milk/ha.year for irrigated farmlets (2200 to 4100 kg milk solids/ha). Almost all of these increases were attributed to the increases in grain and purchased fodder inputs associated with the increases in stocking rate. Net pasture accumulation rates and pasture harvest were generally not altered by stocking rate, though as stocking rate increased more of the pasture was grazed and less was conserved, in both dryland and irrigated farmlets. Total pasture harvest averaged ~8 and 14 t DM/ha.year for dryland and irrigated pastures, respectively. An exception was at the highest stocking rate under irrigation, where pugging during winter was associated with a 14% reduction in annual pasture growth. There were several indications that these high stocking rates may not be sustainable without substantial changes in management practice. There were large, positive nutrient balances and associated increases in soil mineral content (P < 0.01), especially for phosphorus and nitrate nitrogen, with both stocking rate and succeeding years. Levels under irrigation were considerably higher (up to 90 and 240 mg/kg of soil for nitrate nitrogen and phosphorus, respectively) than under dryland pastures (60 and 140 mg/kg, respectively). Soil organic carbon levels did not change with stocking rate, indicating a high level of utilisation of the forage grown. Weed ingress was also high (up to 22% DM) in all treatments, especially in heavily stocked irrigated pastures during winter. It was concluded that the higher stocking rates used exceeded those feasible for Mediterranean pastures in this environment, and upper levels of 2.5 cows/ha for dryland pastures and 5.2 cows/ha for irrigated pastures are suggested. Sustaining even these suggested stocking rates will require further development of management practices to avoid large increases in soil minerals and weed invasion of pastures.

Relevance: 90.00%

Abstract:

The immediate effects of two human-related vegetation disturbances, (1) green tree retention (GTR) patch felling with scarification by harrowing and (2) experimental removal of understorey vegetation layers, were examined in boreal forest stands in Finland. The effects of GTR patch felling and scarification on tree uprooting, on coarse woody debris (CWD), and on the epixylic plant community were followed in upland and paludified forest types. Uprooting increased considerably during the 2-3 years after felling and was more frequent in the paludified (47%) than in the upland forest (13%). Scarification removed 68% of the CWD in the felling area. Cover, and especially species richness, of epixylics declined in both areas during the 1-2 years after felling. Increasing GTR patch size correlated positively with species richness. Regeneration of the understorey vegetation community, Vaccinium myrtillus, and Vaccinium vitis-idaea after removal of different vegetation layers in an old-growth forest took four years. Regeneration occurred mainly by vegetative means and was faster in terms of species richness than of cover. In the most severe treatment, recovery occurred only by sexual reproduction. V. myrtillus recovered mainly by producing new shoots, while V. vitis-idaea recovered faster than V. myrtillus, mainly by increasing length growth. For ecological reasons, the use of larger GTR patches on paludified biotopes is recommended. In felling areas, scarification by harrowing could be replaced with a spot-wise method. After a disturbance of moderate intensity, recovery occurs rapidly through vegetative regrowth of the dominating species. A high-intensity disturbance may prevent the recovery of the vegetation community for years, while also enabling genetic (sexual) regeneration of the initial species. Local anthropogenic disturbances are currently increasing and can interact over short time spans, which should be taken into account in future forest management plans.

Relevance: 90.00%

Abstract:

Multi-agent systems (MAS) advocate an agent-based approach to software engineering based on decomposing problems in terms of decentralized, autonomous agents that can engage in flexible, high-level interactions. This chapter introduces the Scalable fault-tolerant Agent Grooming Environment (SAGE), a second-generation Foundation for Intelligent Physical Agents (FIPA)-compliant multi-agent system developed at NIIT-Comtec, which provides an environment for creating distributed, intelligent, and autonomous entities that are encapsulated as agents. The chapter focuses on the main highlight of SAGE, its decentralized fault-tolerant architecture, which can be used to develop applications in areas such as e-health, e-government, and e-science. In addition, the SAGE architecture provides tools for runtime agent management, directory facilitation, monitoring, and editing of the messages exchanged between agents. SAGE also provides a built-in mechanism for programming agents' behavior and capabilities with the help of its autonomous agent architecture, which is the other major highlight of this chapter. The authors believe that the market for agent-based applications is growing rapidly and that SAGE can play a crucial role in the development of future intelligent applications. © 2007, IGI Global.
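As a rough illustration of the agent-programming model that FIPA-compliant platforms of this kind expose, here is a minimal Python sketch of a message-driven agent with registered behaviors and a toy directory facilitator. All class and method names are invented for illustration; this is not the actual SAGE API.

```python
# Hypothetical sketch of a FIPA-style message-driven agent; not the SAGE API.
from dataclasses import dataclass

@dataclass
class Message:                 # loosely modeled on a FIPA ACL message
    performative: str          # e.g. "request", "inform"
    sender: str
    receiver: str
    content: str

class Agent:
    def __init__(self, name, directory):
        self.name, self.directory = name, directory
        self.behaviors = {}
        directory[name] = self            # register with the directory facilitator

    def on(self, performative, handler):
        """Register a behavior for a given performative."""
        self.behaviors[performative] = handler

    def send(self, receiver, performative, content):
        msg = Message(performative, self.name, receiver, content)
        self.directory[receiver].receive(msg)

    def receive(self, msg):
        self.behaviors[msg.performative](msg)

df = {}                                   # toy directory facilitator
ping = Agent("ping", df); pong = Agent("pong", df)
pong.on("request", lambda m: pong.send(m.sender, "inform", "pong"))
ping.on("inform", lambda m: print(f"{m.sender} replied: {m.content}"))
ping.send("pong", "request", "ping")      # prints: pong replied: pong
```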

Relevance: 90.00%

Abstract:

Prior work on modeling interconnects has focused on optimizing wire and repeater design to trade off energy and delay, and is largely based on low-level circuit parameters. Hence these models are hard to use directly for making high-level microarchitectural trade-offs in the initial exploration phase of a design. In this paper, we propose INTACTE, a tool that architects can use to get reasonably accurate interconnect area, delay, and power estimates from a few architecture-level parameters for the interconnect, such as length, width (in number of bits), frequency, and latency, for a specified technology and voltage. The tool uses well-known models of interconnect delay and energy that take into account the wire pitch, repeater size, and repeater spacing for a range of voltages and technologies. It then solves the optimization problem of finding the lowest-energy interconnect design, in terms of the low-level circuit parameters, that meets the architectural constraints given as inputs. In addition, the tool provides the area, energy, and delay for a range of supply voltages and degrees of pipelining, which can be used for microarchitectural exploration of a chip. The delay and energy models used by the tool have been validated against low-level circuit simulations. We discuss several potential applications of the tool and present an example of optimizing interconnect design in the context of clustered VLIW architectures. Copyright 2007 ACM.
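As a sketch of the kind of optimization such a tool performs, the following Python fragment brute-forces the repeater count and size that minimize switching energy subject to a delay budget, using textbook Elmore-delay repeater-insertion formulas. All technology numbers are invented placeholders; the real tool's models are considerably more detailed.

```python
# Minimal sketch of delay-constrained, energy-minimal repeated-wire design.
# Device/wire parameters below are illustrative placeholders only.

R_W, C_W = 200e3, 200e-12   # wire resistance (ohm/m) and capacitance (F/m)
R_0, C_0 = 10e3, 1e-15      # min-size inverter output resistance / input cap
VDD = 1.0                   # supply voltage (V)

def delay(length, n, s):
    """Elmore delay of a wire of `length` m split into n repeated segments,
    each driven by a repeater of size s (in min-size units)."""
    rw, cw = R_W * length / n, C_W * length / n
    seg = 0.69 * (R_0 / s) * (cw + s * C_0) + 0.69 * rw * (cw / 2 + s * C_0)
    return n * seg

def energy(length, n, s):
    """Dynamic switching energy per transition: wire cap + repeater input caps."""
    return (C_W * length + n * s * C_0) * VDD ** 2

def optimize(length, delay_budget):
    """Lowest-energy (energy, n, s) meeting the delay budget, by exhaustive search."""
    best = None
    for n in range(1, 64):
        for s in range(1, 200):
            if delay(length, n, s) <= delay_budget:
                e = energy(length, n, s)
                if best is None or e < best[0]:
                    best = (e, n, s)
    return best

print(optimize(5e-3, 500e-12))  # 5 mm wire under a 500 ps budget
```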

Relevance: 90.00%

Abstract:

Instruction reuse is a microarchitectural technique that improves the execution time of a program by removing redundant computations at run-time. Although this is the job of an optimizing compiler, compilers often fail to do so because of their limited knowledge of run-time data. In this paper we examine instruction reuse of integer ALU and load instructions in network processing applications. Specifically, this paper attempts to answer the following questions: (1) How much instruction reuse is inherent in network processing applications? (2) Can reuse be improved by reducing interference in the reuse buffer? (3) What characteristics of network applications can be exploited to improve reuse? (4) What is the effect of reuse on resource contention and memory accesses? We propose an aggregation scheme that combines a high-level concept of network traffic, i.e., "flows", with a low-level microarchitectural feature of programs, i.e., repetition of instructions and data, along with an architecture that exploits temporal locality in incoming packet data to improve reuse. We find that, for the benchmarks considered, 1% to 50% of instructions are reused, while the speedup achieved varies between 1% and 24%. As a side effect, instruction reuse reduces memory traffic and can therefore also be considered a scheme for low power.
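To make the mechanism concrete, here is a minimal Python sketch (names invented for illustration, not taken from the paper) of a reuse buffer that maps an instruction's PC and input operands to a previously computed result, so a repeated operation can skip execution:

```python
# Toy model of dynamic instruction reuse via a reuse buffer.
reuse_buffer = {}  # (pc, op, src1, src2) -> result
hits = misses = 0

def execute(pc, op, src1, src2):
    """Look up the reuse buffer before executing; insert on a miss."""
    global hits, misses
    key = (pc, op, src1, src2)
    if key in reuse_buffer:
        hits += 1
        return reuse_buffer[key]          # reuse: no ALU work needed
    misses += 1
    result = {"add": src1 + src2, "and": src1 & src2}[op]
    reuse_buffer[key] = result
    return result

# Packets from the same flow tend to present identical header fields, so the
# same (pc, operands) keys recur and hit in the buffer.
for _ in range(3):
    execute(0x400, "add", 10, 20)
print(hits, misses)  # 2 1
```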

Relevance: 90.00%

Abstract:

Noise measurements from 140°K to 350°K ambient temperature and between 10 kHz and 22 MHz performed on a double injection silicon diode as a function of operating point indicate that the high frequency noise depends linearly on the ambient temperature T and on the differential conductance g measured at the same frequency. The noise is represented quantitatively by ⟨i²⟩ = α·4kTgΔf. A new interpretation demands Nyquist noise with α ≡ 1 in these devices at high frequencies. This is in accord with an equivalent circuit derived for the double injection process. The effects of diode geometry on the static I-V characteristic as well as on the ac properties are illustrated. Investigation of the temperature dependence of double injection yields measurements of the temperature variation of the common high-level lifetime τ (τ ∝ T^2), the hole conductivity mobility µ_p (µ_p ∝ T^(-2.18)), and the electron conductivity mobility µ_n (µ_n ∝ T^(-1.75)).
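As a quick numeric illustration of the quoted relation, assuming α = 1 (pure Nyquist noise); the operating-point values below are invented, not from the paper:

```python
# Numeric check of <i^2> = alpha * 4kTg * df with alpha = 1.
k = 1.380649e-23          # Boltzmann constant, J/K

def rms_noise_current(T, g, df, alpha=1.0):
    """RMS noise current for temperature T (K), differential conductance
    g (S), and measurement bandwidth df (Hz)."""
    return (alpha * 4 * k * T * g * df) ** 0.5

# e.g. 300 K, g = 1 mS, 1 Hz bandwidth -> ~4.1 pA per sqrt(Hz)
print(rms_noise_current(300, 1e-3, 1.0))
```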

Relevance: 90.00%

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach that depth) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides the underlying ground-truth of the simulated images, because we specify it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which are explained in detail later in the thesis.
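To give a flavor of the variance-reduction machinery involved, here is a toy Python sketch of importance sampling with weight compensation plus Russian roulette in a 1-D random walk. The geometry and all probabilities are illustrative only and are far simpler than the thesis's actual OCT model:

```python
# Toy 1-D photon-packet walk with importance sampling and Russian roulette.
import random

def trace(mu_t=10.0, p_flip_true=0.1, p_flip_biased=0.5, max_events=50):
    """Random-walk one photon packet into a slab; return its weight if it
    back-scatters out to the detector at z <= 0, else 0."""
    weight, z, direction = 1.0, 0.0, +1
    for _ in range(max_events):
        z += direction * random.expovariate(mu_t)    # free path length
        if z <= 0:
            return weight                            # escapes to detector
        # Back-scattering is rare (p_flip_true) but is the only way to reach
        # the detector, so sample it more often and compensate the weight.
        if random.random() < p_flip_biased:
            weight *= p_flip_true / p_flip_biased
            direction = -direction
        else:
            weight *= (1 - p_flip_true) / (1 - p_flip_biased)
        if weight < 1e-3:                            # Russian roulette
            if random.random() < 0.5:
                return 0.0
            weight *= 2.0
    return 0.0

signal = sum(trace() for _ in range(100_000)) / 100_000
print(signal)
```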

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do even better. For simple structures we are able to reconstruct the ground-truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we achieve about 93%. We achieved this through extensive use of machine learning. The success of the Monte Carlo simulation puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure, which predicts the thickness of the different layers and thereby reconstructs the ground-truth of the image. We also demonstrate that ideas from Deep Learning can further improve the performance.
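A minimal sketch of this classify-then-regress hierarchy, using scikit-learn on synthetic stand-ins for the (image, truth) pairs; the model choices and data shapes here are illustrative, not the thesis's actual architecture:

```python
# Classify-then-regress hierarchy on synthetic (image, truth) stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))                    # flattened image features
structure = rng.integers(0, 3, size=1000)          # ground-truth structure class
thickness = rng.uniform(0.1, 1.0, size=(1000, 4))  # per-layer ground truth

# Stage 1: one classifier decides which structure an image contains.
clf = RandomForestClassifier(n_estimators=50).fit(X, structure)

# Stage 2: one regressor per structure class, trained only on its own data.
regs = {c: RandomForestRegressor(n_estimators=50).fit(X[structure == c],
                                                      thickness[structure == c])
        for c in np.unique(structure)}

def reconstruct(x):
    """Predict the structure class, then hand off to that class's regressor."""
    c = clf.predict(x.reshape(1, -1))[0]
    return c, regs[c].predict(x.reshape(1, -1))[0]

print(reconstruct(X[0]))
```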

It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is the first attempt, and a successful one, to reconstruct an OCT image at the pixel level. Even attempting such a task requires a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.

Relevance: 90.00%

Abstract:

A study was conducted among fifty women fish vendors in Kancheepuram and Chennai districts to determine the factors influencing their livelihood index and level of aspiration. Independent variables such as annual income, scientific orientation, annual expenditure, and annual savings were found to have the highest factor loadings on the livelihood index and level of aspiration of the fisherwomen. In addition, most of the fisherwomen had a high livelihood index (score greater than 50) and a high level of aspiration (score greater than 13).

Relevance: 90.00%

Abstract:

The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors, and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making.
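For concreteness, the model-free prediction error that striatal BOLD is commonly assumed to report is the temporal-difference (TD) error. Here is a minimal sketch, with states, rewards, and learning rate invented for illustration:

```python
# Minimal model-free TD prediction-error update.
values = {"s1": 0.0, "s2": 0.0}   # state-value estimates
alpha, gamma = 0.1, 0.95          # learning rate, discount factor

def td_update(s, r, s_next):
    """One model-free update: delta = r + gamma*V(s') - V(s)."""
    delta = r + gamma * values[s_next] - values[s]
    values[s] += alpha * delta
    return delta                   # the "prediction error" regressor

# A purely model-based agent would instead compute values by planning over a
# learned transition model; the study asks which signal striatal BOLD tracks.
print(td_update("s1", 1.0, "s2"), values)
```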

Relevance: 90.00%

Abstract:

Although the peritrichous ciliate Carchesium polypinum is common in freshwater, its population genetic structure is largely unknown. We used inter-simple sequence repeat (ISSR) fingerprinting to analyze the genetic structure of 48 isolates of the species from four lakes in Wuhan, central China. Using eight polymorphic primers, 81 discernible DNA fragments were detected, of which 76 (93.83%) were polymorphic, indicating high genetic diversity at the isolate level. Further, Nei's gene diversity (h) and Shannon's information index (I) between the different isolates both revealed remarkable genetic diversity, higher than their morphology had previously indicated. At the same time, substantial gene flow was found, so the high level of diversity within populations is probably due to conjugation (sexual reproduction) and the wide distribution of swarmers. Analysis of molecular variance (AMOVA) showed low genetic differentiation among the four populations, probably due to common ancestry and flooding events. Cluster analysis and principal component analysis (PCA) suggested that genotypes isolated from the same lake displayed a higher genetic similarity than those from different lakes; both analyses separated the C. polypinum isolates into subgroups according to geographical location. However, there was only a weak positive correlation between genetic distance and geographical distance, suggesting a minor effect of geographical distance on the distribution of genetic diversity between populations of C. polypinum at the local level. In conclusion, our study clearly demonstrates that a single morphospecies may harbor high levels of genetic diversity, and that the resolution offered by morphology as a marker for measuring the distribution patterns of genetically distinct entities is too low.

Relevance: 90.00%

Abstract:

A programmable vision chip for real-time vision applications is presented. The chip architecture combines a SIMD processing element (PE) array with row-parallel processors, which can perform pixel-parallel and row-parallel operations at high speed. It implements the mathematical morphology method to carry out low-level and mid-level image processing and sends out image features for high-level image processing without an I/O bottleneck. The chip can perform many algorithms under software control. The simulated maximum frequency of the vision chip is 300 MHz at a 16 x 16-pixel resolution, achieving a rate of 1000 frames per second for real-time vision. A prototype chip with a 16 x 16 PE array was fabricated in a 0.18 µm standard CMOS process. It has a pixel size of 30 µm x 40 µm and 8.72 mW power consumption with a 1.8 V power supply. Experiments including the mathematical morphology method and a target-tracking application demonstrated that the chip is fully functional and can be applied in real-time vision applications.
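As an illustration of the low-level morphology operations such a pixel-parallel array accelerates, here is a binary 3x3 erosion in NumPy; on the chip, each PE would apply the same neighborhood rule to its own pixel simultaneously, rather than via these sequential shifts:

```python
# Binary 3x3 erosion, the basic mathematical-morphology primitive.
import numpy as np

def erode(img):
    """A pixel survives only if its whole 3x3 neighborhood is set."""
    p = np.pad(img, 1)
    out = np.ones_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy : 1 + dy + img.shape[0],
                     1 + dx : 1 + dx + img.shape[1]]
    return out

img = np.zeros((16, 16), dtype=np.uint8)
img[4:12, 4:12] = 1
print(erode(img).sum())  # 36: the 8x8 block shrinks to 6x6
```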

Relevance: 90.00%

Abstract:

Branched polystyrenes with abundant pendant vinyl functional groups were prepared via radical polymerization of an asymmetric divinyl monomer, which possesses a more reactive styryl group and a less reactive butenyl group. By employing a fast reversible addition-fragmentation chain transfer (RAFT) equilibrium, the concentration of active propagating chains was kept low, and thus crosslinking did not occur until a high level of monomer conversion. The combination of a higher reaction temperature (120 °C) and the RAFT agent cumyl dithiobenzoate was demonstrated to be optimal for providing both a more highly branched architecture and a higher polymer yield.

Relevance: 90.00%

Abstract:

Branched polyacrylonitriles were prepared via the one-pot radical copolymerization of acrylonitrile and an asymmetric divinyl monomer (allyl methacrylate) that possesses both a more reactive methacrylate group and a less reactive allyl group. The RAFT technique was used to keep the propagating-chain concentration low via a fast reversible chain-transfer equilibrium, and thus cross-linking was prevented until a high level of monomer conversion. This novel strategy was demonstrated to generate a branched architecture with abundant pendant vinyl and nitrile functional groups, and with molecular weight controlled in the manner characteristic of controlled/living radical polymerization. The effect of the various experimental parameters, including temperature, brancher-to-monomer molar ratio, and chain transfer agent-to-initiator molar ratio, on the control of molecular dimensions (molecular weight and polydispersity index) and the degree of branching was investigated in detail. Moreover, 1H NMR and gel permeation chromatography confirmed the branched architecture of the resultant polymer. The intrinsic viscosity of the copolymer is also lower than that of its linear counterpart.

Relevance: 90.00%

Abstract:

With the continuous development of electronic and computer technology, control systems for industrial production processes are evolving toward intelligence, digitization, and networking. Traditional distributed control and hierarchical computer control are beginning to give way to fieldbus network control, which combines intelligent terminals with networks. Today, distributed automation systems in factory process-control environments are becoming increasingly complex; in particular, the devices within a system need to exchange large amounts of information quickly in order to control the plant more precisely and to provide auxiliary evaluation functions. This means that bandwidth and communication rates must keep increasing to meet the needs of network communication. Among the many network technologies available, the CAN bus, with its clear definition, very high reliability, and distinctive design, is considered one of the most effective solutions to this problem. Moreover, among communication products on the market, and from a real-time standpoint, the lean, message-based communication scheme the CAN bus adopts gives it a simpler structure and better real-time performance. Against this background, we used the CAN bus as the communication medium to connect, in an orderly fashion, the sensors, actuators, and controllers distributed across the control sites, forming a CAN-bus-based distributed local-area network control system. This thesis first introduces the overall structure of the CAN-bus-based distributed data acquisition and control system. On the hardware side, it then describes the functions, hardware configuration, and implementation of the CAN-based protocol conversion unit, data acquisition unit, and output control unit, and gives the performance specifications of each unit. On the software side, host-computer management and monitoring software for the CAN bus was developed in C, implementing system management and control of all devices on the network. On this bus system, the author applied PID control and fuzzy control algorithms to control the liquid level of a water tank, with good results. The CAN-bus-based control system solves problems that distributed control systems find difficult, and the application of fuzzy control extends the bus control system to plants that are nonlinear, have large time delays, or are difficult to model precisely.
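Below is a minimal discrete PID sketch of the kind applied above to the tank-level loop, written in Python for illustration; the gains, setpoint, and first-order tank model are invented placeholders, not values from the thesis.

```python
# Minimal discrete PID controller driving a toy first-order tank model.
def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_err": 0.0}
    def pid(setpoint, measured):
        err = setpoint - measured
        state["integral"] += err * dt
        deriv = (err - state["prev_err"]) / dt
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv
    return pid

pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
level = 0.0
for step in range(200):                    # simulate a simple first-order tank
    inflow = max(0.0, pid(1.0, level))     # valve command cannot go negative
    level += 0.1 * (inflow - 0.5 * level)  # dt * (inflow - outflow)
print(round(level, 3))                     # settles near the 1.0 setpoint
```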

Relevance: 90.00%

Abstract:

Numerical modeling of groundwater is very important for understanding groundwater flow and solving hydrogeological problems. Today, groundwater studies require massive numbers of model cells and high calculation accuracy, which are beyond the capabilities of a single-CPU computer. With the development of high performance parallel computing technologies, applying parallel computing to the numerical modeling of groundwater flow becomes necessary and important. Using parallel computing can improve the ability to resolve various hydrogeological and environmental problems. In this study, parallel computing methods on the two main types of modern parallel computer architecture, shared-memory parallel systems and distributed-memory parallel systems, are discussed. OpenMP and MPI (PETSc) are both used to parallelize the most widely used groundwater simulator, MODFLOW. Two parallel solvers, P-PCG and P-MODFLOW, were developed for MODFLOW. The parallelized MODFLOW was used to simulate regional groundwater flow in Beishan, Gansu Province, which is a potential high-level radioactive waste geological disposal area in China.

1. The OpenMP programming paradigm was used to parallelize the PCG (preconditioned conjugate-gradient) solver, one of the main solvers for MODFLOW. The parallel PCG solver, P-PCG, was verified using an 8-processor computer. Both the impact of compilers and different model domain sizes were considered in the numerical experiments. The largest test model has 1000 columns, 1000 rows, and 1000 layers. Based on the timing results, execution times using the P-PCG solver are typically about 1.40 to 5.31 times faster than those using the serial one. In addition, the simulation results are exactly the same as those of the original PCG solver, because the majority of the serial code was not changed. It is worth noting that this parallelization approach reduces software maintenance cost, because only a single-source PCG solver code needs to be maintained in the MODFLOW source tree.

2. P-MODFLOW, a domain-decomposition-based model implemented in a parallel computing environment, was developed, which allows efficient simulation of regional-scale groundwater flow. The basic approach partitions a large model domain into any number of sub-domains, and parallel processors are used to solve the model equations within each sub-domain. Using domain decomposition to bring MODFLOW to distributed-memory parallel computing systems extends its application to the most popular cluster systems, so that a large-scale simulation can take full advantage of hundreds or even thousands of parallel processors. P-MODFLOW shows good parallel performance, with a maximum speedup of 18.32 on 14 processors. Superlinear speedups were achieved in the parallel tests, indicating the efficiency and scalability of the code. Parallel program design, load balancing, and full use of PETSc were considered in order to achieve a highly efficient parallel program.

3. The characterization of the regional groundwater flow system is very important for high-level radioactive waste geological disposal. The Beishan area, located in northwestern Gansu Province, China, has been selected as a potential site for a disposal repository. The area covers about 80,000 km2 and has complicated hydrogeological conditions, which greatly increase the computational effort of regional groundwater flow models. In order to reduce computing time, the parallel computing scheme was applied to regional groundwater flow modeling. Models with over 10 million cells were used to simulate how faults and different recharge conditions impact the regional groundwater flow pattern. The results of this study provide regional groundwater flow information for the site characterization of the potential high-level radioactive waste disposal repository.
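For readers unfamiliar with the solver being parallelized, here is a minimal NumPy sketch of Jacobi-preconditioned conjugate gradients, the iteration underlying the PCG approach described above. The test matrix is an illustrative SPD stand-in, not an actual MODFLOW flow matrix, and the comments mark the kernels (matrix-vector and dot products) that a parallel version distributes across processors:

```python
# Jacobi-preconditioned conjugate gradients on a toy SPD system.
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=500):
    """Solve Ax = b for symmetric positive definite A."""
    x = np.zeros_like(b)
    M_inv = 1.0 / np.diag(A)           # Jacobi (diagonal) preconditioner
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p                     # matrix-vector product and the dot
        alpha = rz / (p @ Ap)          # products are the parallelized kernels
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1-D diffusion matrix (tridiagonal, SPD), a toy stand-in for a flow grid.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
print(np.allclose(A @ pcg(A, b), b, atol=1e-6))  # True
```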