908 results for Multilevel inverter
Abstract:
A general audit policy model for multilevel secure database management systems is proposed. The model is highly expressive: it can express time-based audit policies as well as rule-based audit policy derivation. By introducing attribute predicates on objects, it can also express fine-grained audit policies. The decidability of the model is proved, and an algorithm is given for deciding whether an arbitrary event needs to be audited.
Abstract:
An improved data refinement rule is proposed and described using relational schemas. A global state is introduced to describe all possible inputs and outputs of a program; the rule permits non-trivial initialization, supports both forward and backward simulation, and applies to cases where nondeterminism in the concrete model is resolved later than nondeterminism in the abstract model. A worked example illustrates how the rule is applied in the Isabelle theorem prover.
Abstract:
Hypergraph partitioning is applied in large-scale matrix computation, VLSI design, and other fields. This paper describes the algorithmic framework of multilevel hypergraph partitioning in detail and proposes a technique for refining the resulting partition: through multiple rounds of cyclic refinement, a high-quality partition of the hypergraph is obtained within acceptable running time.
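The quality measures that multilevel partitioners typically minimise can be made concrete with a small sketch. The toy hypergraph and partition below are invented for illustration; the two metrics shown (cut-net count and the connectivity-minus-one, or lambda-1, cost) are standard objectives in hypergraph partitioning, not necessarily the exact one used in this paper.

```python
# Illustrative sketch: a hypergraph is a list of nets (hyperedges), each a
# set of vertex ids; `part` maps each vertex to its block.  We compute the
# number of cut nets and the (lambda - 1) connectivity cost of a partition.

def cut_metrics(nets, part):
    """Return (cut-net count, connectivity-minus-one cost) for a partition."""
    cut = 0
    conn_minus_1 = 0
    for net in nets:
        blocks = {part[v] for v in net}   # distinct blocks the net spans
        if len(blocks) > 1:
            cut += 1
        conn_minus_1 += len(blocks) - 1   # the (lambda - 1) metric
    return cut, conn_minus_1

nets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(cut_metrics(nets, part))  # nets {2,3} and {0,5} are cut -> (2, 2)
```

A refinement pass such as the multi-round optimisation described above would repeatedly move vertices between blocks while this cost decreases.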
Abstract:
Existing environment-modeling methods for mobile robot path planning suffer from limited applicability, inadequate handling of complex problems, low efficiency, and a lack of flexibility. Combining a 2.5-D description with knowledge-based principles, this paper proposes a multilevel environment-modeling method based on region decomposition that satisfactorily handles practical structured spaces such as connections between different floors of a building and alternating indoor and outdoor environments.
Abstract:
Building on a study of fast Fourier transform (FFT) algorithms, and exploiting the high performance, flexibility, and speed of FPGAs, an efficient implementation of a radix-4 FFT processor is proposed. Data are stored in banked memory, which greatly increases access speed, and a novel address-generation scheme produces the required data addresses in parallel. The butterfly unit combines parallel computation with pipelining, further increasing throughput. Test results show that with a 50 MHz clock a 1024-point FFT completes in 25.6 μs, meeting real-time application requirements.
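The reported timing is internally consistent, assuming (my assumption, not stated in the abstract) one butterfly completing per clock cycle in the pipelined unit; a quick check:

```python
# Sanity check of the reported figures: a 1024-point radix-4 FFT has
# log4(1024) = 5 stages of 1024/4 = 256 butterflies each, i.e. 1280
# butterflies total.  At 50 MHz, the reported 25.6 us is exactly 1280
# clock cycles -- consistent with one butterfly per cycle.
import math

n, radix, f_clk = 1024, 4, 50e6
stages = round(math.log(n, radix))        # 5
butterflies = stages * (n // radix)       # 5 * 256 = 1280
cycles = 25.6e-6 * f_clk                  # reported time in clock cycles
assert stages == 5 and butterflies == 1280
assert round(cycles) == butterflies       # one butterfly per clock cycle
print(stages, butterflies, round(cycles))
```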
Abstract:
This paper addresses the diverse and complex spatial distribution of remaining oil in fluvial and delta facies reservoirs, taking the La-Sa-Xing oilfield of Daqing as an example. Based on studies of the controlling factors and formation mechanisms at the block, single-layer, interlayer, and micro-mechanism levels, and synthesizing theories and methods from geology, well logging, reservoir engineering, artificial intelligence, physical simulation experiments, and computing, we fully used geological, well-logging, core-well, oil- and water-well dynamic-monitoring, and experimental-analysis data to predict the potential and distribution of remaining oil at these four levels, proceeding from macro to micro, from qualitative to quantitative, and from laboratory to field, and we summarized the different distribution patterns of remaining oil in fluvial and delta facies reservoirs. The paper puts forward an efficient method for predicting remaining recoverable reserves using a water-flooding characteristic-curve differential method and neural networks; for the first time applies multilevel fuzzy comprehensive judgment and expert neural-network techniques to predict the remaining-oil distribution within a single layer; combines reservoir flow units, laboratory physical simulation, inspection-well core analysis, and well-log watered-out-layer interpretation to predict the distribution of remaining oil efficiently; and uses core analyses from different periods together with laboratory water-flooding experiments to study the microscopic distribution of remaining oil and the variation of reservoir rock properties, fluid properties, and wettability. On this basis, remaining-oil distribution prediction software covering the four levels of block, single layer, interlayer, and micro mechanism was developed. This achievement has been applied in the La-Sa-Xing oilfield of Daqing, with good results.
Abstract:
As the foundation of other human resource practices, job analysis plays an essential role in HR management. Exploring sources of variance in job analysis ratings given by incumbents of the same job is of much significance to HRM practice, and it can also shed light on employee motivation in organizations. Previous studies in the job analysis field, however, have usually been conducted at the individual level and have treated variance in ratings given by incumbents of the same job as error or bias. Drawing on role theory and other relevant theories, this dissertation takes the position that such variance may be meaningful. It first reviews previous studies on factors that may influence job analysis ratings provided by incumbents of the same job, and then investigates individual-, interpersonal- and organizational-level variables that may affect these ratings, using multilevel data from 1,124 incumbents of 8 jobs. The major findings are as follows: 1) Job performance and job attitudes affect incumbents' job analysis ratings at the individual level. Specifically, incumbents with high job performance rated their job as requiring higher levels of technical skills (power plant designers) and regarded information-processing activities as more important to their job (book editors). Regarding job attitudes, incumbents of the four jobs with high job satisfaction gave higher importance and level ratings to organizational and cognitive skills, as well as higher level ratings to technical skills. Incumbents with higher affective commitment provided higher importance and level ratings of cognitive skills. Lastly, more job-involved incumbents perceived organizational and cognitive skills as more important to their job, and as required at higher levels. 2) Leader-Member Exchange (LMX) and goal structure also affect job analysis ratings at the interpersonal level. News reporters in high-quality LMX relationships rated decision-making and interpersonal activities as more important to their job. Likewise, when book editors structured their goals cooperatively with others', they provided higher importance ratings for reasoning and interpersonal skills and related personality requirements, as well as higher level ratings for reasoning abilities. 3) Worker requirements for the same job differ from one organization to another. Specifically, there were between-organization differences in achievement-orientation- and conscientiousness-related personality requirements. In addition, two dimensions of organizational culture, achievement-oriented culture and integrity-oriented culture in particular, were significantly associated with importance ratings of achievement-orientation- and conscientiousness-related personality requirements, respectively. Furthermore, achievement-oriented culture influenced achievement-orientation-related personality requirements both directly and indirectly (through job involvement). The results indicate that variation in job analysis ratings provided by incumbents of the same job may be meaningful. Future job analysis studies and practices should consider the impacts of these individual-, interpersonal- and organizational-level factors on job analysis information. The results also have important implications for employee motivation, concerning how organizational demands can be transformed into specific job and worker requirements.
Abstract:
A number of functional neuroimaging studies of skilled readers have consistently shown activation to visual words in the left mid-fusiform cortex in the occipitotemporal sulcus (LMFC-OTS). Neuropsychological studies have also shown that lesions of left ventral occipitotemporal areas result in impaired visual word processing. Based on these empirical observations and some theoretical speculations, some researchers postulated that the LMFC-OTS is responsible for instant, parallel, and holistic extraction of the abstract representation of letter strings, and labeled this piece of cortex the "visual word form area" (VWFA). Nonetheless, functional neuroimaging alone is a correlative rather than a causal approach, and the lesions in previous studies were typically not constrained to the LMFC-OTS but involved other brain regions as well. Given these limitations, three fundamental questions remain unanswered: Is the LMFC-OTS necessary for visual word processing? Is it functionally selective for visual word processing, being unnecessary for processing non-word stimuli? And what are its functional properties in visual word processing? This thesis aimed to address these questions through a series of neuropsychological, anatomical, and functional MRI experiments in four patients with lesions affecting the left fusiform gyrus to different degrees. Necessity: Detailed analysis of anatomical brain images revealed that the four patients had different foci of brain infarction. Specifically, the LMFC-OTS was damaged in one patient, while it remained intact in the other three. Neuropsychological experiments showed that the patient with lesions in the LMFC-OTS had severe impairments in reading aloud and recognizing Chinese characters, i.e., pure alexia.
The patient whose LMFC-OTS was intact but in whom information from the left visual field (LVF) was blocked by lesions in the splenium of the corpus callosum showed impaired Chinese character recognition when stimuli were presented in the LVF but not in the right visual field (RVF), i.e., left hemialexia. In contrast, the other two patients with intact LMFC-OTS had normal Chinese character processing. The fMRI experiments demonstrated no significant activation to Chinese characters in the LMFC-OTS of the pure alexic patient, nor in that of the patient with left hemialexia when stimuli were presented in the LVF. On the other hand, the hemialexic patient showed LMFC-OTS activation when Chinese characters were presented in the RVF, as did the other two patients with intact LMFC-OTS. These results together point to the necessity of the LMFC-OTS for Chinese character processing. Selectivity: We tested the selectivity of the LMFC-OTS for visual word processing by systematically examining the patients' ability to process visual vs. auditory words, and word vs. non-word visual stimuli such as faces, objects, and colors. Results showed that the pure alexic patient could normally process auditory words (expression, understanding, and repetition of orally presented words) and non-word visual stimuli (faces, objects, colors, and numbers). Although this patient showed some impairments in naming faces, objects, and colors, his performance scores were only slightly lower than, or not significantly different from, those of the patients with intact LMFC-OTS. These data provide compelling evidence that the LMFC-OTS is not requisite for processing non-word stimuli and thus is selective for visual word processing.
Functional properties: With tasks involving multiple levels and aspects of word processing, including Chinese character reading, phonological judgment, semantic judgment, identity judgment of abstract visual word representations, lexical decision, perceptual judgment of visual word appearance, and dictation, copying, and voluntary writing, we attempted to reveal the most critical dysfunction caused by damage to the LMFC-OTS, and thus to clarify the most essential function of this region. Results showed that, in addition to dysfunctions in Chinese character reading and phonological and semantic judgment, the patient with lesions of the LMFC-OTS failed to judge correctly whether two characters (including compound and simple characters) with different surface features (e.g., different fonts; printed vs. handwritten vs. calligraphic styles; simplified vs. traditional characters; different orientations of strokes or whole characters) had the same abstract representation. The patient initially showed severe impairments in processing both simple and compound characters. He could only copy a compound character stroke by stroke, not character by character or even radical by radical. During recovery, namely five months later, the patient could complete the abstract representation tasks for simple characters but showed no improvement for compound characters. However, he could then copy compound characters radical by radical. Furthermore, the recovery of copying appeared to parallel that of abstract representation judgment.
These observations indicate that the LMFC-OTS lesions in the pure alexic patient caused severe damage to the ability to extract abstract representations from lower-level units up to higher-level units; the patient had particular difficulty extracting the abstract representation of a whole character from its secondary units (e.g., radicals or single characters), and this ability was slow to recover. Therefore, the LMFC-OTS appears to be responsible for the multilevel (particularly higher-level) abstract representation of visual word form. Successful extraction seems independent of access to phonological and semantic information, given that the alexic patient showed severe impairments in reading aloud and in semantic processing of simple characters while retaining intact judgment of their abstract representations. However, it is also possible that the interaction between the abstract representation and its related information (e.g., phonological and semantic information) was damaged as well in this patient. Taken together, we conclude that: 1) the LMFC-OTS is necessary for Chinese character processing; 2) it is selective for Chinese character processing; and 3) its critical function is to extract multiple levels of abstract representation of visual words and possibly to transmit them to the phonological and semantic systems.
Abstract:
There has been considerable work done in the study of Web reference streams: sequences of requests for Web objects. In particular, many studies have looked at the locality properties of such streams, because of the impact of locality on the design and performance of caching and prefetching systems. However, a general framework for understanding why reference streams exhibit given locality properties has not yet emerged. In this work we take a first step in this direction, based on viewing the Web as a set of reference streams that are transformed by Web components (clients, servers, and intermediaries). We propose a graph-based framework for describing this collection of streams and components. We identify three basic stream transformations that occur at nodes of the graph: aggregation, disaggregation and filtering, and we show how these transformations can be used to abstract the effects of different Web components on their associated reference streams. This view allows a structured approach to the analysis of why reference streams show given properties at different points in the Web. Applying this approach to the study of locality requires good metrics for locality. These metrics must meet three criteria: 1) they must accurately capture temporal locality; 2) they must be independent of trace artifacts such as trace length; and 3) they must not involve manual procedures or model-based assumptions. We describe two metrics meeting these criteria that each capture a different kind of temporal locality in reference streams. The popularity component of temporal locality is captured by entropy, while the correlation component is captured by interreference coefficient of variation. We argue that these metrics are more natural and more useful than previously proposed metrics for temporal locality. We use this framework to analyze a diverse set of Web reference traces. 
We find that this framework can shed light on how and why locality properties vary across different locations in the Web topology. For example, we find that filtering and aggregation have opposing effects on the popularity component of the temporal locality, which helps to explain why multilevel caching can be effective in the Web. Furthermore, we find that all transformations tend to diminish the correlation component of temporal locality, which has implications for the utility of different cache replacement policies at different points in the Web.
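The two locality metrics described above can be sketched concretely. The trace below is invented for illustration; entropy of the popularity distribution captures the popularity component of temporal locality, and the coefficient of variation (CV) of interreference distances captures the correlation component.

```python
# Toy illustration of the two locality metrics: entropy of the
# object-popularity distribution, and the coefficient of variation of the
# distances between successive references to the same object.
import math
from collections import Counter

def popularity_entropy(trace):
    """Shannon entropy (bits) of the empirical popularity distribution."""
    counts = Counter(trace)
    n = len(trace)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def interreference_cv(trace):
    """CV (std/mean) of gaps between successive references to an object."""
    last, gaps = {}, []
    for i, obj in enumerate(trace):
        if obj in last:
            gaps.append(i - last[obj])
        last[obj] = i
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return math.sqrt(var) / mean

trace = ["a", "b", "a", "c", "a", "b", "a", "c"]
print(round(popularity_entropy(trace), 3))   # skewed popularity -> low entropy
print(round(interreference_cv(trace), 3))    # regular gaps -> low CV
```

Higher entropy means a flatter popularity distribution (weaker popularity-driven locality); a CV near zero means highly regular interreference gaps (strong correlation-driven locality).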
Abstract:
Consideration of how people respond to the question What is this? has suggested new problem frontiers for pattern recognition and information fusion, as well as neural systems that embody the cognitive transformation of declarative information into relational knowledge. In contrast to traditional classification methods, which aim to find the single correct label for each exemplar (This is a car), the new approach discovers rules that embody coherent relationships among labels which would otherwise appear contradictory to a learning system (This is a car, that is a vehicle, over there is a sedan). This talk will describe how an individual who experiences exemplars in real time, with each exemplar trained on at most one category label, can autonomously discover a hierarchy of cognitive rules, thereby converting local information into global knowledge. Computational examples are based on the observation that sensors working at different times, locations, and spatial scales, and experts with different goals, languages, and situations, may produce apparently inconsistent image labels, which are reconciled by implicit underlying relationships that the network’s learning process discovers. The ARTMAP information fusion system can, moreover, integrate multiple separate knowledge hierarchies, by fusing independent domains into a unified structure. In the process, the system discovers cross-domain rules, inferring multilevel relationships among groups of output classes, without any supervised labeling of these relationships. In order to self-organize its expert system, the ARTMAP information fusion network features distributed code representations which exploit the model’s intrinsic capacity for one-to-many learning (This is a car and a vehicle and a sedan) as well as many-to-one learning (Each of those vehicles is a car). Fusion system software, testbed datasets, and articles are available from http://cns.bu.edu/techlab.
Abstract:
With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods for optimising either area or timing, while for power consumption optimisation one often employs heuristics that are specific to a given design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can we build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into this flow. The proposed design flow is used as a platform for analysing novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and to which optimisation algorithms can then be applied. In particular, we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or delay. Finally, the third question which this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values.
This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto the AND-Inverter Graph under zero-delay and non-zero-delay models. We then introduce several reordering rules which are applied to the AIG nodes to minimise switching power or longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We used SA to probabilistically decide between moving from one optimised solution to another, such that dynamic power is optimised under given delay constraints and delay is optimised under given power constraints. A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is an algorithm for traversing or searching a weighted tree, tree structure, or graph. We used UCS to search within the AIG network for a specific AIG node order for the application of the reordering rules.
After the reordering rules have been applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. A reduction of 23% in power and 15% in delay with minimal overhead is achieved, compared to the best known ABC results. Our approach has also been implemented on a number of processors with combinational and sequential components, and significant savings are achieved.
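The delay-constrained power optimisation idea above can be sketched with a minimal simulated-annealing loop. Everything below is invented for illustration: the "circuit" is just a vector of integer knobs with a made-up power/delay cost model standing in for AIG reordering moves, not the thesis's actual flow.

```python
# Minimal simulated-annealing sketch of power optimisation under a delay
# constraint.  The cost model is invented: "power" falls as knobs approach 3,
# "delay" is the knob sum, and exceeding the delay budget is penalised.
import math
import random

def anneal(cost, neighbour, x0, t0=1.0, cooling=0.95, steps=2000, seed=42):
    rng = random.Random(seed)             # fixed seed keeps the demo deterministic
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(steps):
        y = neighbour(x, rng)
        delta = cost(y) - cost(x)
        # always accept improvements; accept uphill moves with prob e^(-delta/t)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling                      # cooling schedule
    return best, best_cost

def cost(x):
    power = sum((v - 3) ** 2 for v in x)  # invented "switching power"
    delay = sum(x)                        # invented "longest-path delay"
    return power + max(0, delay - 10) * 100  # heavy penalty past delay budget

def neighbour(x, rng):
    i = rng.randrange(len(x))             # nudge one knob up or down
    y = list(x)
    y[i] = max(0, y[i] + rng.choice([-1, 1]))
    return y

best, c = anneal(cost, neighbour, [0, 0, 0, 0])
print(best, c)
```

The penalty term is what makes each power-reducing move answerable to the delay budget, mirroring the constrained moves described in the text.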
Abstract:
In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media, e-government, etc. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred metres) optical interconnect. Today, transceivers for these applications achieve up to 100 Gb/s by multiplexing 10x 10 Gb/s or 4x 25 Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400 Gb/s up to 1 Tb/s. The crucial challenge is to achieve this in the same footprint (the same transceiver module) and with similar power consumption as today's technology. Straightforward scaling of the currently used space- or wavelength-division multiplexing may be difficult to achieve: indeed, a 1 Tb/s transceiver would require integration of 40 VCSELs (vertical-cavity surface-emitting laser diodes, widely used for short-reach optical interconnect), 40 photodiodes and the electronics operating at 25 Gb/s in the same module as today's 100 Gb/s transceiver. Pushing the bit rate on such links beyond today's commercially available 100 Gb/s per fibre will require new generations of VCSELs and their driver and receiver electronics. This work examines a number of state-of-the-art technologies, investigates their performance limitations, and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods of extending the bandwidth using deep-submicron (65 nm and 28 nm) CMOS technology are explored, while maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis on the rising and falling edges of the signal and bandwidth extension by inductive peaking and different local feedback techniques.
These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). This modulation format increases the throughput per individual channel, which helps to overcome the challenges mentioned above in realising 400 Gb/s to 1 Tb/s transceivers.
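The throughput gain of PAM-4 can be illustrated with a small encoder sketch. Gray coding (00, 01, 11, 10) is the conventional level mapping so that adjacent levels differ by one bit; the amplitude values used below are arbitrary normalised levels, not from this thesis.

```python
# Illustrative PAM-4 encoder: each symbol carries 2 bits, so the symbol rate
# is half the bit rate of a binary (NRZ) signal at the same throughput.
GRAY_TO_LEVEL = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map a bit sequence (even length) to PAM-4 amplitude levels."""
    assert len(bits) % 2 == 0
    return [GRAY_TO_LEVEL[(bits[i], bits[i + 1])]
            for i in range(0, len(bits), 2)]

bits = [0, 0, 0, 1, 1, 1, 1, 0]
print(pam4_encode(bits))  # [-3, -1, 1, 3]: 8 bits in only 4 symbols
```

Halving the symbol rate relaxes the bandwidth demanded of the VCSEL and the driver/receiver electronics, which is why PAM-4 pairs naturally with the bandwidth-extension techniques described above.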
Abstract:
One way we keep track of our movements is by monitoring corollary discharges or internal copies of movement commands. This study tested a hypothesis that the pathway from superior colliculus (SC) to mediodorsal thalamus (MD) to frontal eye field (FEF) carries a corollary discharge about saccades made into the contralateral visual field. We inactivated the MD relay node with muscimol in monkeys and measured corollary discharge deficits using a double-step task: two sequential saccades were made to the locations of briefly flashed targets. To make second saccades correctly, monkeys had to internally monitor their first saccades; therefore deficits in the corollary discharge representation of first saccades should disrupt second saccades. We found, first, that monkeys seemed to misjudge the amplitudes of their first saccades; this was revealed by systematic shifts in second saccade end points. Thus corollary discharge accuracy was impaired. Second, monkeys were less able to detect trial-by-trial variations in their first saccades; this was revealed by reduced compensatory changes in second saccade angles. Thus corollary discharge precision also was impaired. Both deficits occurred only when first saccades went into the contralateral visual field. Single-saccade generation was unaffected. Additional deficits occurred in reaction time and overall performance, but these were bilateral. We conclude that the SC-MD-FEF pathway conveys a corollary discharge used for coordinating sequential saccades and possibly for stabilizing vision across saccades. This pathway is the first elucidated in what may be a multilevel chain of corollary discharge circuits extending from the extraocular motoneurons up into cerebral cortex.
Abstract:
Introduction and Aims: In recent years, unprecedented levels of Internet access and the widespread growth of emergent communication technologies have resulted in significantly greater population access for substance use researchers. Despite the research potential of such technologies, the use of the Internet to recruit individuals for participation in event-level research has been limited. The purpose of this paper is to provide a brief account of the methods and results from an online daily diary study of alcohol use. Design and Methods: Participants were recruited using Amazon's Mechanical Turk. Eligible participants completed a brief screener assessing demographics and health behaviours, with a subset of individuals subsequently recruited to participate in a 2 week daily diary study of alcohol use. Results: Multilevel models of the daily alcohol data derived from the Mechanical Turk sample (n=369) replicated several findings commonly reported in daily diary studies of alcohol use. Discussion and Conclusions: Results demonstrate that online participant recruitment and survey administration can be a fruitful method for conducting daily diary alcohol research. © 2014 Australasian Professional Society on Alcohol and other Drugs.
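A toy sketch of the variance structure that motivates multilevel models for daily diary data: observations (days) are nested within persons, and the intraclass correlation (ICC) gives the share of outcome variance attributable to between-person differences. The numbers below are invented; the paper's actual models are not reproduced here.

```python
# Toy illustration: drinks per day nested within persons.  A one-way
# variance decomposition splits total variance into between-person and
# within-person components; their ratio motivates a multilevel model.
def icc(groups):
    """Descriptive ICC estimate from equal-sized groups of observations."""
    all_obs = [x for g in groups for x in g]
    grand = sum(all_obs) / len(all_obs)
    group_means = [sum(g) / len(g) for g in groups]
    between = sum((m - grand) ** 2 for m in group_means) / len(groups)
    within = sum((x - sum(g) / len(g)) ** 2
                 for g in groups for x in g) / len(all_obs)
    return between / (between + within)

# three hypothetical "participants", four diary days each
days = [[0, 1, 1, 2], [4, 5, 5, 6], [2, 2, 3, 3]]
print(round(icc(days), 3))  # most variance is between persons here
```

A high ICC like this one signals that days from the same person are far from independent, which is exactly the non-independence that multilevel models of diary data account for.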
Abstract:
This paper is part of a collaborative project being undertaken by the three leading universities of Brussels, VUB, ULB and USL-B, supported by Innoviris. The project, called Media Clusters Brussels (MCB), started in October 2014 with the goal of analysing the development of a Media Park around the two public broadcasters at the Reyers site in Brussels, which hosts a media cluster in the capital city. In the last decade, not only policymakers but also many authors have recognised that the media industry is characterised, from a geographical point of view, by heavy concentration in a limited number of large cities, where media clusters have emerged (Karlsson & Picard, 2011). The common assumption about media clusters is that locating inside a regional agglomeration of related actors brings advantages for these firms. In particular, the interrelations and interactions between the actors on a social level matter for the shape and efficiency of the agglomerations (Picard, 2008). However, even though the importance of the actors and their interrelations has been a common assumption, many authors focus solely on the macro-economic aspects of the clusters. In this paper, we propose a socio-economic analysis of media clusters to support informed decisions in their development and so bring the social (human) factor back into scope. This article therefore focuses on the development of a novel framework, the so-called 7P framework, with a multilevel and interdisciplinary approach, which includes three aspects that have been identified as emerging success factors of media clusters: partnerships, (media) professionals and positive spillovers.