Abstract:
This letter proposes the use of a refractive index profile with a graded core and a cladding trench for the design of few-mode fibers, aiming at an arbitrary differential mode delay (DMD) flattened over the C+L band. By optimizing the core grading exponent and the dimensioning of the trench, a deviation lower than 0.01 ps/km from a target DMD is observed over the investigated wavelength range. Additionally, it is found that the dimensioning of the trench is almost independent of the target DMD, thereby enabling the use of a simple design rule that guarantees a maximum DMD deviation of 1.8 ps/km for a DMD target between -200 and 200 ps/km. © 2012 IEEE.
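As a rough illustration of the kind of profile optimized in this letter, the sketch below parameterizes a graded-index core with a depressed cladding trench using the standard alpha-profile formula. All numerical values (peak index, relative index differences, core radius, grading exponent, trench position and width) are illustrative placeholders, not the optimized design reported in the letter.

```python
import numpy as np

def index_profile(r, n1=1.449, delta=0.01, a=8.0, alpha=2.0,
                  r_trench=10.0, w_trench=4.0, delta_trench=0.004):
    """Refractive index n(r) of a graded core with a cladding trench.

    r             : radial positions in micrometres (NumPy array)
    n1, delta     : peak core index and relative index difference
    a, alpha      : core radius and grading exponent
    r_trench      : inner radius of the trench
    w_trench      : trench width
    delta_trench  : relative index depression of the trench below the cladding
    All values here are illustrative, not the paper's optimized design.
    """
    n_clad = n1 * np.sqrt(1.0 - 2.0 * delta)            # uniform cladding index
    r = np.asarray(r, dtype=float)
    n = np.full_like(r, n_clad)
    core = r < a
    n[core] = n1 * np.sqrt(1.0 - 2.0 * delta * (r[core] / a) ** alpha)
    trench = (r >= r_trench) & (r < r_trench + w_trench)
    n[trench] = n_clad * np.sqrt(1.0 - 2.0 * delta_trench)  # depressed ring
    return n

# Example: sample the profile on a radial grid
radii = np.linspace(0.0, 20.0, 201)
profile = index_profile(radii)
```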
Abstract:
The thesis deals with channel coding theory applied to the upper layers of the protocol stack of a communication link and is the outcome of a four-year research activity. A specific aspect of this activity has been the continuous interaction between the natural curiosity of academic blue-sky research and the system-oriented design deriving from the collaboration with European industry in the framework of European funded research projects. In this dissertation, the classical channel coding techniques that are traditionally applied at the physical layer find their application at the upper layers, where the encoding units (symbols) are packets of bits and not just single bits, which explains why such upper layer coding techniques are usually referred to as packet layer coding. The rationale behind the adoption of packet layer techniques is that physical layer channel coding is a suitable countermeasure against small-scale fading, while it is less efficient against large-scale fading. This is mainly due to the limited time diversity that results from keeping the physical layer interleaver to a reasonable size so as to avoid increasing the modem complexity and the latency of all services. Packet layer techniques, thanks to their longer codeword duration (each codeword is composed of several packets of bits), offer intrinsically longer protection against long fading events. Furthermore, being implemented at the upper layers, packet layer techniques have the indisputable advantages of simpler implementation (very close to a software implementation) and of selective applicability to different services, thus enabling a better match with the service requirements (e.g. latency constraints). Packet layer coding has been widely recognized in recent communication standards as a viable and efficient coding solution: Digital Video Broadcasting standards, like DVB-H, DVB-SH, and DVB-RCS mobile, and 3GPP standards (MBMS) employ packet coding techniques working at layers higher than the physical one. In this framework, the aim of the research work has been the study of state-of-the-art coding techniques working at the upper layer, the performance evaluation of these techniques in realistic propagation scenarios, and the design of new coding schemes for upper layer applications. After a review of the most important packet layer codes, i.e. Reed-Solomon, LDPC and Fountain codes, the thesis focuses on the performance evaluation of ideal codes (i.e. Maximum Distance Separable codes) working at the upper layer (UL). In particular, we analyze the performance of UL-FEC techniques in Land Mobile Satellite channels. We derive an analytical framework which is a useful tool for system design, allowing the performance of the upper layer decoder to be predicted. We also analyze a system in which upper layer and physical layer codes work together, and we derive the optimal splitting of redundancy when a frequency non-selective slowly varying fading channel is taken into account. The whole analysis is supported and validated through computer simulation. In the last part of the dissertation, we propose LDPC Convolutional Codes (LDPCCC) as a possible coding scheme for future UL-FEC applications. Since one of the main drawbacks related to the adoption of packet layer codes is the large decoding latency, we introduce a latency-constrained decoder for LDPCCC (called windowed erasure decoder). We analyze the performance of state-of-the-art LDPCCC when our decoder is adopted.
Finally, we propose a design rule that allows performance and latency to be traded off.
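To illustrate the packet-layer erasure-coding principle discussed above, the sketch below simulates an ideal (n, k) MDS code at the packet level: a codeword of n packets is decodable whenever at least k packets survive the channel. The memoryless erasure model and all parameter values are simplifying assumptions for illustration, not the Land Mobile Satellite channel model analyzed in the thesis.

```python
import random

def mds_decoding_failure_rate(n, k, p_loss, trials=20_000, seed=0):
    """Monte Carlo estimate of the codeword failure rate of an ideal
    (n, k) MDS packet-level code over a memoryless packet-erasure channel.

    A codeword of n packets is decodable iff at least k packets are received.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        received = sum(1 for _ in range(n) if rng.random() > p_loss)
        if received < k:          # fewer than k packets -> decoding fails
            failures += 1
    return failures / trials

# Example: at the same code rate 1/2, longer codewords give better protection
for n, k in [(10, 5), (100, 50)]:
    print(n, k, mds_decoding_failure_rate(n, k, p_loss=0.4))
```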
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of the language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1). The idea that underlies intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2). Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments, in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. To name four major challenges in constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries from a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha.
The fourth chapter presents the design and results of a bootstrapping experiment conducted with this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a summary of the findings, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
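The schematic formulas (1) and (2) can be read as a processing loop: parse the input with the current grammar and lexicon, then fold the observed structures back into the lexicon. The sketch below is a minimal, theory-neutral illustration of that loop; the data format and the very crude learning step (counting observed valency frames and keeping the majority one once 'enough' evidence is seen) are hypothetical and far simpler than the ANALYZE-LEARN-REDUCE method developed in the thesis.

```python
from collections import Counter, defaultdict

def parse(utterance, grammar, lexicon):
    """Stand-in for G + L + C -> S: here we merely 'observe' a valency
    frame per verb from pre-annotated input (hypothetical format)."""
    word, frame = utterance          # e.g. ("give", "ditransitive")
    return {"lexeme": word, "frame": frame}

def learn(structures):
    """Stand-in for G + L + S -> L': keep the most frequently observed
    frame per lexeme once it has been seen 'enough' times."""
    observations = defaultdict(Counter)
    for s in structures:
        observations[s["lexeme"]][s["frame"]] += 1
    lexicon = {}
    for lexeme, frames in observations.items():
        frame, count = frames.most_common(1)[0]
        if count >= 2:               # crude 'enough input' criterion
            lexicon[lexeme] = frame
    return lexicon

corpus = [("give", "ditransitive"), ("give", "ditransitive"),
          ("sleep", "intransitive"), ("sleep", "intransitive"),
          ("give", "transitive")]
structures = [parse(u, grammar=None, lexicon={}) for u in corpus]
print(learn(structures))   # {'give': 'ditransitive', 'sleep': 'intransitive'}
```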
Abstract:
The design of liquid retaining structures involves many decisions to be made by the designer based on rules of thumb, heuristics, judgment, codes of practice and previous experience. Various design parameters to be chosen include configuration, material, loading, etc. A novice engineer may face many difficulties in the design process. Recent developments in artificial intelligence and the emerging field of knowledge-based systems (KBS) have found widespread application in different fields. However, no attempt has been made to apply such intelligent systems to the design of liquid retaining structures. The objective of this study is, thus, to develop a KBS that has the ability to assist engineers in the preliminary design of liquid retaining structures. Moreover, it can provide expert advice to the user on the selection of design criteria, design parameters and the optimum configuration based on minimum cost. This paper presents the development of a prototype KBS for the design of liquid retaining structures (LIQUID), using a blackboard architecture with hybrid knowledge representation techniques, including a production rule system and an object-oriented approach. An expert system shell, Visual Rule Studio, is employed to facilitate the development of this prototype system. (C) 2002 Elsevier Science Ltd. All rights reserved.
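As a rough illustration of the production-rule style of knowledge representation mentioned above, the sketch below encodes a couple of made-up preliminary-design rules as condition-action pairs evaluated by a simple forward-chaining loop; the rules, thresholds and attribute names are hypothetical and are not taken from the LIQUID knowledge base or from Visual Rule Studio.

```python
# Hypothetical forward-chaining production rules for preliminary design.
rules = [
    {"name": "use_circular_tank",
     "if": lambda f: f["capacity_m3"] > 5000,
     "then": {"configuration": "circular"}},
    {"name": "use_rectangular_tank",
     "if": lambda f: f["capacity_m3"] <= 5000,
     "then": {"configuration": "rectangular"}},
    {"name": "use_sulphate_resisting_cement",
     "if": lambda f: f.get("soil") == "sulphate-bearing",
     "then": {"cement": "sulphate-resisting"}},
]

def infer(facts):
    """Fire every rule whose condition holds and collect its conclusions."""
    conclusions = {}
    for rule in rules:
        if rule["if"](facts):
            conclusions.update(rule["then"])
    return conclusions

print(infer({"capacity_m3": 8000, "soil": "sulphate-bearing"}))
```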
Abstract:
This paper delineates the development of a prototype hybrid knowledge-based system for the optimum design of liquid retaining structures by coupling a blackboard architecture, the expert system shell VISUAL RULE STUDIO and a genetic algorithm (GA). Through custom-built interactive graphical user interfaces in a user-friendly environment, the user is guided throughout the design process, which includes preliminary design, load specification, model generation, finite element analysis, code compliance checking, and member sizing optimization. For structural optimization, the GA is applied to the minimum cost design of structural systems with discrete reinforced concrete sections. The design of a typical liquid retaining structure is illustrated as an example. The results demonstrate extraordinarily fast convergence, with near-optimal solutions acquired after exploring merely a small portion of the search space. This system can act as a consultant to assist novice designers in the design of liquid retaining structures.
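To give a flavour of GA-based member sizing with discrete sections, the sketch below runs a tiny genetic algorithm that picks one section per member from a discrete catalogue so as to minimize total cost, with a penalty for inadequate capacity. The section table, cost figures, member demands and GA settings are invented for illustration and bear no relation to the paper's actual formulation.

```python
import random

rng = random.Random(1)

# Hypothetical discrete section catalogue: (moment capacity in kNm, cost per metre)
sections = [(50, 80.0), (80, 110.0), (120, 150.0), (180, 200.0), (250, 260.0)]
demands = [45, 95, 160, 70]          # hypothetical member moment demands (kNm)
PENALTY = 1e4                        # cost penalty for an inadequate section

def cost(chromosome):
    total = 0.0
    for gene, demand in zip(chromosome, demands):
        capacity, price = sections[gene]
        total += price + (PENALTY if capacity < demand else 0.0)
    return total

def evolve(pop_size=30, generations=60, p_mut=0.2):
    pop = [[rng.randrange(len(sections)) for _ in demands] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(demands))     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < p_mut:                 # mutation: resize one member
                child[rng.randrange(len(demands))] = rng.randrange(len(sections))
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
print(best, cost(best))
```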
Abstract:
This paper describes a coupled knowledge-based system (KBS) for the design of liquid-retaining structures, which can handle both the symbolic knowledge processing based on engineering heuristics in the preliminary synthesis stage and the extensive number crunching involved in the detailed analysis stage. The prototype system is developed by employing a blackboard architecture and the commercial shell VISUAL RULE STUDIO. Its present scope covers the design of three types of liquid-retaining structures, namely a rectangular shape with one compartment, a rectangular shape with two compartments, and a circular shape. Through custom-built interactive graphical user interfaces, the user is guided throughout the design process, which includes preliminary design, load specification, model generation, finite element analysis, code compliance checking and member sizing optimization. The system is also integrated with various relational databases that provide it with sectional properties, moment and shear coefficients and final member details. This system can act as a consultant to assist novice designers in the design of liquid-retaining structures, with increased efficiency, optimized design output and automated record keeping. The design of a typical liquid-retaining structure is also illustrated. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
Demand response can play a very relevant role in the context of power systems with an intensive use of distributed energy resources, of which intermittent renewable sources are a significant part. More active consumer participation can help improve system reliability and decrease or defer the required investments. Adequate use and management of demand response is even more important in competitive electricity markets. However, experience shows that it is difficult to get demand response adequately used in this context, showing the need for research work in this area. The most important difficulties seem to be caused by inadequate business models and by inadequate management of demand response programs. This paper contributes to developing methodologies and a computational infrastructure able to provide the involved players with adequate decision support on the design and use of demand response programs and contracts. The presented work uses DemSi, a demand response simulator that has been developed by the authors to simulate demand response actions and programs and that includes realistic power system simulation. It includes an optimization module for the application of demand response programs and contracts using deterministic and metaheuristic approaches. The proposed methodology is an important improvement of the simulator, providing adequate tools for the adoption of demand response programs by the involved players. A machine learning method based on clustering and classification techniques, resulting in a rule base concerning the use of DR programs and contracts, is also used. A case study concerning the use of demand response in an incident situation is presented.
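As a very simplified illustration of the clustering-and-classification step mentioned above, the sketch below groups consumers by two made-up load features with a small hand-rolled k-means and then reads one rule off each cluster centroid; the feature names, thresholds and program labels are hypothetical and are not part of DemSi or its rule base.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical consumer features: [peak demand (kW), share of flexible load]
X = np.vstack([rng.normal([3.0, 0.1], 0.3, size=(20, 2)),
               rng.normal([9.0, 0.5], 0.5, size=(20, 2))])

def kmeans(X, k=2, iters=50):
    """Minimal k-means: assign points to the nearest centroid, recompute means."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = np.argmin(dists, axis=1)
        centroids = np.array([X[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(X)

# Turn each cluster centroid into a crude, human-readable rule (illustrative only).
for c in centroids:
    program = "contract-based load curtailment" if c[1] > 0.3 else "price-based program"
    print(f"IF peak demand ~ {c[0]:.1f} kW AND flexible share ~ {c[1]:.2f} "
          f"THEN recommend {program}")
```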
Abstract:
We characterize the optimal job design in a multitasking environment when the firms rely on implicit incentive contracts (i.e., bonus payments). Two natural forms of job design are compared: (i) individual accountability, where each agent is assigned to a particular job and assumes full responsibility for its outcome; and (ii) team accountability, where a group of agents share responsibility for a job and are jointly accountable for its outcome. The key trade-off is that team accountability mitigates the multitasking problem but may weaken the implicit contracts. The optimal job design follows a cut-off rule: firms with high reputation concerns opt for team accountability, whereas firms with low reputation concerns opt for individual accountability. Team accountability is more likely the more acute the multitasking problem is. However, the cut-off rule need not hold if the firm combines implicit incentives with explicit pay-per-performance contracts.
Abstract:
The possibility of local elastic instabilities is considered in a first-order structural phase transition, typically a thermoelastic martensitic transformation, with associated interfacial and volumic strain energy. They appear, for instance, as the result of shape change accommodation by simultaneous growth of different crystallographic variants. The treatment is phenomenological and deals with growth both in thermoelastic equilibrium and in nonequilibrium conditions produced by the elastic instability. Scaling of the transformed fraction curves against temperature is predicted only in the case of purely thermoelastic growth. The role of the transformation latent heat on the relaxation kinetics is also considered, and it is shown that it tends to increase the characteristic relaxation times as adiabatic conditions are approached, by keeping the system closer to a constant temperature. The analysis also reveals that the energy dissipated in the relaxation process has a double origin: release of elastic energy Wi and entropy production Si. The latter is shown to depend on both temperature rate and thermal conduction in the system.
Abstract:
BACKGROUND: The Marburg Heart Score (MHS) aims to assist GPs in safely ruling out coronary heart disease (CHD) in patients presenting with chest pain, and to guide management decisions. AIM: To investigate the diagnostic accuracy of the MHS in an independent sample and to evaluate the generalisability to new patients. DESIGN AND SETTING: Cross-sectional diagnostic study with delayed-type reference standard in general practice in Hesse, Germany. METHOD: Fifty-six German GPs recruited 844 males and females aged ≥ 35 years, presenting between July 2009 and February 2010 with chest pain. Baseline data included the items of the MHS. Data on the subsequent course of chest pain, investigations, hospitalisations, and medication were collected over 6 months and were reviewed by an independent expert panel. CHD was the reference condition. Measures of diagnostic accuracy included the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, likelihood ratios, and predictive values. RESULTS: The AUC was 0.84 (95% confidence interval [CI] = 0.80 to 0.88). For a cut-off value of 3, the MHS showed a sensitivity of 89.1% (95% CI = 81.1% to 94.0%), a specificity of 63.5% (95% CI = 60.0% to 66.9%), a positive predictive value of 23.3% (95% CI = 19.2% to 28.0%), and a negative predictive value of 97.9% (95% CI = 96.2% to 98.9%). CONCLUSION: Considering the diagnostic accuracy of the MHS, its generalisability, and ease of application, its use in clinical practice is recommended.
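For readers less familiar with the reported measures, the sketch below computes sensitivity, specificity, predictive values and likelihood ratios from a 2x2 contingency table; the counts used in the example are made up for illustration and are not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard measures of diagnostic accuracy from a 2x2 table."""
    sensitivity = tp / (tp + fn)               # true positive rate
    specificity = tn / (tn + fp)               # true negative rate
    ppv = tp / (tp + fp)                       # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    return dict(sensitivity=sensitivity, specificity=specificity,
                ppv=ppv, npv=npv, lr_pos=lr_pos, lr_neg=lr_neg)

# Illustrative counts only (not the study data): 90 of 100 diseased patients
# score at or above the cut-off, 470 of 740 non-diseased patients score below it.
print(diagnostic_metrics(tp=90, fp=270, fn=10, tn=470))
```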
Abstract:
BACKGROUND: The diagnosis of Pulmonary Embolism (PE) in the emergency department (ED) is crucial. As emergency physicians fear missing this potentially life-threatening condition, PE tends to be over-investigated, exposing patients to unnecessary risks and uncertain benefit in terms of outcome. The Pulmonary Embolism Rule-out Criteria (PERC) is an eight-item block of clinical criteria that can identify patients who can safely be discharged from the ED without further investigation for PE. The endorsement of this rule could markedly reduce the number of irradiative imaging studies, ED length of stay, and the rate of adverse events resulting from both diagnostic and therapeutic interventions. Several retrospective and prospective studies have shown the safety and benefits of the PERC rule for PE diagnosis in low-risk patients, but the validity of this rule is still controversial. We hypothesize that in European patients with a low gestalt clinical probability who are PERC-negative, PE can be safely ruled out and the patient discharged without further testing. METHODS/DESIGN: This is a controlled, cluster-randomized trial in 15 centers in France. Each center will be randomized for the sequence of intervention periods: a 6-month intervention period (PERC-based strategy) followed by a 6-month control period (usual care), or in reverse order, with 2 months of "wash-out" between the 2 periods. Adult patients presenting to the ED with a suspicion of PE and a low pre-test probability estimated by clinical gestalt will be eligible. The primary outcome is the failure rate of the diagnostic strategy, defined as diagnosed venous thromboembolic events at 3-month follow-up among patients for whom PE was initially ruled out. DISCUSSION: The PERC rule has the potential to decrease the number of irradiative imaging studies in the ED, and is reported to be safe. However, no randomized study has ever validated the safety of PERC. Furthermore, some studies have challenged the safety of a PERC-based strategy to rule out PE, especially in Europe where the prevalence of PE diagnosed in the ED is high. The PROPER study should provide high-quality evidence to settle this issue. If it confirms the safety of the PERC rule, physicians will be able to reduce the number of investigations, associated adverse events, costs, and ED length of stay for patients with a low clinical probability of PE. TRIAL REGISTRATION: NCT02375919.
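As an illustration of how an eight-item rule-out block can be applied at the bedside, the sketch below encodes the commonly cited PERC criteria as a simple boolean check; this is an illustrative paraphrase of the published items, not the trial's implementation, and the exact wording should be taken from the original rule.

```python
def perc_negative(age, heart_rate, spo2_room_air, hemoptysis,
                  estrogen_use, prior_vte, unilateral_leg_swelling,
                  recent_surgery_or_trauma):
    """Return True if all eight PERC items are negative (commonly cited
    formulation). In a PERC-based strategy, PE is then ruled out in
    low-clinical-probability patients without further testing."""
    return (age < 50
            and heart_rate < 100
            and spo2_room_air >= 95
            and not hemoptysis
            and not estrogen_use
            and not prior_vte
            and not unilateral_leg_swelling
            and not recent_surgery_or_trauma)

# Example: a 42-year-old with normal vitals and no risk items is PERC-negative.
print(perc_negative(age=42, heart_rate=88, spo2_room_air=98, hemoptysis=False,
                    estrogen_use=False, prior_vte=False,
                    unilateral_leg_swelling=False,
                    recent_surgery_or_trauma=False))
```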
Abstract:
It is common practice to initiate supplemental feeding in newborns if body weight decreases by 7-10% in the first few days after birth (the 7-10% rule). Standard hospital procedure is to initiate intravenous therapy once a woman is admitted to give birth. However, little is known about the relationship between intrapartum intravenous therapy and the amount of weight loss in the newborn. The present research was undertaken in order to determine what factors contribute to weight loss in a newborn and to examine the relationship between intravenous intrapartum therapy and the extent of weight loss after birth. Using a cross-sectional design with a systematic random sample of 100 mother-baby dyads, we examined properties of delivery that have the potential to affect weight loss in the newborn, including method of delivery, parity, duration of labour, volume of intravenous therapy, feeding method, and birth attendant. This study indicated that the volume of intravenous therapy and the method of delivery are significant predictors of weight loss in the newborn (R2=15.5, p<0.01). ROC curve analysis identified an intravenous volume cut-point of 1225 ml that yields a high sensitivity (91.3%) and demonstrated significant Kappa agreement (p<0.01) with excess newborn weight loss. It was concluded that the infusion of intravenous therapy and natural birth delivery are discriminant factors that influence excess weight loss in newborn infants. These factors should be taken into account in clinical practice.
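To make the decision thresholds discussed above concrete, the sketch below applies the 7-10% weight-loss rule together with the reported 1225 ml intravenous-volume cut-point as simple screening flags; treating them as two independent flags is a simplification for illustration, not the study's statistical model.

```python
def weight_loss_percent(birth_weight_g, current_weight_g):
    """Percentage weight loss relative to birth weight."""
    return 100.0 * (birth_weight_g - current_weight_g) / birth_weight_g

def flags(birth_weight_g, current_weight_g, iv_volume_ml):
    loss = weight_loss_percent(birth_weight_g, current_weight_g)
    return {
        "weight_loss_pct": round(loss, 1),
        "supplementation_considered": loss >= 7.0,      # lower bound of 7-10% rule
        "exceeds_iv_cutpoint": iv_volume_ml > 1225.0,   # reported ROC cut-point
    }

# Example: 3500 g at birth, 3200 g on day 3, 1500 ml of intrapartum IV fluid
print(flags(3500, 3200, 1500))
```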