989 results for system requirement
Abstract:
This study investigates the superposition-based cooperative transmission system. In this system, a key point is for the relay node to detect data transmitted from the source node. This issue has received little attention in the existing literature, as the channel is usually assumed to be flat-fading and a priori known. In practice, however, the channel is not only a priori unknown but also subject to frequency-selective fading. Channel estimation is thus necessary. Of particular interest is channel estimation at the relay node, which imposes extra requirements on the system resources. The authors propose a novel turbo least-square channel estimator that exploits the superposition structure of the transmitted data. The proposed channel estimator not only requires no pilot symbols but also performs significantly better than the classic approach. The soft-in soft-out minimum mean square error (MMSE) equaliser is also re-derived to match the superimposed data structure. Finally, computer simulation results are presented to verify the proposed algorithm.
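As a point of reference for the classic approach mentioned above (not the authors' turbo estimator), a pilot-based least-squares estimate of a frequency-selective channel can be sketched in a few lines of NumPy; tap count, pilot length and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

L = 4    # channel memory in taps (assumed)
N = 64   # number of known pilot symbols (assumed)
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)

# Known BPSK pilot sequence sent by the source
x = rng.choice([-1.0, 1.0], size=N)

# Received signal: pilots convolved with the channel, plus noise
y = np.convolve(x, h)[:N] + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Convolution (Toeplitz) matrix X such that y ≈ X @ h
X = np.zeros((N, L), dtype=complex)
for k in range(L):
    X[k:, k] = x[: N - k]

# Classic least-squares estimate: h_hat = (X^H X)^{-1} X^H y
h_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.abs(h - h_hat))  # per-tap estimation error
```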
Abstract:
Due to the requirement to demonstrate the financial feasibility of policy proposals and scheme-specific planning obligations, development viability and development appraisal have become core themes in the English planning system. The objective of this paper is to evaluate the application of development appraisal in practice. The paper reviews the literature and the models available to assess the viability of development, and analyses a sample of 19 development viability appraisals to identify practice. The paper concludes that the practice of development appraisal deviates significantly from the tenets of capital budgeting theory. In particular, in addition to a propensity to oversimplify the timing of income and expenditure, the way in which debt, developer's return, and value and cost change are handled in practice illustrates a major gap between mainstream capital budgeting theory and development appraisal in practice.
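The capital budgeting tenet the paper measures practice against is, at its core, discounted cash flow. A toy appraisal illustrating period-by-period discounting of income and expenditure (all figures and the discount rate are invented):

```python
# Discounted cash flow appraisal of a hypothetical development scheme.
costs  = [-2_000_000, -1_500_000, -1_500_000, 0]  # quarterly build costs
income = [0, 0, 0, 6_000_000]                     # sale proceeds on completion
rate   = 0.03                                     # quarterly discount rate

# NPV: each period's net cash flow discounted back to period 0
npv = sum((c + i) / (1 + rate) ** t
          for t, (c, i) in enumerate(zip(costs, income)))
print(f"NPV: {npv:,.0f}")
```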
Abstract:
In general, particle filters need large numbers of model runs in order to avoid filter degeneracy in high-dimensional systems. The recently proposed, fully nonlinear equivalent-weights particle filter overcomes this requirement by replacing the standard model transition density with two different proposal transition densities. The first proposal density is used to relax all particles towards the high-probability regions of state space as defined by the observations. The crucial second proposal density is then used to ensure that the majority of particles have equivalent weights at observation time. Here, the performance of the scheme is explored in a high-dimensional (65,500-dimensional) simplified ocean model. The success of the equivalent-weights particle filter in matching the true model state is shown using the mean of just 32 particles in twin experiments. It is of particular significance that this remains true even as the number and spatial variability of the observations are changed. The results from rank histograms are less easy to interpret and can be influenced considerably by the parameter values used. This article also explores the sensitivity of the performance of the scheme to the chosen parameter values, and the effect of using different model error parameters in the truth than in the ensemble model runs.
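The degeneracy the scheme is built to avoid is easy to reproduce: with a plain importance-weight update in high dimensions, a single particle carries nearly all the weight. A minimal sketch, with illustrative dimensions and unit observation errors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Importance weights of a standard particle filter after one observation:
# w_i ∝ p(y | x_i). In high dimensions the likelihood is sharply peaked,
# so one particle dominates -- this is filter degeneracy.
n_particles, dim = 32, 1000                      # illustrative sizes
misfit = rng.standard_normal((n_particles, dim)) # x_i - y, unit obs error
log_w = -0.5 * np.sum(misfit**2, axis=1)
w = np.exp(log_w - log_w.max())                  # stabilised exponentiation
w /= w.sum()

ess = 1.0 / np.sum(w**2)                         # effective sample size
print(f"ESS = {ess:.2f} out of {n_particles}")   # close to 1: degenerate
```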
Abstract:
Three assays were carried out to determine the digestible methionine+cystine (Met+Cys) requirement of ISA Label broilers of both sexes. The birds were reared in a free-range system during the starting phase (1 to 28 days), growing phase (28 to 56 days) and finishing phase (56 to 84 days). Four hundred and eighty birds were distributed into 24 pens, each composed of a shelter (3.13 m²) and pasture (72.87 m²). The experimental design was completely randomized, with eight treatments in a factorial arrangement (four Met+Cys levels and two sexes) and three replicates of 20 birds. The digestible Met+Cys levels were 0.532, 0.652, 0.772 and 0.892% for the starting phase; 0.515, 0.635, 0.755 and 0.875% for the growing phase; and 0.469, 0.589, 0.709 and 0.829% for the finishing phase. The parameters analyzed were performance, carcass yield, body protein and fat deposition, and weight and protein concentration of feathers. In the starting phase, the estimated digestible Met+Cys level was 0.765% for males and 0.803% for females, corresponding to 0.252 and 0.268% of Met+Cys/Mcal of ME, respectively. For the growing phase, the estimated digestible Met+Cys level was 0.716% for both sexes, corresponding to 0.235% of Met+Cys/Mcal of ME. For the finishing phase, the Met+Cys levels were 0.756 and 0.597% for males and females, corresponding to 0.244 and 0.193% of Met+Cys/Mcal of ME, respectively.
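The conversion between the two ways the requirement is reported is a one-line calculation: level (% of diet) divided by diet metabolisable energy (Mcal/kg). The abstract's own pairs of figures imply a diet of roughly 3 Mcal ME/kg:

```python
# Back-of-envelope check of the reported unit conversion.
level_pct = 0.765   # starting-phase estimate for males (% of diet)
per_mcal  = 0.252   # reported Met+Cys per Mcal of ME (%)

me_mcal_per_kg = level_pct / per_mcal
print(f"implied diet ME ≈ {me_mcal_per_kg:.2f} Mcal/kg")  # ≈ 3.04
```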
Abstract:
In this paper, a thermoeconomic analysis method based on the First and Second Laws of Thermodynamics is presented and applied to analyse the replacement of an item of equipment in a cogeneration system. The cogeneration system consists of a gas turbine linked to a waste-heat boiler. The electrical demand of the campus is approximately 9 MW; the cogeneration system generates approximately one third of the university's requirement, as well as approximately 1.764 kg/s of saturated steam (at 0.861 MPa), from a single fuel source. The energy-economic study showed that the best system based on payback period was the one using the Kawasaki Heavy Industries M1T-06 gas turbine, while the best based on maximum savings (over 10 years) was the one using the Hitachi Zosen CCS7 gas turbine. The exergy-economic study showed that the best system, with the lowest EMC, was the one using the Allied Signal ASE50 gas turbine. © 2002 Elsevier Science Ltd. All rights reserved.
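The payback and maximum-savings criteria used in the energy-economic comparison amount to simple arithmetic. A generic sketch with invented candidate turbines and figures (not the paper's data):

```python
# Simple payback and 10-year net saving for hypothetical retrofit options.
candidates = {
    "turbine A": {"investment": 4.0e6, "annual_saving": 0.9e6},
    "turbine B": {"investment": 5.5e6, "annual_saving": 1.1e6},
}
horizon_years = 10

for name, c in candidates.items():
    payback = c["investment"] / c["annual_saving"]          # years to recover
    net_10y = c["annual_saving"] * horizon_years - c["investment"]
    print(f"{name}: payback {payback:.1f} y, 10-year net saving {net_10y:,.0f}")
```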
Abstract:
The software industry has become increasingly concerned with the proper application of the activities that compose requirements engineering as a way to improve the quality of its products. To support these activities, several computational tools are available on the market, although there is still a lack of support for some activities. In this context, this paper proposes adding a module for requirements specification to a tool called the Requirements Elicitation Support Tool. This module allows requirements to be specified in accordance with the IEEE 830 standard, thus contributing to the documentation of the requirements established for a software system, besides supporting the learning of concepts related to requirements specification, which improves the skills of the tool's users. © 2012 IEEE.
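For orientation, the top-level outline that an IEEE 830-style software requirements specification follows can be sketched as a simple data structure (section wording abridged from the standard's recommended practice):

```python
# Skeleton of an SRS following the IEEE 830 recommended outline (abridged).
ieee830_srs = {
    "1. Introduction": [
        "1.1 Purpose", "1.2 Scope",
        "1.3 Definitions, acronyms and abbreviations",
        "1.4 References", "1.5 Overview",
    ],
    "2. Overall description": [
        "2.1 Product perspective", "2.2 Product functions",
        "2.3 User characteristics", "2.4 Constraints",
        "2.5 Assumptions and dependencies",
    ],
    "3. Specific requirements": [
        "functional requirements", "external interface requirements",
        "performance requirements", "design constraints", "quality attributes",
    ],
}

for section, subsections in ieee830_srs.items():
    print(section, "->", ", ".join(subsections))
```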
Abstract:
2-Cys peroxiredoxin (Prx) enzymes are ubiquitously distributed peroxidases that make use of a peroxidatic cysteine (Cys(P)) to decompose hydroperoxides. A disulfide bond is generated as a consequence of the partial unfolding of the α-helix that contains Cys(P). Therefore, during its catalytic cycle, 2-Cys Prx alternates between two states, locally unfolded and fully folded. Tsa1 (thiol-specific antioxidant protein 1 from yeast) is by far the most abundant Cys-based peroxidase in Saccharomyces cerevisiae. In this work, we present the crystallographic structure at 2.8 Å resolution of Tsa1(C47S) in the decameric form [(α₂)₅] with a DTT molecule bound to the active site, representing one of the few available reports of a 2-Cys Prx (AhpC-Prx1 subfamily; AhpC, alkyl hydroperoxide reductase subunit C) structure that incorporates a ligand. The analysis of the Tsa1(C47S) structure indicated that Glu50 and Arg146 participate in the stabilization of the Cys(P) α-helix. As a consequence, we raised the hypothesis that Glu50 and Arg146 might be relevant to Cys(P) reactivity. Therefore, Tsa1(E50A) and Tsa1(R146Q) mutants were generated; these were still able to decompose hydrogen peroxide, presenting a second-order rate constant in the range of 10⁶ M⁻¹ s⁻¹. Remarkably, although Tsa1(E50A) and Tsa1(R146Q) were efficiently reduced by the low-molecular-weight reductant DTT, these mutants displayed only marginal thioredoxin (Trx)-dependent peroxidase activity, indicating that Glu50 and Arg146 are important for the Tsa1-Trx interaction. These results may impact the comprehension of downstream events of signaling pathways that are triggered by the oxidation of critical Cys residues, such as Trx. © 2012 Elsevier Ltd. All rights reserved.
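To put the reported rate constant in perspective, a back-of-envelope pseudo-first-order calculation (the enzyme concentration is an illustrative assumption, not a value from the paper):

```python
import math

# With a second-order rate constant k ≈ 1e6 M^-1 s^-1 and a fixed
# (illustrative) 1 uM enzyme concentration, H2O2 removal is
# pseudo-first-order with k_obs = k * [enzyme].
k = 1e6       # M^-1 s^-1, order of magnitude reported in the paper
enzyme = 1e-6 # M, assumed concentration (illustrative only)

k_obs = k * enzyme  # s^-1
print(f"H2O2 half-life ≈ {math.log(2) / k_obs:.2f} s")  # ≈ 0.69 s
```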
Abstract:
Companies are currently choosing to integrate logics and systems to achieve better solutions, including efforts to combine the logic of material requirements planning (MRP) systems with lean production systems. The purpose of this article is to describe the design of an MRP module as part of the implementation of an enterprise resource planning (ERP) system in a company that produces agricultural implements and has used a lean production system since 1998. The proposal draws on innovation theory, network theory, lean production systems, ERP systems and hybrid production systems, which combine MRP components with lean production concepts. The analytical approach of innovation networks enables verification of the links and relationships among the companies and departments of the same corporation. The analysis begins with the MRP implementation project carried out in a Brazilian metallurgical company and follows the operationalisation of the MRP project through to the stabilisation of production. The main point is that the MRP system should support the company's operations by responding in time to demand fluctuations, facilitating the creation process and controlling the branch offices in other countries that use components produced at headquarters, hence ensuring more accurate inventory estimates. Consequently, the article presents the Enterprise Knowledge Development organisational modelling methodology in order to represent further models (goals, actors and resources, business rules, business processes and concepts) that should be included in this MRP implementation process for the new configuration of the production system.
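The core MRP netting logic such a module operationalises can be illustrated in a few lines, ignoring scheduled receipts and lot sizing; all figures are invented:

```python
# Textbook MRP netting: net requirement = gross requirement - on-hand,
# with planned order releases offset earlier by the lead time.
gross = [40, 0, 60, 30, 0, 80]  # gross requirements per period
on_hand = 50                    # opening inventory
lead_time = 2                   # replenishment lead time, in periods

planned_releases = [0] * len(gross)
for t, g in enumerate(gross):
    net = max(0, g - on_hand)          # requirement not covered by stock
    on_hand = max(0, on_hand - g)      # consume available inventory
    if net > 0:
        # clamp at period 0: earlier releases would already be overdue
        planned_releases[max(0, t - lead_time)] += net

print(planned_releases)  # [50, 30, 0, 80, 0, 0]
```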
Abstract:
Aspects related to users' cooperative work are not considered in the traditional approach of software engineering, since the user is viewed independently of his or her workplace environment or group, and the individual model is generalized to the study of the collective behavior of all users. This work proposes a software requirements process to address issues involving cooperative work in information systems in which coordination of the users' actions is distributed and communication among users occurs indirectly, through the data entered while using the software. To achieve this goal, the research draws on ergonomics, the 3C cooperation model, awareness, and software engineering concepts. Action-research is used as the research methodology, applied in three cycles during the development of a corporate workflow system in a technological research company. This article discusses the third cycle, which corresponds to the refinement of the cooperative work requirements with the software in actual use in the workplace, where the introduction of a computer system changes the users' workplace from face-to-face interaction to interaction mediated by the software. The results showed that a higher degree of user awareness of their own activities and of other system users contributes to a decrease in errors and in inappropriate use of the system.
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge, primarily for two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems.
• Multimedia applications, on the other hand, tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently relying only on low-level firmware and hardware drivers: at that level there is a lack of information about user application activity, and consequently about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.
Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high-performance, real-time and, even more important, low-power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.
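For contrast with the exact methods pursued in this research, the following longest-processing-time heuristic shows how quickly tasks can be mapped to cores greedily, and why such incomplete approaches leave an optimality gap; task times and core count are invented:

```python
import heapq

# Greedy longest-processing-time (LPT) task allocation: always place the
# next-longest task on the least-loaded core. Fast, but with no guarantee
# (or bound information) on how far the makespan is from optimal.
task_times = [7, 5, 4, 4, 3, 2, 2, 1]  # task execution times (invented)
n_cores = 3

heap = [(0, core) for core in range(n_cores)]  # (load, core id)
heapq.heapify(heap)
assignment = {core: [] for core in range(n_cores)}
for t in sorted(task_times, reverse=True):     # longest task first
    load, core = heapq.heappop(heap)           # least-loaded core
    assignment[core].append(t)
    heapq.heappush(heap, (load + t, core))

print(assignment, "makespan:", max(load for load, _ in heap))
```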
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. This is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve them in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristics or, more generally, with incomplete search is that they introduce an optimality gap of unknown size: they provide very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gaps by formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.
Energy-efficient LCD backlight autoregulation on a real-life multimedia application processor. Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, and gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel matrix driving circuits and is typically proportional to the panel area; as a result, its contribution is likely to remain considerable in future mobile appliances. To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power-saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques to change the image content so as to reduce the power associated with crystal polarization; others aim to decrease the backlight level while masking the perceived luminance reduction using pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation, allowing dynamic scaling of the backlight with a negligible impact on QoS.
The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification.
Thesis overview. The remainder of the thesis is organized as follows. The first part focuses on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs; the methodology is based on functional simulation and full-system power estimation. Chapter 4 targets the allocation and scheduling of pipelined, stream-oriented applications on top of distributed-memory architectures with messaging support; we tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework for solving the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part focuses on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 surveys several energy-efficient software techniques present in the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
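The backlight compensation idea at the heart of the second part can be sketched in a few lines: dim the backlight by a factor b and boost pixel luminance by 1/b, with clipping of near-saturated pixels as the QoS cost being managed. This is an illustrative CPU sketch only; in the dissertation the operation runs on the SoC's hardware image processing unit:

```python
import numpy as np

def compensate(frame: np.ndarray, backlight: float) -> np.ndarray:
    """Scale pixel luminance by 1/backlight so perceived brightness is
    (mostly) preserved when the backlight is dimmed to `backlight` (0, 1]."""
    boosted = frame.astype(np.float32) / backlight
    return np.clip(boosted, 0, 255).astype(np.uint8)  # clipping = QoS loss

frame = np.random.default_rng(2).integers(0, 256, (4, 4), dtype=np.uint8)
out = compensate(frame, backlight=0.7)  # ~30% backlight power saving
print(out)
```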
Abstract:
An Adaptive Optics (AO) system is a fundamental requirement of 8 m-class telescopes: to obtain the maximum resolution these telescopes allow, the atmospheric turbulence must be corrected. Thanks to adaptive optics systems we can exploit the full effective potential of these instruments, drawing as much information as possible from the sources observed. An AO system has two main components: the wavefront sensor (WFS), which measures the aberrations of the wavefront entering the telescope, and the deformable mirror (DM), which assumes a shape opposite to the one measured by the sensor. The two subsystems are connected by the reconstructor (REC). The REC requires a "common language" between these two main AO components, that is, a mapping between sensor space and mirror space called the interaction matrix (IM). Therefore, in order to operate correctly, an AO system has a main requirement: the measurement of an IM to calibrate the whole system. The IM measurement is a milestone for an AO system and must be done regardless of the telescope size or class. Usually, this calibration step is done by adding to the telescope an auxiliary artificial light source (i.e., a fiber) that illuminates both the deformable mirror and the sensor, permitting the calibration of the AO system. For very large telescopes (more than 8 m, such as the Extremely Large Telescopes, ELTs), fiber-based IM measurement requires challenging optical setups that in some cases are also impractical to build. In these cases, new techniques to measure the IM are needed. In this PhD work we investigate a different calibration method that can be applied directly on sky, at the telescope, without any auxiliary source; such a technique could be used to calibrate an AO system on a telescope of any size. We test the new calibration technique, called the "sinusoidal modulation technique", on the Large Binocular Telescope (LBT) AO system, which is already a complete AO system with the two main components: a secondary deformable mirror with 672 actuators, and a pyramid wavefront sensor. The first phase of my PhD work was helping to implement the WFS board (containing the pyramid sensor and all the auxiliary optical components), performing both optical alignment and tests of some optical components. Thanks to the "solar tower" facility of the Arcetri Astrophysical Observatory (Florence), we were able to reproduce an environment very similar to the telescope one, testing the main LBT AO components: the pyramid sensor and the secondary deformable mirror. This enabled the second phase of my PhD work: measuring the IM using the sinusoidal modulation technique. At first we measured the IM using an auxiliary fiber source to calibrate the system, without any disturbance injected. After that, we applied the same calibration technique to measure the IM directly "on sky", that is, with an atmospheric disturbance added to the AO system. The results obtained in this PhD work, measuring the IM directly in the Arcetri solar tower system, are crucial for future developments: the possibility of acquiring the IM directly on sky means that we can calibrate an AO system even for the extremely large telescope class, where classic IM measurement techniques are problematic and sometimes impossible.
Finally, we should not forget the reason why we need all this: the main aim is to observe the universe. Thanks to this new class of large telescopes, and only by using their full capabilities, we will be able to increase our knowledge of the objects we observe, because we will be able to resolve finer details, discovering, analyzing and understanding the behavior of the universe's components.
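For readers unfamiliar with AO calibration, the role of the interaction matrix can be sketched as follows: the IM maps mirror commands to sensor measurements, and the reconstructor is essentially its pseudo-inverse. Dimensions here are toy values, not LBT's:

```python
import numpy as np

rng = np.random.default_rng(3)

# Calibrated interaction matrix: sensor slopes = IM @ mirror commands.
n_slopes, n_act = 200, 50                    # toy dimensions
IM = rng.standard_normal((n_slopes, n_act))

REC = np.linalg.pinv(IM)                     # reconstructor = pseudo-inverse

# Closed-loop step: recover the commands that explain the measured slopes.
true_cmd = rng.standard_normal(n_act)
slopes = IM @ true_cmd + 0.01 * rng.standard_normal(n_slopes)  # noisy WFS
cmd = REC @ slopes
print(np.max(np.abs(cmd - true_cmd)))        # small reconstruction error
```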
Abstract:
Over the last 60 years, computers and software have enabled incredible advances in every field. Nowadays, however, these systems are so complicated that it is difficult to understand whether they meet some requirement or are able to exhibit some desired behaviour or property. This dissertation introduces a Just-In-Time (JIT) a posteriori approach to performing conformance checking, identifying any deviation from the desired behaviour as soon as possible and, where possible, applying corrections. The declarative framework that implements our approach, entirely developed on the promising open-source forward-chaining Production Rule System (PRS) named Drools, consists of three components:
1. a monitoring module based on a novel, efficient implementation of Event Calculus (EC);
2. a general-purpose hybrid reasoning module (the first of its kind) merging temporal, semantic, fuzzy and rule-based reasoning;
3. a logic formalism based on the concept of expectations, introducing Event-Condition-Expectation rules (ECE-rules) to assess the global conformance of a system.
The framework is also accompanied by an optional module that provides Probabilistic Inductive Logic Programming (PILP). By shifting the conformance check from after execution to just in time, this approach combines the advantages of many a posteriori and a priori methods proposed in the literature. Quite remarkably, if the corrective actions are explicitly given, the reactive nature of this methodology allows any deviation from the desired behaviour to be reconciled as soon as it is detected. In conclusion, the proposed methodology advances the state of conformance checking, helping to fill the gap between humans and increasingly complex technology.
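The Event Calculus at the base of the monitoring module reduces, in its simplest propositional form, to: a fluent holds at time t if some earlier event initiated it and no intervening event terminated it. A minimal sketch with invented events and fluents (the actual module is implemented in Drools, not Python):

```python
# Minimal propositional Event Calculus: initiates/terminates + holds_at.
initiates = {("login", "session_open"), ("open_case", "case_active")}
terminates = {("logout", "session_open"), ("close_case", "case_active")}

trace = [(1, "login"), (3, "open_case"), (7, "logout")]  # (time, event)

def holds_at(fluent: str, t: int) -> bool:
    """A fluent holds at t if initiated strictly before t and not
    terminated by any later event before t."""
    state = False
    for time, event in sorted(trace):
        if time >= t:
            break
        if (event, fluent) in initiates:
            state = True
        elif (event, fluent) in terminates:
            state = False
    return state

print(holds_at("session_open", 5))  # True
print(holds_at("session_open", 8))  # False (terminated by logout at 7)
```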
Abstract:
In this paper, we present a novel technique for the removal of astigmatism in submillimeter-wave optical systems through the use of a specific combination of so-called astigmatic off-axis reflectors. The technique treats an orthogonally astigmatic beam using skew Gaussian beam analysis, from which an anastigmatic imaging network is derived. The resultant beam is considered truly stigmatic, with all Gaussian beam parameters in the orthogonal directions being matched. This is an improvement over previous techniques, in which a beam corrected for astigmatism has only the orthogonal beam amplitude radii matched, with phase shift and phase radius of curvature not considered. The technique is computationally efficient, obviating the need for computationally intensive numerical analysis of shaped reflector surfaces; the required optical surfaces are also relatively simple to implement compared to such numerically optimized shaped surfaces. The technique is applied in this work to the complete optics train of the STEAMR antenna. The STEAMR instrument is envisaged as a multi-beam limb-sounding instrument operating at submillimeter wavelengths. The antenna optics arrangement for this instrument uses multiple off-axis reflectors to control the incident radiation and couple it to the corresponding receiver feeds. An anastigmatic imaging network is successfully implemented in an optical model of this antenna, and the resultant design ensures optimal imaging of the beams onto the corresponding feed horns. This example also addresses the challenges of imaging in multi-beam antenna systems.
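The analysis underlying the technique tracks, in each orthogonal plane, the standard complex Gaussian beam parameter q(z) = (z - z_w) + i z_R; a beam is stigmatic in the strict sense used here only when both planes agree in amplitude radius and phase radius of curvature. A short numerical sketch with illustrative submillimeter-band values:

```python
import numpy as np

# Orthogonal beam parameters of an astigmatic Gaussian beam, propagated
# independently in the x and y planes via q(z) = (z - z_waist) + i z_R,
# where 1/q = 1/R - i * wavelength / (pi * w**2).
wavelength = 0.5e-3        # 0.5 mm (~600 GHz), illustrative
w0x, w0y = 4e-3, 6e-3      # orthogonal waist radii (m), illustrative
zx, zy = 0.0, 0.05         # different waist locations: astigmatism

def q_at(z, w0, z_waist):
    z_r = np.pi * w0**2 / wavelength  # Rayleigh range
    return (z - z_waist) + 1j * z_r

z = 0.2                    # observation plane (m)
for axis, (w0, zw) in {"x": (w0x, zx), "y": (w0y, zy)}.items():
    q = q_at(z, w0, zw)
    w = np.sqrt(-wavelength / (np.pi * np.imag(1 / q)))  # amplitude radius
    R = 1 / np.real(1 / q)                               # phase radius
    print(f"{axis}: w = {w * 1e3:.2f} mm, R = {R:.3f} m")
# Mismatched w and R between x and y confirm the beam is astigmatic.
```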