108 results for Design system
Abstract:
Cross-layer techniques are an efficient means to enhance throughput and increase the transmission reliability of wireless communication systems. In this paper, a cross-layer design of aggressive adaptive modulation and coding (A-AMC), truncated automatic repeat request (T-ARQ), and user scheduling is proposed for multiuser multiple-input-multiple-output (MIMO) maximal ratio combining (MRC) systems, where the impacts of feedback delay (FD) and limited feedback (LF) on channel state information (CSI) are also considered. The A-AMC and T-ARQ mechanism selects the appropriate modulation and coding schemes (MCSs) to achieve higher spectral efficiency while satisfying the service requirement on the packet loss rate (PLR), profiting from the ability to use different MCSs when retransmitting a packet. Each packet is destined to a scheduled user, selected so as to exploit multiuser diversity and enhance the system's performance in terms of both transmission efficiency and fairness. The system's performance is evaluated in terms of the average PLR, average spectral efficiency (ASE), outage probability, and average packet delay, all derived in closed form for transmissions over Rayleigh-fading channels. Numerical results and comparisons show that A-AMC combined with T-ARQ yields higher spectral efficiency than the conventional scheme based on adaptive modulation and coding (AMC), while keeping the achieved PLR closer to the system's requirement and reducing delay. Furthermore, the effects of the number of ARQ retransmissions, the numbers of transmit and receive antennas, the normalized FD, and the cardinality of the beamforming weight vector codebook are studied and discussed.
Abstract:
Using a cross-layer approach, two enhancement techniques applied to adaptive modulation and coding (AMC) with truncated automatic repeat request (T-ARQ) are investigated, namely, aggressive AMC (A-AMC) and constellation rearrangement (CoRe). Aggressive AMC selects the appropriate modulation and coding schemes (MCSs) to achieve higher spectral efficiency, profiting from the ability to use different MCSs when retransmitting a packet, whereas in CoRe-based AMC, retransmissions of the same data packet use different bit-to-symbol mappings so as to provide different degrees of protection to the bits involved, thus achieving mapping diversity gain. The performance of both schemes is evaluated in terms of average spectral efficiency and average packet loss rate, which are derived in closed form for transmission over Nakagami-m fading channels. Numerical results and comparisons are provided. In particular, it is shown that A-AMC combined with T-ARQ yields higher spectral efficiency than the conventional AMC-based scheme while keeping the achieved packet loss rate closer to the system's requirement, and that it can meet larger spectral efficiency targets than the scheme using AMC along with CoRe.
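The two abstracts above contrast conventional AMC, which selects an MCS conservatively, with A-AMC, which selects aggressively and relies on ARQ retransmissions (possibly using a different MCS on each attempt). The idea can be sketched as threshold-based selection; the MCS table, SNR thresholds, and 3 dB retransmission margin below are assumed values for illustration, not taken from either paper.

```python
# Illustrative sketch of SNR-threshold AMC with an "aggressive" first attempt.
# The MCS table, thresholds, and margin are hypothetical values.

MCS_TABLE = [
    # (name, spectral efficiency in bit/s/Hz, minimum SNR in dB)
    ("BPSK 1/2",  0.5,  2.0),
    ("QPSK 1/2",  1.0,  5.0),
    ("QPSK 3/4",  1.5,  8.0),
    ("16QAM 1/2", 2.0, 11.0),
    ("16QAM 3/4", 3.0, 14.0),
    ("64QAM 3/4", 4.5, 18.0),
]

def select_mcs(snr_db, margin_db=0.0):
    """Pick the highest-rate MCS whose SNR threshold is met (with margin).
    Falls back to the most robust MCS if no threshold is met."""
    best = MCS_TABLE[0]
    for mcs in MCS_TABLE:
        if snr_db >= mcs[2] + margin_db:
            best = mcs
    return best

def aggressive_amc(snr_db, attempt):
    """A-AMC idea: no safety margin on the first transmission (aggressive),
    then re-select a more conservative MCS on each ARQ retransmission."""
    margin = 0.0 if attempt == 0 else 3.0
    return select_mcs(snr_db, margin)
```

In this sketch the first transmission is margin-free, and each retransmission re-selects a safer MCS; that is the mechanism by which A-AMC trades a slightly higher first-attempt error rate for higher average spectral efficiency.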
Abstract:
Crystal engineering principles were used to design three new co-crystals of paracetamol. A variety of potential cocrystal formers were initially identified from a search of the Cambridge Structural Database for molecules with complementary hydrogen-bond forming functionalities. Subsequent screening by powder X-ray diffraction of the products of the reaction of this library of molecules with paracetamol led to the discovery of new binary crystalline phases of paracetamol with trans-1,4- diaminocyclohexane (1); trans-1,4-di(4-pyridyl)ethylene (2); and 1,2-bis(4-pyridyl)ethane (3). The co-crystals were characterized by IR spectroscopy, differential scanning calorimetry, and 1H NMR spectroscopy. Single crystal X-ray structure analysis reveals that in all three co-crystals the co-crystal formers (CCF) are hydrogen bonded to the paracetamol molecules through O−H···N interactions. In co-crystals (1) and (2) the CCFs are interleaved between the chains of paracetamol molecules, while in co-crystal (3) there is an additional N−H···N hydrogen bond between the two components. A hierarchy of hydrogen bond formation is observed in which the best donor in the system, the phenolic O−H group of paracetamol, is preferentially hydrogen bonded to the best acceptor, the basic nitrogen atom of the co-crystal former. The geometric aspects of the hydrogen bonds in co-crystals 1−3 are discussed in terms of their electrostatic and charge-transfer components.
Abstract:
Black carbon aerosol plays a unique and important role in Earth’s climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed phase, and ice clouds; and deposition on snow and ice. These effects are calculated with climate models, but when possible, they are evaluated with both microphysical measurements and field observations. Predominant sources are combustion related, namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg yr⁻¹ in the year 2000 with an uncertainty range of 2000 to 29000. However, global atmospheric absorption attributable to black carbon is too low in many models and should be increased by a factor of almost 3. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W m⁻² with 90% uncertainty bounds of (+0.08, +1.27) W m⁻². Total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W m⁻². Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings, including rapid adjustments. The best estimate of industrial-era climate forcing of black carbon through all forcing mechanisms, including clouds and cryosphere forcing, is +1.1 W m⁻² with 90% uncertainty bounds of +0.17 to +2.1 W m⁻².
Thus, there is a very high probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. We estimate that black carbon, with a total climate forcing of +1.1 W m⁻², is the second most important human emission in terms of its climate forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of short-lived co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have an industrial-era climate forcing of +0.22 (−0.50 to +1.08) W m⁻² during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all short-lived emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all short-lived species from black-carbon-rich sources becomes slightly negative (−0.06 W m⁻² with 90% uncertainty bounds of −1.45 to +1.29 W m⁻²). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon. In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility play important roles. The major sources of black carbon are presently in different stages with regard to the feasibility for near-term mitigation.
This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.
Abstract:
Epidemiological and clinical trials reveal compelling evidence for the ability of dietary flavonoids to lower cardiovascular disease risk. The mechanisms of action of these polyphenolic compounds are diverse, and of particular interest is their ability to function as protein and lipid kinase inhibitors. We have previously described structure-activity studies that reinforce the possibility for using flavonoid structures as templates for drug design. In the present study, we aim to begin constructing rational screening strategies for exploiting these compounds as templates for the design of clinically relevant, antiplatelet agents. We used the platelet as a model system to dissect the structural influence of flavonoids, stilbenes, anthocyanidins, and phenolic acids on inhibition of cell signaling and function. Functional groups identified as relevant for potent inhibition of platelet function included at least 2 benzene rings, a hydroxylated B ring, a planar C ring, a C ring ketone group, and a C-2 positioned B ring. Hydroxylation of the B ring with either a catechol group or a single C-4' hydroxyl may be required for efficient inhibition of collagen-stimulated tyrosine phosphorylated proteins of 125 to 130 kDa, but may not be necessary for that of phosphotyrosine proteins at approximately 29 kDa. The removal of the C ring C-3 hydroxyl together with a hydroxylated B ring (apigenin) may confer selectivity for 37 to 38 kDa phosphotyrosine proteins. We conclude that this study may form the basis for construction of maps of flavonoid inhibitory activity on kinase targets that may allow a multitargeted therapeutic approach with analogue counterparts and parent compounds.
Abstract:
To overcome divergent estimates produced from the same data, the proposed digital costing process adopts an integrated information-system design in which the process knowledge and the costing system are designed together. By employing and extending a widely used international standard, Industry Foundation Classes, the system provides an integrated process that can harvest information and knowledge from current quantity surveying practice of costing methods and data. Knowledge of quantification is encoded from the literature, a motivating case, and standards, reducing the time consumed by current manual practice. Further development will represent the pricing process in a Bayesian-network-based knowledge representation approach. These hybrid types of knowledge representation can produce reliable estimates for construction projects. In practical terms, the knowledge management of quantity surveying can improve the system of construction estimation. The theoretical significance of this study lies in the fact that its content and conclusions make it possible to develop an automatic estimation system based on a hybrid knowledge representation approach.
Abstract:
A flood warning system incorporates telemetered rainfall and flow/water level data measured at various locations in the catchment area. Real-time accurate data collection is required for this use, and sensor networks improve the system capabilities. However, existing sensor nodes struggle to satisfy the hydrological requirements in terms of autonomy, sensor hardware compatibility, reliability and long-range communication. We describe the design and development of a real-time measurement system for flood monitoring, and its deployment in a flash-flood prone 650 km² semiarid watershed in Southern Spain. A purpose-built low-power, long-range communication device, DatalogV1, provides automatic data gathering and reliable transmission. DatalogV1 incorporates self-monitoring for adapting measurement schedules, both for consumption management and to capture events of interest. Two tests are used to assess the success of the development. The results show an autonomous and robust monitoring system for long-term collection of water level data in many sparse locations during flood events.
Abstract:
In the UK, architectural design is regulated through a system of design control for the public interest, which aims to secure and promote ‘quality’ in the built environment. Design control is primarily implemented by locally employed planning professionals with political oversight, and independent design review panels, staffed predominantly by design professionals. Design control has a lengthy and complex history, with the concept of ‘design’ offering a range of challenges for a regulatory system of governance. A simultaneously creative and emotive discipline, architectural design is a difficult issue to regulate objectively or consistently, often leading to policy that is regarded as highly discretionary and flexible. This makes regulatory outcomes difficult to predict, as approaches undertaken by the ‘agents of control’ can vary according to the individual. The role of the design controller is therefore central, tasked with the responsibility of interpreting design policy and guidance, appraising design quality and passing professional judgment. However, little is really known about what influences the way design controllers approach their task, providing a ‘veil’ over design control, shrouding the basis of their decisions. This research engaged directly with the attitudes and perceptions of design controllers in the UK, lifting this ‘veil’. Using in-depth interviews and Q-Methodology, the thesis explores this hidden element of control, revealing a number of key differences in how controllers approach and implement policy and guidance, conceptualise design quality, and rationalise their evaluations and judgments.
The research develops a conceptual framework for agency in design control – this consists of six variables (Regulation; Discretion; Skills; Design Quality; Aesthetics; and Evaluation) and it is suggested that this could act as a ‘heuristic’ instrument for UK controllers, prompting more reflexivity in relation to evaluating their own position, approaches, and attitudes, leading to better practice and increased transparency of control decisions.
Abstract:
A universal systems design process is specified, tested in a case study and evaluated. It links English narratives to numbers using a categorical language framework with mathematical mappings taking the place of conjunctions and numbers. The framework is a ring of English narrative words between 1 (option) and 360 (capital); beyond 360 the ring cycles again to 1. English narratives are shown to correspond to the field of fractional numbers. The process can enable the development, presentation and communication of complex narrative policy information among communities of any scale, on a software implementation known as the "ecoputer". The information is more accessible and comprehensive than that in conventional decision support, because: (1) it is expressed in narrative language; and (2) the narratives are expressed as compounds of words within the framework. Hence option generation is made more effective than in conventional decision support processes including Multiple Criteria Decision Analysis, Life Cycle Assessment and Cost-Benefit Analysis. The case study is of a participatory workshop in UK bioenergy project objectives and criteria, at which attributes were elicited in environmental, economic and social systems. From the attributes, the framework was used to derive consequences at a range of levels of precision; these are compared with the project objectives and criteria as set out in the Case for Support. The design process is to be supported by a social information manipulation, storage and retrieval system for numeric and verbal narratives attached to the "ecoputer". The "ecoputer" will have an integrated verbal and numeric operating system. Novel design source code language will assist the development of narrative policy.
The utility of the program, including in the transition to sustainable development and in applications at both community micro-scale and policy macro-scale, is discussed from public, stakeholder, corporate, Governmental and regulatory perspectives.
Abstract:
Dynamic electricity pricing can produce efficiency gains in the electricity sector and help achieve energy policy goals such as increasing electric system reliability and supporting renewable energy deployment. Retail electric companies can offer dynamic pricing to residential electricity customers via smart meter-enabled tariffs that proxy the cost to procure electricity on the wholesale market. Current investments in the smart metering necessary to implement dynamic tariffs show policy makers’ resolve for enabling responsive demand and realizing its benefits. However, despite these benefits and the potential bill savings these tariffs can offer, adoption among residential customers remains at low levels. Using a choice experiment approach, this paper seeks to determine whether disclosing the environmental and system benefits of dynamic tariffs to residential customers can increase adoption. Although sampling and design issues preclude wide generalization, we found that our environmentally conscious respondents reduced their required discount to switch to dynamic tariffs by around 10% in response to higher awareness of environmental and system benefits. The perception that shifting usage is easy to do also had a significant impact, indicating the potential importance of enabling technology. Perhaps the targeted communication strategy employed by this study is one way to increase adoption and achieve policy goals.
Abstract:
The Green Feed (GF) system (C-Lock Inc., Rapid City, USA) is used to estimate total daily methane emissions of individual cattle using short-term measurements obtained over several days. Our objective was to compare measurements of methane emission by growing cattle obtained using the GF system with measurements using respiration chambers (RC) or sulphur hexafluoride tracer (SF6). It was hypothesised that estimates of methane emission for individual animals and treatments would be similar for GF compared to RC or SF6 techniques. In experiment 1, maize or grass silage-based diets were fed to four growing Holstein heifers, whilst for experiment 2, four different heifers were fed four haylage treatments. Both experiments were a 4 × 4 Latin square design with 33-day periods. Green Feed measurements of methane emission were obtained over 7 days (days 22–28) and compared to subsequent RC measurements over 4 days (days 29–33). For experiment 3, 12 growing heifers rotationally grazed three swards for 26 days, with simultaneous GF and SF6 measurements over two 4-day measurement periods (days 15–19 and days 22–26). Overall methane emissions (g/day and g/kg dry matter intake [DMI]) measured using GF in experiments 1 (198 and 26.6, respectively) and 2 (208 and 27.8, respectively) were similar to averages obtained using RC (218 and 28.3, respectively, for experiment 1; and 209 and 27.7, respectively, for experiment 2); but there was poor concordance between the two methods (0.1043 for experiments 1 and 2 combined). Overall, methane emissions measured using SF6 were higher (P<0.001) than GF during grazing (186 vs. 164 g/day), but there was significant (P<0.01) concordance between the two methods (0.6017). There were fewer methane measurements by GF under grazing conditions in experiment 3 (1.60/day) compared to indoor measurements in experiments 1 (2.11/day) and 2 (2.34/day).
Significant treatment effects on methane emission measured using RC and SF6 were not evident for GF measurements, and the ranking of treatments and individual animals differed using the GF system. We conclude that under our conditions of use the GF system was unable to detect significant treatment and individual animal differences in methane emissions that were identified using both RC and SF6 techniques, in part due to the limited number and timing of measurements obtained. Our data suggest that successful use of the GF system is reliant on the number and timing of measurements obtained relative to diurnal patterns of methane emission.
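The concordance values quoted above (0.1043 and 0.6017) are agreement statistics between two measurement methods. Method-comparison studies of this kind typically use Lin's concordance correlation coefficient (an assumption here; the abstract does not name the statistic). A minimal sketch, on hypothetical paired measurements:

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient: agreement of two methods
    around the 45-degree identity line (1 = perfect agreement)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    var_x = sum((a - mean_x) ** 2 for a in x) / n
    var_y = sum((b - mean_y) ** 2 for b in y) / n
    cov_xy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / n
    return 2 * cov_xy / (var_x + var_y + (mean_x - mean_y) ** 2)

# Hypothetical paired methane measurements (g/day) from two methods,
# invented for illustration (not data from the study):
gf = [160, 170, 155, 180, 165]
sf6 = [185, 190, 178, 200, 186]
```

Unlike plain correlation, the CCC also penalises the systematic offset between methods, which is why two methods can correlate well on individual animals yet still show poor concordance.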
Abstract:
In cooperative communication networks, owing to the nodes' arbitrary geographical locations and individual oscillators, the system is fundamentally asynchronous. Such a timing mismatch may cause rank deficiency of the conventional space-time codes and, thus, performance degradation. One efficient way to overcome this issue is to use delay-tolerant space-time codes (DT-STCs). The existing DT-STCs are designed assuming that the transmitter has no knowledge about the channels. In this paper, we show how the performance of DT-STCs can be improved by utilizing some feedback information. A general framework for designing DT-STCs with limited feedback is first proposed, allowing for flexible system parameters such as the number of transmit/receive antennas, modulated symbols, and the length of codewords. Then, a new design method is proposed by combining Lloyd's algorithm and the stochastic gradient-descent algorithm to obtain an optimal codebook of STCs, particularly for systems with a linear minimum-mean-square-error receiver. Finally, simulation results confirm the performance of the newly designed DT-STCs with limited feedback.
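Lloyd's algorithm, cited above as one half of the codebook design method, alternates two steps: assign each training sample to its nearest codeword, then move each codeword to the centroid of its cell. The paper's design quantizes space-time code matrices and couples this with stochastic gradient descent; the sketch below shows only the Lloyd iteration, on scalars, as an illustration.

```python
import random

def lloyd_codebook(samples, k, iters=50, seed=0):
    """One-dimensional Lloyd's algorithm: alternate nearest-codeword
    assignment and centroid update. Returns the sorted codebook."""
    rng = random.Random(seed)
    codebook = rng.sample(samples, k)  # initialise from the training set
    for _ in range(iters):
        # Assignment step: partition samples into nearest-codeword cells.
        cells = [[] for _ in range(k)]
        for s in samples:
            i = min(range(k), key=lambda j: (s - codebook[j]) ** 2)
            cells[i].append(s)
        # Update step: move each codeword to its cell centroid
        # (keep the old codeword if its cell is empty).
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return sorted(codebook)
```

The same assign/update loop generalises to matrix codewords once a code-aware distortion metric (e.g. one based on pairwise error probability) replaces the squared distance.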
Abstract:
A low-cost, compact embedded design approach for actuating soft robots is presented. The complete fabrication procedure and mode of operation are described, and the performance of the complete system was demonstrated by building a microcontroller-based hardware system used to actuate a soft robot for bending motion. The actuation system, including the electronic circuit board and actuation components, was embedded in a 3D-printed casing to ensure a compact approach for actuating soft robots. Results show the viability of the system in actuating and controlling silicone-based soft robots to achieve bending motions. Qualitative measurements of uniaxial tensile tests, bending distance and pressure were obtained. This electronic design is easy to reproduce and integrate into any specified soft robotic device requiring pneumatic actuation.
Abstract:
We design consistent discontinuous Galerkin finite element schemes for the approximation of the Euler-Korteweg and the Navier-Stokes-Korteweg systems. We show that the scheme for the Euler-Korteweg system is energy and mass conservative and that the scheme for the Navier-Stokes-Korteweg system is mass conservative and monotonically energy dissipative. In this case the dissipation is isolated to viscous effects, that is, there is no numerical dissipation. In this sense the methods are consistent with the energy dissipation of the continuous PDE systems.
Abstract:
This paper reviews the literature concerning the practice of using Online Analytical Processing (OLAP) systems to recall information stored by Online Transactional Processing (OLTP) systems. The review provides a basis for discussing the need for information recalled through OLAP systems to maintain the contexts of transactions with the data captured by the respective OLTP system. The paper observes an industry trend in which OLTP systems process information into data that are then stored in OLTP databases without the business rules that were used to process them. This necessitates a practice whereby sets of business rules are used to extract, cleanse, transform and load data from disparate OLTP systems into OLAP databases to support the requirements for complex reporting and analytics. These sets of business rules are usually not the same as the business rules used to capture the data in the particular OLTP systems. The paper argues that differences between the business rules used to interpret the same data sets risk gaps in semantics between information captured by OLTP systems and information recalled through OLAP systems. Literature concerning the modelling of business transaction information as facts with context, as part of the modelling of information systems, was reviewed to identify design trends that contribute to the design quality of OLTP and OLAP systems. The paper then argues that the quality of OLTP and OLAP systems design depends critically on the capture of facts with associated context, the encoding of facts with context into data with business rules, the storage and sourcing of data with business rules, the decoding of data with business rules back into facts with context, and the recall of facts with associated contexts.
The paper proposes UBIRQ, a design model to aid the co-design of data and business-rule storage for OLTP and OLAP purposes. The proposed model provides the opportunity to implement and use multi-purpose databases and business-rule stores for OLTP and OLAP systems. Such implementations would enable OLTP systems to record and store data together with the executions of business rules, allowing both OLTP and OLAP systems to query data alongside the business rules used to capture them, thereby ensuring that information recalled via OLAP systems preserves the contexts of transactions as per the data captured by the respective OLTP system.
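The idea of storing each fact together with the business rule that captured it, so that later analytical recall retains the original context, can be sketched as follows. This is an illustrative sketch only, not the paper's UBIRQ model; all names here (BusinessRule, Fact, the "DISC-10" rule, capture_order) are invented for the example.

```python
# Illustrative sketch: each captured fact carries the identifier and version
# of the business rule that produced it, so OLAP-side recall keeps context.
from dataclasses import dataclass

@dataclass(frozen=True)
class BusinessRule:
    rule_id: str
    version: int
    description: str

@dataclass(frozen=True)
class Fact:
    value: float
    rule: BusinessRule  # the capturing rule travels with the data

RULES = {
    ("DISC-10", 2): BusinessRule("DISC-10", 2, "10% discount above 100 units"),
}

def capture_order(units, unit_price):
    """OLTP-side capture: apply a rule and record which rule produced the value."""
    rule = RULES[("DISC-10", 2)]
    total = units * unit_price * (0.9 if units > 100 else 1.0)
    return Fact(total, rule)

def recall_with_context(fact):
    """OLAP-side recall: the fact is interpreted with its capturing rule."""
    return f"{fact.value:.2f} (captured under {fact.rule.rule_id} v{fact.rule.version})"
```

Without the attached rule, an analyst seeing 180.00 cannot tell whether a discount was applied at capture time; with it, the semantics of the stored value are recoverable, which is the gap the paper describes.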