907 results for High Lift Systems Design
Abstract:
Salinity gradient power has been proposed as a source of renewable energy obtained when two solutions of different salinity are mixed. In particular, Pressure Retarded Osmosis (PRO) coupled with a Reverse Osmosis (RO) process has previously been suggested for power generation, using RO brine as the draw solution. However, integration of PRO with RO may have further value for increasing the extent of water recovery in a desalination process. Consequently, this study was designed to model the impact of various system parameters to better understand how to design and operate practical PRO-RO units. The impact of feed salinity and recovery rate of the RO process on the concentration of the draw solution, feed pressure, and membrane area of the PRO process was evaluated. The PRO system was designed to operate at its maximum power density. Model results showed that the generated PRO power density increased with increasing seawater salinity and RO recovery rate. For an RO process operating at a 52% recovery rate and 35 g/L feed salinity, a maximum power density of 24 W/m2 was achieved using a 4.5 M NaCl draw solution. When seawater salinity increased to 45 g/L and the RO recovery rate was 46%, the PRO power density increased to 28 W/m2 using a 5 M NaCl draw solution. The PRO system was able to increase the recovery rate of the RO process by up to 18%, depending on seawater salinity and RO recovery rate. These results suggest a potential advantage of coupling a PRO process with an RO system to increase the recovery rate of the desalination process and reduce brine discharge.
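The operating point described above can be illustrated with the standard idealized PRO flux model, in which water flux is Jw = A(Δπ − ΔP) and power density W = Jw·ΔP peaks at ΔP = Δπ/2. The sketch below is a minimal illustration under stated assumptions: the membrane permeability A, the feed molarity, and the van 't Hoff osmotic-pressure estimate are all assumed values, not taken from the study.

```python
# Idealized PRO power density (no concentration polarization or losses).
# A, the feed molarity and the van 't Hoff estimate are assumed values.
R = 8.314        # gas constant, J/(mol K)
T = 298.0        # temperature, K
I_VH = 2         # van 't Hoff factor for NaCl

def osmotic_pressure_pa(molarity):
    """pi = i * c * R * T, with c converted from mol/L to mol/m^3."""
    return I_VH * molarity * 1000.0 * R * T

def power_density(a, d_pi, d_p):
    """W = Jw * dP = a * (d_pi - d_p) * d_p; peaks at d_p = d_pi / 2."""
    return a * (d_pi - d_p) * d_p

A = 1e-12                                     # water permeability, m/(s Pa)
d_pi = osmotic_pressure_pa(4.5) - osmotic_pressure_pa(0.6)  # draw vs feed
w_max = power_density(A, d_pi, d_pi / 2.0)    # operate at half of d_pi
print(w_max)                                  # ideal upper bound, W/m^2
```

The idealized maximum, A·Δπ²/4, lands well above the reported 24-28 W/m2 because the ideal model ignores concentration polarization and salt leakage, which is precisely why the study models the full system.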
Abstract:
This article addresses the problem of how to select the optimal combination of sensors and determine their optimal placement in a surveillance region so as to meet given performance requirements at minimal cost for a multimedia surveillance system. We propose to solve this problem by obtaining a performance vector, with its elements representing the performances of subtasks, for a given input combination of sensors and their placement. We then show that the optimal sensor selection problem can be converted into an Integer Linear Programming (ILP) problem by using a linear model for computing the optimal performance vector corresponding to a sensor combination, i.e., the performance vector achieved at the optimal placement of that combination. To demonstrate the utility of our technique, we design and build a surveillance system consisting of PTZ (Pan-Tilt-Zoom) cameras and active motion sensors for capturing faces. Finally, we show experimentally that optimal placement of sensors based on the design maximizes the system performance.
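The selection problem can be made concrete with a toy brute-force version: pick the cheapest sensor subset whose per-subtask performance vector dominates a required vector. The sensor names, costs, and performance scores below are hypothetical, and exhaustive search stands in for the placement-aware ILP formulation used in the article.

```python
from itertools import combinations

# Toy version of the selection problem: find the cheapest sensor subset whose
# per-subtask performance (here simply the best score among selected sensors)
# meets a required performance vector. Sensor names, costs and scores are
# hypothetical; the article solves the placement-aware version as an ILP.
sensors = {
    "ptz_cam_1": {"cost": 500, "perf": (0.9, 0.2)},  # (face capture, motion)
    "ptz_cam_2": {"cost": 450, "perf": (0.8, 0.3)},
    "motion_1": {"cost": 120, "perf": (0.1, 0.9)},
}
required = (0.8, 0.8)    # minimum acceptable performance per subtask

def meets(subset):
    perf = [max(sensors[s]["perf"][k] for s in subset)
            for k in range(len(required))]
    return all(p >= r for p, r in zip(perf, required))

best = min(
    (set(c) for r in range(1, len(sensors) + 1)
     for c in combinations(sensors, r) if meets(c)),
    key=lambda c: sum(sensors[s]["cost"] for s in c),
)
print(best)  # cheapest combination meeting both requirements
```

Brute force is exponential in the number of sensors; the article's ILP reduction is what makes the placement-aware version tractable at realistic scale.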
Abstract:
Dispersing a data object into a set of data shares is an elemental stage in distributed communication and storage systems. In comparison to data replication, data dispersal with redundancy saves space and bandwidth. Moreover, dispersing a data object across distinct communication links or storage sites limits adversarial access to the whole data and tolerates the loss of some data shares. Existing data dispersal schemes have mostly been based on various mathematical transformations of the data, which induce high computation overhead. This paper presents a novel data dispersal scheme in which each part of a data object is replicated, without encoding, into a subset of data shares according to combinatorial design theory. In particular, data parts are mapped to points and data shares to lines of a projective plane. Data parts are then distributed to data shares using the point-line incidence relations of the plane, so that certain subsets of data shares collectively possess all data parts. The presented scheme combines combinatorial design theory with an inseparability transformation to achieve secure data dispersal at reduced computation, communication and storage costs. Rigorous formal analysis and an experimental study demonstrate significant cost benefits of the presented scheme in comparison to existing methods.
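The point-line construction can be sketched concretely on the smallest projective plane, the Fano plane PG(2,2): parts map to its 7 points, shares to its 7 lines, and each share replicates the three parts on its line. This is a minimal illustration of the incidence idea only; the scheme's inseparability transformation is omitted.

```python
from itertools import combinations

# Fano plane PG(2,2): 7 points (data parts), 7 lines (data shares). Each
# share stores, unencoded, the three parts incident to its line, so every
# part is replicated on exactly three shares.
LINES = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]
parts = [f"part-{p}" for p in range(7)]
shares = [{p: parts[p] for p in line} for line in LINES]

def recoverable(share_subset):
    """True if the subset of shares collectively holds all 7 parts."""
    return set().union(*(s.keys() for s in share_subset)) == set(range(7))

# Only 4 lines avoid any given point, so ANY 5 of the 7 shares suffice.
print(all(recoverable(c) for c in combinations(shares, 5)))  # True
```

Since exactly three lines pass through each point, at most four shares can miss a given part, which is why every 5-share subset recovers everything while some 4-share subsets do not.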
Abstract:
Importance of the field: The shift in focus from ligand-based design approaches to target-based discovery over the last two to three decades has been a major milestone in drug discovery research. Currently, the field is witnessing another major paradigm shift, leaning towards holistic systems-based approaches rather than the reductionist single-molecule-based methods. The effect of this new trend is likely to be felt strongly in terms of new strategies for therapeutic intervention, new targets individually and in combination, and the design of specific and safer drugs. Computational modeling and simulation form important constituents of new-age biology because they are essential to comprehend the large-scale data generated by high-throughput experiments and to generate hypotheses, which are typically iterated with experimental validation. Areas covered in this review: This review focuses on the repertoire of systems-level computational approaches currently available for target identification. The review starts with a discussion of levels of abstraction of biological systems and describes the different modeling methodologies that are available for this purpose. It then focuses on how such modeling and simulation can be applied to drug target discovery. Finally, it discusses methods for studying other important issues, such as understanding targetability, identifying target combinations and predicting drug resistance, and considering them during the target identification stage itself. What the reader will gain: The reader will get an account of the various approaches for target discovery and the need for systems approaches, followed by an overview of the different modeling and simulation approaches that have been developed. An idea of the promise and limitations of the various approaches and perspectives for future development will also be obtained.
Take home message: Systems thinking has now come of age, enabling a "bird's eye view" of the biological systems under study, while at the same time allowing us to "zoom in", where necessary, for a detailed description of individual components. A number of different methods available for computational modeling and simulation of biological systems can be used effectively for drug target discovery.
Abstract:
In this paper, we present a low-complexity algorithm for detection in high-rate, non-orthogonal space-time block coded (STBC) large multiple-input multiple-output (MIMO) systems that achieve high spectral efficiencies of the order of tens of bps/Hz. We also present a training-based iterative detection/channel estimation scheme for such large STBC MIMO systems. Our simulation results show that excellent bit error rate and nearness-to-capacity performance are achieved by the proposed multistage likelihood ascent search (M-LAS) detector in conjunction with the proposed iterative detection/channel estimation scheme, at low complexity. The fact that we could show such good results for large STBCs, such as 16 × 16 and 32 × 32 codes from Cyclic Division Algebras (CDA) operating at spectral efficiencies in excess of 20 bps/Hz (even after accounting for the overheads of pilot-based training for channel estimation and turbo coding), establishes the effectiveness of the proposed detector and channel estimator. We decode perfect codes of large dimensions using the proposed detector. With the feasibility of such a low-complexity detection/channel estimation scheme, large-MIMO systems with tens of antennas operating at spectral efficiencies of several tens of bps/Hz can become practical, enabling interesting high data rate wireless applications.
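The core of a likelihood ascent search can be sketched in a few lines: starting from an initial symbol vector, greedily flip any single symbol that lowers the ML cost ||y − Hx||² until no flip helps. This toy real-valued BPSK version is a sketch of the search principle only, under assumed toy dimensions; it is not the paper's multistage M-LAS detector or its STBC/CDA machinery.

```python
import random

# Minimal one-bit-flip likelihood ascent search for real-valued BPSK --
# a sketch of the search principle only, not the paper's multistage M-LAS
# detector. All dimensions and the channel model are toy-sized assumptions.
def ml_cost(H, x, y):
    """ML metric ||y - H x||^2 for real H, x, y."""
    return sum((yi - sum(hij * xj for hij, xj in zip(row, x))) ** 2
               for row, yi in zip(H, y))

def las_detect(H, y, x0):
    x = list(x0)
    improved = True
    while improved:                  # stop at a local minimum of the cost
        improved = False
        base = ml_cost(H, x, y)
        for k in range(len(x)):
            x[k] = -x[k]             # tentatively flip symbol k
            if ml_cost(H, x, y) < base:
                improved = True      # keep the flip, rescan from the top
                break
            x[k] = -x[k]             # revert the flip
    return x

random.seed(1)
n = 8
H = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
x_true = [random.choice([-1, 1]) for _ in range(n)]
y = [sum(h * x for h, x in zip(row, x_true)) for row in H]  # noiseless rx
x_hat = las_detect(H, y, [1] * n)
print(ml_cost(H, x_hat, y) <= ml_cost(H, [1] * n, y))  # True: never worse
```

Each iteration costs little and the cost is monotonically non-increasing, which is the source of the scheme's low complexity; the terminal point is a "fixed point" where no single-symbol flip improves the metric.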
Abstract:
The fluorescence properties of a homologous series of fluorescent alkylamines are described. The binding of the probes to erythrocyte membranes increases with the length of the alkyl chain. The probes are shown to interact more strongly with membranes than with protein and lipid model systems. The binding of the probes to the membrane is sensitive to the cation concentration of the medium.
Abstract:
The lead-acid battery is often the weakest link in photovoltaic (PV) installations. Accordingly, various versions of lead-acid batteries, namely flooded, gelled, absorbent glass-mat (AGM) and hybrid, have been assembled and performance tested for a PV stand-alone lighting system. The study suggests that hybrid VRLA batteries, which exhibit both the high power density of the absorbent glass-mat design and the improved thermal properties of the gel design, are appropriate for such an application. Among the VRLA-type batteries studied here, water loss for the hybrid VRLA batteries is minimal, and charge acceptance during service at high temperatures is better than that of their AGM counterparts.
Abstract:
In this paper, an approach for obtaining the depth and section modulus of a cantilever sheet pile wall using an inverse reliability method is described. The proposed procedure employs the inverse first-order reliability method to obtain the design penetration depth and section modulus of the steel sheet pile wall such that the reliability of the wall against the considered failure modes meets a desired level of safety. A sensitivity analysis is conducted to assess the effect of uncertainties in design parameters on the reliability of cantilever sheet pile walls. The analysis is performed by treating the backfill soil properties, the depth of the water table from the top of the sheet pile wall, the yield strength of the steel and the section modulus of the pile as random variables. Two limit states, viz., rotational and flexural failure of the sheet pile wall, are considered. The results of this approach are used to develop a set of reliability-based design charts for different coefficients of variation of the friction angle of the backfill (5%, 10% and 15%). System reliability considerations in terms of series and parallel systems are also studied.
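The check that inverse FORM inverts can be sketched in the forward direction with crude Monte Carlo on a single limit state, here flexural: g = fy·S − Mmax. The distributions, their parameters, and the candidate section modulus below are all hypothetical stand-ins, and independent normal variables substitute for whatever distributions the study adopted.

```python
import random

# Forward Monte Carlo check of the flexural limit state g = fy * S - M_max
# for a candidate section modulus S. Distributions, parameters and S are
# hypothetical; the paper's inverse FORM solves for S at a target reliability.
random.seed(42)

def failure_probability(s_mod, n=200_000):
    failures = 0
    for _ in range(n):
        fy = random.gauss(250e3, 20e3)     # yield strength, kPa
        m_max = random.gauss(120.0, 18.0)  # maximum bending moment, kN m
        if fy * s_mod - m_max < 0.0:       # limit state violated
            failures += 1
    return failures / n

S = 0.0008                                 # candidate section modulus, m^3
pf = failure_probability(S)
print(pf)                                  # small failure probability
```

For these assumed numbers g is normal with mean 80 and standard deviation about 24.1, so the reliability index is roughly β ≈ 3.3; inverse FORM works the other way, fixing a target β and solving for the design variables S and penetration depth directly.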
Abstract:
Considering the staggering benefits of high-performance schools, it seems an obvious choice to “go green.” High-performance schools offer an exceptionally cost-effective means to enhance student learning, using on average 33 percent less energy than conventionally designed schools, and provide substantial health gains, including reduced respiratory problems and absenteeism. According to the 2006 study Greening America's Schools: Costs and Benefits, co-sponsored by the American Institute of Architects (AIA) and Capital E, a green building consulting firm, high-performance lighting is a key element of healthy learning environments, contributing to improved test scores, reduced off-task behavior, and higher achievement among students. Few argue this point more convincingly than architect Heinz Rudolf of Portland, Oregon-based Boora Architects, who has designed sustainable schools for more than 80 school districts in Oregon, Washington, Colorado, and Wyoming, and has pioneered the high-performance school movement. Boora's recently completed project, the Baker Prairie Middle School in Canby, Oregon, is one of the most sustainable K-12 facilities in the state and illustrates Rudolf's progressive, research-intensive approach to school design.
Abstract:
The current approach for protecting the receiving water environment from urban stormwater pollution is the adoption of structural measures commonly referred to as Water Sensitive Urban Design (WSUD). The treatment efficiency of WSUD measures closely depends on the design of the specific treatment units. As stormwater quality is influenced by rainfall characteristics, the selection of appropriate rainfall events for treatment design is essential to ensure the effectiveness of WSUD systems. Based on extensive field investigations in four urban residential catchments at Gold Coast, Australia, and computer modelling, this paper details a technically robust approach for the selection of rainfall events for stormwater treatment design using a three-component model. The modelling results confirmed that high intensity-short duration events produce 58.0% of the TS load while generating only 29.1% of the total runoff volume. Additionally, rainfall events smaller than the 6-month average recurrence interval (ARI) generate a greater cumulative runoff volume (68.4% of the total annual runoff volume) and TS load (68.6% of the TS load exported) than rainfall events larger than the 6-month ARI. The results suggest that for the study catchments, stormwater treatment design could be based on rainfall with a mean average intensity of 31 mm/h and a duration of 0.4 h. These outcomes also confirmed that selecting smaller-ARI rainfall events with high intensity and short duration as the threshold for treatment system design is the most feasible approach, since these events cumulatively generate a major portion of the annual pollutant load compared to the other types of events, despite producing a relatively smaller runoff volume. This implies that designs based on small, more frequent rainfall events rather than larger rainfall events would be appropriate in the context of treatment performance, cost-effectiveness and possible savings in the land area needed.
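The event-selection bookkeeping behind percentages like these can be sketched as a simple classification: bin rainfall events by average intensity and duration, then total each bin's share of runoff volume and TS load. The four events and the thresholds below are hypothetical stand-ins; only the 31 mm/h and 0.4 h figures echo the paper.

```python
# Toy event-selection bookkeeping for treatment design: bin rainfall events
# by average intensity and duration, then total each bin's share of runoff
# volume and TS load. The four events and the thresholds are hypothetical.
events = [  # (avg intensity mm/h, duration h, runoff m^3, TS load kg)
    (35.0, 0.4, 120.0, 14.0),
    (28.0, 0.5, 100.0, 11.0),
    (6.0, 3.0, 240.0, 7.0),
    (12.0, 1.5, 180.0, 9.0),
]

def share(selected, column):
    total = sum(e[column] for e in events)
    return sum(e[column] for e in selected) / total

# High intensity-short duration component (thresholds chosen near the
# paper's 31 mm/h mean intensity and 0.4 h mean duration).
high_short = [e for e in events if e[0] >= 25.0 and e[1] <= 0.5]
print(share(high_short, 2), share(high_short, 3))  # runoff share, TS share
```

In this toy set the high intensity-short duration bin carries a much larger share of the TS load than of the runoff volume, mirroring the disproportion the paper reports and the rationale for sizing treatment to those events.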
Abstract:
FET-based MEMS microphones consist of a flexible diaphragm that acts as the moving gate of a transistor. The integrated electromechanical transducer can be made more sensitive to external sound pressure by increasing either its mechanical or its electrical sensitivity. We propose a method of increasing the overall sensitivity of the microphone by increasing its electrical sensitivity. The proposed microphone uses the transistor biased in the sub-threshold region, where the drain current depends exponentially on the difference between the gate-to-source voltage and the threshold voltage. The device is thus made more sensitive without adding any complexity to the mechanical design of the diaphragm.
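The electrical-sensitivity argument follows from the standard subthreshold current law Id = I0·exp((Vgs − Vth)/(n·VT)): a tiny diaphragm-induced shift in effective gate voltage changes Id by a fixed relative factor of about dv/(n·VT). The parameter values below are illustrative assumptions, not the device's.

```python
import math

# Subthreshold drain current Id = I0 * exp((Vgs - Vth) / (n * VT)): a small
# diaphragm-induced gate-voltage change yields an exponentially scaled
# current change. Parameter values are illustrative, not the device's.
I0, VTH, N, VT = 1e-9, 0.5, 1.5, 0.0259   # A, V, ideality factor, V

def drain_current(vgs):
    return I0 * math.exp((vgs - VTH) / (N * VT))

vgs = 0.40          # biased below threshold
dv = 1e-3           # 1 mV effective gate swing from sound pressure
rel_change = (drain_current(vgs + dv) - drain_current(vgs)) / drain_current(vgs)
print(rel_change)   # about dv / (n * VT), i.e. roughly 2.6% per mV
```

Above threshold the current varies only polynomially with gate voltage, so the same 1 mV swing would produce a far smaller relative change; this contrast is the source of the electrical sensitivity gain.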
Abstract:
Flexible constraint length channel decoders are required for software defined radios. This paper presents a novel scalable scheme for realizing flexible constraint length Viterbi decoders on a de Bruijn interconnection network. Architectures for flexible decoders using the flattened butterfly and shuffle-exchange networks are also described. It is shown that these networks provide favourable substrates for realizing flexible convolutional decoders. Synthesis results for the three networks are provided and compared. An architecture based on a 2D mesh, a topology with a nominally smaller silicon area requirement, is also considered as a fourth point of comparison. It is found that, of all the networks considered, the de Bruijn network offers the best trade-off between area and throughput.
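The fit between the network and the decoder rests on a standard observation: the trellis of a constraint-length-K convolutional code has 2^(K−1) states, and shifting in a new input bit maps state s to (2s + bit) mod 2^(K−1), which is exactly the edge rule of the binary de Bruijn graph. A minimal sketch of that state-transition rule:

```python
# The trellis of a constraint-length-K convolutional code has 2**(K-1)
# states; shifting in an input bit maps state s to (2*s + bit) mod 2**(K-1),
# which is exactly the edge rule of the binary de Bruijn graph.
def de_bruijn_successors(k):
    m = 2 ** (k - 1)                 # number of trellis states
    return {s: [(2 * s) % m, (2 * s + 1) % m] for s in range(m)}

succ = de_bruijn_successors(4)       # K = 4 -> 8 states
print(succ[3])                       # state 011 -> states 110 and 111
```

Each state also has in-degree 2, so the add-compare-select butterflies map directly onto the network's links, and the same rule holds for every K, which is what a flexible constraint length decoder exploits.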
Abstract:
Although simulation is an emerging research method that has shown promise in several disciplines, it has received relatively little attention in information systems research. This paper illustrates a framework for employing simulation to study IT value cocreation. Although previous studies have identified factors driving IT value cocreation, its underlying process remains unclear. Simulation can address this limitation by exploring that underlying process through computational experiments. The simulation framework in this paper is based on an extended NK model, with agent-based modeling employed as the theoretical basis for the NK model extensions.
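A minimal NK landscape with a hill-climbing agent conveys the modeling substrate; everything below (the choice of N and K, the lazily built random contribution table, the adaptive walk) is a generic textbook sketch under assumed parameters, not the authors' extended agent-based NK model.

```python
import random

# Minimal NK fitness landscape with an adaptive-walk agent: a generic sketch
# of the substrate the paper extends, with assumed N, K and a lazily built
# random contribution table -- not the authors' extended agent-based model.
random.seed(7)
N, K = 8, 2
contrib = {}                                   # (locus, neighbourhood) -> w

def fitness(config):
    total = 0.0
    for i in range(N):
        key = (i, tuple(config[(i + j) % N] for j in range(K + 1)))
        if key not in contrib:                 # fix the landscape lazily
            contrib[key] = random.random()
        total += contrib[key]
    return total / N

def adaptive_walk(config, steps=200):
    config = list(config)
    for _ in range(steps):                     # accept only uphill flips
        i = random.randrange(N)
        trial = list(config)
        trial[i] = 1 - trial[i]
        if fitness(trial) > fitness(config):
            config = trial
    return config

start = [random.randint(0, 1) for _ in range(N)]
end = adaptive_walk(start)
print(fitness(end) >= fitness(start))          # True: walk never descends
```

Raising K increases epistatic coupling and ruggedness, which is the knob such computational experiments turn to probe how interdependence shapes the value-cocreation process.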
Abstract:
Inductors are important energy storage elements that are used as filters in switching power converters. The operating efficiency of a power inductor depends on the initial design choices, and inductors remain among the most inefficient elements in a power converter. The focus of this paper is to explore the inductor design procedure from the standpoint of efficiency and operating temperature. A modified form of the area product approach is used as the starting point for the inductor design. Equations that estimate the power loss in the core and the copper winding are described. The surface temperature of the inductor is modelled using heat transfer equations for radiation and natural convection. All design assumptions are verified against experimental data, and the results show a good match with the analysis.
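The area-product starting point and the loss estimates can be sketched numerically. The formula Ap = L·Ipk·Irms/(Bmax·J·Ku) is the standard form of the approach; the numeric inputs, the assumed winding resistance, and the Steinmetz coefficients below are illustrative assumptions, not the paper's modified method or measured data.

```python
# Area-product sizing plus rough loss estimates for a filter inductor.
# All numeric values below are illustrative assumptions.
L = 100e-6                    # inductance, H
I_PK, I_RMS = 10.0, 7.0       # peak / rms inductor current, A
B_MAX = 0.3                   # allowed peak flux density, T
J = 4e6                       # winding current density, A/m^2
K_U = 0.4                     # window utilization factor

# Area product Ap = Aw * Ac = L * Ipk * Irms / (Bmax * J * Ku), in m^4
ap = (L * I_PK * I_RMS) / (B_MAX * J * K_U)

# Copper loss with an assumed winding resistance; Steinmetz core loss
# density k * f^a * B^b (assumed coefficients) times an assumed core volume.
p_cu = I_RMS ** 2 * 0.015                        # W, assumed 15 mOhm winding
k_c, a, b = 3.0, 1.4, 2.4
f_sw, b_ac, core_vol = 100e3, 0.1, 10e-6         # Hz, T (AC ripple), m^3
p_core = k_c * (f_sw ** a) * (b_ac ** b) * core_vol

print(ap * 1e8, p_cu, p_core)   # Ap in cm^4, losses in W
```

A core whose catalogue area product exceeds Ap (here about 1.5 cm^4) is a candidate; the core and copper loss terms then feed the radiation and natural-convection equations that predict the surface temperature.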