934 results for "dynamic methods"
Abstract:
Air distribution systems are among the major consumers of electrical energy in air-conditioned commercial buildings, maintaining a comfortable indoor thermal environment and air quality by supplying specified amounts of treated air to different zones. The sizes of the air distribution lines affect the energy efficiency of the distribution system. Equal friction and static regain are two well-known approaches for sizing air distribution lines. Out of concern for the life-cycle cost of air distribution systems, the T and IPS methods have been developed. Hitherto, all these methods have been based on static design conditions, so the dynamic performance of the system has not yet been addressed, even though air distribution systems mostly operate under dynamic rather than static conditions. Moreover, none of the existing methods considers thermal comfort or environmental impacts. This study investigates the existing methods for sizing air distribution systems and proposes a dynamic approach to size optimisation of air distribution lines that takes into account optimisation criteria such as economic aspects, environmental impacts and technical performance. These criteria are addressed through whole-life costing analysis, life cycle assessment and deviation from the set-point temperature of different zones, respectively. Integrating these criteria into the TRNSYS software produces a novel dynamic optimisation approach for duct sizing. Because the different criteria are integrated into a well-known performance evaluation package, this approach can be easily adopted by designers within the busy nature of design practice. Comparison of this integrated approach with the existing methods reveals that, under the defined criteria, system performance improves by up to 15%. This approach represents a significant step towards net-zero-emission buildings.
Abstract:
Visual telepresence systems that use virtual-reality-style helmet-mounted displays have a number of limitations. The geometry of the camera positions and of the display is fixed and is best suited to viewing elements of a scene at one particular distance. In such a system, the operator's ability to gaze around without head movement is severely limited, and a trade-off must be made between poor viewing resolution and a narrow field of view. To address these limitations, a prototype system has been developed in which the geometry of the displays and cameras is dynamically controlled by the operator's eye movements. This paper explores why it is necessary to actively adjust both the display system and the cameras, and justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image-processing methods. The electronic and mechanical design is described, including the optical arrangements and control algorithms. An assessment of the system's performance against a fixed camera/display system is presented, with operators assigned basic tasks involving depth and distance/size perception. The sensitivity to variations in the transient performance of display and camera vergence is also assessed.
Abstract:
Three methods for intercalibrating humidity sounding channels are compared to assess their merits and demerits. The methods use (1) natural targets (Antarctica and tropical oceans), (2) zonal-average brightness temperatures, and (3) simultaneous nadir overpasses (SNOs). The Advanced Microwave Sounding Unit-B instruments onboard the polar-orbiting NOAA 15 and NOAA 16 satellites are used as examples. Antarctica is shown to be useful for identifying some instrument problems but less promising for intercalibrating humidity sounders owing to the large diurnal variations there. Because of their smaller diurnal cycles, the tropical oceans are found to be a good target for estimating intersatellite biases. Estimated biases are more resistant to diurnal differences when data from ascending and descending passes are combined. Biases estimated from zonal-averaged brightness temperatures show large seasonal and latitudinal dependence, which could result from diurnal-cycle aliasing and the scene-radiance dependence of the biases; this method may not be the best for channels with significant surface contributions. We have also tested the impact of clouds on the estimated biases and found that it is not significant, at least for the tropical-ocean estimates. Biases estimated from SNOs are the least influenced by diurnal-cycle aliasing and cloud impacts; however, SNOs cover only a relatively small part of the dynamic range of observed brightness temperatures.
Abstract:
Neurovascular coupling in response to stimulation of the rat barrel cortex was investigated using concurrent multichannel electrophysiology and laser Doppler flowmetry. The data were used to build a linear dynamic model relating neural activity to blood flow. Local field potential time series were subjected to current source density analysis, and the time series of a layer IV sink of the barrel cortex was used as the input to the model. The model output was the time series of changes in regional cerebral blood flow (CBF). We show that this model can provide an excellent fit to the CBF responses for stimulus durations of up to 16 s. The model consisted of two coupled components representing vascular dilation and constriction, and the complex temporal characteristics of the CBF time series were reproduced by the relatively simple balance of these two components. We show that the impulse response obtained under the 16-s stimulation condition generalised to provide good predictions of the data from the shorter-duration stimulation conditions. Furthermore, by optimising three of the nine model parameters, the variability in the data can be well accounted for over a wide range of stimulus conditions. By establishing linearity, classic systems-analysis methods can be used to generate and explore a range of equivalent model structures (e.g., feed-forward or feedback) to guide the experimental investigation of the control of vascular dilation and constriction following stimulation.
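The linear two-component structure described above can be sketched as a convolution model: a fast dilatory kernel minus a slower constrictive kernel, convolved with the neural input. All kernel parameters below are illustrative inventions, not the paper's fitted values; the point is only that, once linearity holds, one impulse response estimated from a long stimulus predicts responses to shorter ones.

```python
import numpy as np

def impulse_response(t, a1=1.0, tau1=2.0, a2=0.4, tau2=6.0):
    """Hypothetical two-component kernel: a fast dilatory component minus a
    slower constrictive component (parameter values are illustrative only)."""
    return a1 * (t / tau1) * np.exp(-t / tau1) - a2 * (t / tau2) * np.exp(-t / tau2)

def predict_cbf(neural_input, dt=0.1, t_max=30.0):
    """Linearity: the predicted CBF change is the convolution of the neural
    time series with the fixed impulse response."""
    t = np.arange(0.0, t_max, dt)
    h = impulse_response(t)
    return np.convolve(neural_input, h)[: len(neural_input)] * dt

# One kernel, two stimulus durations: a 16-s boxcar and a 2-s boxcar.
dt = 0.1
time = np.arange(0.0, 40.0, dt)
stim_16s = (time < 16.0).astype(float)
stim_2s = (time < 2.0).astype(float)
cbf_long = predict_cbf(stim_16s, dt)
cbf_short = predict_cbf(stim_2s, dt)
```

Because the model is linear and time-invariant, the same kernel serves for both durations; only the input boxcar changes.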
Abstract:
Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can be either found in real-world distributed applications or induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
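The communication saving behind dynamic process groups can be illustrated with a toy serial simulation (this is not the paper's protocol; the worker count, cluster count and data layout are invented). Each simulated worker contributes a partial (sum, count) only for clusters that own some of its points, so under a non-uniform distribution the per-cluster reduction involves a small dynamic group rather than all workers.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans_step(partitions, centroids):
    """One Lloyd iteration over P worker partitions. Each worker contributes a
    (sum, count) message only for clusters that own some of its points, so the
    reduction for a cluster involves only that dynamic group of workers.
    (Toy serial simulation of the idea, not the paper's actual protocol.)"""
    k = len(centroids)
    sums = np.zeros_like(centroids)
    counts = np.zeros(k)
    messages_dynamic = 0
    for part in partitions:                       # each 'part' is one worker's data
        d = np.linalg.norm(part[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in np.unique(labels):               # contribute only to owned clusters
            sums[c] += part[labels == c].sum(axis=0)
            counts[c] += (labels == c).sum()
            messages_dynamic += 1
    new_centroids = np.where(counts[:, None] > 0,
                             sums / np.maximum(counts, 1)[:, None], centroids)
    return new_centroids, messages_dynamic

# Non-uniform distribution: each worker's points sit near one cluster centre,
# so most workers contribute to few clusters and the dynamic groups stay small.
P, k = 8, 4
centres = rng.normal(size=(k, 2)) * 10
partitions = [centres[i % k] + rng.normal(scale=0.5, size=(50, 2)) for i in range(P)]
centroids, msgs = kmeans_step(partitions, centres.copy())
print(f"dynamic-group messages: {msgs}; a global all-to-all would cost {P * k}")
```

With uniform data every worker would touch every cluster and the message count would approach the global P * k cost; skewed data is what makes the dynamic groups pay off.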
Abstract:
A discrete-time random process that can generate bursty sequences of events is described. A Bernoulli process, in which the probability of an event occurring at time t is a fixed probability x, is modified to include a memory effect whereby the event probability is increased in proportion to the number of events that occurred within a given amount of time preceding t. For small values of x, the interevent time distribution follows a power law with exponent −2−x. We consider a dynamic network in which each node forms and breaks connections according to this process. The value of x for each node depends on the fitness distribution ρ(x) from which it is drawn; we find exact solutions for the expected degree distribution for a variety of possible fitness distributions, both with and without the memory effect. This work can potentially lead to methods for uncovering hidden fitness distributions from fast-changing temporal network data, such as online social communications and fMRI scans.
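The memory-modified Bernoulli process described above can be simulated in a few lines. The memory window and the per-event probability boost below are illustrative choices, not parameters from the paper; the sketch only shows the mechanism that produces bursts.

```python
import random

def bursty_sequence(x, T, window=10, boost=0.05, seed=1):
    """Bernoulli process with memory (illustrative parameters): the event
    probability at time t is the base rate x plus `boost` per event observed
    in the preceding `window` steps, capped at 1."""
    rng = random.Random(seed)
    events, seq = [], []
    for t in range(T):
        recent = 0
        for s in reversed(events):        # count events in [t - window, t - 1]
            if s >= t - window:
                recent += 1
            else:
                break
        p = min(1.0, x + boost * recent)
        hit = rng.random() < p
        seq.append(hit)
        if hit:
            events.append(t)
    return seq

def interevent_times(seq):
    times = [t for t, hit in enumerate(seq) if hit]
    return [b - a for a, b in zip(times, times[1:])]

seq = bursty_sequence(x=0.01, T=20000)
gaps = interevent_times(seq)
# With memory, short gaps are over-represented relative to a plain Bernoulli
# process, whose gaps would be geometric with mean about 1/x.
```

Setting `boost=0.0` recovers the memoryless Bernoulli baseline, which makes the comparison of interevent-time distributions straightforward.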
Abstract:
Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents the yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance to reject it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if the objects included in the no-filter are chosen so that it recognises as many false positives as possible but no true positives, producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To this end, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Given the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed that uses a reduced ILP for the value-function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in branch-and-bound solver, and it is therefore recommended for use in yes-no Bloom filters.
In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
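The yes-no construction can be sketched as follows. The greedy pass below is a simple stand-in for the paper's ILP/ADP selection of false positives, and all filter sizes, hash counts and key names are arbitrary; it does, however, enforce the same constraint that no true member may be rejected.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter with k hash positions derived from SHA-256."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

def build_yes_no(members, candidates, m_yes=512, m_no=256, k=3):
    """Yes-no Bloom filter: the no-filter stores yes-filter false positives,
    chosen here by a greedy pass (a stand-in for the paper's ILP/ADP model),
    subject to the constraint that no true member becomes rejected."""
    yes, no = BloomFilter(m_yes, k), BloomFilter(m_no, k)
    for x in members:
        yes.add(x)
    for x in candidates:
        if x in yes:                      # a false positive of the yes-filter
            saved = no.bits[:]
            no.add(x)
            if any(m in no for m in members):
                no.bits = saved           # would reject a true member: undo
    return yes, no

members = [f"key-{i}" for i in range(100)]
others = [f"other-{i}" for i in range(2000)]     # none of these are members
yes, no = build_yes_no(members, others)

fp_plain = [x for x in others if x in yes]                  # yes-filter alone
fp_yesno = [x for x in others if x in yes and x not in no]  # yes-no filter
```

A query answers "yes" only if the item is in the yes-filter and not in the no-filter, so the false-positive list can only shrink while membership answers for true members are preserved by construction.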
Abstract:
In Information Visualization, adding and removing data elements can strongly impact the underlying visual space. We have developed an inherently incremental technique (incBoard) that maintains a coherent disposition of elements from a dynamic multidimensional data set on a 2D grid as the set changes. Here, we introduce a novel layout that uses pairwise similarity from grid neighbors, as defined in incBoard, to reposition elements on the visual space, free from constraints imposed by the grid. The board continues to be updated and can be displayed alongside the new space. As similar items are placed together, while dissimilar neighbors are moved apart, it supports users in the identification of clusters and subsets of related elements. Densely populated areas identified in the incSpace can be efficiently explored with the corresponding incBoard visualization, which is not susceptible to occlusion. The solution remains inherently incremental and maintains a coherent disposition of elements, even for fully renewed sets. The algorithm considers relative positions for the initial placement of elements, and raw dissimilarity to fine tune the visualization. It has low computational cost, with complexity depending only on the size of the currently viewed subset, V. Thus, a data set of size N can be sequentially displayed in O(N) time, reaching O(N^2) only if the complete set is simultaneously displayed.
Abstract:
The traditional reduction methods to represent the fusion cross sections of different systems are flawed when attempting to completely eliminate the geometrical aspects, such as the heights and radii of the barriers, and the static effects associated with the excess neutrons or protons in weakly bound nuclei. We remedy this by introducing a new dimensionless universal function, which allows the separation and disentanglement of the static and dynamic aspects of the breakup coupling effects connected with the excess nucleons. Applying this new reduction procedure to fusion data of several weakly bound systems, we find a systematic suppression of complete fusion above the Coulomb barrier and enhancement below it. Different behaviors are found for the total fusion cross sections. They are appreciably suppressed in collisions of neutron-halo nuclei, while they are practically not affected by the breakup coupling in cases of stable weakly bound nuclei. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
A structure-dynamic approach to cortical systems is reported which is based on the number of paths and the accessibility of each node. The latter measurement is obtained by performing self-avoiding random walks in the respective networks, so as to simulate dynamics, and then calculating the entropies of the transition probabilities for walks starting from each node. Cortical networks of three species, namely cat, macaque and human, are studied considering structural and dynamical aspects. It is verified that the human cortical network presents the highest accessibility and number of paths (in terms of z-scores). The correlation between the number of paths and accessibility is also investigated as a means to quantify the level of independence between paths connecting pairs of nodes in cortical networks. By comparing the cortical networks of the three species, it is verified that the human cortical network tends to present the largest number of independent paths of length greater than four. These results suggest that the human cortical network is potentially the most resilient to brain injuries. (C) 2009 Elsevier B.V. All rights reserved.
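The accessibility measurement described above can be sketched on a toy graph: run self-avoiding random walks from a node, take the empirical distribution of end nodes, and report the exponential of its entropy (the effective number of nodes reached). The walk length, walk count and example graph below are invented for illustration, not taken from the paper.

```python
import math
import random
from collections import Counter

def accessibility(adj, source, length=3, walks=4000, seed=0):
    """Estimate a node's accessibility: run self-avoiding random walks of a
    fixed length from `source`, estimate the probability of ending at each
    node, and return exp(entropy) of that end-node distribution."""
    rng = random.Random(seed)
    ends = Counter()
    for _ in range(walks):
        node, visited = source, {source}
        for _ in range(length):
            unvisited = [v for v in adj[node] if v not in visited]
            if not unvisited:
                break                      # self-avoiding walk is stuck
            node = rng.choice(unvisited)
            visited.add(node)
        ends[node] += 1
    entropy = -sum((c / walks) * math.log(c / walks) for c in ends.values())
    return math.exp(entropy)

# Toy graph: node 0 is a hub, node 5 hangs off a short chain; the hub should
# come out as the more accessible of the two.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5], 5: [4]}
acc_hub = accessibility(adj, 0)
acc_leaf = accessibility(adj, 5)
```

Because exp(entropy) counts nodes weighted by how evenly the walks reach them, a node whose walks end in many distinct places scores higher than one funnelled through a single chain.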
Abstract:
We present a minor but essential modification to the CODEX 1D-MAS exchange experiment. The new CONTRA method, which requires only minor changes to the original sequence, has advantages over the previously introduced S-CODEX, since it is less sensitive to artefacts caused by finite pulse lengths. The performance of this variant, including the finite-pulse effect, was confirmed by SIMPSON calculations and demonstrated on a number of dynamic systems. (C) 2007 Elsevier Inc. All rights reserved.
Abstract:
The fluorescence quenching kinetics of two porphyrin dendrimer series (GnTPPH(2) and GnPZn) by different types of quenchers is reported. The microenvironment surrounding the core in GnPZn was probed via core-quencher interactions using benzimidazole. The dependence of the quencher binding constant (K(a)) on generation indicates a weak interaction between the branches and the core of the porphyrin dendrimer. The similar free volume in dendrimers of the third and fourth generations suggests that structural collapse in high generations occurs by packing of the dendrimer's peripheral layer. Dynamic fluorescence quenching of the porphyrin core by 1,3-dicyanomethylene-2-methyl-2-pentyl-indan (PDCMI) in GnTPPH(2) is a distance-dependent electron transfer process with an exponential attenuation factor beta = 0.33 angstrom(-1). Quenching by 1,2-dibromobenzene occurs by diffusion of the quencher toward the porphyrin core, and its rate constant is practically independent of dendrimer generation.
Abstract:
A variety of substrates have been used for the fabrication of microchips for DNA extraction, PCR amplification, and DNA fragment separation, including the more conventional glass and silicon as well as alternative polymer-based materials. Polyester represents one such polymer, and the laser-printing of toner onto polyester films has been shown to be effective for generating polyester-toner (PeT) microfluidic devices with channel depths on the order of tens of micrometers. Here, we describe a novel and simple process that allows for the production of multilayer, high-aspect-ratio PeT microdevices with substantially larger channel depths. This innovative process utilizes a CO2 laser to create the microchannel in polyester sheets containing a uniform layer of printed toner, and multilayer devices can easily be constructed by sandwiching the channel layer between uncoated cover sheets of polyester containing precut access holes. The process allows the fabrication of deep channels, approximately 270 μm, and we demonstrate the effectiveness of multilayer PeT microchips for dynamic solid phase extraction (dSPE) and PCR amplification. With the former, we found that (i) more than 65% of DNA from 0.6 μL of blood was recovered, (ii) the resultant DNA was concentrated to greater than 3 ng/μL (which was better than other chip-based extraction methods), and (iii) the DNA recovered was compatible with downstream microchip-based PCR amplification. Illustrative of the compatibility of PeT microchips with the PCR process, the successful amplification of a 520 bp fragment of lambda-phage DNA in a conventional thermocycler is shown. The ability to handle the diverse chemistries associated with DNA purification and extraction is a testimony to the potential utility of PeT microchips beyond separations and presents a promising new disposable platform for genetic analysis that is low cost and easy to fabricate.
Abstract:
The possibility of compressing analyte bands at the beginning of CE runs has many advantages. Analytes at low concentration can be analyzed with high signal-to-noise ratios by using so-called sample stacking methods. Moreover, sample injections with very narrow initial band widths (small initial standard deviations) are sometimes useful, especially if high resolution among the bands is required in the shortest run time. In the present work, a method of sample stacking is proposed and demonstrated. It is based on BGEs whose pH is highly temperature-sensitive (high dpH/dT) and analytes with low dpK(a)/dT. High thermal sensitivity means that the working pK(a) of the BGE has a high dpK(a)/dT in modulus. For instance, Tris and ethanolamine have dpH/dT = -0.028/°C and -0.029/°C, respectively, whereas carboxylic acids have low dpK(a)/dT values, i.e. in the -0.002/°C to +0.002/°C range. The action of cooling and heating sections along the capillary during the runs also affects the local viscosity, conductivity, and electric field strength. The effect of these variables on electrophoretic velocity and band compression is calculated theoretically using a simple model. Finally, this stacking method was demonstrated for amino acids derivatized with naphthalene-2,3-dicarboxaldehyde and fluorescamine, using a temperature difference of 70 °C between two neighboring sections and Tris as the separation buffer. In this case, the BGE has a high pH thermal coefficient whereas the carboxylic groups of the analytes have low pK(a) thermal coefficients. The application of these dynamic thermal gradients increased the peak heights (and decreased the peak standard deviations) by a factor of two for aspartic acid and glutamic acid derivatized with naphthalene-2,3-dicarboxaldehyde and for serine derivatized with fluorescamine.
The effect of thermal compression of bands was not observed when runs were performed using phosphate buffer at pH 7 (negative control). Phosphate has a low dpH/dT in this pH range, similar to the dpK(a)/dT of the analytes. It is shown that |dpK(a)/dT - dpH/dT| >> 0 is one determinant factor for obtaining significant stacking from dynamic thermal junctions.
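As a back-of-the-envelope check of scale using the coefficients quoted in the abstract: across a 70 °C thermal junction, the Tris working pH shifts by nearly two units while the analyte pK(a) barely moves, which is the |dpK(a)/dT - dpH/dT| >> 0 condition in action.

```python
# Size of the pH/pKa mismatch across a 70 C thermal junction, using the
# thermal coefficients quoted in the abstract.
dT = 70.0                        # temperature difference between sections (C)
dpH_dT_tris = -0.028             # Tris working-pH coefficient (per C)
dpKa_dT_carboxyl = 0.002         # upper bound for carboxylic analytes (per C)

delta_pH = dpH_dT_tris * dT                            # pH shift of the BGE
mismatch = abs(dpKa_dT_carboxyl - dpH_dT_tris) * dT    # |dpKa/dT - dpH/dT| * dT
```

The roughly two-unit mismatch between buffer pH and analyte pK(a) across the junction is what drives the abrupt mobility change responsible for stacking.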
Abstract:
In a previous work [M. Mandaji, et al., this issue], a sample stacking method was theoretically modeled and experimentally demonstrated for analytes with low dpK(a)/dT (analytes carrying carboxylic groups) and BGEs with high dpH/dT (high pH temperature coefficients). In that work, buffer pH was modulated with temperature, inducing electrophoretic mobility changes in the analytes. In the present work, the opposite conditions are studied and tested, i.e. analytes with high dpK(a)/dT and BGEs that exhibit low dpH/dT. It is well known that organic bases such as amines, imidazoles, and benzimidazoles exhibit high dpK(a)/dT; temperature variations induce instantaneous changes in the basicity of these and other basic groups. Therefore, the electrophoretic velocity of some analytes changes abruptly when temperature variations are applied along the capillary. This is true only if the BGE pH remains constant or changes in the direction opposite to the pK(a) of the analyte. The presence of hot and cold sections along the capillary also affects the local viscosity, conductivity, and electric field strength. The effect of these variables on electrophoretic velocity and band stacking efficacy was also taken into account in the theoretical model presented. Finally, this stacking method is demonstrated for lysine partially derivatized with naphthalene-2,3-dicarboxaldehyde. In this case, the amino group of the lateral chain was left underivatized and only the alpha-amino group was derivatized. Therefore, the basicity of the lateral amino group, and consequently the electrophoretic mobility, was modulated with temperature while the pH of the buffer remained unchanged.