Abstract:
Compact thermal-fluid systems are found in many industries, from aerospace to microelectronics, where a combination of small size, light weight, and fluid networks with a high surface-area-to-volume ratio is necessary. These devices are typically designed with fluid networks consisting of many small parallel channels that pack a large amount of heat transfer surface area into a very small volume, but do so at the cost of increased pumping power requirements. To offset this cost, the use of a branching fluid network for the distribution of coolant within a heat sink is investigated. The goal of the branch design technique is to minimize the entropy generation associated with the combination of viscous dissipation and convection heat transfer experienced by the coolant in the heat sink, while maintaining compact, high heat-transfer-surface-area-to-volume ratios. The derivation of Murray's Law, originally developed to predict the geometry of physiological transport systems, is extended to heat sink designs which minimize entropy generation. Two heat sink designs at different scales are built and tested experimentally and analytically. The first uses this new derivation of Murray's Law. The second uses a combination of Murray's Law and Constructal Theory. The results of the experiments were used to verify the analytical and numerical models. These models were then used to compare the performance of the heat sinks with other compact, high-performance heat sink designs. The results showed that the techniques used to design branching fluid networks significantly improve the performance of active heat sinks. The design experience gained was then used to develop a set of geometric relations which optimize the heat transfer to pumping power ratio of a single cooling channel element. Each element can be connected to others using a set of derived geometric guidelines which govern branch diameters and angles.
The methodology can be used to design branching fluid networks which can fit any geometry.
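Murray's Law referenced above has a simple closed form: at a minimum-work bifurcation, the cube of the parent diameter equals the sum of the cubes of the daughter diameters (d0³ = d1³ + d2³). As a rough illustration of the resulting diameter cascade (the dimensions and the symmetric-branching assumption are illustrative, not taken from the thesis):

```python
# Murray's Law at a bifurcation that minimizes transport work:
# d0**3 = d1**3 + d2**3, so for a symmetric n-way branch each
# daughter diameter is the parent diameter divided by n**(1/3).

def murray_daughter_diameter(parent_d: float, n_daughters: int = 2) -> float:
    """Diameter of each daughter channel for a symmetric n-way branch."""
    return parent_d / n_daughters ** (1.0 / 3.0)

def branch_diameters(root_d: float, levels: int) -> list[float]:
    """Channel diameter at each level of a symmetric bifurcating network."""
    dias = [root_d]
    for _ in range(levels):
        dias.append(murray_daughter_diameter(dias[-1]))
    return dias

# Example: a 4 mm root channel split over three bifurcation levels.
print(branch_diameters(4.0, 3))
```

For a symmetric two-way branch each daughter diameter is the parent diameter divided by 2^(1/3) ≈ 1.26, the scaling that keeps wall shear stress uniform across the network while the total surface area grows.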
Abstract:
Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying LIDAR data at the point level, having linear computational complexity, and preserving the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often incorrectly removes ground measurements in topographically high areas, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes of topographic slopes and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points incorrectly removed by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan.
The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for complex terrain in large LIDAR datasets. The GAPM filter is highly automatic and requires little human input. Therefore, it can significantly reduce the effort of manually processing voluminous LIDAR measurements.
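The constant-slope behaviour that causes the "cut-off" errors can be seen in a minimal sketch of the standard PM filter on a gridded surface (the point-level operation, cluster analysis, and adaptive trend analysis of the GAPM filter are not shown; the threshold formula and parameter values follow the common formulation of the PM filter and are illustrative):

```python
import numpy as np
from scipy.ndimage import grey_opening

def pm_filter(z, cell=1.0, slope=0.3, dh0=0.2, dh_max=2.5, max_win=16):
    """Simplified progressive morphological ground filter on a gridded
    surface z. Returns a boolean mask of cells classified as ground.
    Window sizes grow geometrically; the elevation-difference threshold
    grows with the window through the (constant) slope parameter."""
    ground = np.ones(z.shape, dtype=bool)
    surface = z.astype(float).copy()
    win = 2
    while win <= max_win:
        opened = grey_opening(surface, size=win)  # removes objects < win cells
        # Threshold rises with window size via the constant slope parameter.
        dh = min(slope * (win - 1) * cell + dh0, dh_max)
        ground &= (surface - opened) <= dh
        surface = opened
        win *= 2
    return ground

# Toy example: flat terrain with a 4 x 4 cell, 5 m tall "building".
z = np.zeros((32, 32))
z[10:14, 10:14] = 5.0
mask = pm_filter(z)
print(f"ground cells: {mask.sum()} of {mask.size}")
```

On this flat toy surface the building is removed cleanly; on sloped terrain the same constant slope parameter is what clips legitimate topographic highs, which is the failure mode the GAPM filter addresses.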
Abstract:
The estimation of pavement layer moduli through the use of an artificial neural network is a new concept which provides a less strenuous strategy for backcalculation procedures. Artificial neural networks are biologically inspired models of the human nervous system, specifically designed to learn a mapping between inputs and outputs. This study demonstrates how an artificial neural network uses non-destructive pavement test data to determine flexible pavement layer moduli. The input parameters include plate loadings, corresponding sensor deflections, temperature of the pavement surface, pavement layer thicknesses, and independently deduced pavement layer moduli.
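As a hedged illustration of the idea (not the network architecture or data of this study), a one-hidden-layer network trained by plain gradient descent can learn a synthetic deflection-to-modulus style mapping:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 7 inputs standing in for load, sensor
# deflections, surface temperature, and layer thickness -> 1 output
# standing in for a layer modulus. Real backcalculation would train
# on falling-weight deflectometer test records instead.
X = rng.uniform(0.0, 1.0, size=(200, 7))
y = (X @ rng.uniform(-1.0, 1.0, 7) + 0.5)[:, None]  # stand-in target

# One hidden layer of 8 tanh units, linear output for regression.
W1 = rng.normal(0, 0.5, (7, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2                  # network output
    err = pred - y
    # Backpropagation of the mean-squared-error loss
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(f"training MSE: {mse:.4f}")
```

In practice the trained network replaces the iterative forward-model matching of conventional backcalculation: once trained, a single forward pass yields the moduli estimate.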
Abstract:
While robots are gradually becoming part of our daily lives, they already play vital roles in many critical operations. Some of these critical tasks include surgeries, battlefield operations, and tasks that take place in hazardous environments or distant locations, such as space missions. In most of these tasks, remotely controlled robots are used instead of autonomous robots. This special area of robotics is called teleoperation. Teleoperation systems must be reliable when used in critical tasks; hence, all of the subsystems must be dependable even under a subsystem or communication line failure. These systems are categorized as unilateral or bilateral teleoperation. A special type of bilateral teleoperation is described as force-reflecting teleoperation, which is further investigated as limited- and unlimited-workspace teleoperation. The teleoperation systems configured in this study are tested both in numerical simulations and in experiments. A new method, Virtual Rapid Robot Prototyping, is introduced to create system models rapidly and accurately. This method is then extended to configure experimental setups with actual master systems working with system models of the slave robots, accompanied by virtual reality screens, as well as with the actual slaves. Fault-tolerant design and modeling of the master and slave systems are also addressed at different levels to prevent subsystem failure. Teleoperation controllers are designed to compensate for instabilities due to communication time delays. Modifications to the existing controllers are proposed to configure a controller that remains reliable under communication line failures. Position/force controllers are also introduced for master and/or slave robots. Later, controller architecture changes are discussed in order to make these controllers dependable even in systems experiencing communication problems.
The customary and proposed controllers for teleoperation systems are tested in numerical simulations on single- and multi-DOF teleoperation systems. Experimental studies are then conducted on seven different systems, including limited- and unlimited-workspace teleoperation, to verify and improve the simulation studies. In the experiments, the proposed controllers performed successfully relative to the customary controllers. Overall, by employing the fault-tolerance features and the proposed controllers, it is possible to design and configure a more reliable teleoperation system, which allows these systems to be used in a wider range of critical missions.
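A minimal sketch of the tracking problem such controllers address: a one-DOF slave following a master position received over a delayed channel with a local PD law. The masses, gains, trajectory, and the 100 ms delay are illustrative, and force reflection is omitted, so this is not one of the thesis controllers:

```python
import numpy as np

dt, T, delay_steps = 0.001, 3.0, 100          # 100 ms one-way channel delay
m_s, kp, kd = 1.0, 400.0, 40.0                # slave mass and PD gains
n = int(T / dt)
t = np.linspace(0.0, T, n)
xm = np.sin(2.0 * np.pi * t / T)              # operator-driven master motion
xs = np.zeros(n)
vs = 0.0
for k in range(1, n):
    xm_delayed = xm[max(k - delay_steps, 0)]  # master position after the channel
    f = kp * (xm_delayed - xs[k - 1]) - kd * vs  # PD tracking force on the slave
    vs += f / m_s * dt                        # semi-implicit Euler integration
    xs[k] = xs[k - 1] + vs * dt
print(f"mean tracking error: {np.abs(xs[n // 2:] - xm[n // 2:]).mean():.3f}")
```

Because only the reference is delayed here, the loop stays stable; in true bilateral force-reflecting teleoperation the delay sits inside the feedback loop, which is why delay-compensating controllers like those studied in the thesis are needed.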
Abstract:
Microelectronic systems are multi-material, multi-layer structures, fabricated and exposed to environmental stresses over a wide range of temperatures. Thermal and residual stresses created by thermal mismatches in films and interconnections are a major cause of failure in microelectronic devices. Due to new device materials, increasing die size and the introduction of new materials for enhanced thermal management, differences in thermal expansions of various packaging materials have become exceedingly important and can no longer be neglected. X-ray diffraction is an analytical method using a monochromatic characteristic X-ray beam to characterize the crystal structure of various materials, by measuring the distances between planes in atomic crystalline lattice structures. As a material is strained, this interplanar spacing is correspondingly altered, and this microscopic strain is used to determine the macroscopic strain. This thesis investigates and describes the theory and implementation of X-ray diffraction in the measurement of residual thermal strains. The design of a computer controlled stress attachment stage fully compatible with an Anton Paar heat stage will be detailed. The stress determined by the diffraction method will be compared with bimetallic strip theory and finite element models.
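The diffraction-based strain measurement described above rests on Bragg's law, nλ = 2d sin θ: a shift in the peak position measures the change in interplanar spacing, and strain follows as ε = (d − d₀)/d₀. A small sketch (the peak positions in the example are illustrative, not measured values):

```python
import math

def d_spacing(two_theta_deg: float, wavelength: float, order: int = 1) -> float:
    """Interplanar spacing from Bragg's law: n*lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength / (2.0 * math.sin(theta))

def lattice_strain(two_theta_deg, two_theta0_deg, wavelength):
    """Strain from the shift of a diffraction peak relative to the
    unstrained peak position: eps = (d - d0) / d0."""
    d = d_spacing(two_theta_deg, wavelength)
    d0 = d_spacing(two_theta0_deg, wavelength)
    return (d - d0) / d0

# Example with Cu K-alpha radiation (0.15406 nm): a peak shifted from
# 2-theta = 44.50 deg to 44.45 deg indicates increased d-spacing,
# i.e. tensile strain in the measured direction.
print(f"strain = {lattice_strain(44.45, 44.50, 0.15406):.3e}")
```

The macroscopic residual stress then follows from this microscopic strain through the elastic constants of the film, which is the quantity compared against bimetallic strip theory and the finite element models.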
Abstract:
My thesis examines fine-scale habitat use and movement patterns of age-1 Greenland cod (Gadus macrocephalus ogac) tracked using acoustic telemetry. Recent advances in tracking technologies such as GPS and acoustic telemetry have led to increasingly large and detailed datasets that present new opportunities for researchers to address fine-scale ecological questions regarding animal movement and spatial distribution. There is a growing demand for home range models that will not only work with massive quantities of autocorrelated data, but that can also exploit the added detail inherent in these high-resolution datasets. Most published home range studies use radio-telemetry or satellite data from terrestrial mammals or avian species, and most studies that evaluate the relative performance of home range models use simulated data. In Chapter 2, I used actual field-collected data from age-1 Greenland cod tracked with acoustic telemetry to evaluate the accuracy and precision of six home range models: minimum convex polygons, kernel densities with plug-in bandwidth selection and the reference bandwidth, adaptive local convex hulls, Brownian bridges, and dynamic Brownian bridges. I then applied the most appropriate model to two years (2010-2012) of tracking data collected from 82 tagged Greenland cod in Newman Sound, Newfoundland, Canada, to determine diel and seasonal differences in habitat use and movement patterns (Chapter 3). Little is known of juvenile cod ecology, so resolving these relationships will provide valuable insight into activity patterns, habitat use, and predator-prey dynamics, while filling a knowledge gap regarding the use of space by age-1 Greenland cod in a coastal nursery habitat. By doing so, my thesis demonstrates an appropriate technique for modelling the spatial use of fish from acoustic telemetry data that can be applied to high-resolution, high-frequency tracking datasets collected from mobile organisms in any environment.
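As a rough illustration of one of the compared model families, a kernel-density home range can be estimated by thresholding the density surface at the isopleth that encloses 95% of the mass. The relocations below are simulated, and `gaussian_kde`'s default (Scott's rule) bandwidth stands in for the plug-in and reference selectors evaluated in the thesis:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_home_range_area(x, y, isopleth=0.95, grid=200):
    """Approximate kernel-density home-range area: evaluate a Gaussian KDE
    on a grid and sum the area of the cells inside the given isopleth."""
    kde = gaussian_kde(np.vstack([x, y]))          # Scott's rule bandwidth
    pad = 3.0 * max(np.std(x), np.std(y))
    xs = np.linspace(x.min() - pad, x.max() + pad, grid)
    ys = np.linspace(y.min() - pad, y.max() + pad, grid)
    XX, YY = np.meshgrid(xs, ys)
    dens = kde(np.vstack([XX.ravel(), YY.ravel()]))
    cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
    # Density threshold whose superlevel set holds `isopleth` of the mass
    order = np.sort(dens)[::-1]
    cum = np.cumsum(order) * cell
    idx = min(int(np.searchsorted(cum, isopleth)), order.size - 1)
    return float((dens >= order[idx]).sum() * cell)

rng = np.random.default_rng(1)
x, y = rng.normal(0.0, 10.0, (2, 500))             # simulated relocations
print(f"95% KDE home-range area: {kde_home_range_area(x, y):.0f}")
```

Brownian-bridge variants replace the static density surface with one conditioned on the time-ordered track, which is what lets them exploit the autocorrelation in high-frequency telemetry rather than being biased by it.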
Abstract:
The rainbow smelt (Osmerus mordax) is an anadromous teleost that produces type II antifreeze protein (AFP) and accumulates modest urea and high glycerol levels in plasma and tissues as adaptive cryoprotectant mechanisms in sub-zero temperatures. It is known that glyceroneogenesis occurs in liver via a branch in glycolysis and gluconeogenesis and is activated by low temperature; however, the precise mechanisms of glycerol synthesis and trafficking in smelt remain to be elucidated. The objective of this thesis was to provide further insight using functional genomic techniques [e.g. suppression subtractive hybridization (SSH) cDNA library construction, microarray analyses] and molecular analyses [e.g. cloning, quantitative reverse transcription-polymerase chain reaction (QPCR)]. Novel molecular mechanisms related to glyceroneogenesis were deciphered by comparing the transcript expression profiles of glycerol (cold temperature) and non-glycerol (warm temperature) accumulating hepatocytes (Chapter 2) and livers from intact smelt (Chapter 3). Briefly, glycerol synthesis can be initiated from both amino acids and carbohydrate; however, carbohydrate appears to be the preferred source when it is readily available. In glycerol accumulating hepatocytes, levels of the hepatic glucose transporter (GLUT2) plummeted and transcript levels of a suite of genes (PEPCK, MDH2, AAT2, GDH and AQP9) associated with the mobilization of amino acids to fuel glycerol synthesis were all transiently higher. In contrast, in glycerol accumulating livers from intact smelt, glycerol synthesis was primarily fuelled by glycogen degradation with higher PGM and PFK (glycolysis) transcript levels. Whether initiated from amino acids or carbohydrate, there were common metabolic underpinnings. Increased PDK2 (an inhibitor of PDH) transcript levels would direct pyruvate derived from amino acids and/or DHAP derived from G6P to glycerol as opposed to oxidation via the citric acid cycle.
Robust LIPL (triglyceride catabolism) transcript levels would provide free fatty acids that could be oxidized to fuel ATP synthesis. Increased cGPDH (glyceroneogenesis) transcript levels were not required for increased glycerol production, suggesting that regulation is more likely by post-translational modification. Finally, levels of a transcript potentially encoding glycerol-3-phosphatase, an enzyme not yet characterized in any vertebrate species, were transiently higher. These comparisons also led to the novel discoveries that increased G6Pase (glucose synthesis) and increased GS (glutamine synthesis) transcript levels were part of the low temperature response in smelt. Glucose may provide increased colligative protection against freezing, whereas glutamine could serve to store nitrogen released from amino acid catabolism in a non-toxic form and/or be used to synthesize urea via purine synthesis-uricolysis. Novel key aspects of cryoprotectant osmolyte (glycerol and urea) trafficking were elucidated by cloning and characterizing three aquaglyceroporin (GLP)-encoding genes from smelt at the gene and cDNA levels in Chapter 4. GLPs are integral membrane proteins that facilitate passive movement of water, glycerol and urea across cellular membranes. The highlight was the discovery that AQP10ba transcript levels always increase in posterior kidney only at low temperature. This AQP10b gene paralogue may have evolved to aid in the reabsorption of urea from the proximal tubule. This research has contributed significantly to a general understanding of the cold adaptation response in smelt, and more specifically to the development of a working scenario for the mechanisms involved in glycerol synthesis and trafficking in this species.
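Transcript-level comparisons like those above are commonly quantified from QPCR cycle-threshold (Ct) values with the 2^−ΔΔCt (Livak) method. The sketch below uses illustrative Ct values, not data from this thesis:

```python
def fold_change(ct_target_cold, ct_ref_cold, ct_target_warm, ct_ref_warm):
    """Relative transcript expression by the 2^-ddCt (Livak) method:
    the target gene is normalized to a reference gene within each
    condition, then the cold condition is compared to the warm one."""
    d_ct_cold = ct_target_cold - ct_ref_cold
    d_ct_warm = ct_target_warm - ct_ref_warm
    return 2.0 ** -(d_ct_cold - d_ct_warm)

# Illustrative Ct values only: a target (e.g. a gene such as PEPCK)
# amplifying ~3 cycles earlier at low temperature, reference gene stable.
print(f"fold change: {fold_change(21.0, 18.0, 24.0, 18.0):.1f}x")
```

Each earlier cycle corresponds to roughly a doubling of starting template, so a 3-cycle shift against a stable reference gene implies an approximately 8-fold higher transcript level.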
Abstract:
Acknowledgements This work received funding from the MASTS pooling initiative (The Marine Alliance for Science and Technology for Scotland) and their support is gratefully acknowledged. MASTS is funded by the Scottish Funding Council (Grant reference HR09011) and contributing institutions.
Abstract:
Energy efficiency and user comfort have recently become priorities in the Facility Management (FM) sector. This has resulted in the use of innovative building components, such as thermal solar panels, heat pumps, etc., as they have potential to provide better performance, energy savings and increased user comfort. However, as the complexity of components increases, the requirement for maintenance management also increases. The standard routine for building maintenance is inspection which results in repairs or replacement when a fault is found. This routine leads to unnecessary inspections which have a cost with respect to downtime of a component and work hours. This research proposes an alternative routine: performing building maintenance at the point in time when the component is degrading and requires maintenance, thus reducing the frequency of unnecessary inspections. This thesis demonstrates that statistical techniques can be used as part of a maintenance management methodology to invoke maintenance before failure occurs. The proposed FM process is presented through a scenario utilising current Building Information Modelling (BIM) technology and innovative contractual and organisational models. This FM scenario supports a Degradation based Maintenance (DbM) scheduling methodology, implemented using two statistical techniques, Particle Filters (PFs) and Gaussian Processes (GPs). DbM consists of extracting and tracking a degradation metric for a component. Limits for the degradation metric are identified based on one of a number of proposed processes. These processes determine the limits based on the maturity of the historical information available. DbM is implemented for three case study components: a heat exchanger; a heat pump; and a set of bearings. The identified degradation points for each case study, from a PF, a GP and a hybrid (PF and GP combined) DbM implementation are assessed against known degradation points. 
The GP implementations are successful for all components. For the PF implementations, the results presented in this thesis find that the extracted metrics and limits identify degradation occurrences accurately for components which are in continuous operation. For components which have seasonal operational periods, the PF may wrongly identify degradation. The GP performs more robustly than the PF, but the PF, on average, results in fewer false positives. The hybrid implementations, which combine GP and PF results, are successful for two of the three case studies and are not affected by seasonal data. Overall, DbM is effectively applied for the three case study components. The accuracy of the implementations is dependent on the relationships modelled by the PF and GP, and on the type and quantity of data available. This novel maintenance process can improve equipment performance and reduce energy wastage from the operation of BSCs.
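As a hedged sketch of the PF side of DbM (not the thesis implementation), a bootstrap particle filter can track a noisy degradation metric and raise an alarm when the filtered estimate crosses a limit. The drift model, noise levels, and the 0.8 limit below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def particle_filter_degradation(obs, n_particles=2000,
                                drift=0.01, q=0.02, r=0.1):
    """Bootstrap particle filter tracking a scalar degradation metric.
    State model: the metric drifts upward with process noise q;
    observations carry Gaussian noise r. Returns per-step estimates."""
    particles = rng.normal(0.0, 0.1, n_particles)
    estimates = []
    for z in obs:
        # Propagate through the random-walk-with-drift degradation model
        particles = particles + drift + rng.normal(0.0, q, n_particles)
        # Weight by the observation likelihood, then resample
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

# Synthetic metric: slow degradation plus measurement noise; flag when
# the filtered estimate crosses an illustrative limit of 0.8.
true = 0.01 * np.arange(120)
obs = true + rng.normal(0.0, 0.1, 120)
est = particle_filter_degradation(obs)
alarm = int(np.argmax(est > 0.8))
print(f"degradation limit crossed at step {alarm}")
```

The seasonal-operation failure mode discussed above corresponds to the state model here assuming continuous drift: when the component idles, the filter keeps propagating degradation and can flag a limit crossing that never happened.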
Abstract:
Invasive species allow an investigation of trait retention and adaptations after exposure to new habitats. Recent work on corals from the Gulf of Aqaba (GoA) shows that tolerance to high temperature persists thousands of years after invasion, without any apparent adaptive advantage. Here we test whether thermal tolerance retention also occurs in another symbiont-bearing calcifying organism. To this end, we investigate the thermal tolerance of the benthic foraminifera Amphistegina lobifera from the GoA (29° 30.14167 N 34° 55.085 E) and compare it to a recent "Lessepsian invader population" from the Eastern Mediterranean (EaM) (32° 37.386 N, 34°55.169 E). We first established that the studied populations are genetically homogeneous but distinct from a population in Australia, and that they contain a similar consortium of diatom symbionts, confirming their recent common descent. Thereafter, we exposed specimens from GoA and EaM to elevated temperatures for three weeks and monitored survivorship, growth rates and photophysiology. Both populations exhibited a similar pattern of temperature tolerance. A consistent reduction of photosynthetic dark yields was observed at 34°C and reduced growth was observed at 32°C. The apparent tolerance to sustained exposure to high temperature cannot have a direct adaptive importance, as peak summer temperatures in both locations remain <32°C. Instead, it seems that in the studied foraminifera tolerance to high temperature is a conservative trait and the EaM population has retained this trait since its recent invasion. Such pre-adaptation to higher temperatures confers on A. lobifera a clear adaptive advantage in shallow and episodically high temperature environments in the Mediterranean under further warming.
Abstract:
Thirty-four sediment and mudline temperatures were collected from six drill holes on ODP Leg 110 near the toe of the Barbados accretionary complex. When combined with thermal conductivity measurements these data delineate the complicated thermal structure on the edge of this convergent margin. Surface heat-flow values from Leg 110 (calculated from geothermal gradients forced through the bottom-water temperature at mudline) of 92 to 192 mW/m² are 80% to 300% higher than values predicted by standard heat flow vs. age models for oceanic crust, but are compatible with earlier surface measurements made at the same latitude. Measured heat flow tends to decrease downhole at four sites, suggesting the presence of heat sources within the sediments. These results are consistent with the flow of warm fluid through the complex along sub-horizontal, high-permeability conduits, including thrust faults, the major decollement zone, and sandy intervals. Simple calculations suggest that this flow is transient, occurring on time scales of tens to tens of thousands of years. High heat flow in the vicinity of 15°30'N and not elsewhere along the deformation front suggests that the Leg 110 drill sites may be situated over a fluid discharge zone, with dewatering more active here than elsewhere along the accretionary complex.
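The surface heat-flow values quoted above come from the standard conductive relation q = k·(dT/dz): a geothermal gradient fit through the downhole temperatures, multiplied by the thermal conductivity. A sketch with illustrative numbers (not Leg 110 data):

```python
import numpy as np

def conductive_heat_flow(depths_m, temps_c, conductivity_w_mk):
    """Conductive heat flow (mW/m^2) from a least-squares geothermal
    gradient through downhole temperatures, times the thermal
    conductivity: q = k * dT/dz."""
    gradient = np.polyfit(depths_m, temps_c, 1)[0]   # degC per metre
    return gradient * conductivity_w_mk * 1000.0     # W/m^2 -> mW/m^2

# Illustrative values only: a mudline near 2 degC, a gradient of about
# 0.1 degC/m, and k ~ 1.0 W/(m K) give on the order of 100 mW/m^2,
# i.e. the magnitude reported for the Leg 110 holes.
depths = np.array([0.0, 50.0, 100.0, 150.0])
temps = np.array([2.0, 7.1, 12.0, 17.2])
print(f"{conductive_heat_flow(depths, temps, 1.0):.0f} mW/m^2")
```

A downhole decrease in heat flow then appears as a gradient that flattens with depth, which under a purely conductive interpretation requires heat sources (here, warm fluid flow) within the sediment column.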
Abstract:
Within Canada there are more than 2.5 million bundles of spent nuclear fuel, with approximately 2 million more bundles to be generated in the future. Canada, like every country around the world that has taken a decision on the management of spent nuclear fuel, has decided on long-term containment and isolation of the fuel within a deep geological repository. At depth, a deep geological repository consists of a network of placement rooms where the bundles will be located within a multi-layered system that incorporates engineered and natural barriers. The barriers will be placed in a complex thermal-hydraulic-mechanical-chemical-biological (THMCB) environment. A large database of material properties for all components in the repository is required to construct representative models. Within the repository, the sealing materials will experience elevated temperatures due to the thermal gradient produced by radioactive decay heat from the waste inside the container. Furthermore, high porewater pressure due to the depth of the repository, along with the possibility of elevated groundwater salinity, would cause the bentonite-based materials to be under transient hydraulic conditions. Therefore, it is crucial to characterize the sealing materials over a wide range of thermal-hydraulic conditions. A comprehensive experimental program has been conducted to measure the properties (mainly thermal properties) of all sealing materials involved in the Mark II concept at plausible thermal-hydraulic conditions. The thermal response of Canada's concept for a deep geological repository has been modelled using the experimentally measured thermal properties. Plausible scenarios are defined, and their effects on the container surface temperature as well as on the surrounding geosphere are examined to assess whether the design criteria are met for the cases studied.
The thermal response shows that even if all the materials are in a dried condition, the repository still performs acceptably as long as the sealing materials remain in contact.
Abstract:
Ground-source heat pump (GSHP) systems represent one of the most promising techniques for heating and cooling in buildings. These systems use the ground as a heat source/sink, allowing a better efficiency thanks to the low variations of the ground temperature along the seasons. The ground-source heat exchanger (GSHE) then becomes a key component for optimizing the overall performance of the system. Moreover, the short-term response related to the dynamic behaviour of the GSHE is a crucial aspect, especially from a regulation criteria perspective in on/off controlled GSHP systems. In this context, a novel numerical GSHE model has been developed at the Instituto de Ingeniería Energética, Universitat Politècnica de València. Based on the decoupling of the short-term and the long-term response of the GSHE, the novel model allows the use of faster and more precise models on both sides. In particular, the short-term model considered is the B2G model, developed and validated in previous research works conducted at the Instituto de Ingeniería Energética. For the long-term, the g-function model was selected, since it is a previously validated and widely used model, and presents some interesting features that are useful for its combination with the B2G model. The aim of the present paper is to describe the procedure of combining these two models in order to obtain a unique complete GSHE model for both short- and long-term simulation. The resulting model is then validated against experimental data from a real GSHP installation.
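As a rough sketch of the long-term side only (not the combined B2G/g-function model of the paper), the borehole wall temperature can be computed by temporal superposition of load steps weighted by a g-function. The g-function curve, ground properties, and load below are illustrative stand-ins; notably, a crude curve like this is exactly what is inaccurate at short times, which motivates coupling it to a dedicated short-term model such as B2G:

```python
import numpy as np

def borehole_wall_temp(times_s, loads_w_per_m, t_idx, ground_temp=14.0,
                       k_ground=2.0, ts=2.6e8):
    """Borehole wall temperature by temporal superposition of load steps:
    T_b = T_g + sum_i dq_i * g(t - t_i) / (2*pi*k_ground).
    The g-function here is a crude single-borehole stand-in; real
    g-functions are precomputed from Eskilson-type simulations."""
    def g(t):
        # Illustrative logarithmic curve; ts is the borehole time scale.
        return 0.5 * np.log(np.maximum(t, 1.0) / ts) + 6.84
    dq = np.diff(np.concatenate([[0.0], loads_w_per_m[: t_idx + 1]]))
    t = times_s[t_idx]
    return ground_temp + float(
        np.sum(dq * g(t - times_s[: t_idx + 1])) / (2.0 * np.pi * k_ground)
    )

# One month of hourly steps with a constant 30 W/m heat extraction
# (negative load = heat drawn from the ground by the heat pump).
hours = np.arange(24 * 30) * 3600.0
loads = np.full(hours.size, -30.0)
print(f"borehole wall temp after 30 days: "
      f"{borehole_wall_temp(hours, loads, hours.size - 1):.2f} degC")
```

Because superposition only differences the load history, a fast short-term model can supply the response to the most recent load changes while the g-function carries the slow thermal history of the ground, which is the decoupling strategy described above.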