894 results for Numerical approximation and analysis
Abstract:
Increased device density, higher switching speeds of integrated circuits, and shrinking package sizes are placing new demands on high-power thermal management. The conventional method of forced-air cooling with a passive heat sink can handle heat fluxes of up to 3-5 W/cm²; however, current microprocessors operate at levels of 100 W/cm², which demands novel thermal-management systems. In this work, water-cooling systems with an active heat sink are embedded in the substrate. The research involved fabricating LTCC substrates in three configurations: an open-duct substrate, a second with thermal vias, and a third with thermal vias plus free-standing metal columns and metal foil. Thermal testing was performed experimentally, and the results are compared with CFD results. An overall thermal resistance of 3.4 °C·cm²/W is demonstrated for the base substrate. Adding thermal vias reduces the effective resistance of the system by a factor of 7, and the further addition of free-standing columns reduces it by a factor of 20.
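As a back-of-the-envelope check on these figures, the area-normalized resistances implied for the three configurations follow directly from the stated reduction factors (an illustrative calculation added here, assuming the factors apply to the base value):

\[
R_{\text{base}} = 3.4\ {}^{\circ}\mathrm{C{\cdot}cm^2/W}, \qquad
R_{\text{vias}} \approx \frac{3.4}{7} \approx 0.49\ {}^{\circ}\mathrm{C{\cdot}cm^2/W}, \qquad
R_{\text{columns}} \approx \frac{3.4}{20} = 0.17\ {}^{\circ}\mathrm{C{\cdot}cm^2/W}.
\]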
Abstract:
With the growing commercial importance of the Internet and the development of new real-time, connection-oriented services like IP telephony and electronic commerce, resilience is becoming a key issue in the design of IP-based networks. Two emerging technologies that can accomplish the task of efficient information transfer are Multiprotocol Label Switching (MPLS) and Differentiated Services. A main benefit of MPLS is the ability to introduce traffic-engineering concepts due to its connection-oriented nature: with MPLS it is possible to assign different paths to packets through the network. Differentiated Services divides traffic into different classes and treats them differently, especially when there is a shortage of network resources. In this thesis, a framework was proposed to integrate the two technologies, and its performance in providing load balancing and improving QoS was evaluated. Simulation and analysis of this framework demonstrated that the combination of MPLS and Differentiated Services is a powerful tool for QoS provisioning in IP networks.
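To make the combination concrete, the sketch below shows one simple way such a framework could steer traffic: each flow carries a DiffServ class, expedited-forwarding traffic is placed on the least-loaded label-switched path (LSP), and best-effort traffic is admitted only where spare capacity remains. This is a minimal illustration written for this summary; the class names, LSP capacities, and selection rule are assumptions, not the thesis's actual design.

```python
# Minimal sketch: class-aware load balancing over MPLS LSPs.
# All names and numbers are illustrative assumptions.

LSPS = {"lsp1": 100.0, "lsp2": 100.0}   # capacity units per LSP
load = {name: 0.0 for name in LSPS}     # current load per LSP

def pick_lsp(diffserv_class: str, demand: float) -> str | None:
    """Assign a flow to an LSP.

    Expedited-forwarding traffic ("EF") goes on the least-loaded LSP;
    best-effort ("BE") is admitted only where spare capacity remains.
    """
    # Order LSPs from least to most loaded (simple load balancing).
    candidates = sorted(LSPS, key=lambda n: load[n] / LSPS[n])
    for name in candidates:
        if load[name] + demand <= LSPS[name]:
            load[name] += demand
            return name
        if diffserv_class == "EF":
            # Premium traffic is carried anyway, overloading the best path.
            load[name] += demand
            return name
    return None  # best-effort flow rejected under resource shortage

print(pick_lsp("EF", 60))   # -> lsp1
print(pick_lsp("BE", 60))   # -> lsp2
print(pick_lsp("BE", 60))   # -> None (no spare capacity for best effort)
```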
Abstract:
Cancer comprises a collection of diseases, all of which begin with abnormal tissue growth from various stimuli, including (but not limited to) heredity, genetic mutation, exposure to harmful substances or radiation, and poor diet and lack of exercise. The early detection of cancer is vital to providing life-saving therapeutic intervention. However, current detection methods (e.g., tissue biopsy, endoscopy, and medical imaging) often suffer from low patient compliance and an elevated risk of complications in elderly patients. As such, many are looking to “liquid biopsies” for clues into the presence and status of cancer, owing to their minimal invasiveness and ability to provide rich information about the native tumor. In a liquid biopsy, peripheral blood is drawn from the patient and screened for key biomarkers, chiefly circulating tumor cells (CTCs). Capturing, enumerating, and analyzing the genetic and metabolomic characteristics of these CTCs may hold the key to guiding doctors toward a better understanding of the source of cancer at an earlier stage, for more efficacious disease management.
The isolation of CTCs from whole blood, however, remains a significant challenge due to their (i) low abundance, (ii) lack of a universal surface marker, and (iii) epithelial-mesenchymal transition, which down-regulates common surface markers (e.g., EpCAM) and reduces their likelihood of detection via positive-selection assays. These factors underscore the need for an improved cell-isolation strategy that can collect CTCs via both positive and negative selection modalities, so as to avoid reliance on a single marker, or set of markers, and enable more accurate enumeration and diagnosis.
The technologies proposed herein offer a unique set of strategies to focus, sort, and template cells in three independent microfluidic modules. The first module exploits ultrasonic standing waves and a class of elastomeric particles for the rapid and discriminate sequestration of cells. This type of cell handling holds promise not only in sorting, but also in the isolation of soluble markers from biofluids. The second module contains components to focus (i.e., arrange) cells via forces from acoustic standing waves and to separate cells in a high-throughput fashion via free-flow magnetophoresis. The third module uses a printed array of micromagnets to capture magnetically labeled cells in well-defined compartments, enabling on-chip staining and single-cell analysis. These technologies can operate in standalone formats or can be adapted to work with established analytical technologies, such as flow cytometry. A key advantage of these innovations is their ability to process erythrocyte-lysed blood in a rapid (and thus high-throughput) fashion. They can process fluids at a variety of concentrations and flow rates, target cells with various immunophenotypes, and sort cells via positive (and potentially negative) selection. These technologies are chip-based and fabricated using standard clean-room equipment, pointing toward a disposable clinical tool. With further optimization of design and performance, these technologies may aid in the early detection, and potentially treatment, of cancer and various other physical ailments.
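For reference, the acoustophoretic handling in the first two modules rests on the primary acoustic radiation force exerted on a small particle in a one-dimensional ultrasonic standing wave, commonly written as (a standard result from the acoustofluidics literature, quoted for context rather than taken from this work):

\[
F = -\left(\frac{\pi p_0^2 V_p \beta_m}{2\lambda}\right)\Phi\,\sin\!\left(\frac{4\pi x}{\lambda}\right),
\qquad
\Phi = \frac{5\rho_p - 2\rho_m}{2\rho_p + \rho_m} - \frac{\beta_p}{\beta_m},
\]

where \(p_0\) is the acoustic pressure amplitude, \(V_p\) the particle volume, \(\lambda\) the wavelength, and \(\rho\), \(\beta\) the densities and compressibilities of particle (p) and medium (m). Cells (\(\Phi > 0\)) collect at pressure nodes, while sufficiently compressible elastomeric particles (\(\Phi < 0\)) migrate to antinodes, which is what makes the two populations separable.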
Abstract:
Systematic, high-quality observations of the atmosphere, oceans, and terrestrial environments are required to improve understanding of climate characteristics and the consequences of climate change. The overall aim of this report is to carry out a comparative assessment of the approaches taken to addressing the state of European observation systems and related data analysis by some leading actors in the field. The research reports on approaches to climate observations and analyses in Ireland, Switzerland, Germany, the Netherlands, and Austria and explores options for a more coordinated approach to national responses to climate observations in Europe. The key aspects addressed are: an assessment of approaches to developing GCOS and providing analysis of GCOS data; an evaluation of how these countries report the development of GCOS; highlighting best practice in advancing GCOS implementation, including analysis of Essential Climate Variables (ECVs); a comparative summary of the differences and synergies in the reporting of climate observations; and an overview of relevant European initiatives, with recommendations on how identified gaps might be addressed in the short to medium term.
Abstract:
Combining the advantages of parallel mechanisms and compliant mechanisms, a compliant parallel mechanism with two rotational DOFs (degrees of freedom) is designed to meet the requirements of a lightweight and compact pan-tilt platform. First, two commonly used design methods, direct substitution and FACT (Freedom and Constraint Topology), are applied to design the configuration of the pan-tilt system, and the similarities and differences of the two design alternatives are compared. Then inverse kinematic analysis of the candidate mechanism is carried out using the pseudo-rigid-body model (PRBM), and the Jacobian of its differential kinematics is derived to support dynamic analysis of the 8R compliant mechanism. In addition, the maximum stress occurring within the mechanism's workspace is checked by finite element analysis. Finally, a method to determine the joint damping of the flexure hinges is presented, which aims to explore the effect of joint damping on actuator selection and real-time control; to the authors' knowledge, almost no existing literature addresses this issue.
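For orientation, the PRBM referenced here replaces each flexure of length \(l\), Young's modulus \(E\), and second moment of area \(I\) with a rigid link and a characteristic torsional spring, so the restoring torque is linear in the pseudo-joint angle \(\Theta\) (the textbook PRBM relation, with characteristic radius factor \(\gamma \approx 0.85\) and stiffness coefficient \(K_{\Theta} \approx 2.65\) for a force-loaded cantilever flexure; the values used in this work are not stated in the abstract):

\[
T = K\,\Theta, \qquad K = \gamma\,K_{\Theta}\,\frac{EI}{l}.
\]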
Abstract:
Petri nets are a formal, graphical, and executable modeling technique for the specification and analysis of concurrent and distributed systems, and they have been widely applied in computer science and many other engineering disciplines. Low-level Petri nets are simple and useful for modeling control flows but not powerful enough to define data and system functionality. High-level Petri nets (HLPNs) have been developed to support data and functionality definitions, for example by using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low-level Petri nets, HLPNs yield compact system models that are easier to understand, and they are therefore more useful for modeling complex systems. There are two issues in using HLPNs - modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, and analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework supported by a tool. For modeling, the framework integrates two formal languages: a type of HLPN called the Predicate Transition Net (PrT net) is used to model a system's behavior, and a first-order linear-time temporal logic (FOLTL) is used to specify the system's properties. The main modeling contribution of this dissertation is a software tool that supports the formal modeling capabilities in this framework. For analysis, the framework combines three complementary techniques: simulation, explicit-state model checking, and bounded model checking (BMC). Simulation is a straightforward and speedy method but covers only some execution paths of an HLPN model. Explicit-state model checking covers all execution paths but suffers from the state-explosion problem. BMC is a tradeoff: it provides a certain level of coverage while being more efficient than explicit-state model checking. The main analysis contribution of this dissertation is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques in a software tool that supports the formal analysis capabilities of this framework. The SAMTools suite developed for this framework integrates three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
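To illustrate what "structured tokens plus transition formulas" means in practice, here is a minimal high-level-Petri-net fragment: tokens are tuples, and a transition has input places, a guard predicate over the bound tokens, and output-arc expressions. It is an illustrative toy written for this summary, not the PrT-net formalism or the PIPE+ implementation; all place and transition names are invented.

```python
# Toy high-level Petri net: structured tokens, guarded transition.
# Illustrative only; not the PrT-net semantics used in the dissertation.
from itertools import product

# Marking: place name -> multiset (list) of tuple-valued tokens.
marking = {
    "orders":  [("alice", 3), ("bob", 7)],
    "stock":   [(5,), (10,)],
    "shipped": [],
}

def fire_ship(m):
    """Fire 'ship' once: needs an order and a stock lot that covers it.

    Guard: quantity ordered <= lot size.
    Output: a (customer, qty) token on 'shipped'; remainder returns to stock.
    """
    for (cust, qty), (lot,) in product(m["orders"], m["stock"]):
        if qty <= lot:                        # transition guard
            m["orders"].remove((cust, qty))   # consume input tokens
            m["stock"].remove((lot,))
            m["shipped"].append((cust, qty))  # output-arc expression
            if lot - qty > 0:
                m["stock"].append((lot - qty,))
            return True
    return False                              # transition not enabled

while fire_ship(marking):                     # run to quiescence (simulation)
    pass
print(marking["shipped"])   # [('alice', 3), ('bob', 7)]
```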
Abstract:
Economic policy-making has long been more integrated than social policy-making, in part because the statistics and much of the analysis that support economic policy are based on a common conceptual framework – the system of national accounts. People interested in economic analysis and economic policy share a common language of communication, one that includes both concepts and numbers. This paper examines early attempts to develop a system of social statistics that would mirror the system of national accounts, in particular the work on the development of social accounts that took place mainly in the 1960s and 1970s. It explores the reasons why these early initiatives failed but argues that the preconditions now exist to develop a new conceptual framework to support integrated social statistics – and hence a more coherent, effective social policy. Optimism is warranted for two reasons. First, we can make use of the radical transformation that has taken place in information technology, both in processing data and in providing wide access to the knowledge that can flow from the data. Second, the conditions exist to begin to shift away from the straitjacket of government-centric social statistics, with its implicit assumption that governments must be the primary actors in finding solutions to social problems. By supporting the decision-making of all the players (particularly individual citizens) who affect social trends and outcomes, we can start to move beyond the sterile, ideological discussions that have dominated much social discourse in the past and begin to build social systems and structures that evolve, almost automatically, based on empirical evidence of ‘what works best for whom’. The paper describes a Canadian approach to developing a framework, or common language, to support the evolution of an integrated, citizen-centric system of social statistics and social analysis. This language supports the traditional social policy that we have today; nothing is lost. However, it also supports a quite different social policy world, one in which individual citizens and families (not governments) are seen as the central players – a more empirically driven world that we have referred to as the ‘enabling society’.
Abstract:
A small-scale sample nuclear waste package, consisting of a 28 mm diameter uranium penny encased in grout, was imaged by absorption-contrast radiography using a single-pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil in order to generate a bright burst of highly penetrating X-rays (with energy >500 keV) from a source smaller than 0.5 mm. BAS-TR and BAS-SR image plates were used for image capture, alongside a newly developed thallium-doped caesium iodide scintillator-based detector coupled to CCD chips. The uranium penny was clearly resolved to sub-mm accuracy over a 30 cm² scan area from a single-shot acquisition. In addition, neutron generation was demonstrated in situ with the X-ray beam in a single shot, demonstrating the potential for multi-modal criticality testing of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high-density nuclear material. With recent developments of high-power laser systems towards 10 Hz operation, a laser-driven multi-modal beamline for waste-monitoring applications is envisioned.
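The image contrast in such a measurement follows the usual attenuation law: a dense absorber like uranium stands out against grout because its linear attenuation coefficient is far larger at these photon energies (the standard absorption-radiography relation, stated here for context rather than taken from the paper):

\[
I(x) = I_0\,e^{-\mu x},
\]

where \(I_0\) is the incident intensity, \(\mu\) the material's linear attenuation coefficient, and \(x\) the thickness traversed; the recorded contrast between two ray paths scales as \(e^{-(\mu_1 x_1 - \mu_2 x_2)}\).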
Abstract:
Harnessing solar energy to provide for the thermal needs of buildings is one of the most promising solutions to the global energy issue. Exploiting the additional surface area provided by a building's facade can significantly increase the solar energy output. Developing a range of integrated and adaptable products that do not significantly affect the building's aesthetics is vital to enabling the building-integrated solar thermal market to expand and prosper. This work reviews and evaluates solar thermal facades in terms of the standard collector type on which they are based and their component make-up. Daily efficiency models are presented, based on a combination of the Hottel-Whillier-Bliss model and finite element simulation. Novel and market-available solar thermal systems are also reviewed and evaluated using standard evaluation methods based on experimentally determined parameters (ISO 9806). Solar thermal collectors integrated directly into the facade benefit from the additional wall insulation at the back, displaying higher efficiencies than an identical collector offset from the facade. Unglazed solar thermal facades with high-capacitance absorbers (e.g., concrete) experience a shift in peak energy yield and display a lower sensitivity to ambient conditions than traditional metallic unglazed collectors. Glazed solar thermal facades, used for high-temperature applications (domestic hot water), can overheat the building's interior, an effect that can be reduced significantly through the inclusion of high-quality wall insulation. For low-temperature applications (preheating systems), the cheaper unglazed systems offer the most economic solution. Using a brighter colour for the glazing and a darker colour for the absorber shows the lowest efficiency reductions (<4%). Novel solar thermal facade solutions include solar collectors integrated into balcony rails, shading devices, louvres, windows, or gutters.
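For reference, the steady-state Hottel-Whillier-Bliss efficiency relation underlying the daily models takes the familiar flat-plate form (a standard collector equation; the facade-specific coefficients are what the work determines experimentally):

\[
\eta = F_R(\tau\alpha) - \frac{F_R U_L\,(T_{\mathrm{in}} - T_a)}{G},
\]

where \(F_R\) is the heat-removal factor, \((\tau\alpha)\) the effective transmittance-absorptance product, \(U_L\) the overall heat-loss coefficient, \(T_{\mathrm{in}}\) the fluid inlet temperature, \(T_a\) the ambient temperature, and \(G\) the incident irradiance. Facade integration lowers \(U_L\) through the extra back insulation, which is consistent with the higher efficiencies reported above.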
Abstract:
An emergency-lowering system for use in safety-critical crane applications is discussed. The system is used to safely lower the payload of a crane in case of an electric blackout: it is based on a backup power source that operates the crane while the regular supply is unavailable, and it enables both horizontal and vertical movements of the crane. Two configurations for building the system are described: one with an uninterruptible power supply (UPS) or a diesel generator connected in parallel with the crane's power supply, and one with a customized energy storage connected to the crane's intermediate DC link. In order to size the backup power source, the power required during emergency lowering needs to be understood. A simulation model is used to study and optimize the power used during emergency lowering, and the model and optimizations are verified on a test hoist. Simulation results are presented with non-optimized and optimized controls for two example applications: a paper-roll crane and a steel-mill ladle crane. The optimizations are found to significantly reduce the power required for the crane movements during emergency lowering.
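As a rough indication of why sizing matters, the electrical power a hoisting movement demands can be estimated from the mechanical work rate (a first-order estimate added for context; the thesis relies on a full simulation model rather than this formula):

\[
P \approx \frac{m\,g\,v}{\eta},
\]

where \(m\) is the payload mass, \(g\) the gravitational acceleration, \(v\) the vertical speed, and \(\eta\) the combined drive-train efficiency; lowering itself is gravity-assisted, so optimized control of speeds and simultaneous movements can cut the peak power the backup source must supply.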
Abstract:
Ecosystem service assessment and management are shaped by the scale at which they are conducted; however, there has been little systematic investigation of the scales associated with ecosystem service processes, such as production, benefit distribution, and management. We examined how social-ecological spatial scale impacts ecosystem service assessment by comparing how ecosystem service distribution, trade-offs, and bundles shift across spatial scales. We used a case study in Québec, Canada, to analyze the scales of production, consumption, and management of 12 ecosystem services and to analyze how interactions among 7 of these ecosystem services change across 3 scales of observation (1, 9, and 75 km²). We found that ecosystem service patterns and interactions were relatively robust across scales of observation; however, we identified 4 different types of scale mismatches among ecosystem service production, consumption, and management. Based on this analysis, we have proposed 4 aspects of scale that ecosystem service assessments should consider.
Abstract:
We analyze available heat flow data from the flanks of the Southeast Indian Ridge adjacent to or within the Australian-Antarctic Discordance (AAD), an area with patchy sediment cover and highly fractured seafloor dissected by ridge-parallel and fracture-parallel faults. The data set includes 23 new data points collected along a 14 Ma old isochron and 19 existing measurements from 20-24 Ma old crust. Most measurement sites exhibit low heat flux (from 2 to 50 mW m⁻²) with near-linear temperature-depth profiles, except at a few sites where recent bottom-water temperature change may have caused nonlinearity toward the sediment surface. Because the igneous basement is expected to outcrop a short distance away from any measurement site, we hypothesize that horizontally channelized water circulation within the uppermost crust is the primary cause of the widespread low heat flow values. The process may be further influenced by vertical fluid flow along the numerous fault zones that crisscross the AAD seafloor. Systematic measurements along and across the fault zones of interest, as well as seismic profiling of the sediment distribution, are required to confirm this suspected effect.
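Marine heat flow values such as these are derived from the conductive flux through the sediment, so the near-linear temperature-depth profiles mentioned above translate directly into a constant flux via Fourier's law (the standard reduction in marine heat flow work):

\[
q = k\,\frac{dT}{dz},
\]

where \(k\) is the sediment thermal conductivity and \(dT/dz\) the measured thermal gradient; curvature in a profile therefore signals a disturbance such as bottom-water temperature change or fluid flow.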
Abstract:
The interaction of ocean waves, currents, and seabed roughness is a complicated phenomenon in fluid dynamics. This paper describes the governing equations of motion for this phenomenon under viscous and inviscid conditions and analyzes the experimental results of a set of physical models of waves, currents, and artificial roughness. It consists of three parts. First, by establishing some typical roughness patterns, the effects of seabed roughness on a uniform current are studied, and the Manning coefficient of each type is reviewed to find the critical situation for different arrangements. Second, the effect of roughness on changes in wave parameters, such as wave height, wave length, and the wave dispersion relation, is studied. Third, the waves + current + roughness patterns are superimposed in a flume equipped with a wave and current generator; at this stage, different analyses are carried out to find the governing dimensionless numbers and to use them to define the correlations and formulations of this phenomenon. The first step of the model is verified against the so-called Chinese method, the second step against Kamphuis (1975), and the third step against van Rijn (1990) and Brevik and Aas (1980); in all cases reasonable agreement is obtained. Finally, new dimensionless parameters are presented for this complicated phenomenon.
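The wave dispersion relation mentioned above is, in its linear form (standard linear wave theory; the paper studies how roughness and currents modify the parameters entering it):

\[
\omega^2 = g\,k\,\tanh(kh),
\]

where \(\omega\) is the angular frequency, \(k = 2\pi/L\) the wavenumber, \(h\) the water depth, and \(g\) the gravitational acceleration; with a depth-uniform current \(U\), the frequency is Doppler-shifted and the relation becomes \((\omega - kU)^2 = g\,k\,\tanh(kh)\).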
Abstract:
Biochemical agents, including bacteria and toxins, are potentially dangerous and responsible for a wide variety of diseases. Reliable detection and characterization of small samples is necessary in order to reduce and eliminate their harmful consequences. Microcantilever sensors offer a potential alternative to the state of the art due to their small size, fast response time, and ability to operate in air and liquid environments. At present, several technology limitations inhibit the application of microcantilevers to biochemical detection and analysis, including difficulties in conducting temperature-sensitive experiments, material inadequacy resulting in insufficient cell capture, and poor selectivity toward multiple analytes. This work aims to address several of these issues by introducing microcantilevers with integrated thermal functionality and by introducing nanocrystalline diamond as a new material for microcantilevers. Microcantilevers are designed, fabricated, characterized, and used for the capture and detection of cells and bacteria. The first microcantilever type described in this work is a silicon cantilever with a highly uniform in-plane temperature distribution. The goal is a 100 μm square uniformly heated area that can be used for thermal characterization of films as well as for conducting chemical reactions with small amounts of material. The fabricated cantilevers can reach above 300 °C while maintaining temperature uniformity of 2-4%, an improvement of over one order of magnitude over currently available cantilevers. The second microcantilever type is a doped single-crystal silicon cantilever with a thin coating of ultrananocrystalline diamond (UNCD). The primary application of such a device is in biological testing, where the diamond acts as a stable, electrically isolated reaction surface while the silicon layer provides controlled heating with minimal variations in temperature. This work shows that composite cantilevers of this kind are an effective platform for temperature-sensitive biological experiments, such as heat lysing and the polymerase chain reaction. The rapid heat transfer of the Si-UNCD cantilever compromised the membrane of NIH 3T3 fibroblasts and lysed the cell nucleus within 30 seconds. Bacterial cells (Listeria monocytogenes V7) were shown to be captured with biotinylated heat-shock protein on the UNCD surface, and 90% of all viable cells exhibited membrane porosity due to high heat within 15 seconds. Lastly, a sensor made solely from UNCD is fabricated with the intention of detecting the presence of biological species by means of an integrated piezoresistor or through frequency-change monitoring. Since UNCD has not previously been used in piezoresistive applications, temperature-dependent piezoresistive coefficients and gauge factors are determined first. The doped UNCD exhibits a significant piezoresistive effect, with a gauge factor of 7.53±0.32 and a piezoresistive coefficient of 8.12×10⁻¹² Pa⁻¹ at room temperature; the piezoresistive properties of UNCD are constant over the temperature range of 25-200 °C. 300 μm long cantilevers have the highest sensitivity, 0.186 mΩ/Ω per μm of cantilever end deflection, which is approximately half that of similarly sized silicon cantilevers. UNCD cantilever arrays were fabricated, consisting of four sixteen-cantilever arrays of lengths 20-90 μm in addition to an eight-cantilever array of length 120 μm.
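For context, the piezoresistive figures quoted above relate resistance change to strain and stress through the standard definitions (textbook relations, not specific to this work):

\[
GF = \frac{\Delta R/R}{\varepsilon}, \qquad \pi = \frac{\Delta R/R}{\sigma},
\]

so the reported \(GF = 7.53\) and \(\pi = 8.12\times10^{-12}\ \mathrm{Pa^{-1}}\) are mutually consistent via \(GF = \pi E\) for a Young's modulus \(E \approx 0.93\ \mathrm{TPa}\), in line with values typically reported for UNCD films.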
Laser Doppler vibrometry (LDV) was used to measure the cantilever resonant frequencies, which ranged from 218 kHz to 5.14 MHz in air and from 73 kHz to 3.68 MHz in water. The quality factors of the cantilevers were 47-151 in air and 18-45 in water. The ability to measure the frequencies of the cantilever arrays opens the possibility of detecting individual bacteria by monitoring the frequency shift after cell capture.